title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract |
---|---|---|---|---|---|---|---|---|---|---|---|
On the Equivalence between Online and Private Learnability beyond Binary Classification | https://papers.nips.cc/paper_files/paper/2020/hash/c24fe9f765a44048868b5a620f05678e-Abstract.html | Young Jung, Baekjin Kim, Ambuj Tewari | https://papers.nips.cc/paper_files/paper/2020/hash/c24fe9f765a44048868b5a620f05678e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c24fe9f765a44048868b5a620f05678e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11125-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c24fe9f765a44048868b5a620f05678e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c24fe9f765a44048868b5a620f05678e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c24fe9f765a44048868b5a620f05678e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c24fe9f765a44048868b5a620f05678e-Supplemental.pdf | Alon et al. [2019] and Bun et al. [2020] recently showed that online learnability and private PAC learnability are equivalent in binary classification. We investigate whether this equivalence extends to multi-class classification and regression. First, we show that private learnability implies online learnability in both settings. Our extension involves studying a novel variant of the Littlestone dimension that depends on a tolerance parameter and on an appropriate generalization of the concept of threshold functions beyond binary classification. Second, we show that while online learnability continues to imply private learnability in multi-class classification, current proof techniques encounter significant hurdles in the regression setting. While the equivalence for regression remains open, we provide non-trivial sufficient conditions for an online learnable class to also be privately learnable. |
AViD Dataset: Anonymized Videos from Diverse Countries | https://papers.nips.cc/paper_files/paper/2020/hash/c28e5b0c9841b5ef396f9f519bf6c217-Abstract.html | AJ Piergiovanni, Michael Ryoo | https://papers.nips.cc/paper_files/paper/2020/hash/c28e5b0c9841b5ef396f9f519bf6c217-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c28e5b0c9841b5ef396f9f519bf6c217-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11126-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c28e5b0c9841b5ef396f9f519bf6c217-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c28e5b0c9841b5ef396f9f519bf6c217-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c28e5b0c9841b5ef396f9f519bf6c217-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c28e5b0c9841b5ef396f9f519bf6c217-Supplemental.pdf | We introduce a new public video dataset for action recognition: Anonymized Videos from Diverse countries (AViD). Unlike existing public video datasets, AViD is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries. Further, all the face identities in the AViD videos are properly anonymized to protect their privacy. It also is a static dataset where each video is licensed with the creative commons license. We confirm that most of the existing video datasets are statistically biased to only capture action videos from a limited number of countries. We experimentally illustrate that models trained with such biased datasets do not transfer perfectly to action videos from the other countries, and show that AViD addresses such problems. We also confirm that the new AViD dataset could serve as a good dataset for pretraining the models, performing comparably or better than prior datasets. The dataset is available at https://github.com/piergiaj/AViD |
Probably Approximately Correct Constrained Learning | https://papers.nips.cc/paper_files/paper/2020/hash/c291b01517f3e6797c774c306591cc32-Abstract.html | Luiz Chamon, Alejandro Ribeiro | https://papers.nips.cc/paper_files/paper/2020/hash/c291b01517f3e6797c774c306591cc32-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c291b01517f3e6797c774c306591cc32-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11127-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c291b01517f3e6797c774c306591cc32-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c291b01517f3e6797c774c306591cc32-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c291b01517f3e6797c774c306591cc32-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c291b01517f3e6797c774c306591cc32-Supplemental.pdf | As learning solutions reach critical applications in social, industrial, and medical domains, the need to curtail their behavior has become paramount. There is now ample evidence that without explicit tailoring, learning can lead to biased, unsafe, and prejudiced solutions. To tackle these problems, we develop a generalization theory of constrained learning based on the probably approximately correct (PAC) learning framework. In particular, we show that imposing requirements does not make a learning problem harder in the sense that any PAC learnable class is also PAC constrained learnable using a constrained counterpart of the empirical risk minimization (ERM) rule. For typical parametrized models, however, this learner involves solving a constrained non-convex optimization program for which even obtaining a feasible solution is challenging. To overcome this issue, we prove that under mild conditions the empirical dual problem of constrained learning is also a PAC constrained learner that now leads to a practical constrained learning algorithm based solely on solving unconstrained problems. We analyze the generalization properties of this solution and use it to illustrate how constrained learning can address problems in fair and robust classification. |
RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning | https://papers.nips.cc/paper_files/paper/2020/hash/c2964caac096f26db222cb325aa267cb-Abstract.html | Riccardo Del Chiaro, Bartłomiej Twardowski, Andrew Bagdanov, Joost van de Weijer | https://papers.nips.cc/paper_files/paper/2020/hash/c2964caac096f26db222cb325aa267cb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c2964caac096f26db222cb325aa267cb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11128-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c2964caac096f26db222cb325aa267cb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c2964caac096f26db222cb325aa267cb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c2964caac096f26db222cb325aa267cb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c2964caac096f26db222cb325aa267cb-Supplemental.pdf | Research on continual learning has led to a variety of approaches to mitigating catastrophic forgetting in feed-forward classification networks. Until now, surprisingly little attention has been focused on continual learning of recurrent models applied to problems like image captioning. In this paper we take a systematic look at continual learning of LSTM-based models for image captioning. We propose an attention-based approach that explicitly accommodates the transient nature of vocabularies in continual image captioning tasks -- i.e. that task vocabularies are not disjoint. We call our method Recurrent Attention to Transient Tasks (RATT), and also show how to adapt continual learning approaches based on weight regularization and knowledge distillation to recurrent continual learning problems. We apply our approaches to the incremental image captioning problem on two new continual learning benchmarks we define using the MS-COCO and Flickr30 datasets. Our results demonstrate that RATT is able to sequentially learn five captioning tasks while incurring no forgetting of previously learned ones. |
Decisions, Counterfactual Explanations and Strategic Behavior | https://papers.nips.cc/paper_files/paper/2020/hash/c2ba1bc54b239208cb37b901c0d3b363-Abstract.html | Stratis Tsirtsis, Manuel Gomez Rodriguez | https://papers.nips.cc/paper_files/paper/2020/hash/c2ba1bc54b239208cb37b901c0d3b363-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c2ba1bc54b239208cb37b901c0d3b363-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11129-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c2ba1bc54b239208cb37b901c0d3b363-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c2ba1bc54b239208cb37b901c0d3b363-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c2ba1bc54b239208cb37b901c0d3b363-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c2ba1bc54b239208cb37b901c0d3b363-Supplemental.pdf | As data-driven predictive models are increasingly used to inform decisions, it has been argued that decision makers should provide explanations that help individuals understand what would have to change for these decisions to be beneficial ones. However, there has been little discussion on the possibility that individuals may use the above counterfactual explanations to invest effort strategically and maximize their chances of receiving a beneficial decision. In this paper, our goal is to find policies and counterfactual explanations that are optimal in terms of utility in such a strategic setting. We first show that, given a pre-defined policy, the problem of finding the optimal set of counterfactual explanations is NP-hard. Then, we show that the corresponding objective is nondecreasing and satisfies submodularity and this allows a standard greedy algorithm to enjoy approximation guarantees. In addition, we further show that the problem of jointly finding both the optimal policy and set of counterfactual explanations reduces to maximizing a non-monotone submodular function. As a result, we can use a recent randomized algorithm to solve the problem, which also offers approximation guarantees. Finally, we demonstrate that, by incorporating a matroid constraint into the problem formulation, we can increase the diversity of the optimal set of counterfactual explanations and incentivize individuals across the whole spectrum of the population to self improve. Experiments on synthetic and real lending and credit card data illustrate our theoretical findings and show that the counterfactual explanations and decision policies found by our algorithms achieve higher utility than several competitive baselines. |
Hierarchical Patch VAE-GAN: Generating Diverse Videos from a Single Sample | https://papers.nips.cc/paper_files/paper/2020/hash/c2f32522a84d5e6357e6abac087f1b0b-Abstract.html | Shir Gur, Sagie Benaim, Lior Wolf | https://papers.nips.cc/paper_files/paper/2020/hash/c2f32522a84d5e6357e6abac087f1b0b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c2f32522a84d5e6357e6abac087f1b0b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11130-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c2f32522a84d5e6357e6abac087f1b0b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c2f32522a84d5e6357e6abac087f1b0b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c2f32522a84d5e6357e6abac087f1b0b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c2f32522a84d5e6357e6abac087f1b0b-Supplemental.zip | We consider the task of generating diverse and novel videos from a single video sample. Recently, new hierarchical patch-GAN based approaches were proposed for generating diverse images, given only a single sample at training time. Moving to videos, these approaches fail to generate diverse samples, and often collapse into generating samples similar to the training video. We introduce a novel patch-based variational autoencoder (VAE) which allows for a much greater diversity in generation. Using this tool, a new hierarchical video generation scheme is constructed: at coarse scales, our patch-VAE is employed, ensuring samples are of high diversity. Subsequently, at finer scales, a patch-GAN renders the fine details, resulting in high quality videos. Our experiments show that the proposed method produces diverse samples in both the image domain, and the more challenging video domain. Our code and supplementary material (SM) with additional samples are available at https://shirgur.github.io/hp-vae-gan |
A Feasible Level Proximal Point Method for Nonconvex Sparse Constrained Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/c336346c777707e09cab2a3c79174d90-Abstract.html | Digvijay Boob, Qi Deng, Guanghui Lan, Yilin Wang | https://papers.nips.cc/paper_files/paper/2020/hash/c336346c777707e09cab2a3c79174d90-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c336346c777707e09cab2a3c79174d90-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11131-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c336346c777707e09cab2a3c79174d90-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c336346c777707e09cab2a3c79174d90-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c336346c777707e09cab2a3c79174d90-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c336346c777707e09cab2a3c79174d90-Supplemental.pdf | Nonconvex sparse models have received significant attention in high-dimensional machine learning. In this paper, we study a new model consisting of a general convex or nonconvex objectives and a variety of continuous nonconvex sparsity-inducing constraints. For this constrained model, we propose a novel proximal point algorithm that solves a sequence of convex subproblems with gradually relaxed constraint levels. Each subproblem, having a proximal point objective and a convex surrogate constraint, can be efficiently solved based on a fast routine for projection onto the surrogate constraint. We establish the asymptotic convergence of the proposed algorithm to the Karush-Kuhn-Tucker (KKT) solutions. We also establish new convergence complexities to achieve an approximate KKT solution when the objective can be smooth/nonsmooth, deterministic/stochastic and convex/nonconvex with complexity that is on a par with gradient descent for unconstrained optimization problems in respective cases. To the best of our knowledge, this is the first study of the first-order methods with complexity guarantee for nonconvex sparse-constrained problems. We perform numerical experiments to demonstrate the effectiveness of our new model and efficiency of the proposed algorithm for large scale problems. |
Reservoir Computing meets Recurrent Kernels and Structured Transforms | https://papers.nips.cc/paper_files/paper/2020/hash/c348616cd8a86ee661c7c98800678fad-Abstract.html | Jonathan Dong, Ruben Ohana, Mushegh Rafayelyan, Florent Krzakala | https://papers.nips.cc/paper_files/paper/2020/hash/c348616cd8a86ee661c7c98800678fad-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c348616cd8a86ee661c7c98800678fad-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11132-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c348616cd8a86ee661c7c98800678fad-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c348616cd8a86ee661c7c98800678fad-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c348616cd8a86ee661c7c98800678fad-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c348616cd8a86ee661c7c98800678fad-Supplemental.pdf | Reservoir Computing is a class of simple yet efficient Recurrent Neural Networks where internal weights are fixed at random and only a linear output layer is trained. In the large size limit, such random neural networks have a deep connection with kernel methods. Our contributions are threefold: a) We rigorously establish the recurrent kernel limit of Reservoir Computing and prove its convergence. b) We test our models on chaotic time series prediction, a classic but challenging benchmark in Reservoir Computing, and show how the Recurrent Kernel is competitive and computationally efficient when the number of data points remains moderate. c) When the number of samples is too large, we leverage the success of structured Random Features for kernel approximation by introducing Structured Reservoir Computing. The two proposed methods, Recurrent Kernel and Structured Reservoir Computing, turn out to be much faster and more memory-efficient than conventional Reservoir Computing. |
Comprehensive Attention Self-Distillation for Weakly-Supervised Object Detection | https://papers.nips.cc/paper_files/paper/2020/hash/c3535febaff29fcb7c0d20cbe94391c7-Abstract.html | Zeyi Huang, Yang Zou, B. V. K. Vijaya Kumar, Dong Huang | https://papers.nips.cc/paper_files/paper/2020/hash/c3535febaff29fcb7c0d20cbe94391c7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c3535febaff29fcb7c0d20cbe94391c7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11133-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c3535febaff29fcb7c0d20cbe94391c7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c3535febaff29fcb7c0d20cbe94391c7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c3535febaff29fcb7c0d20cbe94391c7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c3535febaff29fcb7c0d20cbe94391c7-Supplemental.pdf | Weakly Supervised Object Detection (WSOD) has emerged as an effective tool to train object detectors using only the image-level category labels. However, without object-level labels, WSOD detectors are prone to detect bounding boxes on salient objects, clustered objects and discriminative object parts. Moreover, the image-level category labels do not enforce consistent object detection across different transformations of the same images. To address the above issues, we propose a Comprehensive Attention Self-Distillation (CASD) training approach for WSOD. To balance feature learning among all object instances, CASD computes the comprehensive attention aggregated from multiple transformations and feature layers of the same images. To enforce consistent spatial supervision on objects, CASD conducts self-distillation on the WSOD networks, such that the comprehensive attention is approximated simultaneously by multiple transformations and feature layers of the same images. CASD produces new state-of-the-art WSOD results on standard benchmarks such as PASCAL VOC 2007/2012 and MS-COCO. |
Linear Dynamical Systems as a Core Computational Primitive | https://papers.nips.cc/paper_files/paper/2020/hash/c3581d2150ff68f3b33b22634b8adaea-Abstract.html | Shiva Kaul | https://papers.nips.cc/paper_files/paper/2020/hash/c3581d2150ff68f3b33b22634b8adaea-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c3581d2150ff68f3b33b22634b8adaea-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11134-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c3581d2150ff68f3b33b22634b8adaea-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c3581d2150ff68f3b33b22634b8adaea-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c3581d2150ff68f3b33b22634b8adaea-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c3581d2150ff68f3b33b22634b8adaea-Supplemental.pdf | Running nonlinear RNNs for T steps takes O(T) time. Our construction, called LDStack, approximately runs them in O(log T) parallel time, and obtains arbitrarily low error via repetition. First, we show nonlinear RNNs can be approximated by a stack of multiple-input, multiple-output (MIMO) LDS. This replaces nonlinearity across time with nonlinearity along depth. Next, we show that MIMO LDS can be approximated by an average or a concatenation of single-input, multiple-output (SIMO) LDS. Finally, we present an algorithm for running (and differentiating) SIMO LDS in O(log T) parallel time. On long sequences, LDStack is much faster than traditional RNNs, yet it achieves similar accuracy in our experiments. Furthermore, LDStack is amenable to linear systems theory. Therefore, it improves not only speed, but also interpretability and mathematical tractability. |
Ratio Trace Formulation of Wasserstein Discriminant Analysis | https://papers.nips.cc/paper_files/paper/2020/hash/c37f9e1283cbd4a6edfd778fc8b1c652-Abstract.html | Hexuan Liu, Yunfeng Cai, You-Lin Chen, Ping Li | https://papers.nips.cc/paper_files/paper/2020/hash/c37f9e1283cbd4a6edfd778fc8b1c652-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c37f9e1283cbd4a6edfd778fc8b1c652-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11135-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c37f9e1283cbd4a6edfd778fc8b1c652-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c37f9e1283cbd4a6edfd778fc8b1c652-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c37f9e1283cbd4a6edfd778fc8b1c652-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c37f9e1283cbd4a6edfd778fc8b1c652-Supplemental.pdf | We reformulate the Wasserstein Discriminant Analysis (WDA) as a ratio trace problem and present an eigensolver-based algorithm to compute the discriminative subspace of WDA. This new formulation, along with the proposed algorithm, can be served as an efficient and more stable alternative to the original trace ratio formulation and its gradient-based algorithm. We provide a rigorous convergence analysis for the proposed algorithm under the self-consistent field framework, which is crucial but missing in the literature. As an application, we combine WDA with low-dimensional clustering techniques, such as K-means, to perform subspace clustering. Numerical experiments on real datasets show promising results of the ratio trace formulation of WDA in both classification and clustering tasks. |
PAC-Bayes Analysis Beyond the Usual Bounds | https://papers.nips.cc/paper_files/paper/2020/hash/c3992e9a68c5ae12bd18488bc579b30d-Abstract.html | Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvari, John Shawe-Taylor | https://papers.nips.cc/paper_files/paper/2020/hash/c3992e9a68c5ae12bd18488bc579b30d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c3992e9a68c5ae12bd18488bc579b30d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11136-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c3992e9a68c5ae12bd18488bc579b30d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c3992e9a68c5ae12bd18488bc579b30d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c3992e9a68c5ae12bd18488bc579b30d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c3992e9a68c5ae12bd18488bc579b30d-Supplemental.pdf | Specifically, we present a basic PAC-Bayes inequality for stochastic kernels, from which one may derive extensions of various known PAC-Bayes bounds as well as novel bounds. We clarify the role of the requirements of fixed ‘data-free’ priors, bounded losses, and i.i.d. data. We highlight that those requirements were used to upper-bound an exponential moment term, while the basic PAC-Bayes theorem remains valid without those restrictions. We present three bounds that illustrate the use of data-dependent priors, including one for the unbounded square loss. |
Few-shot Visual Reasoning with Meta-Analogical Contrastive Learning | https://papers.nips.cc/paper_files/paper/2020/hash/c39e1a03859f9ee215bc49131d0caf33-Abstract.html | Youngsung Kim, Jinwoo Shin, Eunho Yang, Sung Ju Hwang | https://papers.nips.cc/paper_files/paper/2020/hash/c39e1a03859f9ee215bc49131d0caf33-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c39e1a03859f9ee215bc49131d0caf33-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11137-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c39e1a03859f9ee215bc49131d0caf33-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c39e1a03859f9ee215bc49131d0caf33-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c39e1a03859f9ee215bc49131d0caf33-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c39e1a03859f9ee215bc49131d0caf33-Supplemental.pdf | While humans can solve a visual puzzle that requires logical reasoning by observing only few samples, it would require training over a large number of samples for state-of-the-art deep reasoning models to obtain similar performance on the same task. In this work, we propose to solve such a few-shot (or low-shot) abstract visual reasoning problem by resorting to \emph{analogical reasoning}, which is a unique human ability to identify structural or relational similarity between two sets. Specifically, we construct analogical and non-analogical training pairs of two different problem instances, e.g., the latter is created by perturbing or shuffling the original (former) problem. Then, we extract the structural relations among elements in both domains in a pair by enforcing analogical ones to be as similar as possible, while minimizing similarities between non-analogical ones. This analogical contrastive learning allows to effectively learn the relational representations of given abstract reasoning tasks. We validate our method on RAVEN dataset, on which it outperforms state-of-the-art method, with larger gains when the training data is scarce. We further meta-learn our analogical contrastive learning model over the same tasks with diverse attributes, and show that it generalizes to the same visual reasoning problem with unseen attributes. |
MPNet: Masked and Permuted Pre-training for Language Understanding | https://papers.nips.cc/paper_files/paper/2020/hash/c3a690be93aa602ee2dc0ccab5b7b67e-Abstract.html | Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu | https://papers.nips.cc/paper_files/paper/2020/hash/c3a690be93aa602ee2dc0ccab5b7b67e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c3a690be93aa602ee2dc0ccab5b7b67e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11138-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c3a690be93aa602ee2dc0ccab5b7b67e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c3a690be93aa602ee2dc0ccab5b7b67e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c3a690be93aa602ee2dc0ccab5b7b67e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c3a690be93aa602ee2dc0ccab5b7b67e-Supplemental.zip | BERT adopts masked language modeling (MLM) for pre-training and is one of the most successful pre-training models. Since BERT neglects dependency among predicted tokens, XLNet introduces permuted language modeling (PLM) for pre-training to address this problem. However, XLNet does not leverage the full position information of a sentence and thus suffers from position discrepancy between pre-training and fine-tuning. In this paper, we propose MPNet, a novel pre-training method that inherits the advantages of BERT and XLNet and avoids their limitations. MPNet leverages the dependency among predicted tokens through permuted language modeling (vs. MLM in BERT), and takes auxiliary position information as input to make the model see a full sentence and thus reducing the position discrepancy (vs. PLM in XLNet). We pre-train MPNet on a large-scale dataset (over 160GB text corpora) and fine-tune on a variety of down-streaming tasks (GLUE, SQuAD, etc). Experimental results show that MPNet outperforms MLM and PLM by a large margin, and achieves better results on these tasks compared with previous state-of-the-art pre-trained methods (e.g., BERT, XLNet, RoBERTa) under the same model setting. We attach the code in the supplemental materials. |
Reinforcement Learning with Feedback Graphs | https://papers.nips.cc/paper_files/paper/2020/hash/c41dd99a69df04044aa4e33ece9c9249-Abstract.html | Christoph Dann, Yishay Mansour, Mehryar Mohri, Ayush Sekhari, Karthik Sridharan | https://papers.nips.cc/paper_files/paper/2020/hash/c41dd99a69df04044aa4e33ece9c9249-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c41dd99a69df04044aa4e33ece9c9249-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11139-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c41dd99a69df04044aa4e33ece9c9249-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c41dd99a69df04044aa4e33ece9c9249-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c41dd99a69df04044aa4e33ece9c9249-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c41dd99a69df04044aa4e33ece9c9249-Supplemental.pdf | We study RL in the tabular MDP setting where the agent receives additional observations per step in the form of transitions samples. Such additional observations can be provided in many tasks by auxiliary sensors or by leveraging prior knowledge about the environment (e.g., when certain actions yield similar outcome). We formalize this setting using a feedback graph over state-action pairs and show that model-based algorithms can incorporate additional observations for more sample-efficient learning. We give a regret bound that predominantly depends on the size of the maximum acyclic subgraph of the feedback graph, in contrast with a polynomial dependency on the number of states and actions in the absence of side observations. Finally, we highlight fundamental challenges for leveraging a small dominating set of the feedback graph, as compared to the well-studied bandit setting, and propose a new algorithm that can use such a dominating set to learn a near-optimal policy faster. |
Zap Q-Learning With Nonlinear Function Approximation | https://papers.nips.cc/paper_files/paper/2020/hash/c42f891cebbc81aa59f8f183243ac2b9-Abstract.html | Shuhang Chen, Adithya M Devraj, Fan Lu, Ana Busic, Sean Meyn | https://papers.nips.cc/paper_files/paper/2020/hash/c42f891cebbc81aa59f8f183243ac2b9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c42f891cebbc81aa59f8f183243ac2b9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11140-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c42f891cebbc81aa59f8f183243ac2b9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c42f891cebbc81aa59f8f183243ac2b9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c42f891cebbc81aa59f8f183243ac2b9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c42f891cebbc81aa59f8f183243ac2b9-Supplemental.zip | Zap Q-learning is a recent class of reinforcement learning algorithms, motivated primarily as a means to accelerate convergence. Stability theory has been absent outside of two restrictive classes: the tabular setting, and optimal stopping. This paper introduces a new framework for analysis of a more general class of recursive algorithms known as stochastic approximation. Based on this general theory, it is shown that Zap Q-learning is consistent under a non-degeneracy assumption, even when the function approximation architecture is nonlinear. Zap Q-learning with neural network function approximation emerges as a special case, and is tested on examples from OpenAI Gym. Based on multiple experiments with a range of neural network sizes, it is found that the new algorithms converge quickly and are robust to choice of function approximation architecture. |
Lipschitz-Certifiable Training with a Tight Outer Bound | https://papers.nips.cc/paper_files/paper/2020/hash/c46482dd5d39742f0bfd417b492d0e8e-Abstract.html | Sungyoon Lee, Jaewook Lee, Saerom Park | https://papers.nips.cc/paper_files/paper/2020/hash/c46482dd5d39742f0bfd417b492d0e8e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c46482dd5d39742f0bfd417b492d0e8e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11141-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c46482dd5d39742f0bfd417b492d0e8e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c46482dd5d39742f0bfd417b492d0e8e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c46482dd5d39742f0bfd417b492d0e8e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c46482dd5d39742f0bfd417b492d0e8e-Supplemental.pdf | Verifiable training is a promising research direction for training a robust network. However, most verifiable training methods are slow or lack scalability. In this study, we propose a fast and scalable certifiable training algorithm based on Lipschitz analysis and interval arithmetic. Our certifiable training algorithm provides a tight propagated outer bound by introducing the box constraint propagation (BCP), and it efficiently computes the worst logit over the outer bound. In the experiments, we show that BCP achieves a tighter outer bound than the global Lipschitz-based outer bound. Moreover, our certifiable training algorithm is over 12 times faster than the state-of-the-art dual relaxation-based method; however, it achieves comparable or better verification performance, improving natural accuracy. Our fast certifiable training algorithm with the tight outer bound can scale to Tiny ImageNet with verification accuracy of 20.1\% ($\ell_2$-perturbation of $\epsilon=36/255$). Our code is available at \url{https://github.com/sungyoon-lee/bcp}. |
Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint | https://papers.nips.cc/paper_files/paper/2020/hash/c49e446a46fa27a6e18ffb6119461c3f-Abstract.html | Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi, Rebecca Reiffenhäuser | https://papers.nips.cc/paper_files/paper/2020/hash/c49e446a46fa27a6e18ffb6119461c3f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c49e446a46fa27a6e18ffb6119461c3f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11142-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c49e446a46fa27a6e18ffb6119461c3f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c49e446a46fa27a6e18ffb6119461c3f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c49e446a46fa27a6e18ffb6119461c3f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c49e446a46fa27a6e18ffb6119461c3f-Supplemental.pdf | Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a $5.83$ approximation and runs in $O(n \log n)$ time, i.e., at least a factor $n$ faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a 9-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data. |
Conformal Symplectic and Relativistic Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/c4b108f53550f1d5967305a9a8140ddd-Abstract.html | Guilherme Franca, Jeremias Sulam, Daniel Robinson, Rene Vidal | https://papers.nips.cc/paper_files/paper/2020/hash/c4b108f53550f1d5967305a9a8140ddd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c4b108f53550f1d5967305a9a8140ddd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11143-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c4b108f53550f1d5967305a9a8140ddd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c4b108f53550f1d5967305a9a8140ddd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c4b108f53550f1d5967305a9a8140ddd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c4b108f53550f1d5967305a9a8140ddd-Supplemental.pdf | Arguably, the two most popular accelerated or momentum-based optimization methods are Nesterov's accelerated gradient and Polyak's heavy ball, both corresponding to different discretizations of a particular second order differential equation with a friction term. Such connections with continuous-time dynamical systems have been instrumental in demystifying acceleration phenomena in optimization. Here we study structure-preserving discretizations for a certain class of dissipative (conformal) Hamiltonian systems, allowing us to analyze the symplectic structure of both Nesterov and heavy ball, besides providing several new insights into these methods. Moreover, we propose a new algorithm based on a dissipative relativistic system that normalizes the momentum and may result in more stable/faster optimization. Importantly, such a method generalizes both Nesterov and heavy ball, each being recovered as distinct limiting cases, and has potential advantages at no additional cost. |
Bayes Consistency vs. H-Consistency: The Interplay between Surrogate Loss Functions and the Scoring Function Class | https://papers.nips.cc/paper_files/paper/2020/hash/c4c28b367e14df88993ad475dedf6b77-Abstract.html | Mingyuan Zhang, Shivani Agarwal | https://papers.nips.cc/paper_files/paper/2020/hash/c4c28b367e14df88993ad475dedf6b77-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c4c28b367e14df88993ad475dedf6b77-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11144-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c4c28b367e14df88993ad475dedf6b77-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c4c28b367e14df88993ad475dedf6b77-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c4c28b367e14df88993ad475dedf6b77-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c4c28b367e14df88993ad475dedf6b77-Supplemental.pdf | A fundamental question in multiclass classification concerns understanding the consistency properties of surrogate risk minimization algorithms, which minimize a (often convex) surrogate to the multiclass 0-1 loss. In particular, the framework of calibrated surrogates has played an important role in analyzing the Bayes consistency properties of such algorithms, i.e. in studying convergence to a Bayes optimal classifier (Zhang, 2004; Tewari and Bartlett, 2007). However, follow-up work has suggested this framework can be of limited value when studying H-consistency; in particular, concerns have been raised that even when the data comes from an underlying linear model, minimizing certain convex calibrated surrogates over linear scoring functions fails to recover the true model (Long and Servedio, 2013). In this paper, we investigate this apparent conundrum. We find that while some calibrated surrogates can indeed fail to provide H-consistency when minimized over a natural-looking but naively chosen scoring function class F, the situation can potentially be remedied by minimizing them over a more carefully chosen class of scoring functions F. In particular, for the popular one-vs-all hinge and logistic surrogates, both of which are calibrated (and therefore provide Bayes consistency) under realizable models, but were previously shown to pose problems for realizable H-consistency, we derive a form of scoring function class F that enables H-consistency. When H is the class of linear models, the class F consists of certain piecewise linear scoring functions that are characterized by the same number of parameters as in the linear case, and minimization over which can be performed using an adaptation of the min-pooling idea from neural network training. Our experiments confirm that the one-vs-all surrogates, when trained over this class of nonlinear scoring functions F, yield better linear multiclass classifiers than when trained over standard linear scoring functions. |
Inverting Gradients - How easy is it to break privacy in federated learning? | https://papers.nips.cc/paper_files/paper/2020/hash/c4ede56bbd98819ae6112b20ac6bf145-Abstract.html | Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller | https://papers.nips.cc/paper_files/paper/2020/hash/c4ede56bbd98819ae6112b20ac6bf145-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11145-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c4ede56bbd98819ae6112b20ac6bf145-Supplemental.zip | The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data. This protocol has been designed not only to train neural networks data-efficiently, but also to provide privacy benefits for users, as their input data remains on device and only parameter gradients are shared. But how secure is sharing parameter gradients? Previous attacks have provided a false sense of security, by succeeding only in contrived settings - even for a single image. However, by exploiting a magnitude-invariant loss along with optimization strategies based on adversarial attacks, we show that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks. We analyze the effects of architecture as well as parameters on the difficulty of reconstructing an input image and prove that any input to a fully connected layer can be reconstructed analytically independent of the remaining architecture. Finally we discuss settings encountered in practice and show that even averaging gradients over several iterations or several images does not protect the user's privacy in federated learning applications. |
Dynamic allocation of limited memory resources in reinforcement learning | https://papers.nips.cc/paper_files/paper/2020/hash/c4fac8fb3c9e17a2f4553a001f631975-Abstract.html | Nisheet Patel, Luigi Acerbi, Alexandre Pouget | https://papers.nips.cc/paper_files/paper/2020/hash/c4fac8fb3c9e17a2f4553a001f631975-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11146-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c4fac8fb3c9e17a2f4553a001f631975-Supplemental.pdf | Biological brains are inherently limited in their capacity to process and store information, but are nevertheless capable of solving complex tasks with apparent ease. Intelligent behavior is related to these limitations, since resource constraints drive the need to generalize and assign importance differentially to features in the environment or memories of past experiences. Recently, there have been parallel efforts in reinforcement learning and neuroscience to understand strategies adopted by artificial and biological agents to circumvent limitations in information storage. However, the two threads have been largely separate. In this article, we propose a dynamical framework to maximize expected reward under constraints of limited resources, which we implement with a cost function that penalizes precise representations of action-values in memory, each of which may vary in its precision. We derive from first principles an algorithm, Dynamic Resource Allocator (DRA), which we apply to two standard tasks in reinforcement learning and a model-based planning task, and find that it allocates more resources to items in memory that have a higher impact on cumulative rewards. Moreover, DRA learns faster when starting with a higher resource budget than what it eventually allocates for performing well on tasks, which may explain why frontal cortical areas in biological brains appear more engaged in early stages of learning before settling to lower asymptotic levels of activity. Our work provides a normative solution to the problem of learning how to allocate costly resources to a collection of uncertain memories in a manner that is capable of adapting to changes in the environment. |
CryptoNAS: Private Inference on a ReLU Budget | https://papers.nips.cc/paper_files/paper/2020/hash/c519d47c329c79537fbb2b6f1c551ff0-Abstract.html | Zahra Ghodsi, Akshaj Kumar Veldanda, Brandon Reagen, Siddharth Garg | https://papers.nips.cc/paper_files/paper/2020/hash/c519d47c329c79537fbb2b6f1c551ff0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c519d47c329c79537fbb2b6f1c551ff0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11147-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c519d47c329c79537fbb2b6f1c551ff0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c519d47c329c79537fbb2b6f1c551ff0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c519d47c329c79537fbb2b6f1c551ff0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c519d47c329c79537fbb2b6f1c551ff0-Supplemental.pdf | Machine learning as a service has given rise to privacy concerns surrounding clients' data and providers' models and has catalyzed research in private inference (PI): methods to process inferences without disclosing inputs. Recently, researchers have adapted cryptographic techniques to show PI is possible; however, all solutions increase inference latency beyond practical limits. This paper makes the observation that existing models are ill-suited for PI and proposes a novel NAS method, named CryptoNAS, for finding and tailoring models to the needs of PI. The key insight is that in PI, operator latency costs are inverted: non-linear operations (e.g., ReLU) dominate latency, while linear layers become effectively free. We develop the idea of a ReLU budget as a proxy for inference latency and use CryptoNAS to build models that maximize accuracy within a given budget. CryptoNAS improves accuracy by 3.4% and latency by 2.4x over the state-of-the-art. |
A Stochastic Path Integral Differential EstimatoR Expectation Maximization Algorithm | https://papers.nips.cc/paper_files/paper/2020/hash/c589c3a8f99401b24b9380e86d939842-Abstract.html | Gersende Fort, Eric Moulines, Hoi-To Wai | https://papers.nips.cc/paper_files/paper/2020/hash/c589c3a8f99401b24b9380e86d939842-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c589c3a8f99401b24b9380e86d939842-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11148-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c589c3a8f99401b24b9380e86d939842-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c589c3a8f99401b24b9380e86d939842-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c589c3a8f99401b24b9380e86d939842-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c589c3a8f99401b24b9380e86d939842-Supplemental.pdf | The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models including mixture of regressors and experts, missing observations. This paper introduces a novel EM algorithm, called {\tt SPIDER-EM}, for inference from a training set of size $n$, $n \gg 1$. At the core of our algorithm is an estimator of the full conditional expectation in the {\sf E}-step, adapted from the stochastic path integral differential estimator ({\tt SPIDER}) technique. We derive finite-time complexity bounds for smooth non-convex likelihood: we show that for convergence to an $\epsilon$-approximate stationary point, the complexity scales as $K_{Opt} (n,\epsilon )={\cal O}(\epsilon^{-1})$ and $K_{CE}( n,\epsilon ) = n+ \sqrt{n} {\cal O}( \epsilon^{-1} )$, where $K_{Opt}( n,\epsilon )$ and $K_{CE}(n, \epsilon )$ are respectively the number of {\sf M}-steps and the number of per-sample conditional expectations evaluations. This improves over the state-of-the-art algorithms. Numerical results support our findings. |
CHIP: A Hawkes Process Model for Continuous-time Networks with Scalable and Consistent Estimation | https://papers.nips.cc/paper_files/paper/2020/hash/c5a0ac0e2f48af1a4e619e7036fe5977-Abstract.html | Makan Arastuie, Subhadeep Paul, Kevin Xu | https://papers.nips.cc/paper_files/paper/2020/hash/c5a0ac0e2f48af1a4e619e7036fe5977-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c5a0ac0e2f48af1a4e619e7036fe5977-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11149-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c5a0ac0e2f48af1a4e619e7036fe5977-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c5a0ac0e2f48af1a4e619e7036fe5977-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c5a0ac0e2f48af1a4e619e7036fe5977-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c5a0ac0e2f48af1a4e619e7036fe5977-Supplemental.pdf | In many application settings involving networks, such as messages between users of an on-line social network or transactions between traders in financial markets, the observed data consist of timestamped relational events, which form a continuous-time network. We propose the Community Hawkes Independent Pairs (CHIP) generative model for such networks. We show that applying spectral clustering to an aggregated adjacency matrix constructed from the CHIP model provides consistent community detection for a growing number of nodes and time duration. We also develop consistent and computationally efficient estimators for the model parameters. We demonstrate that our proposed CHIP model and estimation procedure scales to large networks with tens of thousands of nodes and provides superior fits than existing continuous-time network models on several real networks. |
SAC: Accelerating and Structuring Self-Attention via Sparse Adaptive Connection | https://papers.nips.cc/paper_files/paper/2020/hash/c5c1bda1194f9423d744e0ef67df94ee-Abstract.html | Xiaoya Li, Yuxian Meng, Mingxin Zhou, Qinghong Han, Fei Wu, Jiwei Li | https://papers.nips.cc/paper_files/paper/2020/hash/c5c1bda1194f9423d744e0ef67df94ee-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c5c1bda1194f9423d744e0ef67df94ee-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11150-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c5c1bda1194f9423d744e0ef67df94ee-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c5c1bda1194f9423d744e0ef67df94ee-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c5c1bda1194f9423d744e0ef67df94ee-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c5c1bda1194f9423d744e0ef67df94ee-Supplemental.pdf | While the self-attention mechanism has been widely used in a wide variety of tasks, it has the unfortunate property of a quadratic cost with respect to the input length, which makes it difficult to deal with long inputs. In this paper, we present a method for accelerating and structuring self-attentions: Sparse Adaptive Connection (SAC). In SAC, we regard the input sequence as a graph and attention operations are performed between linked nodes. In contrast with previous self-attention models with pre-defined structures (edges), the model learns to construct attention edges to improve task-specific performances. In this way, the model is able to select the most salient nodes and reduce the quadratic complexity regardless of the sequence length. Based on SAC, we show that previous variants of self-attention models are its special cases. Through extensive experiments on neural machine translation, language modeling, graph representation learning and image classification, we demonstrate SAC is competitive with state-of-the-art models while significantly reducing memory cost. |
Design Space for Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/c5c3d4fe6b2cc463c7d7ecba17cc9de7-Abstract.html | Jiaxuan You, Zhitao Ying, Jure Leskovec | https://papers.nips.cc/paper_files/paper/2020/hash/c5c3d4fe6b2cc463c7d7ecba17cc9de7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c5c3d4fe6b2cc463c7d7ecba17cc9de7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11151-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c5c3d4fe6b2cc463c7d7ecba17cc9de7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c5c3d4fe6b2cc463c7d7ecba17cc9de7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c5c3d4fe6b2cc463c7d7ecba17cc9de7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c5c3d4fe6b2cc463c7d7ecba17cc9de7-Supplemental.pdf | The rapid evolution of Graph Neural Networks (GNNs) has led to a growing number of new architectures as well as novel applications. However, current research focuses on proposing and evaluating specific architectural designs of GNNs, such as GCN, GIN, or GAT, as opposed to studying the more general design space of GNNs that consists of a Cartesian product of different design dimensions, such as the number of layers or the type of the aggregation function. Additionally, GNN designs are often specialized to a single task, yet few efforts have been made to understand how to quickly find the best GNN design for a novel task or a novel dataset. Here we define and systematically study the architectural design space for GNNs which consists of 315,000 different designs over 32 different predictive tasks. Our approach features three key innovations: (1) A general GNN design space; (2) a GNN task space with a similarity metric, so that for a given novel task/dataset, we can quickly identify/transfer the best performing architecture; (3) an efficient and effective design space evaluation method which allows insights to be distilled from a huge number of model-task combinations. Our key results include: (1) A comprehensive set of guidelines for designing well-performing GNNs; (2) while best GNN designs for different tasks vary significantly, the GNN task space allows for transferring the best designs across different tasks; (3) models discovered using our design space achieve state-of-the-art performance. Overall, our work offers a principled and scalable approach to transition from studying individual GNN designs for specific tasks, to systematically studying the GNN design space and the task space. Finally, we release GraphGym, a powerful platform for exploring different GNN designs and tasks. GraphGym features modularized GNN implementation, standardized GNN evaluation, and reproducible and scalable experiment management. |
HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis | https://papers.nips.cc/paper_files/paper/2020/hash/c5d736809766d46260d816d8dbc9eb44-Abstract.html | Jungil Kong, Jaehyeon Kim, Jaekyoung Bae | https://papers.nips.cc/paper_files/paper/2020/hash/c5d736809766d46260d816d8dbc9eb44-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c5d736809766d46260d816d8dbc9eb44-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11152-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c5d736809766d46260d816d8dbc9eb44-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c5d736809766d46260d816d8dbc9eb44-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c5d736809766d46260d816d8dbc9eb44-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c5d736809766d46260d816d8dbc9eb44-Supplemental.pdf | Several recent work on speech synthesis have employed generative adversarial networks (GANs) to produce raw waveforms. Although such methods improve the sampling efficiency and memory usage, their sample quality has not yet reached that of autoregressive and flow-based generative models. In this work, we propose HiFi-GAN, which achieves both efficient and high-fidelity speech synthesis. As speech audio consists of sinusoidal signals with various periods, we demonstrate that modeling periodic patterns of an audio is crucial for enhancing sample quality. A subjective human evaluation (mean opinion score, MOS) of a single speaker dataset indicates that our proposed method demonstrates similarity to human quality while generating 22.05 kHz high-fidelity audio 167.9 times faster than real-time on a single V100 GPU. We further show the generality of HiFi-GAN to the mel-spectrogram inversion of unseen speakers and end-to-end speech synthesis. Finally, a small footprint version of HiFi-GAN generates samples 13.4 times faster than real-time on CPU with comparable quality to an autoregressive counterpart. |
Unbalanced Sobolev Descent | https://papers.nips.cc/paper_files/paper/2020/hash/c5f5c23be1b71adb51ea9dc8e9d444a8-Abstract.html | Youssef Mroueh, Mattia Rigotti | https://papers.nips.cc/paper_files/paper/2020/hash/c5f5c23be1b71adb51ea9dc8e9d444a8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c5f5c23be1b71adb51ea9dc8e9d444a8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11153-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c5f5c23be1b71adb51ea9dc8e9d444a8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c5f5c23be1b71adb51ea9dc8e9d444a8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c5f5c23be1b71adb51ea9dc8e9d444a8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c5f5c23be1b71adb51ea9dc8e9d444a8-Supplemental.pdf | We introduce Unbalanced Sobolev Descent (USD), a particle descent algorithm for transporting a high dimensional source distribution to a target distribution that does not necessarily have the same mass. We define the Sobolev-Fisher discrepancy between distributions and show that it relates to advection-reaction transport equations and the Wasserstein-Fisher-Rao metric between distributions. USD transports particles along gradient flows of the witness function of the Sobolev-Fisher discrepancy (advection step) and reweighs the mass of particles with respect to this witness function (reaction step). The reaction step can be thought of as a birth-death process of the particles with rate of growth proportional to the witness function. When the Sobolev-Fisher witness function is estimated in a Reproducing Kernel Hilbert Space (RKHS), under mild assumptions we show that USD converges asymptotically (in the limit of infinite particles) to the target distribution in the Maximum Mean Discrepancy (MMD) sense. We then give two methods to estimate the Sobolev-Fisher witness with neural networks, resulting in two Neural USD algorithms. The first one implements the reaction step with mirror descent on the weights, while the second implements it through a birth-death process of particles. We show on synthetic examples that USD transports distributions with or without conservation of mass faster than previous particle descent algorithms, and finally demonstrate its use for molecular biology analyses where our method is naturally suited to match developmental stages of populations of differentiating cells based on their single-cell RNA sequencing profile. Code is available at http://github.com/ibm/usd. |
Identifying Mislabeled Data using the Area Under the Margin Ranking | https://papers.nips.cc/paper_files/paper/2020/hash/c6102b3727b2a7d8b1bb6981147081ef-Abstract.html | Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, Kilian Q. Weinberger | https://papers.nips.cc/paper_files/paper/2020/hash/c6102b3727b2a7d8b1bb6981147081ef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c6102b3727b2a7d8b1bb6981147081ef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11154-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c6102b3727b2a7d8b1bb6981147081ef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c6102b3727b2a7d8b1bb6981147081ef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c6102b3727b2a7d8b1bb6981147081ef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c6102b3727b2a7d8b1bb6981147081ef-Supplemental.pdf | Not all data in a typical training set help with generalization; some samples can be overly ambiguous or outright mislabeled. This paper introduces a new method to identify such samples and mitigate their impact when training neural networks. At the heart of our algorithm is the Area Under the Margin (AUM) statistic, which exploits differences in the training dynamics of clean and mislabeled samples. A simple procedure - adding an extra class populated with purposefully mislabeled threshold samples - learns an AUM upper bound that isolates mislabeled data. This approach consistently improves upon prior work on synthetic and real-world datasets. On the WebVision50 classification task our method removes 17% of training data, yielding a 1.6% (absolute) improvement in test error. On CIFAR100 removing 13% of the data leads to a 1.2% drop in error. |
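The AUM entry above reduces to a simple per-sample statistic: the logit of the assigned label minus the largest other logit, averaged over training epochs, with low values flagging likely mislabeled samples. Below is a minimal NumPy sketch of that statistic on randomly generated logits; it only illustrates the definition under toy assumptions and is not the authors' implementation (which additionally uses a threshold class of purposely mislabeled samples to set the cutoff).

```python
import numpy as np

rng = np.random.default_rng(0)
num_samples, num_classes, num_epochs = 1000, 10, 20
labels = rng.integers(0, num_classes, size=num_samples)

# Stand-in for the logits a real classifier would produce after each training epoch.
logits_per_epoch = rng.normal(size=(num_epochs, num_samples, num_classes))

def margin(logits, labels):
    """Assigned-label logit minus the largest other logit, per sample."""
    idx = np.arange(len(labels))
    assigned = logits[idx, labels]
    masked = logits.copy()
    masked[idx, labels] = -np.inf
    return assigned - masked.max(axis=1)

# AUM: margin averaged over epochs; the lowest values flag likely mislabeled samples.
aum = np.mean([margin(logits_per_epoch[e], labels) for e in range(num_epochs)], axis=0)
suspects = np.argsort(aum)[:50]
print(aum.shape, suspects[:5])
```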
Combining Deep Reinforcement Learning and Search for Imperfect-Information Games | https://papers.nips.cc/paper_files/paper/2020/hash/c61f571dbd2fb949d3fe5ae1608dd48b-Abstract.html | Noam Brown, Anton Bakhtin, Adam Lerer, Qucheng Gong | https://papers.nips.cc/paper_files/paper/2020/hash/c61f571dbd2fb949d3fe5ae1608dd48b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c61f571dbd2fb949d3fe5ae1608dd48b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11155-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c61f571dbd2fb949d3fe5ae1608dd48b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c61f571dbd2fb949d3fe5ae1608dd48b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c61f571dbd2fb949d3fe5ae1608dd48b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c61f571dbd2fb949d3fe5ae1608dd48b-Supplemental.pdf | The combination of deep reinforcement learning and search at both training and test time is a powerful paradigm that has led to a number of successes in single-agent settings and perfect-information games, best exemplified by AlphaZero. However, prior algorithms of this form cannot cope with imperfect-information games. This paper presents ReBeL, a general framework for self-play reinforcement learning and search that provably converges to a Nash equilibrium in any two-player zero-sum game. In the simpler setting of perfect-information games, ReBeL reduces to an algorithm similar to AlphaZero. Results in two different imperfect-information games show ReBeL converges to an approximate Nash equilibrium. We also show ReBeL achieves superhuman performance in heads-up no-limit Texas hold'em poker, while using far less domain knowledge than any prior poker AI. |
High-Throughput Synchronous Deep RL | https://papers.nips.cc/paper_files/paper/2020/hash/c6447300d99fdbf4f3f7966295b8b5be-Abstract.html | Iou-Jen Liu, Raymond Yeh, Alexander Schwing | https://papers.nips.cc/paper_files/paper/2020/hash/c6447300d99fdbf4f3f7966295b8b5be-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c6447300d99fdbf4f3f7966295b8b5be-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11156-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c6447300d99fdbf4f3f7966295b8b5be-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c6447300d99fdbf4f3f7966295b8b5be-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c6447300d99fdbf4f3f7966295b8b5be-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c6447300d99fdbf4f3f7966295b8b5be-Supplemental.zip | Various parallel actor-learner methods reduce long training times for deep reinforcement learning. Synchronous methods enjoy training stability while having lower data throughput. In contrast, asynchronous methods achieve high throughput but suffer from stability issues and lower sample efficiency due to ‘stale policies.’ To combine the advantages of both methods we propose High-Throughput Synchronous Deep Reinforcement Learning (HTS-RL). In HTS-RL, we perform learning and rollouts concurrently, devise a system design which avoids ‘stale policies’ and ensure that actors interact with environment replicas in an asynchronous manner while maintaining full determinism. We evaluate our approach on Atari games and the Google Research Football environment. Compared to synchronous baselines, HTS-RL is 2−6X faster. Compared to state-of-the-art asynchronous methods, HTS-RL has competitive throughput and consistently achieves higher average episode rewards. |
Contrastive Learning with Adversarial Examples | https://papers.nips.cc/paper_files/paper/2020/hash/c68c9c8258ea7d85472dd6fd0015f047-Abstract.html | Chih-Hui Ho, Nuno Nvasconcelos | https://papers.nips.cc/paper_files/paper/2020/hash/c68c9c8258ea7d85472dd6fd0015f047-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c68c9c8258ea7d85472dd6fd0015f047-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11157-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c68c9c8258ea7d85472dd6fd0015f047-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c68c9c8258ea7d85472dd6fd0015f047-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c68c9c8258ea7d85472dd6fd0015f047-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c68c9c8258ea7d85472dd6fd0015f047-Supplemental.pdf | Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations. It uses pairs of augmentations of unlabeled training examples to define a classification task for pretext learning of a deep embedding. Despite extensive work on augmentation procedures, prior works do not address the selection of challenging negative pairs, as images within a sampled batch are treated independently. This paper addresses the problem by introducing a new family of adversarial examples for contrastive learning and using these examples to define a new adversarial training algorithm for SSL, denoted as CLAE. When compared to standard CL, the use of adversarial examples creates more challenging positive pairs and adversarial training produces harder negative pairs by accounting for all images in a batch during the optimization. CLAE is compatible with many CL methods in the literature. Experiments show that it improves the performance of several existing CL baselines on multiple datasets. |
Mixed Hamiltonian Monte Carlo for Mixed Discrete and Continuous Variables | https://papers.nips.cc/paper_files/paper/2020/hash/c6a01432c8138d46ba39957a8250e027-Abstract.html | Guangyao Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/c6a01432c8138d46ba39957a8250e027-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c6a01432c8138d46ba39957a8250e027-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11158-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c6a01432c8138d46ba39957a8250e027-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c6a01432c8138d46ba39957a8250e027-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c6a01432c8138d46ba39957a8250e027-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c6a01432c8138d46ba39957a8250e027-Supplemental.zip | Hamiltonian Monte Carlo (HMC) has emerged as a powerful Markov Chain Monte Carlo (MCMC) method to sample from complex continuous distributions. However, a fundamental limitation of HMC is that it can not be applied to distributions with mixed discrete and continuous variables. In this paper, we propose mixed HMC (M-HMC) as a general framework to address this limitation. M-HMC is a novel family of MCMC algorithms that evolves the discrete and continuous variables in tandem, allowing more frequent updates of discrete variables while maintaining HMC's ability to suppress random-walk behavior. We establish M-HMC's theoretical properties, and present an efficient implementation with Laplace momentum that introduces minimal overhead compared to existing HMC methods. The superior performances of M-HMC over existing methods are demonstrated with numerical experiments on Gaussian mixture models (GMMs), variable selection in Bayesian logistic regression (BLR), and correlated topic models (CTMs). |
Adversarial Sparse Transformer for Time Series Forecasting | https://papers.nips.cc/paper_files/paper/2020/hash/c6b8c8d762da15fa8dbbdfb6baf9e260-Abstract.html | Sifan Wu, Xi Xiao, Qianggang Ding, Peilin Zhao, Ying Wei, Junzhou Huang | https://papers.nips.cc/paper_files/paper/2020/hash/c6b8c8d762da15fa8dbbdfb6baf9e260-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11159-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c6b8c8d762da15fa8dbbdfb6baf9e260-Supplemental.pdf | Many approaches have been proposed for time series forecasting, in light of its significance in wide applications including business demand prediction.
However, the existing methods suffer from two key limitations. First, most point prediction models only predict an exact value for each time step, which can hardly capture the stochasticity of the data; probabilistic prediction based on likelihood estimation suffers from the same problem. Second, most of them use the auto-regressive generative mode, where the ground truth is provided during training but replaced by the network’s own one-step-ahead output during inference, causing error accumulation; they may therefore fail to forecast time series over long horizons. To solve these issues, in this paper we propose a new time series forecasting model -- Adversarial Sparse Transformer (AST), based on Generative Adversarial Networks (GANs). Specifically, AST adopts a Sparse Transformer as the generator to learn a sparse attention map for time series forecasting, and uses a discriminator to improve the prediction performance at the sequence level. Extensive experiments on several real-world datasets show the effectiveness and efficiency of our method. |
The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/c6dfc6b7c601ac2978357b7a81e2d7ae-Abstract.html | Wei Hu, Lechao Xiao, Ben Adlam, Jeffrey Pennington | https://papers.nips.cc/paper_files/paper/2020/hash/c6dfc6b7c601ac2978357b7a81e2d7ae-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c6dfc6b7c601ac2978357b7a81e2d7ae-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11160-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c6dfc6b7c601ac2978357b7a81e2d7ae-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c6dfc6b7c601ac2978357b7a81e2d7ae-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c6dfc6b7c601ac2978357b7a81e2d7ae-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c6dfc6b7c601ac2978357b7a81e2d7ae-Supplemental.pdf | Modern neural networks are often regarded as complex black-box functions whose behavior is difficult to understand owing to their nonlinear dependence on the data and the nonconvexity in their loss landscapes. In this work, we show that these common perceptions can be completely false in the early phase of learning. In particular, we formally prove that, for a class of well-behaved input distributions, the early-time learning dynamics of a two-layer fully-connected neural network can be mimicked by training a simple linear model on the inputs. We additionally argue that this surprising simplicity can persist in networks with more layers and with convolutional architecture, which we verify empirically. Key to our analysis is to bound the spectral norm of the difference between the Neural Tangent Kernel (NTK) and an affine transform of the data kernel; however, unlike many previous results utilizing the NTK, we do not require the network to have disproportionately large width, and the network is allowed to escape the kernel regime later in training. |
CLEARER: Multi-Scale Neural Architecture Search for Image Restoration | https://papers.nips.cc/paper_files/paper/2020/hash/c6e81542b125c36346d9167691b8bd09-Abstract.html | Yuanbiao Gou, Boyun Li, Zitao Liu, Songfan Yang, Xi Peng | https://papers.nips.cc/paper_files/paper/2020/hash/c6e81542b125c36346d9167691b8bd09-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c6e81542b125c36346d9167691b8bd09-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11161-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c6e81542b125c36346d9167691b8bd09-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c6e81542b125c36346d9167691b8bd09-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c6e81542b125c36346d9167691b8bd09-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c6e81542b125c36346d9167691b8bd09-Supplemental.pdf | Multi-scale neural networks have shown effectiveness in image restoration tasks, which are usually designed and integrated in a handcrafted manner. Different from the existing labor-intensive handcrafted architecture design paradigms, we present a novel method, termed as multi-sCaLe nEural ARchitecture sEarch for image Restoration (CLEARER), which is a specifically designed neural architecture search (NAS) for image restoration. Our contributions are twofold. On one hand, we design a multi-scale search space that consists of three task-flexible modules. Namely, 1) Parallel module that connects multi-resolution neural blocks in parallel, while preserving the channels and spatial-resolution in each neural block, 2) Transition module retains the existing multi-resolution features while extending them to a lower resolution, 3) Fusion module integrates multi-resolution features by passing the features of the parallel neural blocks to the current neural blocks. On the other hand, we present novel losses which could 1) balance the tradeoff between the model complexity and performance, which is highly desirable for image restoration; and 2) relax the discrete architecture parameters into a continuous distribution which approximates either 0 or 1. As a result, a differentiable strategy could be employed to search when to fuse or extract multi-resolution features, while the discretization issue faced by the gradient-based NAS could be alleviated. The proposed CLEARER could search a promising architecture in two GPU hours. Extensive experiments show the promising performance of our method compared with nine image denoising methods and eight image deraining approaches in quantitative and qualitative evaluations. The codes are available at https://github.com/limit-scu. |
Hierarchical Gaussian Process Priors for Bayesian Neural Network Weights | https://papers.nips.cc/paper_files/paper/2020/hash/c70341de2c112a6b3496aec1f631dddd-Abstract.html | Theofanis Karaletsos, Thang D. Bui | https://papers.nips.cc/paper_files/paper/2020/hash/c70341de2c112a6b3496aec1f631dddd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c70341de2c112a6b3496aec1f631dddd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11162-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c70341de2c112a6b3496aec1f631dddd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c70341de2c112a6b3496aec1f631dddd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c70341de2c112a6b3496aec1f631dddd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c70341de2c112a6b3496aec1f631dddd-Supplemental.pdf | Probabilistic neural networks are typically modeled with independent weight priors, which do not capture weight correlations in the prior and do not provide a parsimonious interface to express properties in function space.
A desirable class of priors would represent weights compactly, capture correlations between weights, facilitate calibrated reasoning about uncertainty, and allow inclusion of prior knowledge about the function space such as periodicity or dependence on contexts such as inputs.
To this end, this paper introduces two innovations: (i) a Gaussian process-based hierarchical model for network weights based on unit embeddings that can flexibly encode correlated weight structures, and (ii) input-dependent versions of these weight priors that can provide convenient ways to regularize the function space through the use of kernels defined on contextual inputs.
We show these models provide desirable test-time uncertainty estimates on out-of-distribution data, demonstrate cases of modeling inductive biases for neural networks with kernels which help both interpolation and extrapolation from training data, and demonstrate competitive predictive performance on an active learning benchmark. |
Compositional Explanations of Neurons | https://papers.nips.cc/paper_files/paper/2020/hash/c74956ffb38ba48ed6ce977af6727275-Abstract.html | Jesse Mu, Jacob Andreas | https://papers.nips.cc/paper_files/paper/2020/hash/c74956ffb38ba48ed6ce977af6727275-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c74956ffb38ba48ed6ce977af6727275-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11163-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c74956ffb38ba48ed6ce977af6727275-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c74956ffb38ba48ed6ce977af6727275-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c74956ffb38ba48ed6ce977af6727275-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c74956ffb38ba48ed6ce977af6727275-Supplemental.pdf | We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts that closely approximate neuron behavior. Compared to prior work that uses atomic labels as explanations, analyzing neurons compositionally allows us to more precisely and expressively characterize their behavior. We use this procedure to answer several questions on interpretability in models for vision and natural language processing. First, we examine the kinds of abstractions learned by neurons. In image classification, we find that many neurons learn highly abstract but semantically coherent visual concepts, while other polysemantic neurons detect multiple unrelated features; in natural language inference (NLI), neurons learn shallow lexical heuristics from dataset biases. Second, we see whether compositional explanations give us insight into model performance: vision neurons that detect human-interpretable concepts are positively correlated with task performance, while NLI neurons that fire for shallow heuristics are negatively correlated with task performance. Finally, we show how compositional explanations provide an accessible way for end users to produce simple "copy-paste" adversarial examples that change model behavior in predictable ways. |
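The procedure in the entry above amounts to scoring logical compositions of concept masks against a neuron's binarized activation mask, for example by intersection-over-union. The toy sketch below brute-forces length-2 formulas over made-up concept masks; the concepts, the noise model, and the exhaustive search are illustrative assumptions, not the paper's beam search, datasets, or scoring code.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_inputs = 2000
concepts = {name: rng.random(n_inputs) < p
            for name, p in [("water", 0.2), ("blue", 0.3), ("boat", 0.1)]}
# Pretend this is a neuron's thresholded activation mask (a noisy "water AND NOT boat").
neuron = (concepts["water"] & ~concepts["boat"]) ^ (rng.random(n_inputs) < 0.02)

def iou(a, b):
    """Intersection-over-union between two boolean masks."""
    return (a & b).sum() / max((a | b).sum(), 1)

candidates = []
for (n1, m1), (n2, m2) in product(concepts.items(), repeat=2):
    if n1 == n2:
        continue
    for op_name, op in (("AND", np.logical_and), ("OR", np.logical_or)):
        for negate in (False, True):
            formula = f"{n1} {op_name} {'NOT ' if negate else ''}{n2}"
            mask = op(m1, ~m2 if negate else m2)
            candidates.append((iou(neuron, mask), formula))

print(max(candidates))   # best (IoU, formula) pair, e.g. 'water AND NOT boat'
```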
Calibrated Reliable Regression using Maximum Mean Discrepancy | https://papers.nips.cc/paper_files/paper/2020/hash/c74c4bf0dad9cbae3d80faa054b7d8ca-Abstract.html | Peng Cui, Wenbo Hu, Jun Zhu | https://papers.nips.cc/paper_files/paper/2020/hash/c74c4bf0dad9cbae3d80faa054b7d8ca-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c74c4bf0dad9cbae3d80faa054b7d8ca-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11164-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c74c4bf0dad9cbae3d80faa054b7d8ca-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c74c4bf0dad9cbae3d80faa054b7d8ca-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c74c4bf0dad9cbae3d80faa054b7d8ca-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c74c4bf0dad9cbae3d80faa054b7d8ca-Supplemental.pdf | Accurate quantification of uncertainty is crucial for real-world applications of machine learning. However, modern deep neural networks still produce unreliable predictive uncertainty, often yielding over-confident predictions. In this paper, we are concerned with getting well-calibrated predictions in regression tasks. We propose the calibrated regression method using the maximum mean discrepancy by minimizing the kernel embedding measure. Theoretically, the calibration error of our method asymptotically converges to zero when the sample size is large enough. Experiments on non-trivial real datasets show that our method can produce well-calibrated and sharp prediction intervals, which outperforms the related state-of-the-art methods. |
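As a point of reference for the entry above, the kernel-embedding distance it builds on is the Maximum Mean Discrepancy. The snippet below is a generic (biased, V-statistic) Gaussian-kernel MMD estimate between two one-dimensional samples, here comparing probability-integral-transform values against Uniform(0, 1) draws; the data, bandwidth, and this particular use case are illustrative assumptions rather than the paper's training objective.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared MMD between 1-D samples x and y."""
    def k(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
pit = rng.beta(2.0, 2.0, size=500)        # what mis-calibrated PIT values might look like
uniform = rng.uniform(size=500)           # perfectly calibrated reference
print(mmd2(pit, uniform, bandwidth=0.1))  # larger value = further from calibration
```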
Directional convergence and alignment in deep learning | https://papers.nips.cc/paper_files/paper/2020/hash/c76e4b2fa54f8506719a5c0dc14c2eb9-Abstract.html | Ziwei Ji, Matus Telgarsky | https://papers.nips.cc/paper_files/paper/2020/hash/c76e4b2fa54f8506719a5c0dc14c2eb9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c76e4b2fa54f8506719a5c0dc14c2eb9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11165-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c76e4b2fa54f8506719a5c0dc14c2eb9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c76e4b2fa54f8506719a5c0dc14c2eb9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c76e4b2fa54f8506719a5c0dc14c2eb9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c76e4b2fa54f8506719a5c0dc14c2eb9-Supplemental.pdf | In this paper, we show that although the minimizers of cross-entropy and related classification losses are off at infinity, network weights learned by gradient flow converge in direction, with an immediate corollary that network predictions, training errors, and the margin distribution also converge. This proof holds for deep homogeneous networks — a broad class of networks allowing for ReLU, max-pooling, linear, and convolutional layers — and we additionally provide empirical support not just close to the theory (e.g., the AlexNet), but also on non-homogeneous networks (e.g., the DenseNet). If the network further has locally Lipschitz gradients, we show that these gradients also converge in direction, and asymptotically align with the gradient flow path, with consequences on margin maximization, convergence of saliency maps, and a few other settings. Our analysis complements and is distinct from the well-known neural tangent and mean-field theories, and in particular makes no requirements on network width and initialization, instead merely requiring perfect classification accuracy. The proof proceeds by developing a theory of unbounded nonsmooth Kurdyka-Łojasiewicz inequalities for functions definable in an o-minimal structure, and is also applicable outside deep learning. |
Functional Regularization for Representation Learning: A Unified Theoretical Perspective | https://papers.nips.cc/paper_files/paper/2020/hash/c793b3be8f18731f2a4c627fb3c6c63d-Abstract.html | Siddhant Garg, Yingyu Liang | https://papers.nips.cc/paper_files/paper/2020/hash/c793b3be8f18731f2a4c627fb3c6c63d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c793b3be8f18731f2a4c627fb3c6c63d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11166-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c793b3be8f18731f2a4c627fb3c6c63d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c793b3be8f18731f2a4c627fb3c6c63d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c793b3be8f18731f2a4c627fb3c6c63d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c793b3be8f18731f2a4c627fb3c6c63d-Supplemental.zip | Unsupervised and self-supervised learning approaches have become a crucial tool to learn representations for downstream prediction tasks. While these approaches are widely used in practice and achieve impressive empirical gains, their theoretical understanding largely lags behind. Towards bridging this gap, we present a unifying perspective where several such approaches can be viewed as imposing a regularization on the representation via a learnable function using unlabeled data. We propose a discriminative theoretical framework for analyzing the sample complexity of these approaches, which generalizes the framework of (Balcan and Blum, 2010) to allow learnable regularization functions. Our sample complexity bounds show that, with carefully chosen hypothesis classes to exploit the structure in the data, these learnable regularization functions can prune the hypothesis space, and help reduce the amount of labeled data needed. We then provide two concrete examples of functional regularization, one using auto-encoders and the other using masked self-supervision, and apply our framework to quantify the reduction in the sample complexity bound of labeled data. We also provide complementary empirical results to support our analysis. |
Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/c7af0926b294e47e52e46cfebe173f20-Abstract.html | Jack Parker-Holder, Vu Nguyen, Stephen J. Roberts | https://papers.nips.cc/paper_files/paper/2020/hash/c7af0926b294e47e52e46cfebe173f20-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c7af0926b294e47e52e46cfebe173f20-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11167-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c7af0926b294e47e52e46cfebe173f20-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c7af0926b294e47e52e46cfebe173f20-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c7af0926b294e47e52e46cfebe173f20-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c7af0926b294e47e52e46cfebe173f20-Supplemental.pdf | Many of the recent triumphs in machine learning are dependent on well-tuned hyperparameters. This is particularly prominent in reinforcement learning (RL) where a small change in the configuration can lead to failure. Despite the importance of tuning hyperparameters, it remains expensive and is often done in a naive and laborious way. A recent solution to this problem is Population Based Training (PBT) which updates both weights and hyperparameters in a \emph{single training run} of a population of agents. PBT has been shown to be particularly effective in RL, leading to widespread use in the field. However, PBT lacks theoretical guarantees since it relies on random heuristics to explore the hyperparameter space. This inefficiency means it typically requires vast computational resources, which is prohibitive for many small and medium sized labs. In this work, we introduce the first provably efficient PBT-style algorithm, Population-Based Bandits (PB2). PB2 uses a probabilistic model to guide the search in an efficient way, making it possible to discover high performing hyperparameter configurations with far fewer agents than typically required by PBT. We show in a series of RL experiments that PB2 is able to achieve high performance with a modest computational budget. |
Understanding Global Feature Contributions With Additive Importance Measures | https://papers.nips.cc/paper_files/paper/2020/hash/c7bf0b7c1a86d5eb3be2c722cf2cf746-Abstract.html | Ian Covert, Scott M. Lundberg, Su-In Lee | https://papers.nips.cc/paper_files/paper/2020/hash/c7bf0b7c1a86d5eb3be2c722cf2cf746-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c7bf0b7c1a86d5eb3be2c722cf2cf746-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11168-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c7bf0b7c1a86d5eb3be2c722cf2cf746-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c7bf0b7c1a86d5eb3be2c722cf2cf746-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c7bf0b7c1a86d5eb3be2c722cf2cf746-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c7bf0b7c1a86d5eb3be2c722cf2cf746-Supplemental.pdf | Understanding the inner workings of complex machine learning models is a long-standing problem and most recent research has focused on local interpretability. To assess the role of individual input features in a global sense, we explore the perspective of defining feature importance through the predictive power associated with each feature. We introduce two notions of predictive power (model-based and universal) and formalize this approach with a framework of additive importance measures, which unifies numerous methods in the literature. We then propose SAGE, a model-agnostic method that quantifies predictive power while accounting for feature interactions. Our experiments show that SAGE can be calculated efficiently and that it assigns more accurate importance values than other methods. |
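For the additive-importance entry above, the core quantity can be approximated by permutation sampling: a feature's score is the average loss reduction from revealing it on top of a random subset of the other features, with unrevealed features imputed from other rows (a marginal approximation). The sketch below uses a toy scikit-learn model and synthetic data; it is a rough stand-in for the idea under those assumptions, not the authors' SAGE estimator or library.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = LogisticRegression().fit(X, y)
rng = np.random.default_rng(0)

def loss_with_known(known, n_draws=8):
    """Mean log-loss when only the `known` features are revealed; the rest are
    imputed by copying values from random other rows (marginal approximation)."""
    total = 0.0
    for _ in range(n_draws):
        X_imputed = X[rng.integers(0, len(X), size=len(X))].copy()
        X_imputed[:, known] = X[:, known]
        total += log_loss(y, model.predict_proba(X_imputed))
    return total / n_draws

d = X.shape[1]
scores, n_perms = np.zeros(d), 20
for _ in range(n_perms):
    prev, known = loss_with_known([]), []
    for i in rng.permutation(d):
        known.append(i)
        cur = loss_with_known(known)
        scores[i] += prev - cur            # loss drop credited to feature i
        prev = cur
print(dict(enumerate(np.round(scores / n_perms, 4))))
```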
Online Non-Convex Optimization with Imperfect Feedback | https://papers.nips.cc/paper_files/paper/2020/hash/c7c46d4baf816bfb07c7f3bf96d88544-Abstract.html | Amélie Héliou, Matthieu Martin, Panayotis Mertikopoulos, Thibaud Rahier | https://papers.nips.cc/paper_files/paper/2020/hash/c7c46d4baf816bfb07c7f3bf96d88544-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c7c46d4baf816bfb07c7f3bf96d88544-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11169-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c7c46d4baf816bfb07c7f3bf96d88544-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c7c46d4baf816bfb07c7f3bf96d88544-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c7c46d4baf816bfb07c7f3bf96d88544-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c7c46d4baf816bfb07c7f3bf96d88544-Supplemental.pdf | We consider the problem of online learning with non-convex losses. In terms of feedback, we assume that the learner observes – or otherwise constructs – an inexact model for the loss function encountered at each stage, and we propose a mixed-strategy learning policy based on dual averaging. In this general context, we derive a series of tight regret minimization guarantees, both for the learner’s static (external) regret, as well as the regret incurred against the best dynamic policy in hindsight. Subsequently, we apply this general template to the case where the learner only has access to the actual loss incurred at each stage of the process. This is achieved by means of a kernel-based estimator which generates an inexact model for each round’s loss function using only the learner’s realized losses as input. |
Co-Tuning for Transfer Learning | https://papers.nips.cc/paper_files/paper/2020/hash/c8067ad1937f728f51288b3eb986afaa-Abstract.html | Kaichao You, Zhi Kou, Mingsheng Long, Jianmin Wang | https://papers.nips.cc/paper_files/paper/2020/hash/c8067ad1937f728f51288b3eb986afaa-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c8067ad1937f728f51288b3eb986afaa-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11170-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c8067ad1937f728f51288b3eb986afaa-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c8067ad1937f728f51288b3eb986afaa-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c8067ad1937f728f51288b3eb986afaa-Review.html | null | Fine-tuning pre-trained deep neural networks (DNNs) to a target dataset, also known as transfer learning, is widely used in computer vision and NLP. Because task-specific layers mainly contain categorical information and categories vary with datasets, practitioners only \textit{partially} transfer pre-trained models by discarding task-specific layers and fine-tuning bottom layers. However, it is wasteful to simply discard task-specific parameters, which take up as much as $20\%$ of the total parameters in pre-trained models. To \textit{fully} transfer pre-trained models, we propose a two-step framework named \textbf{Co-Tuning}: (i) learn the relationship between source categories and target categories from the pre-trained model and calibrated predictions; (ii) target labels (one-hot labels), as well as source labels (probabilistic labels) translated by the category relationship, collaboratively supervise the fine-tuning process. A simple instantiation of the framework shows strong empirical results in four visual classification tasks and one NLP classification task, bringing up to $20\%$ relative improvement. While state-of-the-art fine-tuning techniques mainly focus on how to impose regularization when data are not abundant, Co-Tuning works not only in medium-scale datasets (100 samples per class) but also in large-scale datasets (1000 samples per class) where regularization-based methods bring no gains over the vanilla fine-tuning. Co-Tuning relies on a typically valid assumption that the pre-trained dataset is diverse enough, implying its broad applicability. |
Multifaceted Uncertainty Estimation for Label-Efficient Deep Learning | https://papers.nips.cc/paper_files/paper/2020/hash/c80d9ba4852b67046bee487bcd9802c0-Abstract.html | Weishi Shi, Xujiang Zhao, Feng Chen, Qi Yu | https://papers.nips.cc/paper_files/paper/2020/hash/c80d9ba4852b67046bee487bcd9802c0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c80d9ba4852b67046bee487bcd9802c0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11171-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c80d9ba4852b67046bee487bcd9802c0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c80d9ba4852b67046bee487bcd9802c0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c80d9ba4852b67046bee487bcd9802c0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c80d9ba4852b67046bee487bcd9802c0-Supplemental.pdf | We present a novel multi-source uncertainty prediction approach that enables deep learning (DL) models to be actively trained with much less labeled data. By leveraging the second-order uncertainty representation provided by subjective logic (SL), we conduct evidence-based theoretical analysis and formally decompose the predicted entropy over multiple classes into two distinct sources of uncertainty: vacuity and dissonance, caused by lack of evidence and conflict of strong evidence, respectively. The evidence based entropy decomposition provides deeper insights on the nature of uncertainty, which can help effectively explore a large and high-dimensional unlabeled data space. We develop a novel loss function that augments DL based evidence prediction with uncertainty anchor sample identification. The accurately estimated multiple sources of uncertainty are systematically integrated and dynamically balanced using a data sampling function for label-efficient active deep learning (ADL). Experiments conducted over both synthetic and real data and comparison with competitive AL methods demonstrate the effectiveness of the proposed ADL model. |
Continuous Surface Embeddings | https://papers.nips.cc/paper_files/paper/2020/hash/c81e728d9d4c2f636f067f89cc14862c-Abstract.html | Natalia Neverova, David Novotny, Marc Szafraniec, Vasil Khalidov, Patrick Labatut, Andrea Vedaldi | https://papers.nips.cc/paper_files/paper/2020/hash/c81e728d9d4c2f636f067f89cc14862c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c81e728d9d4c2f636f067f89cc14862c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11172-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c81e728d9d4c2f636f067f89cc14862c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c81e728d9d4c2f636f067f89cc14862c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c81e728d9d4c2f636f067f89cc14862c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c81e728d9d4c2f636f067f89cc14862c-Supplemental.pdf | In this work, we focus on the task of learning and representing dense correspondences in deformable object categories. While this problem has been considered before, solutions so far have been rather ad-hoc for specific object types (i.e., humans), often with significant manual work involved. However, scaling the geometry understanding to all objects in nature requires more automated approaches that can also express correspondences between related, but geometrically different objects. To this end, we propose a new, learnable image-based representation of dense correspondences. Our model predicts, for each pixel in a 2D image, an embedding vector of the corresponding vertex in the object mesh, therefore establishing dense correspondences between image pixels and 3D object geometry. We demonstrate that the proposed approach performs on par or better than the state-of-the-art methods for dense pose estimation for humans, while being conceptually simpler. We also collect a new in-the-wild dataset of dense correspondences for animal classes and demonstrate that our framework scales naturally to the new deformable object categories. |
Succinct and Robust Multi-Agent Communication With Temporal Message Control | https://papers.nips.cc/paper_files/paper/2020/hash/c82b013313066e0702d58dc70db033ca-Abstract.html | Sai Qian Zhang, Qi Zhang, Jieyu Lin | https://papers.nips.cc/paper_files/paper/2020/hash/c82b013313066e0702d58dc70db033ca-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c82b013313066e0702d58dc70db033ca-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11173-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c82b013313066e0702d58dc70db033ca-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c82b013313066e0702d58dc70db033ca-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c82b013313066e0702d58dc70db033ca-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c82b013313066e0702d58dc70db033ca-Supplemental.zip | Recent studies have shown that introducing communication between agents can significantly improve overall performance in cooperative Multi-agent reinforcement learning (MARL). However, existing communication schemes often require agents to exchange an excessive number of messages at run-time under a reliable communication channel, which hinders its practicality in many real-world situations. In this paper, we present \textit{Temporal Message Control} (TMC), a simple yet effective approach for achieving succinct and robust communication in MARL. TMC applies a temporal smoothing technique to drastically reduce the amount of information exchanged between agents. Experiments show that TMC can significantly reduce inter-agent communication overhead without impacting accuracy. Furthermore, TMC demonstrates much better robustness against transmission loss than existing approaches in lossy networking environments. |
Big Bird: Transformers for Longer Sequences | https://papers.nips.cc/paper_files/paper/2020/hash/c8512d142a2d849725f31a9a7a361ab9-Abstract.html | Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed | https://papers.nips.cc/paper_files/paper/2020/hash/c8512d142a2d849725f31a9a7a361ab9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11174-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c8512d142a2d849725f31a9a7a361ab9-Supplemental.pdf | Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. |
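To make the sparsity pattern in the entry above concrete, the sketch below builds a token-level attention mask from a sliding window, a few random connections per query, and a handful of global tokens, then applies a masked softmax with NumPy. The real model operates on blocks with trained projections and much longer sequences; the sizes and token-level granularity here are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, window, n_random, n_global = 64, 16, 3, 2, 1

mask = np.zeros((n, n), dtype=bool)
for i in range(n):
    mask[i, max(0, i - window):min(n, i + window + 1)] = True          # sliding window
    mask[i, rng.choice(n, size=n_random, replace=False)] = True        # random links
mask[:, :n_global] = True                                              # global tokens
mask[:n_global, :] = True

Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
scores = Q @ K.T / np.sqrt(d)
scores[~mask] = -1e9                       # attention restricted to the sparse graph
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
out = weights @ V
print(mask.mean(), out.shape)              # fraction of allowed edges, (64, 16)
```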
Neural Execution Engines: Learning to Execute Subroutines | https://papers.nips.cc/paper_files/paper/2020/hash/c8b9abffb45bf79a630fb613dcd23449-Abstract.html | Yujun Yan, Kevin Swersky, Danai Koutra, Parthasarathy Ranganathan, Milad Hashemi | https://papers.nips.cc/paper_files/paper/2020/hash/c8b9abffb45bf79a630fb613dcd23449-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c8b9abffb45bf79a630fb613dcd23449-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11175-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c8b9abffb45bf79a630fb613dcd23449-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c8b9abffb45bf79a630fb613dcd23449-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c8b9abffb45bf79a630fb613dcd23449-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c8b9abffb45bf79a630fb613dcd23449-Supplemental.pdf | A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms. This is evidenced by their inability to generalize to data distributions that are outside of their restricted training sets, namely larger inputs and unseen data. We study these generalization issues at the level of numerical subroutines that comprise common algorithms like sorting, shortest paths, and minimum spanning trees. First, we observe that transformer-based sequence-to-sequence models can learn subroutines like sorting a list of numbers, but their performance rapidly degrades as the length of lists grows beyond those found in the training set. We demonstrate that this is due to attention weights that lose fidelity with longer sequences, particularly when the input numbers are numerically similar. To address the issue, we propose a learned conditional masking mechanism, which enables the model to strongly generalize far outside of its training range with near-perfect accuracy on a variety of algorithms. Second, to generalize to unseen data, we show that encoding numbers with a binary representation leads to embeddings with rich structure once trained on downstream tasks like addition or multiplication. This allows the embedding to handle missing data by faithfully interpolating numbers not seen during training. |
Random Reshuffling: Simple Analysis with Vast Improvements | https://papers.nips.cc/paper_files/paper/2020/hash/c8cc6e90ccbff44c9cee23611711cdc4-Abstract.html | Konstantin Mishchenko, Ahmed Khaled, Peter Richtarik | https://papers.nips.cc/paper_files/paper/2020/hash/c8cc6e90ccbff44c9cee23611711cdc4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c8cc6e90ccbff44c9cee23611711cdc4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11176-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c8cc6e90ccbff44c9cee23611711cdc4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c8cc6e90ccbff44c9cee23611711cdc4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c8cc6e90ccbff44c9cee23611711cdc4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c8cc6e90ccbff44c9cee23611711cdc4-Supplemental.pdf | Random Reshuffling (RR) is an algorithm for minimizing finite-sum functions that utilizes iterative gradient descent steps in conjunction with data reshuffling. Often contrasted with its sibling Stochastic Gradient Descent (SGD), RR is usually faster in practice and enjoys significant popularity in convex and non-convex optimization. The convergence rate of RR has attracted substantial attention recently and, for strongly convex and smooth functions, it was shown to converge faster than SGD if 1) the stepsize is small, 2) the gradients are bounded, and 3) the number of epochs is large. We remove these 3 assumptions, improve the dependence on the condition number from $\kappa^2$ to $\kappa$ (resp.\ from $\kappa$ to $\sqrt{\kappa}$) and, in addition, show that RR has a different type of variance. We argue through theory and experiments that the new variance type gives an additional justification of the superior performance of RR. To go beyond strong convexity, we present several results for non-strongly convex and non-convex objectives. We show that in all cases, our theory improves upon existing literature. Finally, we prove fast convergence of the Shuffle-Once (SO) algorithm, which shuffles the data only once, at the beginning of the optimization process. Our theory for strongly convex objectives tightly matches the known lower bounds for both RR and SO and substantiates the common practical heuristic of shuffling once or only a few times. As a byproduct of our analysis, we also get new results for the Incremental Gradient algorithm (IG), which does not shuffle the data at all. |
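Since the entry above contrasts Random Reshuffling (RR), Shuffle-Once (SO), and the Incremental Gradient method (IG), here is a minimal NumPy sketch of those three data-ordering schemes on a toy least-squares problem; the problem, step size, and epoch count are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

def run(scheme="RR", epochs=50, lr=0.05):
    x = np.zeros(d)
    perm = rng.permutation(n)                    # fixed order, used by SO
    for _ in range(epochs):
        if scheme == "RR":
            perm = rng.permutation(n)            # reshuffle every epoch
        order = np.arange(n) if scheme == "IG" else perm
        for i in order:
            grad = (A[i] @ x - b[i]) * A[i]      # gradient of 0.5 * (a_i^T x - b_i)^2
            x -= lr * grad
    return 0.5 * np.mean((A @ x - b) ** 2)

for scheme in ("RR", "SO", "IG"):
    print(scheme, run(scheme))
```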
Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors | https://papers.nips.cc/paper_files/paper/2020/hash/c8d3a760ebab631565f8509d84b3b3f1-Abstract.html | Karl Pertsch, Oleh Rybkin, Frederik Ebert, Shenghao Zhou, Dinesh Jayaraman, Chelsea Finn, Sergey Levine | https://papers.nips.cc/paper_files/paper/2020/hash/c8d3a760ebab631565f8509d84b3b3f1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c8d3a760ebab631565f8509d84b3b3f1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11177-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c8d3a760ebab631565f8509d84b3b3f1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c8d3a760ebab631565f8509d84b3b3f1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c8d3a760ebab631565f8509d84b3b3f1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c8d3a760ebab631565f8509d84b3b3f1-Supplemental.pdf | The ability to predict and plan into the future is fundamental for agents acting in the world. To reach a faraway goal, we predict trajectories at multiple timescales, first devising a coarse plan towards the goal and then gradually filling in details. In contrast, current learning approaches for visual prediction and planning fail on long-horizon tasks as they generate predictions (1)~without considering goal information, and (2)~at the finest temporal resolution, one step at a time. In this work we propose a framework for visual prediction and planning that is able to overcome both of these limitations. First, we formulate the problem of predicting towards a goal and propose the corresponding class of latent space goal-conditioned predictors (GCPs). GCPs significantly improve planning efficiency by constraining the search space to only those trajectories that reach the goal. Further, we show how GCPs can be naturally formulated as hierarchical models that, given two observations, predict an observation between them, and by recursively subdividing each part of the trajectory generate complete sequences. This divide-and-conquer strategy is effective at long-term prediction, and enables us to design an effective hierarchical planning algorithm that optimizes trajectories in a coarse-to-fine manner. We show that by using both goal-conditioning and hierarchical prediction, GCPs enable us to solve visual planning tasks with much longer horizon than previously possible. See prediction and planning videos on the supplementary website: sites.google.com/view/video-gcp. |
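The coarse-to-fine recursion described in the entry above can be written down independently of any learned model: given two endpoint observations, predict one in between, then recurse on both halves. The sketch below uses noisy interpolation as a stand-in for the learned goal-conditioned predictor; it only illustrates the prediction order, not the paper's latent-variable model or planner.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_midpoint(o_start, o_goal):
    # Stand-in for a learned goal-conditioned predictor of an in-between observation.
    return 0.5 * (o_start + o_goal) + 0.05 * rng.normal(size=o_start.shape)

def hierarchical_rollout(o_start, o_goal, depth):
    """Fill in a trajectory of length 2**depth + 1 in coarse-to-fine order."""
    if depth == 0:
        return [o_start, o_goal]
    mid = predict_midpoint(o_start, o_goal)
    left = hierarchical_rollout(o_start, mid, depth - 1)
    right = hierarchical_rollout(mid, o_goal, depth - 1)
    return left[:-1] + right                      # drop the duplicated midpoint

traj = hierarchical_rollout(np.zeros(3), np.ones(3), depth=4)
print(len(traj))                                  # 17 observations from start to goal
```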
Statistical Optimal Transport posed as Learning Kernel Embedding | https://papers.nips.cc/paper_files/paper/2020/hash/c8ecfaea0b7e3aa83b017a786d53b9e8-Abstract.html | Saketha Nath Jagarlapudi, Pratik Kumar Jawanpuria | https://papers.nips.cc/paper_files/paper/2020/hash/c8ecfaea0b7e3aa83b017a786d53b9e8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c8ecfaea0b7e3aa83b017a786d53b9e8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11178-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c8ecfaea0b7e3aa83b017a786d53b9e8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c8ecfaea0b7e3aa83b017a786d53b9e8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c8ecfaea0b7e3aa83b017a786d53b9e8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c8ecfaea0b7e3aa83b017a786d53b9e8-Supplemental.zip | The objective in statistical Optimal Transport (OT) is to consistently estimate the optimal transport plan/map solely using samples from the given source and target marginal distributions. This work takes the novel approach of posing statistical OT as that of learning the transport plan's kernel mean embedding from sample based estimates of marginal embeddings. The proposed estimator controls overfitting by employing maximum mean discrepancy based regularization, which is complementary to $\phi$-divergence (entropy) based regularization popularly employed in existing estimators. A key result is that, under very mild conditions, $\epsilon$-optimal recovery of the transport plan as well as the Barycentric-projection based transport map is possible with a sample complexity that is completely dimension-free. Moreover, the implicit smoothing in the kernel mean embeddings enables out-of-sample estimation. An appropriate representer theorem is proved leading to a kernelized convex formulation for the estimator, which can then be potentially used to perform OT even in non-standard domains. Empirical results illustrate the efficacy of the proposed approach.
|
Dual-Resolution Correspondence Networks | https://papers.nips.cc/paper_files/paper/2020/hash/c91591a8d461c2869b9f535ded3e213e-Abstract.html | Xinghui Li, Kai Han, Shuda Li, Victor Prisacariu | https://papers.nips.cc/paper_files/paper/2020/hash/c91591a8d461c2869b9f535ded3e213e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c91591a8d461c2869b9f535ded3e213e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11179-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c91591a8d461c2869b9f535ded3e213e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c91591a8d461c2869b9f535ded3e213e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c91591a8d461c2869b9f535ded3e213e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c91591a8d461c2869b9f535ded3e213e-Supplemental.pdf | We tackle the problem of establishing dense pixel-wise correspondences between a pair of images. In this work, we introduce Dual-Resolution Correspondence Networks (DualRC-Net), to obtain pixel-wise correspondences in a coarse-to-fine manner. DualRC-Net extracts both coarse- and fine-resolution feature maps. The coarse maps are used to produce a full but coarse 4D correlation tensor, which is then refined by a learnable neighbourhood consensus module. The fine-resolution feature maps are used to obtain the final dense correspondences guided by the refined coarse 4D correlation tensor. The selected coarse-resolution matching scores allow the fine-resolution features to focus only on a limited number of possible matches with high confidence. In this way, DualRC-Net dramatically increases matching reliability and localisation accuracy, while avoiding applying the expensive 4D convolution kernels to fine-resolution feature maps. We comprehensively evaluate our method on large-scale public benchmarks including HPatches, InLoc, and Aachen Day-Night. It achieves state-of-the-art results on all of them. |
Advances in Black-Box VI: Normalizing Flows, Importance Weighting, and Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/c91e3483cf4f90057d02aa492d2b25b1-Abstract.html | Abhinav Agrawal, Daniel R. Sheldon, Justin Domke | https://papers.nips.cc/paper_files/paper/2020/hash/c91e3483cf4f90057d02aa492d2b25b1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c91e3483cf4f90057d02aa492d2b25b1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11180-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c91e3483cf4f90057d02aa492d2b25b1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c91e3483cf4f90057d02aa492d2b25b1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c91e3483cf4f90057d02aa492d2b25b1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c91e3483cf4f90057d02aa492d2b25b1-Supplemental.pdf | Recent research has seen several advances relevant to black-box VI, but the current state of automatic posterior inference is unclear. One such advance is the use of normalizing flows to define flexible posterior densities for deep latent variable models. Another direction is the integration of Monte-Carlo methods to serve two purposes: first, to obtain tighter variational objectives for optimization, and second, to define enriched variational families through sampling. However, both flows and variational Monte-Carlo methods remain relatively unexplored for black-box VI. Moreover, on a pragmatic front, there are several optimization considerations like step-size scheme, parameter initialization, and choice of gradient estimators, for which there is no clear guidance in the existing literature. In this paper, we postulate that black-box VI is best addressed through a careful combination of numerous algorithmic components. We evaluate components relating to optimization, flows, and Monte-Carlo methods on a benchmark of 30 models from the Stan model library. The combination of these algorithmic components significantly advances the state-of-the-art "out of the box" variational inference. |
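One of the Monte-Carlo components mentioned in the entry above, the importance-weighted variational bound, is easy to demonstrate on a one-dimensional Gaussian toy model, where the bound can be seen tightening as the number of importance samples K grows. The model, variational parameters, and sample sizes below are illustrative assumptions only, not the paper's benchmark or code.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = 1.3                                  # a single observation
mu, sigma = 0.8, 0.6                     # variational q(z) = N(mu, sigma^2)
# model: z ~ N(0, 1), x | z ~ N(z, 0.5^2)

def iw_bound(K, n_rep=2000):
    """Monte-Carlo estimate of the K-sample importance-weighted bound on log p(x)."""
    z = rng.normal(mu, sigma, size=(n_rep, K))
    log_w = norm.logpdf(z, 0, 1) + norm.logpdf(x, z, 0.5) - norm.logpdf(z, mu, sigma)
    m = log_w.max(axis=1, keepdims=True)
    return float(np.mean(m.squeeze() + np.log(np.mean(np.exp(log_w - m), axis=1))))

for K in (1, 5, 50):
    print(K, iw_bound(K))                # the bound tightens toward log p(x) as K grows
```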
f-Divergence Variational Inference | https://papers.nips.cc/paper_files/paper/2020/hash/c928d86ff00aeb89a39bd4a80e652a38-Abstract.html | Neng Wan, Dapeng Li, NAIRA HOVAKIMYAN | https://papers.nips.cc/paper_files/paper/2020/hash/c928d86ff00aeb89a39bd4a80e652a38-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c928d86ff00aeb89a39bd4a80e652a38-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11181-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c928d86ff00aeb89a39bd4a80e652a38-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c928d86ff00aeb89a39bd4a80e652a38-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c928d86ff00aeb89a39bd4a80e652a38-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c928d86ff00aeb89a39bd4a80e652a38-Supplemental.pdf | This paper introduces the f-divergence variational inference (f-VI) that generalizes variational inference to all f-divergences. Initiated from minimizing a crafty surrogate f-divergence that shares the statistical consistency with the f-divergence, the f-VI framework not only unifies a number of existing VI methods, e.g. Kullback–Leibler VI, Renyi's alpha-VI, and chi-VI, but offers a standardized toolkit for VI subject to arbitrary divergences from f-divergence family. A general f-variational bound is derived and provides a sandwich estimate of marginal likelihood (or evidence). The development of the f-VI unfolds with a stochastic optimization scheme that utilizes the reparameterization trick, importance weighting and Monte Carlo approximation; a mean-field approximation scheme that generalizes the well-known coordinate ascent variational inference (CAVI) is also proposed for f-VI. Empirical examples, including variational autoencoders and Bayesian neural networks, are provided to demonstrate the effectiveness and the wide applicability of f-VI. |
Unfolding recurrence by Green’s functions for optimized reservoir computing | https://papers.nips.cc/paper_files/paper/2020/hash/c94a589bdd47870b1d74b258d1ce3b33-Abstract.html | Sandra Nestler, Christian Keup, David Dahmen, Matthieu Gilson, Holger Rauhut, Moritz Helias | https://papers.nips.cc/paper_files/paper/2020/hash/c94a589bdd47870b1d74b258d1ce3b33-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c94a589bdd47870b1d74b258d1ce3b33-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11182-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c94a589bdd47870b1d74b258d1ce3b33-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c94a589bdd47870b1d74b258d1ce3b33-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c94a589bdd47870b1d74b258d1ce3b33-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c94a589bdd47870b1d74b258d1ce3b33-Supplemental.zip | Cortical networks are strongly recurrent, and neurons have intrinsic temporal dynamics. This sets them apart from deep feed-forward networks. Despite the tremendous progress in the application of deep feed-forward networks and their theoretical understanding, it remains unclear how the interplay of recurrence and non-linearities in recurrent cortical networks contributes to their function. The purpose of this work is to present a solvable recurrent network model that links to feed forward networks. By perturbative methods we transform the time-continuous, recurrent dynamics into an effective feed-forward structure of linear and non-linear temporal kernels. The resulting analytical expressions allow us to build optimal time-series classifiers from random reservoir networks. Firstly, this allows us to optimize not only the readout vectors, but also the input projection, demonstrating a strong potential performance gain. Secondly, the analysis exposes how the second order stimulus statistics is a crucial element that interacts with the non-linearity of the dynamics and boosts performance. |
The Dilemma of TriHard Loss and an Element-Weighted TriHard Loss for Person Re-Identification | https://papers.nips.cc/paper_files/paper/2020/hash/c96c08f8bb7960e11a1239352a479053-Abstract.html | Yihao Lv, Youzhi Gu, Liu Xinggao | https://papers.nips.cc/paper_files/paper/2020/hash/c96c08f8bb7960e11a1239352a479053-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c96c08f8bb7960e11a1239352a479053-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11183-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c96c08f8bb7960e11a1239352a479053-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c96c08f8bb7960e11a1239352a479053-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c96c08f8bb7960e11a1239352a479053-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c96c08f8bb7960e11a1239352a479053-Supplemental.pdf | Triplet loss with batch hard mining (TriHard loss) is an important variation of triplet loss inspired by the idea that hard triplets improve the performance of metric learning networks. However, there is a dilemma in the training process. The hard negative samples contain various characteristics quite similar to those of anchors and positive samples in a batch. Features of these characteristics should be clustered between anchors and positive samples, while they are also utilized to repel anchors from hard negative samples. This is harmful for learning mutual features within classes. Several methods to alleviate the dilemma are designed and tested. Meanwhile, an element-weighted TriHard loss is proposed to selectively enlarge the distance between the partial elements of feature vectors which represent the differing characteristics between anchors and hard negative samples. Extensive evaluations are conducted on Market1501 and MSMT17 datasets and the results achieve state-of-the-art performance over public baselines. |
Disentangling by Subspace Diffusion | https://papers.nips.cc/paper_files/paper/2020/hash/c9f029a6a1b20a8408f372351b321dd8-Abstract.html | David Pfau, Irina Higgins, Alex Botev, Sébastien Racanière | https://papers.nips.cc/paper_files/paper/2020/hash/c9f029a6a1b20a8408f372351b321dd8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c9f029a6a1b20a8408f372351b321dd8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11184-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c9f029a6a1b20a8408f372351b321dd8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c9f029a6a1b20a8408f372351b321dd8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c9f029a6a1b20a8408f372351b321dd8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c9f029a6a1b20a8408f372351b321dd8-Supplemental.zip | We present a novel nonparametric algorithm for symmetry-based disentangling of data manifolds, the Geometric Manifold Component Estimator (GEOMANCER). GEOMANCER provides a partial answer to the question posed by Higgins et al.(2018): is it possible to learn how to factorize a Lie group solely from observations of the orbit of an object it acts on? We show that fully unsupervised factorization of a data manifold is possible if the true metric of the manifold is known and each factor manifold has nontrivial holonomy – for example, rotation in 3D. Our algorithm works by estimating the subspaces that are invariant under random walk diffusion, giving an approximation to the de Rham decomposition from differential geometry. We demonstrate the efficacy of GEOMANCER on several complex synthetic manifolds. Our work reduces the question of whether unsupervised disentangling is possible to the question of whether unsupervised metric learning is possible, providing a unifying insight into the geometric nature of representation learning. |
Towards Neural Programming Interfaces | https://papers.nips.cc/paper_files/paper/2020/hash/c9f06bc7b46d0247a91c8fc665c13d0e-Abstract.html | Zachary Brown, Nathaniel Robinson, David Wingate, Nancy Fulda | https://papers.nips.cc/paper_files/paper/2020/hash/c9f06bc7b46d0247a91c8fc665c13d0e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c9f06bc7b46d0247a91c8fc665c13d0e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11185-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c9f06bc7b46d0247a91c8fc665c13d0e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c9f06bc7b46d0247a91c8fc665c13d0e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c9f06bc7b46d0247a91c8fc665c13d0e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c9f06bc7b46d0247a91c8fc665c13d0e-Supplemental.zip | It is notoriously difficult to control the behavior of artificial neural networks such as generative neural language models. We recast the problem of controlling natural language generation as that of learning to interface with a pretrained language model, just as Application Programming Interfaces (APIs) control the behavior of programs by altering hyperparameters. In this new paradigm, a specialized neural network (called a Neural Programming Interface or NPI) learns to interface with a pretrained language model by manipulating the hidden activations of the pretrained model to produce desired outputs. Importantly, no permanent changes are made to the weights of the original model, allowing us to re-purpose pretrained models for new tasks without overwriting any aspect of the language model. We also contribute a new data set construction algorithm and GAN-inspired loss function that allows us to train NPI models to control outputs of autoregressive transformers. In experiments against other state-of-the-art approaches, we demonstrate the efficacy of our methods using OpenAI’s GPT-2 model, successfully controlling noun selection, topic aversion, offensive speech filtering, and other aspects of language while largely maintaining the controlled model's fluency under deterministic settings. |
Discovering Symbolic Models from Deep Learning with Inductive Biases | https://papers.nips.cc/paper_files/paper/2020/hash/c9f2f917078bd2db12f23c3b413d9cba-Abstract.html | Miles Cranmer, Alvaro Sanchez Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, Shirley Ho | https://papers.nips.cc/paper_files/paper/2020/hash/c9f2f917078bd2db12f23c3b413d9cba-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c9f2f917078bd2db12f23c3b413d9cba-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11186-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c9f2f917078bd2db12f23c3b413d9cba-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c9f2f917078bd2db12f23c3b413d9cba-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c9f2f917078bd2db12f23c3b413d9cba-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c9f2f917078bd2db12f23c3b413d9cba-Supplemental.pdf | We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs). The technique works as follows: we first encourage sparse latent representations when we train a GNN in a supervised setting, then we apply symbolic regression to components of the learned model to extract explicit physical relations. We find the correct known equations, including force laws and Hamiltonians, can be extracted from the neural network. We then apply our method to a non-trivial cosmology example—a detailed dark matter simulation—and discover a new analytic formula which can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. The symbolic expressions extracted from the GNN using our technique also generalized to out-of-distribution-data better than the GNN itself. Our approach offers alternative directions for interpreting neural networks and discovering novel physical principles from the representations they learn. |
Real World Games Look Like Spinning Tops | https://papers.nips.cc/paper_files/paper/2020/hash/ca172e964907a97d5ebd876bfdd4adbd-Abstract.html | Wojciech M. Czarnecki, Gauthier Gidel, Brendan Tracey, Karl Tuyls, Shayegan Omidshafiei, David Balduzzi, Max Jaderberg | https://papers.nips.cc/paper_files/paper/2020/hash/ca172e964907a97d5ebd876bfdd4adbd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ca172e964907a97d5ebd876bfdd4adbd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11187-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ca172e964907a97d5ebd876bfdd4adbd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ca172e964907a97d5ebd876bfdd4adbd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ca172e964907a97d5ebd876bfdd4adbd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ca172e964907a97d5ebd876bfdd4adbd-Supplemental.zip | This paper investigates the geometrical properties of real world games (e.g. Tic-Tac-Toe, Go, StarCraft II). We hypothesise that their geometrical structure resembles a spinning top, with the upright axis representing transitive strength, and the radial axis representing the non-transitive dimension, which corresponds to the number of cycles that exist at a particular transitive strength. We prove the existence of this geometry for a wide class of real world games by exposing their temporal nature. Additionally, we show that this unique structure also has consequences for learning - it clarifies why populations of strategies are necessary for training of agents, and how population size relates to the structure of the game. Finally, we empirically validate these claims by using a selection of nine real world two-player zero-sum symmetric games, showing 1) the spinning top structure is revealed and can be easily reconstructed by using a new method of Nash clustering to measure the interaction between transitive and cyclical strategy behaviour, and 2) the effect that population size has on the convergence of learning in these games. |
Cooperative Heterogeneous Deep Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ca3a9be77f7e88708afb20c8cdf44b60-Abstract.html | Han Zheng, Pengfei Wei, Jing Jiang, Guodong Long, Qinghua Lu, Chengqi Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/ca3a9be77f7e88708afb20c8cdf44b60-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ca3a9be77f7e88708afb20c8cdf44b60-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11188-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ca3a9be77f7e88708afb20c8cdf44b60-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ca3a9be77f7e88708afb20c8cdf44b60-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ca3a9be77f7e88708afb20c8cdf44b60-Review.html | null | Numerous deep reinforcement learning agents have been proposed, and each of them has its strengths and flaws. In this work, we present a Cooperative Heterogeneous Deep Reinforcement Learning (CHDRL) framework that can learn a policy by integrating the advantages of heterogeneous agents. Specifically, we propose a cooperative learning framework that classifies heterogeneous agents into two classes: global agents and local agents. Global agents are off-policy agents that can utilize experiences from the other agents. Local agents are either on-policy agents or population-based evolutionary algorithms (EAs) agents that can explore the local area effectively. We employ global agents, which are sample-efficient, to guide the learning of local agents so that local agents can benefit from the sample-efficient agents and simultaneously maintain their advantages, e.g., stability. Global agents also benefit from effective local searches. Experimental studies on a range of continuous control tasks from the Mujoco benchmark show that CHDRL achieves better performance compared with state-of-the-art baselines. |
Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization | https://papers.nips.cc/paper_files/paper/2020/hash/ca4b5656b7e193e6bb9064c672ac8dce-Abstract.html | Hung-Jen Chen, An-Chieh Cheng, Da-Cheng Juan, Wei Wei, Min Sun | https://papers.nips.cc/paper_files/paper/2020/hash/ca4b5656b7e193e6bb9064c672ac8dce-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11189-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Supplemental.pdf | Online continual learning is a challenging scenario where a model needs to learn from a continuous stream of data without revisiting any previously encountered data instances. The phenomenon of catastrophic forgetting is worsened since the model should address forgetting not only at the task level but also at the data-instance level within the same task. To mitigate this, we leverage the concept of "instance awareness" in the neural network, where each data instance is classified by a path in the network searched by the controller from a meta-graph. To preserve the knowledge we learn from previous instances, we propose a method to protect the path by preventing the gradient updates of one instance from overriding past updates calculated from previous instances if these instances are not similar. On the other hand, it also encourages fine-tuning the path if the incoming instance shares similarity with previous instances. The mechanism of selecting paths according to instance similarity is naturally determined by the controller, which is compact and updated online. Experimental results show that the proposed method outperforms the state of the art in online continual learning. Furthermore, the proposed method is evaluated against a realistic setting where the boundaries between tasks are blurred. Experimental results confirm that the proposed method outperforms the state of the art on CIFAR-10, CIFAR-100, and Tiny-ImageNet. |
ImpatientCapsAndRuns: Approximately Optimal Algorithm Configuration from an Infinite Pool | https://papers.nips.cc/paper_files/paper/2020/hash/ca5520b5672ea120b23bde75c46e76c6-Abstract.html | Gellert Weisz, András György, Wei-I Lin, Devon Graham, Kevin Leyton-Brown, Csaba Szepesvari, Brendan Lucier | https://papers.nips.cc/paper_files/paper/2020/hash/ca5520b5672ea120b23bde75c46e76c6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ca5520b5672ea120b23bde75c46e76c6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11190-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ca5520b5672ea120b23bde75c46e76c6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ca5520b5672ea120b23bde75c46e76c6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ca5520b5672ea120b23bde75c46e76c6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ca5520b5672ea120b23bde75c46e76c6-Supplemental.pdf | Algorithm configuration procedures optimize parameters of a given algorithm to perform well over a distribution of inputs. Recent theoretical work focused on the case of selecting between a small number of alternatives. In practice, parameter spaces are often very large or infinite, and so successful heuristic procedures discard parameters ``impatiently'', based on very few observations. Inspired by this idea, we introduce ImpatientCapsAndRuns, which quickly discards less promising configurations, significantly speeding up the search procedure compared to previous algorithms with theoretical guarantees, while still achieving optimal runtime up to logarithmic factors under mild assumptions. Experimental results demonstrate a practical improvement. |
Dense Correspondences between Human Bodies via Learning Transformation Synchronization on Graphs | https://papers.nips.cc/paper_files/paper/2020/hash/ca7be8306ecc3f5fa30ff2c41e64fa7b-Abstract.html | Xiangru Huang, Haitao Yang, Etienne Vouga, Qixing Huang | https://papers.nips.cc/paper_files/paper/2020/hash/ca7be8306ecc3f5fa30ff2c41e64fa7b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ca7be8306ecc3f5fa30ff2c41e64fa7b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11191-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ca7be8306ecc3f5fa30ff2c41e64fa7b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ca7be8306ecc3f5fa30ff2c41e64fa7b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ca7be8306ecc3f5fa30ff2c41e64fa7b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ca7be8306ecc3f5fa30ff2c41e64fa7b-Supplemental.pdf | We introduce an approach for establishing dense correspondences between partial scans of human models and a complete template model. Our approach's key novelty lies in formulating dense correspondence computation as initializing and synchronizing local transformations between the scan and the template model. We introduce an optimization formulation for synchronizing transformations among a graph of the input scan, which automatically enforces smoothness of correspondences and recovers the underlying articulated deformations. We then show how to convert the iterative optimization procedure among a graph of the input scan into an end-to-end trainable network. The network design utilizes additional trainable parameters to break the barrier of the original optimization formulation's exact and robust recovery conditions. Experimental results on benchmark datasets demonstrate that our approach considerably outperforms baseline approaches in accuracy and robustness. |
Reasoning about Uncertainties in Discrete-Time Dynamical Systems using Polynomial Forms. | https://papers.nips.cc/paper_files/paper/2020/hash/ca886eb9edb61a42256192745c72cd79-Abstract.html | Sriram Sankaranarayanan, Yi Chou, Eric Goubault, Sylvie Putot | https://papers.nips.cc/paper_files/paper/2020/hash/ca886eb9edb61a42256192745c72cd79-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ca886eb9edb61a42256192745c72cd79-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11192-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ca886eb9edb61a42256192745c72cd79-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ca886eb9edb61a42256192745c72cd79-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ca886eb9edb61a42256192745c72cd79-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ca886eb9edb61a42256192745c72cd79-Supplemental.pdf | In this paper, we propose polynomial forms to represent distributions of state variables over time for discrete-time stochastic dynamical systems. This problem arises in a variety of applications in areas ranging from biology to robotics. Our approach allows us to rigorously represent the probability distribution of state variables over time, and provide guaranteed bounds on the expectations, moments and probabilities of tail events involving the state variables. First, we recall ideas from interval arithmetic, and use them to rigorously represent the state variables at time t as a function of the initial state variables and noise symbols that model the random exogenous inputs encountered before time t. Next, we show how concentration of measure inequalities can be employed to prove rigorous bounds on the tail probabilities of these state variables. We present interesting applications that demonstrate how our approach can be useful in some situations to establish mathematically guaranteed bounds that are of a different nature from those obtained through simulations with pseudo-random numbers. |
Applications of Common Entropy for Causal Inference | https://papers.nips.cc/paper_files/paper/2020/hash/cae7115f44837c806c9b23ed00a1a28a-Abstract.html | Murat Kocaoglu, Sanjay Shakkottai, Alexandros G. Dimakis, Constantine Caramanis, Sriram Vishwanath | https://papers.nips.cc/paper_files/paper/2020/hash/cae7115f44837c806c9b23ed00a1a28a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cae7115f44837c806c9b23ed00a1a28a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11193-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cae7115f44837c806c9b23ed00a1a28a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cae7115f44837c806c9b23ed00a1a28a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cae7115f44837c806c9b23ed00a1a28a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cae7115f44837c806c9b23ed00a1a28a-Supplemental.pdf | We study the problem of discovering the simplest latent variable that can make two observed discrete variables conditionally independent. The minimum entropy required for such a latent is known as common entropy in information theory. We extend this notion to Renyi common entropy by minimizing the Renyi entropy of the latent variable. To efficiently compute common entropy, we propose an iterative algorithm that can be used to discover the trade-off between the entropy of the latent variable and the conditional mutual information of the observed variables. We show two applications of common entropy in causal inference: First, under the assumption that there are no low-entropy mediators, it can be used to distinguish direct causation from spurious correlation among almost all joint distributions on simple causal graphs with two observed variables. Second, common entropy can be used to improve constraint-based methods such as PC or FCI algorithms in the small-sample regime, where these methods are known to struggle. We propose a modification to these constraint-based methods to assess if a separating set found by these algorithms is valid using common entropy. We finally evaluate our algorithms on synthetic and real data to establish their performance. |
SGD with shuffling: optimal rates without component convexity and large epoch requirements | https://papers.nips.cc/paper_files/paper/2020/hash/cb8acb1dc9821bf74e6ca9068032d623-Abstract.html | Kwangjun Ahn, Chulhee Yun, Suvrit Sra | https://papers.nips.cc/paper_files/paper/2020/hash/cb8acb1dc9821bf74e6ca9068032d623-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cb8acb1dc9821bf74e6ca9068032d623-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11194-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cb8acb1dc9821bf74e6ca9068032d623-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cb8acb1dc9821bf74e6ca9068032d623-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cb8acb1dc9821bf74e6ca9068032d623-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cb8acb1dc9821bf74e6ca9068032d623-Supplemental.pdf | We study without-replacement SGD for solving finite-sum optimization problems. Specifically, depending on how the indices of the finite-sum are shuffled, we consider the RandomShuffle (shuffle at the beginning of each epoch) and SingleShuffle (shuffle only once) algorithms. First, we establish minimax optimal convergence rates of these algorithms up to poly-log factors. Notably, our analysis is general enough to cover gradient dominated nonconvex costs, and does not rely on the convexity of individual component functions unlike existing optimal convergence results. Secondly, assuming convexity of the individual components, we further sharpen the tight convergence results for RandomShuffle by removing the drawbacks common to all prior arts: large number of epochs required for the results to hold, and extra poly-log factor gaps to the lower bound. |
Unsupervised Joint k-node Graph Representations with Compositional Energy-Based Models | https://papers.nips.cc/paper_files/paper/2020/hash/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-Abstract.html | Leonardo Cotta, Carlos H. C. Teixeira, Ananthram Swami, Bruno Ribeiro | https://papers.nips.cc/paper_files/paper/2020/hash/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11195-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cba0a4ee5ccd02fda0fe3f9a3e7b89fe-Supplemental.pdf | Existing Graph Neural Network (GNN) methods that learn inductive unsupervised graph representations focus on learning node and edge representations by predicting observed edges in the graph. Although such approaches have shown advances in downstream node classification tasks, they are ineffective in jointly representing larger k-node sets, k > 2. We propose MHM-GNN, an inductive unsupervised graph representation approach that combines joint k-node representations with energy-based models (hypergraph Markov networks) and GNNs. To address the intractability of the loss that arises from this combination, we endow our optimization with a loss upper bound using a finite-sample unbiased Markov Chain Monte Carlo estimator. Our experiments show that the unsupervised joint k-node representations of MHM-GNN produce better unsupervised representations than existing approaches from the literature. |
Neural Manifold Ordinary Differential Equations | https://papers.nips.cc/paper_files/paper/2020/hash/cbf8710b43df3f2c1553e649403426df-Abstract.html | Aaron Lou, Derek Lim, Isay Katsman, Leo Huang, Qingxuan Jiang, Ser Nam Lim, Christopher M. De Sa | https://papers.nips.cc/paper_files/paper/2020/hash/cbf8710b43df3f2c1553e649403426df-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cbf8710b43df3f2c1553e649403426df-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11196-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cbf8710b43df3f2c1553e649403426df-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cbf8710b43df3f2c1553e649403426df-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cbf8710b43df3f2c1553e649403426df-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cbf8710b43df3f2c1553e649403426df-Supplemental.pdf | To better conform to data geometry, recent deep generative modelling techniques adapt Euclidean constructions to non-Euclidean spaces. In this paper, we study normalizing flows on manifolds. Previous work has developed flow models for specific cases; however, these advancements hand craft layers on a manifold-by-manifold basis, restricting generality and inducing cumbersome design constraints. We overcome these issues by introducing Neural Manifold Ordinary Differential Equations, a manifold generalization of Neural ODEs, which enables the construction of Manifold Continuous Normalizing Flows (MCNFs). MCNFs require only local geometry (therefore generalizing to arbitrary manifolds) and compute probabilities with continuous change of variables (allowing for a simple and expressive flow construction). We find that leveraging continuous manifold dynamics produces a marked improvement for both density estimation and downstream tasks. |
CO-Optimal Transport | https://papers.nips.cc/paper_files/paper/2020/hash/cc384c68ad503482fb24e6d1e3b512ae-Abstract.html | Vayer Titouan, Ievgen Redko, Rémi Flamary, Nicolas Courty | https://papers.nips.cc/paper_files/paper/2020/hash/cc384c68ad503482fb24e6d1e3b512ae-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cc384c68ad503482fb24e6d1e3b512ae-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11197-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cc384c68ad503482fb24e6d1e3b512ae-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cc384c68ad503482fb24e6d1e3b512ae-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cc384c68ad503482fb24e6d1e3b512ae-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cc384c68ad503482fb24e6d1e3b512ae-Supplemental.pdf | Optimal transport (OT) is a powerful geometric and probabilistic tool for finding correspondences and measuring similarity between two distributions. Yet, its original formulation relies on the existence of a cost function between the samples of the two distributions, which makes it impractical when they are supported on different spaces. To circumvent this limitation, we propose a novel OT problem, named COOT for CO-Optimal Transport, that simultaneously optimizes two transport maps between both samples and features, contrary to other approaches that either discard the individual features by focusing on pairwise distances between samples or need to model explicitly the relations between them. We provide a thorough theoretical analysis of our problem, establish its rich connections with other OT-based distances and demonstrate its versatility with two machine learning applications in heterogeneous domain adaptation and co-clustering/data summarization, where COOT leads to performance improvements over the state-of-the-art methods. |
Continuous Meta-Learning without Tasks | https://papers.nips.cc/paper_files/paper/2020/hash/cc3f5463bc4d26bc38eadc8bcffbc654-Abstract.html | James Harrison, Apoorva Sharma, Chelsea Finn, Marco Pavone | https://papers.nips.cc/paper_files/paper/2020/hash/cc3f5463bc4d26bc38eadc8bcffbc654-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cc3f5463bc4d26bc38eadc8bcffbc654-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11198-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cc3f5463bc4d26bc38eadc8bcffbc654-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cc3f5463bc4d26bc38eadc8bcffbc654-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cc3f5463bc4d26bc38eadc8bcffbc654-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cc3f5463bc4d26bc38eadc8bcffbc654-Supplemental.pdf | Meta-learning is a promising strategy for learning to efficiently learn using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with unsegmented time series data. We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme. The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on three nonlinear meta-regression benchmarks as well as two meta-image-classification benchmarks. |
A mathematical theory of cooperative communication | https://papers.nips.cc/paper_files/paper/2020/hash/cc58f7abf0b0cf2d5ac95ab60e4f14e9-Abstract.html | Pei Wang, Junqi Wang, Pushpi Paranamana, Patrick Shafto | https://papers.nips.cc/paper_files/paper/2020/hash/cc58f7abf0b0cf2d5ac95ab60e4f14e9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cc58f7abf0b0cf2d5ac95ab60e4f14e9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11199-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cc58f7abf0b0cf2d5ac95ab60e4f14e9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cc58f7abf0b0cf2d5ac95ab60e4f14e9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cc58f7abf0b0cf2d5ac95ab60e4f14e9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cc58f7abf0b0cf2d5ac95ab60e4f14e9-Supplemental.pdf | Cooperative communication plays a central role in theories of human cognition, language, development, culture, and human-robot interaction. Prior models of cooperative communication are algorithmic in nature and do not shed light on why cooperation may yield effective belief transmission and what limitations may arise due to differences between beliefs of agents. Through a connection to the theory of optimal transport, we establish a mathematical framework for cooperative communication. We derive prior models as special cases, statistical interpretations of belief transfer plans, and proofs of robustness and instability. Computational simulations support and elaborate our theoretical results, and demonstrate fit to human behavior. The results show that cooperative communication provably enables effective, robust belief transmission which is required to explain feats of human learning and improve human-machine interaction. |
Penalized Langevin dynamics with vanishing penalty for smooth and log-concave targets | https://papers.nips.cc/paper_files/paper/2020/hash/cc75c256acc04ce25a291c4b7a9856c0-Abstract.html | Avetik Karagulyan, Arnak Dalalyan | https://papers.nips.cc/paper_files/paper/2020/hash/cc75c256acc04ce25a291c4b7a9856c0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cc75c256acc04ce25a291c4b7a9856c0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11200-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cc75c256acc04ce25a291c4b7a9856c0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cc75c256acc04ce25a291c4b7a9856c0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cc75c256acc04ce25a291c4b7a9856c0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cc75c256acc04ce25a291c4b7a9856c0-Supplemental.pdf | We study the problem of sampling from a probability distribution on $\mathbb R^p$ defined via a convex and smooth potential function. We first consider a continuous-time diffusion-type process, termed Penalized Langevin dynamics (PLD), the drift of which is the negative gradient of the potential plus a linear penalty that vanishes when time goes to infinity. An upper bound on the Wasserstein-2 distance between the distribution of the PLD at time $t$ and the target is established. This upper bound highlights the influence of the speed of decay of the penalty on the accuracy of approximation. As a consequence, in the low-temperature limit we infer a new result on the convergence of the penalized gradient flow for the optimization problem. |
Learning Invariances in Neural Networks from Training Data | https://papers.nips.cc/paper_files/paper/2020/hash/cc8090c4d2791cdd9cd2cb3c24296190-Abstract.html | Gregory Benton, Marc Finzi, Pavel Izmailov, Andrew G. Wilson | https://papers.nips.cc/paper_files/paper/2020/hash/cc8090c4d2791cdd9cd2cb3c24296190-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cc8090c4d2791cdd9cd2cb3c24296190-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11201-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cc8090c4d2791cdd9cd2cb3c24296190-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cc8090c4d2791cdd9cd2cb3c24296190-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cc8090c4d2791cdd9cd2cb3c24296190-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cc8090c4d2791cdd9cd2cb3c24296190-Supplemental.zip | Invariances to translations have imbued convolutional neural networks with powerful generalization properties. However, we often do not know a priori what invariances are present in the data, or to what extent a model should be invariant to a given augmentation. We show how to learn invariances by parameterizing a distribution over augmentations and optimizing the training loss simultaneously with respect to the network parameters and augmentation parameters. With this simple procedure we can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations, on training data alone. We show our approach is competitive with methods that are specialized to each task with the appropriate hard-coded invariances, without providing any prior knowledge of which invariance is needed. |
A Finite-Time Analysis of Two Time-Scale Actor-Critic Methods | https://papers.nips.cc/paper_files/paper/2020/hash/cc9b3c69b56df284846bf2432f1cba90-Abstract.html | Yue Frank Wu, Weitong ZHANG, Pan Xu, Quanquan Gu | https://papers.nips.cc/paper_files/paper/2020/hash/cc9b3c69b56df284846bf2432f1cba90-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cc9b3c69b56df284846bf2432f1cba90-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11202-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cc9b3c69b56df284846bf2432f1cba90-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cc9b3c69b56df284846bf2432f1cba90-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cc9b3c69b56df284846bf2432f1cba90-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cc9b3c69b56df284846bf2432f1cba90-Supplemental.pdf | Actor-critic (AC) methods have exhibited great empirical success compared with other reinforcement learning algorithms, where the actor uses the policy gradient to improve the learning policy and the critic uses temporal difference learning to estimate the policy gradient. Under the two time-scale learning rate schedule, the asymptotic convergence of AC has been well studied in the literature. However, the non-asymptotic convergence and finite sample complexity of actor-critic methods are largely open. In this work, we provide a non-asymptotic analysis for two time-scale actor-critic methods under non-i.i.d. setting. We prove that the actor-critic method is guaranteed to find a first-order stationary point (i.e., $\|\nabla J(\bm{\theta})\|_2^2 \le \epsilon$) of the non-concave performance function $J(\bm{\theta})$, with $\mathcal{\tilde{O}}(\epsilon^{-2.5})$ sample complexity. To the best of our knowledge, this is the first work providing finite-time analysis and sample complexity bound for two time-scale actor-critic methods. |
Pruning Filter in Filter | https://papers.nips.cc/paper_files/paper/2020/hash/ccb1d45fb76f7c5a0bf619f979c6cf36-Abstract.html | Fanxu Meng, Hao Cheng, Ke Li, Huixiang Luo, Xiaowei Guo, Guangming Lu, Xing Sun | https://papers.nips.cc/paper_files/paper/2020/hash/ccb1d45fb76f7c5a0bf619f979c6cf36-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ccb1d45fb76f7c5a0bf619f979c6cf36-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11203-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ccb1d45fb76f7c5a0bf619f979c6cf36-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ccb1d45fb76f7c5a0bf619f979c6cf36-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ccb1d45fb76f7c5a0bf619f979c6cf36-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ccb1d45fb76f7c5a0bf619f979c6cf36-Supplemental.pdf | Pruning has become a very powerful and effective technique to compress and accelerate modern neural networks. Existing pruning methods can be grouped into two categories: filter pruning (FP) and weight pruning (WP). FP wins at hardware compatibility but loses at the compression ratio compared with WP. To combine the strengths of both methods, we propose to prune the filter in the filter. Specifically, we treat a filter F, whose size is C×K×K, as K×K stripes, i.e., 1×1 filters; then, by pruning the stripes instead of the whole filter, we can achieve finer granularity than traditional FP while being hardware friendly. We term our method SWP (Stripe-Wise Pruning). SWP is implemented by introducing a novel learnable matrix called Filter Skeleton, whose values reflect the optimal shape of each filter. As some recent work has shown that the pruned architecture is more crucial than the inherited important weights, we argue that the architecture of a single filter, i.e., the Filter Skeleton, also matters. Through extensive experiments, we demonstrate that SWP is more effective compared to the previous FP-based methods and achieves the state-of-the-art pruning ratio on CIFAR-10 and ImageNet datasets without obvious accuracy drop. |
Learning to Mutate with Hypergradient Guided Population | https://papers.nips.cc/paper_files/paper/2020/hash/ccb421d5f36c5a412816d494b15ca9f6-Abstract.html | Zhiqiang Tao, Yaliang Li, Bolin Ding, Ce Zhang, Jingren Zhou, Yun Fu | https://papers.nips.cc/paper_files/paper/2020/hash/ccb421d5f36c5a412816d494b15ca9f6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ccb421d5f36c5a412816d494b15ca9f6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11204-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ccb421d5f36c5a412816d494b15ca9f6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ccb421d5f36c5a412816d494b15ca9f6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ccb421d5f36c5a412816d494b15ca9f6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ccb421d5f36c5a412816d494b15ca9f6-Supplemental.pdf | Computing the gradient of model hyperparameters, i.e., hypergradient, enables a promising and natural way to solve the hyperparameter optimization task. However, gradient-based methods could lead to suboptimal solutions due to the non-convex nature of optimization in a complex hyperparameter space. In this study, we propose a hyperparameter mutation (HPM) algorithm to explicitly consider a learnable trade-off between using global and local search, where we adopt a population of student models to simultaneously explore the hyperparameter space guided by hypergradient and leverage a teacher model to mutate the underperforming students by exploiting the top ones. The teacher model is implemented with an attention mechanism and is used to learn a mutation schedule for different hyperparameters on the fly. Empirical evidence on synthetic functions is provided to show that HPM outperforms hypergradient significantly. Experiments on two benchmark datasets are also conducted to validate the effectiveness of the proposed HPM algorithm for training deep neural networks compared with several strong baselines. |
A convex optimization formulation for multivariate regression | https://papers.nips.cc/paper_files/paper/2020/hash/ccd2d123f4ec4d777fc6ef757d0fb642-Abstract.html | Yunzhang Zhu | https://papers.nips.cc/paper_files/paper/2020/hash/ccd2d123f4ec4d777fc6ef757d0fb642-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ccd2d123f4ec4d777fc6ef757d0fb642-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11205-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ccd2d123f4ec4d777fc6ef757d0fb642-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ccd2d123f4ec4d777fc6ef757d0fb642-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ccd2d123f4ec4d777fc6ef757d0fb642-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ccd2d123f4ec4d777fc6ef757d0fb642-Supplemental.pdf | Multivariate regression (or multi-task learning) concerns the task of predicting the value of multiple responses from a set of covariates. In this article, we propose a convex optimization formulation for high-dimensional multivariate linear regression under a general error covariance structure. The main difficulty with simultaneous estimation of the regression coefficients and the error covariance matrix lies in the fact that the negative log-likelihood function is not convex. To overcome this difficulty, a new parameterization is proposed, under which the negative log-likelihood function is proved to be convex. For faster computation, two other alternative loss functions are also considered, and proved to be convex under the proposed parameterization. This new parameterization is also useful for covariate-adjusted Gaussian graphical modeling in which the inverse of the error covariance matrix is of interest. A joint non-asymptotic analysis of the regression coefficients and the error covariance matrix is carried out under the new parameterization. In particular, we show that the proposed method recovers the oracle estimator under sharp scaling conditions, and rates of convergence in terms of vector $\ell_\infty$ norm are also established. Empirically, the proposed methods outperform existing high-dimensional multivariate linear regression methods that are based on either minimizing certain non-convex criteria or certain two-step procedures. |
Online Meta-Critic Learning for Off-Policy Actor-Critic Methods | https://papers.nips.cc/paper_files/paper/2020/hash/cceff8faa855336ad53b3325914caea2-Abstract.html | Wei Zhou, Yiying Li, Yongxin Yang, Huaimin Wang, Timothy Hospedales | https://papers.nips.cc/paper_files/paper/2020/hash/cceff8faa855336ad53b3325914caea2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cceff8faa855336ad53b3325914caea2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11206-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cceff8faa855336ad53b3325914caea2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cceff8faa855336ad53b3325914caea2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cceff8faa855336ad53b3325914caea2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cceff8faa855336ad53b3325914caea2-Supplemental.pdf | Off-Policy Actor-Critic (OffP-AC) methods have proven successful in a variety of continuous control tasks. Normally, the critic's action-value function is updated using temporal-difference learning, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we introduce a flexible and augmented meta-critic that observes the learning process and meta-learns an additional loss for the actor that accelerates and improves actor-critic learning. Compared to existing meta-learning algorithms, meta-critic is rapidly learned online for a single task, rather than slowly over a family of tasks. Crucially, our meta-critic is designed for off-policy-based learners, which currently provide state-of-the-art reinforcement learning sample efficiency. We demonstrate that online meta-critic learning benefits a variety of continuous control tasks when combined with contemporary OffP-AC methods DDPG, TD3 and SAC. |
The All-or-Nothing Phenomenon in Sparse Tensor PCA | https://papers.nips.cc/paper_files/paper/2020/hash/cd0b43eac0392accf3624b7372dec36e-Abstract.html | Jonathan Niles-Weed, Ilias Zadik | https://papers.nips.cc/paper_files/paper/2020/hash/cd0b43eac0392accf3624b7372dec36e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cd0b43eac0392accf3624b7372dec36e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11207-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cd0b43eac0392accf3624b7372dec36e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cd0b43eac0392accf3624b7372dec36e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cd0b43eac0392accf3624b7372dec36e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cd0b43eac0392accf3624b7372dec36e-Supplemental.pdf | We study the statistical problem of estimating a rank-one sparse tensor corrupted by additive gaussian noise, a Gaussian additive model also known as sparse tensor PCA. We show that for Bernoulli and Bernoulli-Rademacher distributed signals and \emph{for all} sparsity levels which are sublinear in the dimension of the signal, the sparse tensor PCA model exhibits a phase transition called the \emph{all-or-nothing phenomenon}. This is the property that for some signal-to-noise ratio (SNR) $\mathrm{SNR_c}$ and any fixed $\epsilon>0$, if the SNR of the model is below $\left(1-\epsilon\right)\mathrm{SNR_c}$, then it is impossible to achieve any arbitrarily small constant correlation with the hidden signal, while if the SNR is above $\left(1+\epsilon \right)\mathrm{SNR_c}$, then it is possible to achieve almost perfect correlation with the hidden signal. The all-or-nothing phenomenon was initially established in the context of sparse linear regression, and over the last year also in the context of sparse 2-tensor (matrix) PCA and Bernoulli group testing. Our results follow from a more general result showing that for any Gaussian additive model with a discrete uniform prior, the all-or-nothing phenomenon follows as a direct outcome of an appropriately defined ``near-orthogonality" property of the support of the prior distribution. |
Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis | https://papers.nips.cc/paper_files/paper/2020/hash/cd0f74b5955dc87fd0605745c4b49ee8-Abstract.html | Kavi Gupta, Peter Ebert Christensen, Xinyun Chen, Dawn Song | https://papers.nips.cc/paper_files/paper/2020/hash/cd0f74b5955dc87fd0605745c4b49ee8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cd0f74b5955dc87fd0605745c4b49ee8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11208-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cd0f74b5955dc87fd0605745c4b49ee8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cd0f74b5955dc87fd0605745c4b49ee8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cd0f74b5955dc87fd0605745c4b49ee8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cd0f74b5955dc87fd0605745c4b49ee8-Supplemental.pdf | The use of deep learning techniques has achieved significant progress for program synthesis from input-output examples. However, when the program semantics become more complex, it still remains a challenge to synthesize programs that are consistent with the specification. In this work, we propose SED, a neural program generation framework that incorporates synthesis, execution, and debugging stages. Instead of purely relying on the neural program synthesizer to generate the final program, SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs. The integration of the debugger component enables SED to modify the programs based on the execution results and specification, which resembles the coding process of human programmers. On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding. |
ARMA Nets: Expanding Receptive Field for Dense Prediction | https://papers.nips.cc/paper_files/paper/2020/hash/cd10c7f376188a4a2ca3e8fea2c03aeb-Abstract.html | Jiahao Su, Shiqi Wang, Furong Huang | https://papers.nips.cc/paper_files/paper/2020/hash/cd10c7f376188a4a2ca3e8fea2c03aeb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cd10c7f376188a4a2ca3e8fea2c03aeb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11209-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cd10c7f376188a4a2ca3e8fea2c03aeb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cd10c7f376188a4a2ca3e8fea2c03aeb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cd10c7f376188a4a2ca3e8fea2c03aeb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cd10c7f376188a4a2ca3e8fea2c03aeb-Supplemental.pdf | Global information is essential for dense prediction problems, whose goal is to compute a discrete or continuous label for each pixel in the images. Traditional convolutional layers in neural networks, initially designed for image classification, are restrictive in these problems since the filter size limits their receptive fields. In this work, we propose to replace any traditional convolutional layer with an autoregressive moving-average (ARMA) layer, a novel module with an adjustable receptive field controlled by the learnable autoregressive coefficients. Compared with traditional convolutional layers, our ARMA layer enables explicit interconnections of the output neurons and learns its receptive field by adapting the autoregressive coefficients of the interconnections. ARMA layer is adjustable to different types of tasks: for tasks where global information is crucial, it is capable of learning relatively large autoregressive coefficients to allow for an output neuron's receptive field covering the entire input; for tasks where only local information is required, it can learn small or near zero autoregressive coefficients and automatically reduces to a traditional convolutional layer. We show both theoretically and empirically that the effective receptive field of networks with ARMA layers (named ARMA networks) expands with larger autoregressive coefficients. We also provably solve the instability problem of learning and prediction in the ARMA layer through a re-parameterization mechanism. Additionally, we demonstrate that ARMA networks substantially improve their baselines on challenging dense prediction tasks, including video prediction and semantic segmentation. |
Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations | https://papers.nips.cc/paper_files/paper/2020/hash/cd3109c63bf4323e6b987a5923becb96-Abstract.html | Mina Konakovic Lukovic, Yunsheng Tian, Wojciech Matusik | https://papers.nips.cc/paper_files/paper/2020/hash/cd3109c63bf4323e6b987a5923becb96-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cd3109c63bf4323e6b987a5923becb96-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11210-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cd3109c63bf4323e6b987a5923becb96-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cd3109c63bf4323e6b987a5923becb96-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cd3109c63bf4323e6b987a5923becb96-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cd3109c63bf4323e6b987a5923becb96-Supplemental.zip | Many science, engineering, and design optimization problems require balancing the trade-offs between several conflicting objectives. The objectives are often black-box functions whose evaluations are time-consuming and costly. Multi-objective Bayesian optimization can be used to automate the process of discovering the set of optimal solutions, called Pareto-optimal, while minimizing the number of performed evaluations. To further reduce the evaluation time in the optimization process, testing of several samples in parallel can be deployed. We propose a novel multi-objective Bayesian optimization algorithm that iteratively selects the best batch of samples to be evaluated in parallel. Our algorithm approximates and analyzes a piecewise-continuous Pareto set representation. This representation allows us to introduce a batch selection strategy that optimizes for both hypervolume improvement and diversity of selected samples in order to efficiently advance promising regions of the Pareto front. Experiments on both synthetic test functions and real-world benchmark problems show that our algorithm predominantly outperforms relevant state-of-the-art methods. Code is available at https://github.com/yunshengtian/DGEMO. |
SOLOv2: Dynamic and Fast Instance Segmentation | https://papers.nips.cc/paper_files/paper/2020/hash/cd3afef9b8b89558cd56638c3631868a-Abstract.html | Xinlong Wang, Rufeng Zhang, Tao Kong, Lei Li, Chunhua Shen | https://papers.nips.cc/paper_files/paper/2020/hash/cd3afef9b8b89558cd56638c3631868a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cd3afef9b8b89558cd56638c3631868a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11211-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cd3afef9b8b89558cd56638c3631868a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cd3afef9b8b89558cd56638c3631868a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cd3afef9b8b89558cd56638c3631868a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cd3afef9b8b89558cd56638c3631868a-Supplemental.zip | In this work, we design a simple, direct, and fast framework for instance segmentation with strong performance. To this end, we propose a novel and effective approach, termed SOLOv2, following the principle of the SOLO method [32]. First, our new framework is empowered by an efficient and holistic instance mask representation scheme, which dynamically segments each instance in the image, without resorting to bounding box detection. Specifically, the object mask generation is decoupled into a mask kernel prediction and mask feature learning, which are responsible for generating convolution kernels and the feature maps to be convolved with, respectively. Second, SOLOv2 significantly reduces inference overhead with our novel matrix non-maximum suppression (NMS) technique. Our Matrix NMS performs NMS with parallel matrix operations in one shot, and yields better results. We demonstrate that the proposed SOLOv2 achieves state-of-the-art performance with high efficiency, making it suitable for both mobile and cloud applications. A light-weight version of SOLOv2 executes at 31.3 FPS and yields 37.1% AP on COCO test-dev. Moreover, our state-of-the-art results in object detection (from our mask byproduct) and panoptic segmentation show the potential of SOLOv2 to serve as a new strong baseline for many instance-level recognition tasks. Code is available at https://git.io/AdelaiDet |
Robust Recovery via Implicit Bias of Discrepant Learning Rates for Double Over-parameterization | https://papers.nips.cc/paper_files/paper/2020/hash/cd42c963390a9cd025d007dacfa99351-Abstract.html | Chong You, Zhihui Zhu, Qing Qu, Yi Ma | https://papers.nips.cc/paper_files/paper/2020/hash/cd42c963390a9cd025d007dacfa99351-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cd42c963390a9cd025d007dacfa99351-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11212-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cd42c963390a9cd025d007dacfa99351-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cd42c963390a9cd025d007dacfa99351-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cd42c963390a9cd025d007dacfa99351-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cd42c963390a9cd025d007dacfa99351-Supplemental.zip | Recent advances have shown that implicit bias of gradient descent on over-parameterized models enables the recovery of low-rank matrices from linear measurements, even with no prior knowledge on the intrinsic rank. In contrast, for {\em robust} low-rank matrix recovery from {\em grossly corrupted} measurements, over-parameterization leads to overfitting without prior knowledge on both the intrinsic rank and sparsity of corruption. This paper shows that with a {\em double over-parameterization} for both the low-rank matrix and sparse corruption, gradient descent with {\em discrepant learning rates} provably recovers the underlying matrix even without prior knowledge of either the rank of the matrix or the sparsity of the corruption. We further extend our approach for the robust recovery of natural images by over-parameterizing images with deep convolutional networks. Experiments show that our method handles different test images and varying corruption levels with a single learning pipeline where the network width and termination conditions do not need to be adjusted on a case-by-case basis. Underlying the success is again the implicit bias with discrepant learning rates on different over-parameterized parameters, which may bear on broader applications. |
Axioms for Learning from Pairwise Comparisons | https://papers.nips.cc/paper_files/paper/2020/hash/cdaa9b682e10c291d3bbadca4c96f5de-Abstract.html | Ritesh Noothigattu, Dominik Peters, Ariel D. Procaccia | https://papers.nips.cc/paper_files/paper/2020/hash/cdaa9b682e10c291d3bbadca4c96f5de-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cdaa9b682e10c291d3bbadca4c96f5de-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11213-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cdaa9b682e10c291d3bbadca4c96f5de-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cdaa9b682e10c291d3bbadca4c96f5de-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cdaa9b682e10c291d3bbadca4c96f5de-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cdaa9b682e10c291d3bbadca4c96f5de-Supplemental.pdf | To be well-behaved, systems that process preference data must satisfy certain conditions identified by economic decision theory and by social choice theory. In ML, preferences and rankings are commonly learned by fitting a probabilistic model to noisy preference data. The behavior of this learning process from the view of economic theory has previously been studied for the case where the data consists of rankings. In practice, it is more common to have only pairwise comparison data, and the formal properties of the associated learning problem are more challenging to analyze. We show that a large class of random utility models (including the Thurstone–Mosteller Model), when estimated using the MLE, satisfy a Pareto efficiency condition. These models also satisfy a strong monotonicity property, which implies that the learning process is responsive to input data. On the other hand, we show that these models fail certain other consistency conditions from social choice theory, and in particular do not always follow the majority opinion. Our results inform existing and future applications of random utility models for societal decision making. |
Continuous Regularized Wasserstein Barycenters | https://papers.nips.cc/paper_files/paper/2020/hash/cdf1035c34ec380218a8cc9a43d438f9-Abstract.html | Lingxiao Li, Aude Genevay, Mikhail Yurochkin, Justin M. Solomon | https://papers.nips.cc/paper_files/paper/2020/hash/cdf1035c34ec380218a8cc9a43d438f9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cdf1035c34ec380218a8cc9a43d438f9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11214-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cdf1035c34ec380218a8cc9a43d438f9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cdf1035c34ec380218a8cc9a43d438f9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cdf1035c34ec380218a8cc9a43d438f9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cdf1035c34ec380218a8cc9a43d438f9-Supplemental.pdf | Wasserstein barycenters provide a geometrically meaningful way to aggregate probability distributions, built on the theory of optimal transport. They are difficult to compute in practice, however, leading previous work to restrict their supports to finite sets of points. Leveraging a new dual formulation for the regularized Wasserstein barycenter problem, we introduce a stochastic algorithm that constructs a continuous approximation of the barycenter. We establish strong duality and use the corresponding primal-dual relationship to parametrize the barycenter implicitly using the dual potentials of regularized transport problems. The resulting problem can be solved with stochastic gradient descent, which yields an efficient online algorithm to approximate the barycenter of continuous distributions given sample access. We demonstrate the effectiveness of our approach and compare against previous work on synthetic examples and real-world applications. |
Spectral Temporal Graph Neural Network for Multivariate Time-series Forecasting | https://papers.nips.cc/paper_files/paper/2020/hash/cdf6581cb7aca4b7e19ef136c6e601a5-Abstract.html | Defu Cao, Yujing Wang, Juanyong Duan, Ce Zhang, Xia Zhu, Congrui Huang, Yunhai Tong, Bixiong Xu, Jing Bai, Jie Tong, Qi Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/cdf6581cb7aca4b7e19ef136c6e601a5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cdf6581cb7aca4b7e19ef136c6e601a5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11215-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cdf6581cb7aca4b7e19ef136c6e601a5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cdf6581cb7aca4b7e19ef136c6e601a5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cdf6581cb7aca4b7e19ef136c6e601a5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cdf6581cb7aca4b7e19ef136c6e601a5-Supplemental.pdf | In this paper, we propose Spectral Temporal Graph Neural Network (StemGNN) to further improve the accuracy of multivariate time-series forecasting. StemGNN captures inter-series correlations and temporal dependencies jointly in the spectral domain. It combines Graph Fourier Transform (GFT) which models inter-series correlations and Discrete Fourier Transform (DFT) which models temporal dependencies in an end-to-end framework. After passing through GFT and DFT, the spectral representations hold clear patterns and can be predicted effectively by convolution and sequential learning modules. Moreover, StemGNN learns inter-series correlations automatically from the data without using pre-defined priors. We conduct extensive experiments on ten real-world datasets to demonstrate the effectiveness of StemGNN. |
Online Multitask Learning with Long-Term Memory | https://papers.nips.cc/paper_files/paper/2020/hash/cdfa4c42f465a5a66871587c69fcfa34-Abstract.html | Mark Herbster, Stephen Pasteris, Lisa Tse | https://papers.nips.cc/paper_files/paper/2020/hash/cdfa4c42f465a5a66871587c69fcfa34-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cdfa4c42f465a5a66871587c69fcfa34-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11216-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cdfa4c42f465a5a66871587c69fcfa34-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cdfa4c42f465a5a66871587c69fcfa34-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cdfa4c42f465a5a66871587c69fcfa34-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cdfa4c42f465a5a66871587c69fcfa34-Supplemental.pdf | We introduce a novel online multitask setting. In this setting each task is partitioned into a sequence of segments that is unknown to the learner. Associated with each segment is a hypothesis from some hypothesis class. We give algorithms that are designed to exploit the scenario where there are many such segments but significantly fewer associated hypotheses. We prove regret bounds that hold for any segmentation of the tasks and any association of hypotheses to the segments. In the single-task setting this is equivalent to switching with long-term memory in the sense of [Bousquet and Warmuth 2011]. We provide an algorithm that predicts on each trial in time linear in the number of hypotheses when the hypothesis class is finite. We also consider infinite hypothesis classes from reproducing kernel Hilbert spaces for which we give an algorithm whose per trial time complexity is cubic in the number of cumulative trials. In the single-task special case this is the first example of an efficient regret-bounded switching algorithm with long-term memory for a non-parametric hypothesis class. |
Fewer is More: A Deep Graph Metric Learning Perspective Using Fewer Proxies | https://papers.nips.cc/paper_files/paper/2020/hash/ce016f59ecc2366a43e1c96a4774d167-Abstract.html | Yuehua Zhu, Muli Yang, Cheng Deng, Wei Liu | https://papers.nips.cc/paper_files/paper/2020/hash/ce016f59ecc2366a43e1c96a4774d167-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ce016f59ecc2366a43e1c96a4774d167-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11217-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ce016f59ecc2366a43e1c96a4774d167-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ce016f59ecc2366a43e1c96a4774d167-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ce016f59ecc2366a43e1c96a4774d167-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ce016f59ecc2366a43e1c96a4774d167-Supplemental.pdf | Deep metric learning plays a key role in various machine learning tasks. Most of the previous works have been confined to sampling from a mini-batch, which cannot precisely characterize the global geometry of the embedding space. Although researchers have developed proxy- and classification-based methods to tackle the sampling issue, those methods inevitably incur a redundant computational cost. In this paper, we propose a novel Proxy-based deep Graph Metric Learning (ProxyGML) approach from the perspective of graph classification, which uses fewer proxies yet achieves better comprehensive performance. Specifically, multiple global proxies are leveraged to collectively approximate the original data points for each class. To efficiently capture local neighbor relationships, a small number of such proxies are adaptively selected to construct similarity subgraphs between these proxies and each data point. Further, we design a novel reverse label propagation algorithm, by which the neighbor relationships are adjusted according to ground-truth labels, so that a discriminative metric space can be learned during the process of subgraph classification. Extensive experiments carried out on widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate the superiority of the proposed ProxyGML over the state-of-the-art methods in terms of both effectiveness and efficiency. The source code is publicly available at \url{https://github.com/YuehuaZhu/ProxyGML}. |
Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting | https://papers.nips.cc/paper_files/paper/2020/hash/ce1aad92b939420fc17005e5461e6f48-Abstract.html | LEI BAI, Lina Yao, Can Li, Xianzhi Wang, Can Wang | https://papers.nips.cc/paper_files/paper/2020/hash/ce1aad92b939420fc17005e5461e6f48-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ce1aad92b939420fc17005e5461e6f48-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11218-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ce1aad92b939420fc17005e5461e6f48-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ce1aad92b939420fc17005e5461e6f48-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ce1aad92b939420fc17005e5461e6f48-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ce1aad92b939420fc17005e5461e6f48-Supplemental.zip | Modeling complex spatial and temporal correlations in the correlated time series data is indispensable for understanding the traffic dynamics and predicting the future status of an evolving traffic system. Recent works focus on designing complicated graph neural network architectures to capture shared patterns with the help of pre-defined graphs. In this paper, we argue that learning node-specific patterns is essential for traffic forecasting while a pre-defined graph is avoidable. To this end, we propose two adaptive modules for enhancing Graph Convolutional Network (GCN) with new capabilities: 1) a Node Adaptive Parameter Learning (NAPL) module to capture node-specific patterns; 2) a Data Adaptive Graph Generation (DAGG) module to infer the inter-dependencies among different traffic series automatically. We further propose an Adaptive Graph Convolutional Recurrent Network (AGCRN) to capture fine-grained spatial and temporal correlations in traffic series automatically based on the two modules and recurrent networks. Our experiments on two real-world traffic datasets show that AGCRN outperforms the state of the art by a significant margin without pre-defined graphs of spatial connections. |
On Reward-Free Reinforcement Learning with Linear Function Approximation | https://papers.nips.cc/paper_files/paper/2020/hash/ce4449660c6523b377b22a1dc2da5556-Abstract.html | Ruosong Wang, Simon S. Du, Lin Yang, Russ R. Salakhutdinov | https://papers.nips.cc/paper_files/paper/2020/hash/ce4449660c6523b377b22a1dc2da5556-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ce4449660c6523b377b22a1dc2da5556-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11219-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ce4449660c6523b377b22a1dc2da5556-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ce4449660c6523b377b22a1dc2da5556-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ce4449660c6523b377b22a1dc2da5556-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ce4449660c6523b377b22a1dc2da5556-Supplemental.pdf | Reward-free reinforcement learning (RL) is a framework which is suitable for both the batch RL setting and the setting where there are many reward functions of interest. During the exploration phase, an agent collects samples without using a pre-specified reward function. After the exploration phase, a reward function is given, and the agent uses samples collected during the exploration phase to compute a near-optimal policy. Jin et al. [2020] showed that in the tabular setting, the agent only needs to collect a polynomial number of samples (in terms of the number of states, the number of actions, and the planning horizon) for reward-free RL. However, in practice, the number of states and actions can be large, and thus function approximation schemes are required for generalization. In this work, we give both positive and negative results for reward-free RL with linear function approximation. We give an algorithm for reward-free RL in the linear Markov decision process setting where both the transition and the reward admit linear representations. The sample complexity of our algorithm is polynomial in the feature dimension and the planning horizon, and is completely independent of the number of states and actions. We further give an exponential lower bound for reward-free RL in the setting where only the optimal $Q$-function admits a linear representation. Our results imply several interesting exponential separations on the sample complexity of reward-free RL. |
Robustness of Community Detection to Random Geometric Perturbations | https://papers.nips.cc/paper_files/paper/2020/hash/ce46f09027b218b46063eb2b858f622d-Abstract.html | Sandrine Peche, Vianney Perchet | https://papers.nips.cc/paper_files/paper/2020/hash/ce46f09027b218b46063eb2b858f622d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ce46f09027b218b46063eb2b858f622d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11220-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ce46f09027b218b46063eb2b858f622d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ce46f09027b218b46063eb2b858f622d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ce46f09027b218b46063eb2b858f622d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ce46f09027b218b46063eb2b858f622d-Supplemental.pdf | We consider the stochastic block model where the connections between vertices are perturbed by some latent (and unobserved) random geometric graph. The objective is to prove that spectral methods are robust to this type of noise, even if they are agnostic to the presence (or not) of the random graph. We provide explicit regimes where the second eigenvector of the adjacency matrix is highly correlated to the true community vector (and therefore when weak/exact recovery is possible). This is possible thanks to a detailed analysis of the spectrum of the latent random graph, which is of independent interest. |
Learning outside the Black-Box: The pursuit of interpretable models | https://papers.nips.cc/paper_files/paper/2020/hash/ce758408f6ef98d7c7a7b786eca7b3a8-Abstract.html | Jonathan Crabbe, Yao Zhang, William Zame, Mihaela van der Schaar | https://papers.nips.cc/paper_files/paper/2020/hash/ce758408f6ef98d7c7a7b786eca7b3a8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ce758408f6ef98d7c7a7b786eca7b3a8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11221-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ce758408f6ef98d7c7a7b786eca7b3a8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ce758408f6ef98d7c7a7b786eca7b3a8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ce758408f6ef98d7c7a7b786eca7b3a8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ce758408f6ef98d7c7a7b786eca7b3a8-Supplemental.pdf | Machine learning has proved its ability to produce accurate models -- but the deployment of these models outside the machine learning community has been hindered by the difficulties of interpreting these models. This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function. Our algorithm employs a variation of projection pursuit in which the ridge functions are chosen to be Meijer G-functions, rather than the usual polynomial splines. Because Meijer G-functions are differentiable in their parameters, we can "tune" the parameters of the representation by gradient descent; as a consequence, our algorithm is efficient. Using five familiar data sets from the UCI repository and two familiar machine learning algorithms, we demonstrate that our algorithm produces global interpretations that are both faithful (highly accurate) and parsimonious (involve a small number of terms). Our interpretations permit easy understanding of the relative importance of features and feature interactions. Our interpretation algorithm represents a leap forward from the previous state of the art. |
Breaking Reversibility Accelerates Langevin Dynamics for Non-Convex Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/cebd648f9146a6345d604ab093b02c73-Abstract.html | Xuefeng GAO, Mert Gurbuzbalaban, Lingjiong Zhu | https://papers.nips.cc/paper_files/paper/2020/hash/cebd648f9146a6345d604ab093b02c73-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cebd648f9146a6345d604ab093b02c73-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11222-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cebd648f9146a6345d604ab093b02c73-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cebd648f9146a6345d604ab093b02c73-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cebd648f9146a6345d604ab093b02c73-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cebd648f9146a6345d604ab093b02c73-Supplemental.pdf | Langevin dynamics (LD) has been proven to be a powerful technique for optimizing a non-convex objective as an efficient algorithm to find local minima while eventually visiting a global minimum on longer time-scales. LD is based on the first-order Langevin diffusion which is reversible in time. We study two variants that are based on non-reversible Langevin diffusions: the underdamped Langevin dynamics (ULD) and the Langevin dynamics with a non-symmetric drift (NLD). Adopting the techniques of Tzen et al. (2018) for LD to non-reversible diffusions, we show that for a given local minimum that is within an arbitrary distance from the initialization, with high probability, either the ULD trajectory ends up somewhere outside a small neighborhood of this local minimum within a recurrence time which depends on the smallest eigenvalue of the Hessian at the local minimum, or it enters this neighborhood by the recurrence time and stays there for a potentially exponentially long escape time. The ULD algorithm improves upon the recurrence time obtained for LD in Tzen et al. (2018) with respect to the dependency on the smallest eigenvalue of the Hessian at the local minimum. Similar results and improvements are obtained for the NLD algorithm. We also show that non-reversible variants can exit the basin of attraction of a local minimum faster in discrete time when the objective has two local minima separated by a saddle point and quantify the amount of improvement. Our analysis suggests that non-reversible Langevin algorithms are more efficient at locating a local minimum as well as exploring the state space. |
Robust large-margin learning in hyperbolic space | https://papers.nips.cc/paper_files/paper/2020/hash/cec6f62cfb44b1be110b7bf70c8362d8-Abstract.html | Melanie Weber, Manzil Zaheer, Ankit Singh Rawat, Aditya K. Menon, Sanjiv Kumar | https://papers.nips.cc/paper_files/paper/2020/hash/cec6f62cfb44b1be110b7bf70c8362d8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cec6f62cfb44b1be110b7bf70c8362d8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11223-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cec6f62cfb44b1be110b7bf70c8362d8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cec6f62cfb44b1be110b7bf70c8362d8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cec6f62cfb44b1be110b7bf70c8362d8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cec6f62cfb44b1be110b7bf70c8362d8-Supplemental.zip | Recently, there has been a surge of interest in representation learning in hyperbolic spaces, driven by their ability to represent hierarchical data with significantly fewer dimensions than standard Euclidean spaces. However, the viability and benefits of hyperbolic spaces for downstream machine learning tasks have received less attention. In this paper, we present, to our knowledge, the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space. Specifically, we consider the problem of learning a large-margin classifier for data possessing a hierarchical structure. Our first contribution is a hyperbolic perceptron algorithm, which provably converges to a separating hyperplane. We then provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples. Finally, we prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees when learning the classifier directly in hyperbolic space. |
Replica-Exchange Nos\'e-Hoover Dynamics for Bayesian Learning on Large Datasets | https://papers.nips.cc/paper_files/paper/2020/hash/cfd382c5eb817d52c7faf45a96f20b81-Abstract.html | Rui Luo, Qiang Zhang, Yaodong Yang, Jun Wang | https://papers.nips.cc/paper_files/paper/2020/hash/cfd382c5eb817d52c7faf45a96f20b81-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/cfd382c5eb817d52c7faf45a96f20b81-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11224-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/cfd382c5eb817d52c7faf45a96f20b81-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/cfd382c5eb817d52c7faf45a96f20b81-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/cfd382c5eb817d52c7faf45a96f20b81-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/cfd382c5eb817d52c7faf45a96f20b81-Supplemental.zip | In this paper, we present a new practical method for Bayesian learning that can rapidly draw representative samples from complex posterior distributions with multiple isolated modes in the presence of mini-batch noise. This is achieved by simulating a collection of replicas in parallel with different temperatures and periodically swapping them. When evolving the replicas' states, the Nos\'e-Hoover dynamics is applied, which adaptively neutralizes the mini-batch noise. To perform proper exchanges, a new protocol is developed with a noise-aware test of acceptance, by which the detailed balance is preserved in an asymptotic way. While its efficacy on complex multimodal posteriors has been illustrated by testing over synthetic distributions, experiments with deep Bayesian neural networks on large-scale datasets have shown its significant improvements over strong baselines. |