title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract
---|---|---|---|---|---|---|---|---|---|---|---
Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation | https://papers.nips.cc/paper_files/paper/2020/hash/243be2818a23c980ad664f30f48e5d19-Abstract.html | Guoliang Kang, Yunchao Wei, Yi Yang, Yueting Zhuang, Alexander Hauptmann | https://papers.nips.cc/paper_files/paper/2020/hash/243be2818a23c980ad664f30f48e5d19-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/243be2818a23c980ad664f30f48e5d19-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10025-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/243be2818a23c980ad664f30f48e5d19-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/243be2818a23c980ad664f30f48e5d19-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/243be2818a23c980ad664f30f48e5d19-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/243be2818a23c980ad664f30f48e5d19-Supplemental.pdf | Domain adaptive semantic segmentation aims to train a model that produces satisfactory pixel-level predictions on the target with only out-of-domain (source) annotations. The conventional solution to this task is to minimize the discrepancy between source and target to enable effective knowledge transfer. Previous domain discrepancy minimization methods are mainly based on adversarial training. They tend to consider the domain discrepancy globally, which ignores pixel-wise relationships and is less discriminative. In this paper, we propose to build the pixel-level cycle association between source and target pixel pairs and contrastively strengthen their connections to diminish the domain gap and make the features more discriminative. To the best of our knowledge, this is a new perspective for tackling such a challenging task. Experimental results on two representative domain adaptation benchmarks, i.e., GTAV $\rightarrow$ Cityscapes and SYNTHIA $\rightarrow$ Cityscapes, verify the effectiveness of our proposed method and demonstrate that our method performs favorably against previous state-of-the-art methods. Our method can be trained end-to-end in one stage and introduces no additional parameters; it is expected to serve as a general framework and help ease future research in domain adaptive semantic segmentation. Code is available at https://github.com/kgl-prml/Pixel-Level-Cycle-Association. |
Classification with Valid and Adaptive Coverage | https://papers.nips.cc/paper_files/paper/2020/hash/244edd7e85dc81602b7615cd705545f5-Abstract.html | Yaniv Romano, Matteo Sesia, Emmanuel Candes | https://papers.nips.cc/paper_files/paper/2020/hash/244edd7e85dc81602b7615cd705545f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/244edd7e85dc81602b7615cd705545f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10026-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/244edd7e85dc81602b7615cd705545f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/244edd7e85dc81602b7615cd705545f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/244edd7e85dc81602b7615cd705545f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/244edd7e85dc81602b7615cd705545f5-Supplemental.pdf | Conformal inference, cross-validation+, and the jackknife+ are hold-out methods that can be combined with virtually any machine learning algorithm to construct prediction sets with guaranteed marginal coverage. In this paper, we develop specialized versions of these techniques for categorical and unordered response labels that, in addition to providing marginal coverage, are also fully adaptive to complex data distributions, in the sense that they perform favorably in terms of approximate conditional coverage compared to alternative methods. The heart of our contribution is a novel conformity score, which we explicitly demonstrate to be powerful and intuitive for classification problems, but whose underlying principle is potentially far more general. Experiments on synthetic and real data demonstrate the practical value of our theoretical guarantees, as well as the statistical advantages of the proposed methods over the existing alternatives. |
Learning Global Transparent Models consistent with Local Contrastive Explanations | https://papers.nips.cc/paper_files/paper/2020/hash/24aef8cb3281a2422a59b51659f1ad2e-Abstract.html | Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar | https://papers.nips.cc/paper_files/paper/2020/hash/24aef8cb3281a2422a59b51659f1ad2e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/24aef8cb3281a2422a59b51659f1ad2e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10027-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/24aef8cb3281a2422a59b51659f1ad2e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/24aef8cb3281a2422a59b51659f1ad2e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/24aef8cb3281a2422a59b51659f1ad2e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/24aef8cb3281a2422a59b51659f1ad2e-Supplemental.pdf | There is a rich and growing literature on producing local contrastive/counterfactual explanations for black-box models (e.g. neural networks). In these methods, for an input, an explanation is in the form of a contrast point differing in very few features from the original input and lying in a different class. Other works try to build globally interpretable models like decision trees and rule lists based on the data using actual labels or based on the black-box model's predictions. Although these interpretable global models can be useful, they may not be consistent with local explanations from a specific black-box of choice. In this work, we explore the question: Can we produce a transparent global model that is simultaneously accurate and consistent with the local (contrastive) explanations of the black-box model? We introduce a local consistency metric that quantifies if the local explanations for the black-box model are also applicable to the proxy/surrogate globally transparent model. Based on a key insight, we propose a novel method where we create custom boolean features from local contrastive explanations of the black-box model and then train a globally transparent model that has higher local consistency compared with other known strategies in addition to being accurate. |
Learning to Approximate a Bregman Divergence | https://papers.nips.cc/paper_files/paper/2020/hash/24bcb4d0caa4120575bb45c8a156b651-Abstract.html | Ali Siahkamari, XIDE XIA, Venkatesh Saligrama, David Castañón, Brian Kulis | https://papers.nips.cc/paper_files/paper/2020/hash/24bcb4d0caa4120575bb45c8a156b651-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/24bcb4d0caa4120575bb45c8a156b651-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10028-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/24bcb4d0caa4120575bb45c8a156b651-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/24bcb4d0caa4120575bb45c8a156b651-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/24bcb4d0caa4120575bb45c8a156b651-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/24bcb4d0caa4120575bb45c8a156b651-Supplemental.pdf | Bregman divergences generalize measures such as the squared Euclidean distance and the KL divergence, and arise throughout many areas of machine learning. In this paper, we focus on the problem of approximating an arbitrary Bregman divergence from supervision, and we provide a well-principled approach to analyzing such approximations. We develop a formulation and algorithm for learning arbitrary Bregman divergences based on approximating their underlying convex generating function via a piecewise linear function. We provide theoretical approximation bounds using our parameterization and show that the generalization error $O_p(m^{-1/2})$ for metric learning using our framework matches the known generalization error in the strictly less general Mahalanobis metric learning setting. We further demonstrate empirically that our method performs well in comparison to existing metric learning methods, particularly for clustering and ranking problems. |
Diverse Image Captioning with Context-Object Split Latent Spaces | https://papers.nips.cc/paper_files/paper/2020/hash/24bea84d52e6a1f8025e313c2ffff50a-Abstract.html | Shweta Mahajan, Stefan Roth | https://papers.nips.cc/paper_files/paper/2020/hash/24bea84d52e6a1f8025e313c2ffff50a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/24bea84d52e6a1f8025e313c2ffff50a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10029-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/24bea84d52e6a1f8025e313c2ffff50a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/24bea84d52e6a1f8025e313c2ffff50a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/24bea84d52e6a1f8025e313c2ffff50a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/24bea84d52e6a1f8025e313c2ffff50a-Supplemental.pdf | Diverse image captioning models aim to learn one-to-many mappings that are innate to cross-domain datasets, such as those of images and texts. Current methods for this task are based on generative latent variable models, e.g., VAEs with structured latent spaces. Yet, the amount of multimodality captured by prior work is limited to that of the paired training data -- the true diversity of the underlying generative process is not fully captured. To address this limitation, we leverage the contextual descriptions in the dataset that explain similar contexts in different visual scenes. To this end, we introduce a novel factorization of the latent space, termed context-object split, to model diversity in contextual descriptions across images and texts within the dataset. Our framework not only enables diverse captioning through context-based pseudo supervision, but extends this to images with novel objects and without paired captions in the training data. We evaluate our COS-CVAE approach on the standard COCO dataset and on the held-out COCO dataset consisting of images with novel objects, showing significant gains in accuracy and diversity. |
Learning Disentangled Representations of Videos with Missing Data | https://papers.nips.cc/paper_files/paper/2020/hash/24f2f931f12a4d9149876a5bef93e96a-Abstract.html | Armand Comas, Chi Zhang, Zlatan Feric, Octavia Camps, Rose Yu | https://papers.nips.cc/paper_files/paper/2020/hash/24f2f931f12a4d9149876a5bef93e96a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/24f2f931f12a4d9149876a5bef93e96a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10030-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/24f2f931f12a4d9149876a5bef93e96a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/24f2f931f12a4d9149876a5bef93e96a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/24f2f931f12a4d9149876a5bef93e96a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/24f2f931f12a4d9149876a5bef93e96a-Supplemental.zip | Missing data poses significant challenges while learning representations of video sequences. We present Disentangled Imputed Video autoEncoder (DIVE), a deep generative model that imputes and predicts future video frames in the presence of missing data. Specifically, DIVE introduces a missingness latent variable, disentangles the hidden video representations into static and dynamic appearance, pose, and missingness factors for each object, while it imputes each object trajectory where data is missing. On a moving MNIST dataset with various missing scenarios, DIVE outperforms the state-of-the-art baselines by a substantial margin. We also present comparisons on a real-world MOTSChallenge pedestrian dataset, which demonstrates the practical value of our method in a more realistic setting. Our code can be found at https://github.com/Rose-STL-Lab/DIVE. |
Natural Graph Networks | https://papers.nips.cc/paper_files/paper/2020/hash/2517756c5a9be6ac007fe9bb7fb92611-Abstract.html | Pim de Haan, Taco S. Cohen, Max Welling | https://papers.nips.cc/paper_files/paper/2020/hash/2517756c5a9be6ac007fe9bb7fb92611-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2517756c5a9be6ac007fe9bb7fb92611-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10031-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2517756c5a9be6ac007fe9bb7fb92611-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2517756c5a9be6ac007fe9bb7fb92611-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2517756c5a9be6ac007fe9bb7fb92611-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2517756c5a9be6ac007fe9bb7fb92611-Supplemental.pdf | A key requirement for graph neural networks is that they must process a graph in a way that does not depend on how the graph is described. Traditionally this has been taken to mean that a graph network must be equivariant to node permutations. Here we show that instead of equivariance, the more general concept of naturality is sufficient for a graph network to be well-defined, opening up a larger class of graph networks. We define global and local natural graph networks, the latter of which are as scalable as conventional message passing graph neural networks while being more flexible. We give one practical instantiation of a natural network on graphs which uses an equivariant message network parameterization, yielding good performance on several benchmarks. |
Continual Learning with Node-Importance based Adaptive Group Sparse Regularization | https://papers.nips.cc/paper_files/paper/2020/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html | Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon | https://papers.nips.cc/paper_files/paper/2020/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/258be18e31c8188555c2ff05b4d542c3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10032-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/258be18e31c8188555c2ff05b4d542c3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/258be18e31c8188555c2ff05b4d542c3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/258be18e31c8188555c2ff05b4d542c3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/258be18e31c8188555c2ff05b4d542c3-Supplemental.pdf | We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), using two group sparsity-based penalties. Our method selectively employs the two penalties when learning each neural network node based on its importance, which is adaptively updated after learning each task. By utilizing the proximal gradient descent method, the exact sparsity and freezing of the model are guaranteed during the learning process, and thus, the learner explicitly controls the model capacity. Furthermore, as a critical detail, we re-initialize the weights associated with unimportant nodes after learning each task in order to facilitate efficient learning and prevent negative transfer. Through extensive experimental results, we show that our AGS-CL uses orders of magnitude less memory space for storing the regularization parameters, and it significantly outperforms several state-of-the-art baselines on representative benchmarks for both supervised and reinforcement learning. |
Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts | https://papers.nips.cc/paper_files/paper/2020/hash/25ddc0f8c9d3e22e03d3076f98d83cb2-Abstract.html | Max Ryabinin, Anton Gusev | https://papers.nips.cc/paper_files/paper/2020/hash/25ddc0f8c9d3e22e03d3076f98d83cb2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10033-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/25ddc0f8c9d3e22e03d3076f98d83cb2-Supplemental.pdf | Many recent breakthroughs in deep learning were achieved by training increasingly larger models on massive datasets. However, training such models can be prohibitively expensive. For instance, the cluster used to train GPT-3 costs over $250 million. As a result, most researchers cannot afford to train state of the art models and contribute to their development. Hypothetically, a researcher could crowdsource the training of large neural networks with thousands of regular PCs provided by volunteers. The raw computing power of a hundred thousand $2500 desktops dwarfs that of a $250M server pod, but one cannot utilize that power efficiently with conventional distributed training methods. In this work, we propose Learning@home: a novel neural network training paradigm designed to handle large amounts of poorly connected participants. We analyze the performance, reliability, and architectural constraints of this paradigm and compare it against existing distributed training techniques. |
Bidirectional Convolutional Poisson Gamma Dynamical Systems | https://papers.nips.cc/paper_files/paper/2020/hash/26178fc759d2b89c45dd31962f81dc61-Abstract.html | wenchao chen, Chaojie Wang, Bo Chen, Yicheng Liu, Hao Zhang, Mingyuan Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/26178fc759d2b89c45dd31962f81dc61-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/26178fc759d2b89c45dd31962f81dc61-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10034-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/26178fc759d2b89c45dd31962f81dc61-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/26178fc759d2b89c45dd31962f81dc61-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/26178fc759d2b89c45dd31962f81dc61-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/26178fc759d2b89c45dd31962f81dc61-Supplemental.pdf | Incorporating the natural document-sentence-word structure into hierarchical Bayesian modeling, we propose convolutional Poisson gamma dynamical systems (PGDS) that introduce not only word-level probabilistic convolutions, but also sentence-level stochastic temporal transitions. With word-level convolutions capturing phrase-level topics and sentence-level transitions capturing how the topic usages evolve over consecutive sentences, we aggregate the topic proportions of all sentences of a document as its feature representation. To consider not only forward but also backward sentence-level information transmissions, we further develop a bidirectional convolutional PGDS to incorporate the full contextual information to represent each sentence. For efficient inference, we construct a convolutional-recurrent inference network, which provides both sentence-level and document-level representations, and introduce a hybrid Bayesian inference scheme combining stochastic-gradient MCMC and amortized variational inference. Experimental results on a variety of document corpora demonstrate that the proposed models can extract expressive multi-level latent representations, including interpretable phrase-level topics and sentence-level temporal transitions as well as discriminative document-level features, achieving state-of-the-art document categorization performance while being memory and computation efficient. |
Deep Reinforcement and InfoMax Learning | https://papers.nips.cc/paper_files/paper/2020/hash/26588e932c7ccfa1df309280702fe1b5-Abstract.html | Bogdan Mazoure, Remi Tachet des Combes, Thang Long Doan, Philip Bachman, R Devon Hjelm | https://papers.nips.cc/paper_files/paper/2020/hash/26588e932c7ccfa1df309280702fe1b5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/26588e932c7ccfa1df309280702fe1b5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10035-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/26588e932c7ccfa1df309280702fe1b5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/26588e932c7ccfa1df309280702fe1b5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/26588e932c7ccfa1df309280702fe1b5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/26588e932c7ccfa1df309280702fe1b5-Supplemental.zip | We posit that a reinforcement learning (RL) agent will perform better when it uses representations that are better at predicting the future, particularly in terms of few-shot learning and domain adaptation. To test that hypothesis, we introduce an objective based on Deep InfoMax (DIM) which trains the agent to predict the future by maximizing the mutual information between its internal representation of successive timesteps. We provide an intuitive analysis of the convergence properties of our approach from the perspective of Markov chain mixing times, and argue that convergence of the lower bound on mutual information is related to the inverse absolute spectral gap of the transition model. We test our approach in several synthetic settings, where it successfully learns representations that are predictive of the future. Finally, we augment C51, a strong distributional RL agent, with our temporal DIM objective and demonstrate on a continual learning task (inspired by Ms.~PacMan) and on the recently introduced Procgen environment that our approach improves performance, which supports our core hypothesis. |
On ranking via sorting by estimated expected utility | https://papers.nips.cc/paper_files/paper/2020/hash/26b58a41da329e0cbde0cbf956640a58-Abstract.html | Clement Calauzenes, Nicolas Usunier | https://papers.nips.cc/paper_files/paper/2020/hash/26b58a41da329e0cbde0cbf956640a58-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/26b58a41da329e0cbde0cbf956640a58-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10036-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/26b58a41da329e0cbde0cbf956640a58-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/26b58a41da329e0cbde0cbf956640a58-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/26b58a41da329e0cbde0cbf956640a58-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/26b58a41da329e0cbde0cbf956640a58-Supplemental.pdf | Ranking and selection tasks appear in different contexts with specific desiderata, such as the maximization of average relevance on the top of the list, the requirement of diverse rankings, or, relatedly, the focus on providing at least one relevant item to as many users as possible. This paper addresses the question of which of these tasks are asymptotically solved by sorting by decreasing order of expected utility, for some suitable notion of utility, or, equivalently, \emph{when is square loss regression consistent for ranking \emph{via} score-and-sort?}. We provide an answer to this question in the form of a structural characterization of ranking losses for which a suitable regression is consistent. This result has two fundamental corollaries. First, whenever there exists a consistent approach based on convex risk minimization, there also is a consistent approach based on regression. Second, when regression is not consistent, there are data distributions for which consistent surrogate approaches necessarily have non-trivial local minima, and optimal scoring functions are necessarily discontinuous, even when the underlying data distribution is regular. In addition to providing a better understanding of surrogate approaches for ranking, these results illustrate the intrinsic difficulty of solving general ranking problems with the score-and-sort approach. |
Distribution-free binary classification: prediction sets, confidence intervals and calibration | https://papers.nips.cc/paper_files/paper/2020/hash/26d88423fc6da243ffddf161ca712757-Abstract.html | Chirag Gupta, Aleksandr Podkopaev, Aaditya Ramdas | https://papers.nips.cc/paper_files/paper/2020/hash/26d88423fc6da243ffddf161ca712757-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/26d88423fc6da243ffddf161ca712757-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10037-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/26d88423fc6da243ffddf161ca712757-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/26d88423fc6da243ffddf161ca712757-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/26d88423fc6da243ffddf161ca712757-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/26d88423fc6da243ffddf161ca712757-Supplemental.zip | We study three notions of uncertainty quantification---calibration, confidence intervals and prediction sets---for binary classification in the distribution-free setting, that is without making any distributional assumptions on the data. With a focus towards calibration, we establish a 'tripod' of theorems that connect these three notions for score-based classifiers. A direct implication is that distribution-free calibration is only possible, even asymptotically, using a scoring function whose level sets partition the feature space into at most countably many sets. Parametric calibration schemes such as variants of Platt scaling do not satisfy this requirement, while nonparametric schemes based on binning do. To close the loop, we derive distribution-free confidence intervals for binned probabilities for both fixed-width and uniform-mass binning. As a consequence of our 'tripod' theorems, these confidence intervals for binned probabilities lead to distribution-free calibration. We also derive extensions to settings with streaming data and covariate shift. |
Closing the Dequantization Gap: PixelCNN as a Single-Layer Flow | https://papers.nips.cc/paper_files/paper/2020/hash/26ed695e9b7b9f6463ef4bc1fd74fc87-Abstract.html | Didrik Nielsen, Ole Winther | https://papers.nips.cc/paper_files/paper/2020/hash/26ed695e9b7b9f6463ef4bc1fd74fc87-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/26ed695e9b7b9f6463ef4bc1fd74fc87-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10038-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/26ed695e9b7b9f6463ef4bc1fd74fc87-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/26ed695e9b7b9f6463ef4bc1fd74fc87-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/26ed695e9b7b9f6463ef4bc1fd74fc87-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/26ed695e9b7b9f6463ef4bc1fd74fc87-Supplemental.pdf | Flow models have recently made great progress at modeling ordinal discrete data such as images and audio. Due to the continuous nature of flow models, dequantization is typically applied when using them for such discrete data, resulting in lower bound estimates of the likelihood. In this paper, we introduce subset flows, a class of flows that can tractably transform finite volumes and thus allow exact computation of likelihoods for discrete data. Based on subset flows, we identify ordinal discrete autoregressive models, including WaveNets, PixelCNNs and Transformers, as single-layer flows. We use the flow formulation to compare models trained and evaluated with either the exact likelihood or its dequantization lower bound. Finally, we study multilayer flows composed of PixelCNNs and non-autoregressive coupling layers and demonstrate state-of-the-art results on CIFAR-10 for flow models trained with dequantization. |
Sequence to Multi-Sequence Learning via Conditional Chain Mapping for Mixture Signals | https://papers.nips.cc/paper_files/paper/2020/hash/27059a11c58ade9b03bde05c2ca7c285-Abstract.html | Jing Shi, Xuankai Chang, Pengcheng Guo, Shinji Watanabe, Yusuke Fujita, Jiaming Xu, Bo Xu, Lei Xie | https://papers.nips.cc/paper_files/paper/2020/hash/27059a11c58ade9b03bde05c2ca7c285-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/27059a11c58ade9b03bde05c2ca7c285-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10039-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/27059a11c58ade9b03bde05c2ca7c285-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/27059a11c58ade9b03bde05c2ca7c285-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/27059a11c58ade9b03bde05c2ca7c285-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/27059a11c58ade9b03bde05c2ca7c285-Supplemental.zip | Neural sequence-to-sequence models are well established for applications which can be cast as mapping a single input sequence into a single output sequence. In this work, we focus on one-to-many sequence transduction problems, such as extracting multiple sequential sources from a mixture sequence. We extend the standard sequence-to-sequence model to a conditional multi-sequence model, which explicitly models the relevance between multiple output sequences with the probabilistic chain rule. Based on this extension, our model can conditionally infer output sequences one-by-one by making use of both input and previously-estimated contextual output sequences. This model additionally has a simple and efficient stop criterion for the end of the transduction, making it able to infer the variable number of output sequences. We take speech data as a primary test field to evaluate our methods since the observed speech data is often composed of multiple sources due to the nature of the superposition principle of sound waves. Experiments on several different tasks including speech separation and multi-speaker speech recognition show that our conditional multi-sequence models lead to consistent improvements over the conventional non-conditional models. |
Variance reduction for Random Coordinate Descent-Langevin Monte Carlo | https://papers.nips.cc/paper_files/paper/2020/hash/272e11700558e27be60f7489d2d782e7-Abstract.html | ZHIYAN DING, Qin Li | https://papers.nips.cc/paper_files/paper/2020/hash/272e11700558e27be60f7489d2d782e7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/272e11700558e27be60f7489d2d782e7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10040-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/272e11700558e27be60f7489d2d782e7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/272e11700558e27be60f7489d2d782e7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/272e11700558e27be60f7489d2d782e7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/272e11700558e27be60f7489d2d782e7-Supplemental.zip | We then introduce a new variance reduction approach, termed Randomized Coordinates Averaging Descent (RCAD), and incorporate it with both overdamped and underdamped LMC. The methods are termed RCAD-O-LMC and RCAD-U-LMC respectively. The methods still sit in the random gradient approximation framework, and thus the computational cost in each iteration is low. However, by employing RCAD, the variance is reduced, so the methods converge within the same number of iterations as the classical overdamped and underdamped LMC. This leads to a computational saving overall. |
Language as a Cognitive Tool to Imagine Goals in Curiosity Driven Exploration | https://papers.nips.cc/paper_files/paper/2020/hash/274e6fcf4a583de4a81c6376f17673e7-Abstract.html | Cédric Colas, Tristan Karch, Nicolas Lair, Jean-Michel Dussoux, Clément Moulin-Frier, Peter Dominey, Pierre-Yves Oudeyer | https://papers.nips.cc/paper_files/paper/2020/hash/274e6fcf4a583de4a81c6376f17673e7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/274e6fcf4a583de4a81c6376f17673e7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10041-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/274e6fcf4a583de4a81c6376f17673e7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/274e6fcf4a583de4a81c6376f17673e7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/274e6fcf4a583de4a81c6376f17673e7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/274e6fcf4a583de4a81c6376f17673e7-Supplemental.pdf | Developmental machine learning studies how artificial agents can model the way children learn open-ended repertoires of skills. Such agents need to create and represent goals, select which ones to pursue and learn to achieve them. Recent approaches have considered goal spaces that were either fixed and hand-defined or learned using generative models of states. This limited agents to sample goals within the distribution of known effects. We argue that the ability to imagine out-of-distribution goals is key to enable creative discoveries and open-ended learning. Children do so by leveraging the compositionality of language as a tool to imagine descriptions of outcomes they never experienced before, targeting them as goals during play. We introduce IMAGINE, an intrinsically motivated deep reinforcement learning architecture that models this ability. Such imaginative agents, like children, benefit from the guidance of a social peer who provides language descriptions. To take advantage of goal imagination, agents must be able to leverage these descriptions to interpret their imagined out-of-distribution goals. This generalization is made possible by modularity: a decomposition between learned goal-achievement reward function and policy relying on deep sets, gated attention and object-centered representations. We introduce the Playground environment and study how this form of goal imagination improves generalization and exploration over agents lacking this capacity. In addition, we identify the properties of goal imagination that enable these results and study the impacts of modularity and social interactions. |
All Word Embeddings from One Embedding | https://papers.nips.cc/paper_files/paper/2020/hash/275d7fb2fd45098ad5c3ece2ed4a2824-Abstract.html | Sho Takase, Sosuke Kobayashi | https://papers.nips.cc/paper_files/paper/2020/hash/275d7fb2fd45098ad5c3ece2ed4a2824-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/275d7fb2fd45098ad5c3ece2ed4a2824-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10042-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/275d7fb2fd45098ad5c3ece2ed4a2824-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/275d7fb2fd45098ad5c3ece2ed4a2824-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/275d7fb2fd45098ad5c3ece2ed4a2824-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/275d7fb2fd45098ad5c3ece2ed4a2824-Supplemental.zip | In neural network-based models for natural language processing (NLP), the largest part of the parameters often consists of word embeddings. Conventional models prepare a large embedding matrix whose size depends on the vocabulary size. Therefore, storing these models in memory and disk storage is costly. In this study, to reduce the total number of parameters, the embeddings for all words are represented by transforming a shared embedding. The proposed method, ALONE (all word embeddings from one), constructs the embedding of a word by modifying the shared embedding with a filter vector, which is word-specific but non-trainable. Then, we input the constructed embedding into a feed-forward neural network to increase its expressiveness. Naively, the filter vectors occupy the same memory size as the conventional embedding matrix, which depends on the vocabulary size. To solve this issue, we also introduce a memory-efficient filter construction approach. We show that ALONE can be used as a sufficient word representation through an experiment on the reconstruction of pre-trained word embeddings. In addition, we also conduct experiments on NLP application tasks: machine translation and summarization. We combined ALONE with the current state-of-the-art encoder-decoder model, the Transformer [36], and achieved comparable scores on WMT 2014 English-to-German translation and DUC 2004 very short summarization with fewer parameters. |
Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm | https://papers.nips.cc/paper_files/paper/2020/hash/2779fda014fbadb761f67dd708c1325e-Abstract.html | Adil Salim, Peter Richtarik | https://papers.nips.cc/paper_files/paper/2020/hash/2779fda014fbadb761f67dd708c1325e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2779fda014fbadb761f67dd708c1325e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10043-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2779fda014fbadb761f67dd708c1325e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2779fda014fbadb761f67dd708c1325e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2779fda014fbadb761f67dd708c1325e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2779fda014fbadb761f67dd708c1325e-Supplemental.pdf | We consider the task of sampling with respect to a log concave probability distribution. The potential of the target distribution is assumed to be composite, i.e., written as the sum of a smooth convex term, and a nonsmooth convex term possibly taking infinite values. The target distribution can be seen as a minimizer of the Kullback-Leibler divergence defined on the Wasserstein space (i.e., the space of probability measures). In the first part of this paper, we establish a strong duality result for this minimization problem. In the second part of this paper, we use the duality gap arising from the first part to study the complexity of the Proximal Stochastic Gradient Langevin Algorithm (PSGLA), which can be seen as a generalization of the Projected Langevin Algorithm. Our approach relies on viewing PSGLA as a primal dual algorithm and covers many cases where the target distribution is not fully supported. In particular, we show that if the potential is strongly convex, the complexity of PSGLA is $\mathcal{O}(1/\varepsilon^2)$ in terms of the 2-Wasserstein distance. In contrast, the complexity of the Projected Langevin Algorithm is $\mathcal{O}(1/\varepsilon^{12})$ in terms of total variation when the potential is convex. |
How to Characterize The Landscape of Overparameterized Convolutional Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/2794f6a20ee0685f4006210f40799acd-Abstract.html | Yihong Gu, Weizhong Zhang, Cong Fang, Jason D. Lee, Tong Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/2794f6a20ee0685f4006210f40799acd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2794f6a20ee0685f4006210f40799acd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10044-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2794f6a20ee0685f4006210f40799acd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2794f6a20ee0685f4006210f40799acd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2794f6a20ee0685f4006210f40799acd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2794f6a20ee0685f4006210f40799acd-Supplemental.zip | For many initialization schemes, parameters of two randomly initialized deep neural networks (DNNs) can be quite different, but feature distributions of the hidden nodes are similar at each layer. With the help of a new technique called {\it neural network grafting}, we demonstrate that even during the entire training process, feature distributions of differently initialized networks remain similar at each layer. In this paper, we present an explanation of this phenomenon. Specifically, we consider the loss landscape of an overparameterized convolutional neural network (CNN) in the continuous limit, where the numbers of channels/hidden nodes in the hidden layers go to infinity. Although the landscape of the overparameterized CNN is still non-convex with respect to the trainable parameters, we show that very surprisingly, it can be reformulated as a convex function with respect to the feature distributions in the hidden layers. Therefore by reparameterizing neural networks in terms of feature distributions, we obtain a much simpler characterization of the landscape of overparameterized CNNs. We further argue that training with respect to network parameters leads to a fixed trajectory in the feature distributions. |
On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples | https://papers.nips.cc/paper_files/paper/2020/hash/27b587bbe83aecf9a98c8fe6ab48cacc-Abstract.html | Richard Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/27b587bbe83aecf9a98c8fe6ab48cacc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/27b587bbe83aecf9a98c8fe6ab48cacc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10045-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/27b587bbe83aecf9a98c8fe6ab48cacc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/27b587bbe83aecf9a98c8fe6ab48cacc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/27b587bbe83aecf9a98c8fe6ab48cacc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/27b587bbe83aecf9a98c8fe6ab48cacc-Supplemental.pdf | The robustness of a neural network to adversarial examples can be provably certified by solving a convex relaxation. If the relaxation is loose, however, then the resulting certificate can be too conservative to be practically useful. Recently, a less conservative robustness certificate was proposed, based on a semidefinite programming (SDP) relaxation of the ReLU activation function. In this paper, we describe a geometric technique that determines whether this SDP certificate is exact, meaning whether it provides both a lower-bound on the size of the smallest adversarial perturbation, as well as a globally optimal perturbation that attains the lower-bound. Concretely, we show, for a least-squares restriction of the usual adversarial attack problem, that the SDP relaxation amounts to the nonconvex projection of a point onto a hyperbola. The resulting SDP certificate is exact if and only if the projection of the point lies on the major axis of the hyperbola. Using this geometric technique, we prove that the certificate is exact over a single hidden layer under mild assumptions, and explain why it is usually conservative for several hidden layers. We experimentally confirm our theoretical insights using a general-purpose interior-point method and a custom rank-2 Burer-Monteiro algorithm. |
Submodular Meta-Learning | https://papers.nips.cc/paper_files/paper/2020/hash/27d8d40b22f812a1ba6c26f8ef7df480-Abstract.html | Arman Adibi, Aryan Mokhtari, Hamed Hassani | https://papers.nips.cc/paper_files/paper/2020/hash/27d8d40b22f812a1ba6c26f8ef7df480-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/27d8d40b22f812a1ba6c26f8ef7df480-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10046-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/27d8d40b22f812a1ba6c26f8ef7df480-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/27d8d40b22f812a1ba6c26f8ef7df480-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/27d8d40b22f812a1ba6c26f8ef7df480-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/27d8d40b22f812a1ba6c26f8ef7df480-Supplemental.zip | In this paper, we introduce a discrete variant of the Meta-learning framework. Meta-learning aims at exploiting prior experience and data to improve performance on future tasks. By now, there exist numerous formulations for Meta-learning in the continuous domain. Notably, the Model-Agnostic Meta-Learning (MAML) formulation views each task as a continuous optimization problem and based on prior data learns a suitable initialization that can be adapted to new, unseen tasks after a few simple gradient updates. Motivated by this terminology, we propose a novel Meta-learning framework in the discrete domain where each task is equivalent to maximizing a set function under a cardinality constraint. Our approach aims at using prior data, i.e., previously visited tasks, to train a proper initial solution set that can be quickly adapted to a new task at a relatively low computational cost. This approach leads to (i) a personalized solution for each task, and (ii) significantly reduced computational cost at test time compared to the case where the solution is fully optimized once the new task is revealed. The training procedure is performed by solving a challenging discrete optimization problem for which we present deterministic and randomized algorithms. In the case where the tasks are monotone and submodular, we show strong theoretical guarantees for our proposed methods even though the training objective may not be submodular. We also demonstrate the effectiveness of our framework on two real-world problem instances where we observe that our methods lead to a significant reduction in computational complexity in solving the new tasks while incurring a small performance loss compared to when the tasks are fully optimized. |
Rethinking Pre-training and Self-training | https://papers.nips.cc/paper_files/paper/2020/hash/27e9661e033a73a6ad8cefcde965c54d-Abstract.html | Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin Dogus Cubuk, Quoc Le | https://papers.nips.cc/paper_files/paper/2020/hash/27e9661e033a73a6ad8cefcde965c54d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/27e9661e033a73a6ad8cefcde965c54d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10047-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/27e9661e033a73a6ad8cefcde965c54d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/27e9661e033a73a6ad8cefcde965c54d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/27e9661e033a73a6ad8cefcde965c54d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/27e9661e033a73a6ad8cefcde965c54d-Supplemental.pdf | Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet pre-training is commonly used to initialize the backbones of object detection and segmentation models. He et al., however, show a striking result that ImageNet pre-training has limited impact on COCO object detection. Here we investigate self-training as another method to utilize additional data on the same setup and contrast it against ImageNet pre-training. Our study reveals the generality and flexibility of self-training with three additional insights: 1) stronger data augmentation and more labeled data further diminish the value of pre-training, 2) unlike pre-training, self-training is always helpful when using stronger data augmentation, in both low-data and high-data regimes, and 3) in the case that pre-training is helpful, self-training improves upon pre-training. For example, on the COCO object detection dataset, pre-training benefits when we use one fifth of the labeled data, and hurts accuracy when we use all labeled data. Self-training, on the other hand, shows positive improvements from +1.3 to +3.4AP across all dataset sizes. In other words, self-training works well exactly on the same setup that pre-training does not work (using ImageNet to help COCO). On the PASCAL segmentation dataset, which is a much smaller dataset than COCO, though pre-training does help significantly, self-training improves upon the pre-trained model. On COCO object detection, we achieve 53.8AP, an improvement of +1.7AP over the strongest SpineNet model. On PASCAL segmentation, we achieve 90.5mIOU, an improvement of +1.5mIOU over the previous state-of-the-art result by DeepLabv3+. |
Unsupervised Sound Separation Using Mixture Invariant Training | https://papers.nips.cc/paper_files/paper/2020/hash/28538c394c36e4d5ea8ff5ad60562a93-Abstract.html | Scott Wisdom, Efthymios Tzinis, Hakan Erdogan, Ron Weiss, Kevin Wilson, John Hershey | https://papers.nips.cc/paper_files/paper/2020/hash/28538c394c36e4d5ea8ff5ad60562a93-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/28538c394c36e4d5ea8ff5ad60562a93-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10048-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/28538c394c36e4d5ea8ff5ad60562a93-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/28538c394c36e4d5ea8ff5ad60562a93-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/28538c394c36e4d5ea8ff5ad60562a93-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/28538c394c36e4d5ea8ff5ad60562a93-Supplemental.pdf | In recent years, rapid progress has been made on the problem of single-channel sound separation using supervised training of deep neural networks. In such supervised approaches, a model is trained to predict the component sources from synthetic mixtures created by adding up isolated ground-truth sources. Reliance on this synthetic training data is problematic because good performance depends upon the degree of match between the training data and real-world audio, especially in terms of the acoustic conditions and distribution of sources. The acoustic properties can be challenging to accurately simulate, and the distribution of sound types may be hard to replicate. In this paper, we propose a completely unsupervised method, mixture invariant training (MixIT), that requires only single-channel acoustic mixtures. In MixIT, training examples are constructed by mixing together existing mixtures, and the model separates them into a variable number of latent sources, such that the separated sources can be remixed to approximate the original mixtures. We show that MixIT can achieve competitive performance compared to supervised methods on speech separation. Using MixIT in a semi-supervised learning setting enables unsupervised domain adaptation and learning from large amounts of real-world data without ground-truth source waveforms. In particular, we significantly improve reverberant speech separation performance by incorporating reverberant mixtures, train a speech enhancement system from noisy mixtures, and improve universal sound separation by incorporating a large amount of in-the-wild data. |
Adaptive Discretization for Model-Based Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/285baacbdf8fda1de94b19282acd23e2-Abstract.html | Sean Sinclair, Tianyu Wang, Gauri Jain, Siddhartha Banerjee, Christina Yu | https://papers.nips.cc/paper_files/paper/2020/hash/285baacbdf8fda1de94b19282acd23e2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/285baacbdf8fda1de94b19282acd23e2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10049-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/285baacbdf8fda1de94b19282acd23e2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/285baacbdf8fda1de94b19282acd23e2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/285baacbdf8fda1de94b19282acd23e2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/285baacbdf8fda1de94b19282acd23e2-Supplemental.pdf | From an implementation standpoint, our algorithm has much lower storage and computational requirements due to maintaining a more efficient partition of the state and action spaces. We illustrate this via experiments on several canonical control problems, which shows that our algorithm empirically performs significantly better than fixed discretization in terms of both faster convergence and lower memory usage. Interestingly, we observe empirically that while fixed discretization model-based algorithms vastly outperform their model-free counterparts, the two achieve comparable performance with adaptive discretization. |
CodeCMR: Cross-Modal Retrieval For Function-Level Binary Source Code Matching | https://papers.nips.cc/paper_files/paper/2020/hash/285f89b802bcb2651801455c86d78f2a-Abstract.html | Zeping Yu, Wenxin Zheng, Jiaqi Wang, Qiyi Tang, Sen Nie, Shi Wu | https://papers.nips.cc/paper_files/paper/2020/hash/285f89b802bcb2651801455c86d78f2a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/285f89b802bcb2651801455c86d78f2a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10050-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/285f89b802bcb2651801455c86d78f2a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/285f89b802bcb2651801455c86d78f2a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/285f89b802bcb2651801455c86d78f2a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/285f89b802bcb2651801455c86d78f2a-Supplemental.pdf | Binary source code matching, especially at the function level, has a critical role in the field of computer security. Given binary code only, finding the corresponding source code improves the accuracy and efficiency in reverse engineering. Given source code only, related binary code retrieval contributes to known vulnerabilities confirmation. However, due to the vast difference between source and binary code, few studies have investigated binary source code matching. Previously published studies focus on code literals extraction such as strings and integers, then utilize traditional matching algorithms such as the Hungarian algorithm for code matching. Nevertheless, these methods have limitations at the function level, because they ignore the potential semantic features of code and a lot of code lacks sufficient code literals. Also, these methods indicate a need for expert experience for useful feature identification and feature engineering, which is time-consuming. This paper proposes an end-to-end cross-modal retrieval network for binary source code matching, which achieves higher accuracy and requires less expert experience. We adopt Deep Pyramid Convolutional Neural Network (DPCNN) for source code feature extraction and Graph Neural Network (GNN) for binary code feature extraction. We also exploit neural network-based models to capture code literals, including strings and integers. Furthermore, we implement "norm weighted sampling" for negative sampling. We evaluate our model on two datasets, where it outperforms other methods significantly. |
On Warm-Starting Neural Network Training | https://papers.nips.cc/paper_files/paper/2020/hash/288cd2567953f06e460a33951f55daaf-Abstract.html | Jordan Ash, Ryan P. Adams | https://papers.nips.cc/paper_files/paper/2020/hash/288cd2567953f06e460a33951f55daaf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/288cd2567953f06e460a33951f55daaf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10051-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/288cd2567953f06e460a33951f55daaf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/288cd2567953f06e460a33951f55daaf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/288cd2567953f06e460a33951f55daaf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/288cd2567953f06e460a33951f55daaf-Supplemental.pdf | In many real-world deployments of machine learning systems, data arrive piecemeal. These learning scenarios may be passive, where data arrive incrementally due to structural properties of the problem (e.g., daily financial data) or active, where samples are selected according to a measure of their quality (e.g., experimental design). In both of these cases, we are building a sequence of models that incorporate an increasing amount of data. We would like each of these models in the sequence to be performant and take advantage of all the data that are available to that point. Conventional intuition suggests that when solving a sequence of related optimization problems of this form, it should be possible to initialize using the solution of the previous iterate---to ``warm start'' the optimization rather than initialize from scratch---and see reductions in wall-clock time. However, in practice this warm-starting seems to yield poorer generalization performance than models that have fresh random initializations, even though the final training losses are similar. While it appears that some hyperparameter settings allow a practitioner to close this generalization gap, they seem to only do so in regimes that damage the wall-clock gains of the warm start. Nevertheless, it is highly desirable to be able to warm-start neural network training, as it would dramatically reduce the resource usage associated with the construction of performant deep learning systems. In this work, we take a closer look at this empirical phenomenon and try to understand when and how it occurs. We also provide a surprisingly simple trick that overcomes this pathology in several important situations, and present experiments that elucidate some of its properties. |
DAGs with No Fears: A Closer Look at Continuous Optimization for Learning Bayesian Networks | https://papers.nips.cc/paper_files/paper/2020/hash/28a7602724ba16600d5ccc644c19bf18-Abstract.html | Dennis Wei, Tian Gao, Yue Yu | https://papers.nips.cc/paper_files/paper/2020/hash/28a7602724ba16600d5ccc644c19bf18-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/28a7602724ba16600d5ccc644c19bf18-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10052-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/28a7602724ba16600d5ccc644c19bf18-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/28a7602724ba16600d5ccc644c19bf18-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/28a7602724ba16600d5ccc644c19bf18-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/28a7602724ba16600d5ccc644c19bf18-Supplemental.pdf | This paper re-examines a continuous optimization framework dubbed NOTEARS for learning Bayesian networks. We first generalize existing algebraic characterizations of acyclicity to a class of matrix polynomials. Next, focusing on a one-parameter-per-edge setting, it is shown that the Karush-Kuhn-Tucker (KKT) optimality conditions for the NOTEARS formulation cannot be satisfied except in a trivial case, which explains a behavior of the associated algorithm. We then derive the KKT conditions for an equivalent reformulation, show that they are indeed necessary, and relate them to explicit constraints that certain edges be absent from the graph. If the score function is convex, these KKT conditions are also sufficient for local minimality despite the non-convexity of the constraint. Informed by the KKT conditions, a local search post-processing algorithm is proposed and shown to substantially and universally improve the structural Hamming distance of all tested algorithms, typically by a factor of 2 or more. Some combinations with local search are both more accurate and more efficient than the original NOTEARS. |
OOD-MAML: Meta-Learning for Few-Shot Out-of-Distribution Detection and Classification | https://papers.nips.cc/paper_files/paper/2020/hash/28e209b61a52482a0ae1cb9f5959c792-Abstract.html | Taewon Jeong, Heeyoung Kim | https://papers.nips.cc/paper_files/paper/2020/hash/28e209b61a52482a0ae1cb9f5959c792-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/28e209b61a52482a0ae1cb9f5959c792-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10053-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/28e209b61a52482a0ae1cb9f5959c792-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/28e209b61a52482a0ae1cb9f5959c792-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/28e209b61a52482a0ae1cb9f5959c792-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/28e209b61a52482a0ae1cb9f5959c792-Supplemental.pdf | We propose a few-shot learning method for detecting out-of-distribution (OOD) samples from classes that are unseen during training while classifying samples from seen classes using only a few labeled examples. For detecting unseen classes while generalizing to new samples of known classes, we synthesize fake samples, i.e., OOD samples that nonetheless resemble in-distribution samples, and use them along with real samples. Our approach is based on an extension of model-agnostic meta learning (MAML) and is denoted as OOD-MAML, which not only learns a model initialization but also the initial fake samples across tasks. The learned initial fake samples can be used to quickly adapt to new tasks to form task-specific fake samples with only one or a few gradient update steps using MAML. For testing, OOD-MAML converts a K-shot N-way classification task into N sub-tasks of K-shot OOD detection with respect to each class. The joint analysis of N sub-tasks facilitates simultaneous classification and OOD detection and, furthermore, offers an advantage, in that it does not require re-training when the number of classes for a test task differs from that for training tasks; it is sufficient to simply assume as many sub-tasks as the number of classes for the test task. We also demonstrate the effective performance of OOD-MAML over benchmark datasets. |
An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch | https://papers.nips.cc/paper_files/paper/2020/hash/28f248e9279ac845995c4e9f8af35c2b-Abstract.html | Siddharth Desai, Ishan Durugkar, Haresh Karnan, Garrett Warnell, Josiah Hanna, Peter Stone | https://papers.nips.cc/paper_files/paper/2020/hash/28f248e9279ac845995c4e9f8af35c2b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/28f248e9279ac845995c4e9f8af35c2b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10054-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/28f248e9279ac845995c4e9f8af35c2b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/28f248e9279ac845995c4e9f8af35c2b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/28f248e9279ac845995c4e9f8af35c2b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/28f248e9279ac845995c4e9f8af35c2b-Supplemental.zip | We examine the problem of transferring a policy learned in a source environment to a target environment with different dynamics, particularly in the case where it is critical to reduce the amount of interaction with the target environment during learning. This problem is particularly important in sim-to-real transfer because simulators inevitably model real-world dynamics imperfectly. In this paper, we show that one existing solution to this transfer problem -- grounded action transformation -- is closely related to the problem of imitation from observation (IfO): learning behaviors that mimic the observations of behavior demonstrations. After establishing this relationship, we hypothesize that recent state-of-the-art approaches from the IfO literature can be effectively repurposed for grounded transfer learning. To validate our hypothesis we derive a new algorithm -- generative adversarial reinforced action transformation (GARAT) -- based on adversarial imitation from observation techniques. We run experiments in several domains with mismatched dynamics, and find that agents trained with GARAT achieve higher returns in the target environment compared to existing black-box transfer methods. |
Learning About Objects by Learning to Interact with Them | https://papers.nips.cc/paper_files/paper/2020/hash/291597a100aadd814d197af4f4bab3a7-Abstract.html | Martin Lohmann, Jordi Salvador, Aniruddha Kembhavi, Roozbeh Mottaghi | https://papers.nips.cc/paper_files/paper/2020/hash/291597a100aadd814d197af4f4bab3a7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/291597a100aadd814d197af4f4bab3a7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10055-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/291597a100aadd814d197af4f4bab3a7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/291597a100aadd814d197af4f4bab3a7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/291597a100aadd814d197af4f4bab3a7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/291597a100aadd814d197af4f4bab3a7-Supplemental.pdf | Much of the remarkable progress in computer vision has been focused around fully supervised learning mechanisms relying on highly curated datasets for a variety of tasks. In contrast, humans often learn about their world with little to no external supervision. Taking inspiration from infants learning from their environment through play and interaction, we present a computational framework to discover objects and learn their physical properties along this paradigm of Learning from Interaction. Our agent, when placed within the near photo-realistic and physics-enabled AI2-THOR environment, interacts with its world and learns about objects, their geometric extents and relative masses, without any external guidance. Our experiments reveal that this agent learns efficiently and effectively; not just for objects it has interacted with before, but also for novel instances from seen categories as well as novel object categories. |
Learning discrete distributions with infinite support | https://papers.nips.cc/paper_files/paper/2020/hash/291dbc18539ba7e19b8abb7d85aa204e-Abstract.html | Doron Cohen, Aryeh Kontorovich, Geoffrey Wolfer | https://papers.nips.cc/paper_files/paper/2020/hash/291dbc18539ba7e19b8abb7d85aa204e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/291dbc18539ba7e19b8abb7d85aa204e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10056-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/291dbc18539ba7e19b8abb7d85aa204e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/291dbc18539ba7e19b8abb7d85aa204e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/291dbc18539ba7e19b8abb7d85aa204e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/291dbc18539ba7e19b8abb7d85aa204e-Supplemental.pdf | We present a novel approach to estimating discrete distributions with (potentially) infinite support in the total variation metric. In a departure from the established paradigm, we make no structural assumptions whatsoever on the sampling distribution. In such a setting, distribution-free risk bounds are impossible, and the best one could hope for is a fully empirical data-dependent bound. We derive precisely such bounds, and demonstrate that these are, in a well-defined sense, the best possible. Our main discovery is that the half-norm of the empirical distribution provides tight upper and lower estimates on the empirical risk. Furthermore, this quantity decays at a nearly optimal rate as a function of the true distribution. The optimality follows from a minimax result, of possible independent interest. Additional structural results are provided, including an exact Rademacher complexity calculation and apparently a first connection between the total variation risk and the missing mass. |
Dissecting Neural ODEs | https://papers.nips.cc/paper_files/paper/2020/hash/293835c2cc75b585649498ee74b395f5-Abstract.html | Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama | https://papers.nips.cc/paper_files/paper/2020/hash/293835c2cc75b585649498ee74b395f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/293835c2cc75b585649498ee74b395f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10057-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/293835c2cc75b585649498ee74b395f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/293835c2cc75b585649498ee74b395f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/293835c2cc75b585649498ee74b395f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/293835c2cc75b585649498ee74b395f5-Supplemental.zip | Continuous deep learning architectures have recently re-emerged as Neural Ordinary Differential Equations (Neural ODEs). This infinite-depth approach theoretically bridges the gap between deep learning and dynamical systems, offering a novel perspective. However, deciphering the inner working of these models is still an open challenge, as most applications apply them as generic black-box modules. In this work we ``open the box'', further developing the continuous-depth formulation with the aim of clarifying the influence of several design choices on the underlying dynamics. |
Teaching a GAN What Not to Learn | https://papers.nips.cc/paper_files/paper/2020/hash/29405e2a4c22866a205f557559c7fa4b-Abstract.html | Siddarth Asokan, Chandra Seelamantula | https://papers.nips.cc/paper_files/paper/2020/hash/29405e2a4c22866a205f557559c7fa4b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/29405e2a4c22866a205f557559c7fa4b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10058-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/29405e2a4c22866a205f557559c7fa4b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/29405e2a4c22866a205f557559c7fa4b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/29405e2a4c22866a205f557559c7fa4b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/29405e2a4c22866a205f557559c7fa4b-Supplemental.pdf | Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution. Variants such as conditional GANs, auxiliary-classifier GANs (ACGANs) project GANs on to supervised and semi-supervised learning frameworks by providing labelled data and using multi-class discriminators. In this paper, we approach the supervised GAN problem from a different perspective, one that is motivated by the philosophy of the famous Persian poet Rumi who said, "The art of knowing is knowing what to ignore." In the GAN framework, we not only provide the GAN positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid — we call this "The Rumi Framework." This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize generated samples that are undesirable — we show that this capability accelerates the learning process of the generator. We present a reformulation of the standard GAN (SGAN) and least-squares GAN (LSGAN) within the Rumi setting. The advantage of the reformulation is demonstrated by means of experiments conducted on MNIST, Fashion MNIST, CelebA, and CIFAR-10 datasets. Finally, we consider an application of the proposed formulation to address the important problem of learning an under-represented class in an unbalanced dataset. The Rumi approach results in substantially lower FID scores than the standard GAN frameworks while possessing better generalization capability. |
Counterfactual Data Augmentation using Locally Factored Dynamics | https://papers.nips.cc/paper_files/paper/2020/hash/294e09f267683c7ddc6cc5134a7e68a8-Abstract.html | Silviu Pitis, Elliot Creager, Animesh Garg | https://papers.nips.cc/paper_files/paper/2020/hash/294e09f267683c7ddc6cc5134a7e68a8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/294e09f267683c7ddc6cc5134a7e68a8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10059-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/294e09f267683c7ddc6cc5134a7e68a8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/294e09f267683c7ddc6cc5134a7e68a8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/294e09f267683c7ddc6cc5134a7e68a8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/294e09f267683c7ddc6cc5134a7e68a8-Supplemental.zip | Many dynamic processes, including common scenarios in robotic control and reinforcement learning (RL), involve a set of interacting subprocesses. Though the subprocesses are not independent, their interactions are often sparse, and the dynamics at any given time step can often be decomposed into locally independent causal mechanisms. Such local causal structures can be leveraged to improve the sample efficiency of sequence prediction and off-policy reinforcement learning. We formalize this by introducing local causal models (LCMs), which are induced from a global causal model by conditioning on a subset of the state space. We propose an approach to inferring these structures given an object-oriented state representation, as well as a novel algorithm for Counterfactual Data Augmentation (CoDA). CoDA uses local structures and an experience replay to generate counterfactual experiences that are causally valid in the global model. We find that CoDA significantly improves the performance of RL agents in locally factored tasks, including the batch-constrained and goal-conditioned settings. Code available at https://github.com/spitis/mrl. |
Rethinking Learnable Tree Filter for Generic Feature Transform | https://papers.nips.cc/paper_files/paper/2020/hash/2952351097998ac1240cb2ab7333a3d2-Abstract.html | Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Xiangyu Zhang, Hongbin Sun, Jian Sun, Nanning Zheng | https://papers.nips.cc/paper_files/paper/2020/hash/2952351097998ac1240cb2ab7333a3d2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2952351097998ac1240cb2ab7333a3d2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10060-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2952351097998ac1240cb2ab7333a3d2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2952351097998ac1240cb2ab7333a3d2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2952351097998ac1240cb2ab7333a3d2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2952351097998ac1240cb2ab7333a3d2-Supplemental.pdf | The Learnable Tree Filter presents a remarkable approach to model structure-preserving relations for semantic segmentation. Nevertheless, the intrinsic geometric constraint forces it to focus on the regions with close spatial distance, hindering the effective long-range interactions. To relax the geometric constraint, we give the analysis by reformulating it as a Markov Random Field and introduce a learnable unary term. Besides, we propose a learnable spanning tree algorithm to replace the original non-differentiable one, which further improves the flexibility and robustness. With the above improvements, our method can better capture long-range dependencies and preserve structural details with linear complexity, which is extended to several vision tasks for more generic feature transform. Extensive experiments on object detection/instance segmentation demonstrate the consistent improvements over the original version. For semantic segmentation, we achieve leading performance (82.1% mIoU) on the Cityscapes benchmark without bells-and-whistles. Code is available at https://github.com/StevenGrove/LearnableTreeFilterV2. |
Self-Supervised Relational Reasoning for Representation Learning | https://papers.nips.cc/paper_files/paper/2020/hash/29539ed932d32f1c56324cded92c07c2-Abstract.html | Massimiliano Patacchiola, Amos J. Storkey | https://papers.nips.cc/paper_files/paper/2020/hash/29539ed932d32f1c56324cded92c07c2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/29539ed932d32f1c56324cded92c07c2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10061-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/29539ed932d32f1c56324cded92c07c2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/29539ed932d32f1c56324cded92c07c2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/29539ed932d32f1c56324cded92c07c2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/29539ed932d32f1c56324cded92c07c2-Supplemental.pdf | In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on a set of unlabeled data. The aim is to build useful representations that can be used in downstream tasks, without costly manual annotation. In this work, we propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data. Training a relation head to discriminate how entities relate to themselves (intra-reasoning) and other entities (inter-reasoning), results in rich and descriptive representations in the underlying neural network backbone, which can be used in downstream tasks such as classification and image retrieval. We evaluate the proposed method following a rigorous experimental procedure, using standard datasets, protocols, and backbones. Self-supervised relational reasoning outperforms the best competitor in all conditions by an average 14% in accuracy, and the most recent state-of-the-art model by 3%. We link the effectiveness of the method to the maximization of a Bernoulli log-likelihood, which can be considered as a proxy for maximizing the mutual information, resulting in a more efficient objective with respect to the commonly used contrastive losses. |
Sufficient dimension reduction for classification using principal optimal transport direction | https://papers.nips.cc/paper_files/paper/2020/hash/29586cb449c90e249f1f09a0a4ee245a-Abstract.html | Cheng Meng, Jun Yu, Jingyi Zhang, Ping Ma, Wenxuan Zhong | https://papers.nips.cc/paper_files/paper/2020/hash/29586cb449c90e249f1f09a0a4ee245a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/29586cb449c90e249f1f09a0a4ee245a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10062-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/29586cb449c90e249f1f09a0a4ee245a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/29586cb449c90e249f1f09a0a4ee245a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/29586cb449c90e249f1f09a0a4ee245a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/29586cb449c90e249f1f09a0a4ee245a-Supplemental.pdf | Sufficient dimension reduction is used pervasively as a supervised dimension reduction approach. Most existing sufficient dimension reduction methods are developed for data with a continuous response and may have unsatisfactory performance for a categorical response, especially a binary response. To address this issue, we propose a novel estimation method of the sufficient dimension reduction subspace (SDR subspace) using optimal transport. The proposed method, named principal optimal transport direction (POTD), estimates the basis of the SDR subspace using the principal directions of the optimal transport coupling between the data from different response categories. The proposed method also reveals the relationship among three seemingly unrelated topics, i.e., sufficient dimension reduction, support vector machines, and optimal transport. We study the asymptotic properties of POTD and show that, when the class labels contain no error, POTD estimates the SDR subspace exclusively. Empirical studies show POTD outperforms most of the state-of-the-art linear dimension reduction methods. |
Fast Epigraphical Projection-based Incremental Algorithms for Wasserstein Distributionally Robust Support Vector Machine | https://papers.nips.cc/paper_files/paper/2020/hash/2974788b53f73e7950e8aa49f3a306db-Abstract.html | Jiajin Li, Caihua Chen, Anthony Man-Cho So | https://papers.nips.cc/paper_files/paper/2020/hash/2974788b53f73e7950e8aa49f3a306db-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2974788b53f73e7950e8aa49f3a306db-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10063-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2974788b53f73e7950e8aa49f3a306db-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2974788b53f73e7950e8aa49f3a306db-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2974788b53f73e7950e8aa49f3a306db-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2974788b53f73e7950e8aa49f3a306db-Supplemental.pdf | Wasserstein Distributionally Robust Optimization (DRO) is concerned with finding decisions that perform well on data that are drawn from the worst probability distribution within a Wasserstein ball centered at a certain nominal distribution. In recent years, it has been shown that various DRO formulations of learning models admit tractable convex reformulations. However, most existing works propose to solve these convex reformulations by general-purpose solvers, which are not well-suited for tackling large-scale problems. In this paper, we focus on a family of Wasserstein distributionally robust support vector machine (DRSVM) problems and propose two novel epigraphical projection-based incremental algorithms to solve them. The updates in each iteration of these algorithms can be computed in a highly efficient manner. Moreover, we show that the DRSVM problems considered in this paper satisfy a Hölderian growth condition with explicitly determined growth exponents. Consequently, we are able to establish the convergence rates of the proposed incremental algorithms. Our numerical results indicate that the proposed methods are orders of magnitude faster than the state-of-the-art, and the performance gap grows considerably as the problem size increases. |
Differentially Private Clustering: Tight Approximation Ratios | https://papers.nips.cc/paper_files/paper/2020/hash/299dc35e747eb77177d9cea10a802da2-Abstract.html | Badih Ghazi, Ravi Kumar, Pasin Manurangsi | https://papers.nips.cc/paper_files/paper/2020/hash/299dc35e747eb77177d9cea10a802da2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/299dc35e747eb77177d9cea10a802da2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10064-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/299dc35e747eb77177d9cea10a802da2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/299dc35e747eb77177d9cea10a802da2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/299dc35e747eb77177d9cea10a802da2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/299dc35e747eb77177d9cea10a802da2-Supplemental.pdf | Our results also imply an improved algorithm for the Sample and Aggregate privacy framework. Furthermore, we show that one of the tools used in our 1-Cluster algorithm can be employed to get a faster quantum algorithm for ClosestPair in a moderate number of dimensions. |
On the Power of Louvain in the Stochastic Block Model | https://papers.nips.cc/paper_files/paper/2020/hash/29a6aa8af3c942a277478a90aa4cae21-Abstract.html | Vincent Cohen-Addad, Adrian Kosowski, Frederik Mallmann-Trenn, David Saulpic | https://papers.nips.cc/paper_files/paper/2020/hash/29a6aa8af3c942a277478a90aa4cae21-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/29a6aa8af3c942a277478a90aa4cae21-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10065-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/29a6aa8af3c942a277478a90aa4cae21-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/29a6aa8af3c942a277478a90aa4cae21-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/29a6aa8af3c942a277478a90aa4cae21-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/29a6aa8af3c942a277478a90aa4cae21-Supplemental.pdf | The goal of this paper is to shed light on the inner workings of Louvain; only if we understand Louvain can we rely on it and further improve it. To achieve this goal, we study the behavior of Louvain in the famous two-block Stochastic Block Model, which has a clear ground truth and serves as the standard testbed for graph clustering algorithms. We provide valuable tools for the analysis of Louvain, but also for many other combinatorial algorithms. For example, we show that the probability for a node to have more edges towards its own community is $1/2 + \Omega(\min(\Delta(p-q)/\sqrt{np}, 1))$ in the SBM$(n,p,q)$, where $\Delta$ is the imbalance. Note that this bound is asymptotically tight and useful for the analysis of a wide range of algorithms (Louvain, Kernighan-Lin, Simulated Annealing, etc.). |
Fairness with Overlapping Groups; a Probabilistic Perspective | https://papers.nips.cc/paper_files/paper/2020/hash/29c0605a3bab4229e46723f89cf59d83-Abstract.html | Forest Yang, Mouhamadou Cisse, Sanmi Koyejo | https://papers.nips.cc/paper_files/paper/2020/hash/29c0605a3bab4229e46723f89cf59d83-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/29c0605a3bab4229e46723f89cf59d83-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10066-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/29c0605a3bab4229e46723f89cf59d83-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/29c0605a3bab4229e46723f89cf59d83-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/29c0605a3bab4229e46723f89cf59d83-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/29c0605a3bab4229e46723f89cf59d83-Supplemental.pdf | In algorithmically fair prediction problems, a standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously. We reconsider this standard fair classification problem using a probabilistic population analysis, which, in turn, reveals the Bayes-optimal classifier. Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures. The Bayes-optimal classifier further inspires consistent procedures for algorithmically fair classification with overlapping groups. On a variety of real datasets, the proposed approach outperforms baselines in terms of its fairness-performance tradeoff. |
AttendLight: Universal Attention-Based Reinforcement Learning Model for Traffic Signal Control | https://papers.nips.cc/paper_files/paper/2020/hash/29e48b79ae6fc68e9b6480b677453586-Abstract.html | Afshin Oroojlooy, Mohammadreza Nazari, Davood Hajinezhad, Jorge Silva | https://papers.nips.cc/paper_files/paper/2020/hash/29e48b79ae6fc68e9b6480b677453586-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/29e48b79ae6fc68e9b6480b677453586-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10067-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/29e48b79ae6fc68e9b6480b677453586-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/29e48b79ae6fc68e9b6480b677453586-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/29e48b79ae6fc68e9b6480b677453586-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/29e48b79ae6fc68e9b6480b677453586-Supplemental.pdf | We propose AttendLight, an end-to-end Reinforcement Learning (RL) algorithm for the problem of traffic signal control. Previous approaches for this problem have the shortcoming that they require training for each new intersection with a different structure or traffic flow distribution. AttendLight solves this issue by training a single, universal model for intersections with any number of roads, lanes, phases (possible signals), and traffic flow. To this end, we propose a deep RL model which incorporates two attention models. The first attention model handles different numbers of roads and lanes, and the second attention model enables decision-making with any number of phases in an intersection. As a result, our proposed model works for any intersection configuration, as long as a similar configuration is represented in the training set. Experiments were conducted with both synthetic and real-world standard benchmark datasets. Our numerical experiments cover intersections with three or four approaching roads; one-directional/bi-directional roads with one, two, and three lanes; different numbers of phases; and different traffic flows. We consider two regimes: (i) single-environment training, single-deployment, and (ii) multi-environment training, multi-deployment. AttendLight outperforms both classical and other RL-based approaches on all cases in both regimes. |
Searching for Low-Bit Weights in Quantized Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/2a084e55c87b1ebcdaad1f62fdbbac8e-Abstract.html | Zhaohui Yang, Yunhe Wang, Kai Han, Chunjing XU, Chao Xu, Dacheng Tao, Chang Xu | https://papers.nips.cc/paper_files/paper/2020/hash/2a084e55c87b1ebcdaad1f62fdbbac8e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2a084e55c87b1ebcdaad1f62fdbbac8e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10068-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2a084e55c87b1ebcdaad1f62fdbbac8e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2a084e55c87b1ebcdaad1f62fdbbac8e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2a084e55c87b1ebcdaad1f62fdbbac8e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2a084e55c87b1ebcdaad1f62fdbbac8e-Supplemental.pdf | Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators. However, the quantization functions used in most conventional quantization methods are non-differentiable, which increases the optimization difficulty of quantized networks. Compared with full-precision parameters (\emph{i.e.}, 32-bit floating numbers), low-bit values are selected from a much smaller set. For example, there are only 16 possibilities in 4-bit space. Thus, we present to regard the discrete weights in an arbitrary quantized neural network as searchable variables, and utilize a differential method to search them accurately. In particular, each weight is represented as a probability distribution over the discrete value set. The probabilities are optimized during training and the values with the highest probability are selected to establish the desired quantized network. Experimental results on benchmarks demonstrate that the proposed method is able to produce quantized neural networks with higher performance over the state-of-the-arts on both image classification and super-resolution tasks. |
Adaptive Reduced Rank Regression | https://papers.nips.cc/paper_files/paper/2020/hash/2a27b8144ac02f67687f76782a3b5d8f-Abstract.html | Qiong Wu, Felix MF Wong, Yanhua Li, Zhenming Liu, Varun Kanade | https://papers.nips.cc/paper_files/paper/2020/hash/2a27b8144ac02f67687f76782a3b5d8f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2a27b8144ac02f67687f76782a3b5d8f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10069-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2a27b8144ac02f67687f76782a3b5d8f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2a27b8144ac02f67687f76782a3b5d8f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2a27b8144ac02f67687f76782a3b5d8f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2a27b8144ac02f67687f76782a3b5d8f-Supplemental.pdf | We study the low rank regression problem y = Mx + ε, where x and y are d1 and d2 dimensional vectors respectively. We consider the extreme high-dimensional setting where the number of observations n is less than d1 + d2. Existing algorithms are designed for settings where n is typically as large as rank(M)(d1+d2). This work provides an efficient algorithm which only involves two SVDs, and establishes statistical guarantees on its performance. The algorithm decouples the problem by first estimating the precision matrix of the features, and then solving the matrix denoising problem. To complement the upper bound, we introduce new techniques for establishing lower bounds on the performance of any algorithm for this problem. Our preliminary experiments confirm that our algorithm often outperforms existing baselines, and is always at least competitive. |
From Predictions to Decisions: Using Lookahead Regularization | https://papers.nips.cc/paper_files/paper/2020/hash/2adcfc3929e7c03fac3100d3ad51da26-Abstract.html | Nir Rosenfeld, Anna Hilgard, Sai Srivatsa Ravindranath, David C. Parkes | https://papers.nips.cc/paper_files/paper/2020/hash/2adcfc3929e7c03fac3100d3ad51da26-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2adcfc3929e7c03fac3100d3ad51da26-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10070-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2adcfc3929e7c03fac3100d3ad51da26-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2adcfc3929e7c03fac3100d3ad51da26-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2adcfc3929e7c03fac3100d3ad51da26-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2adcfc3929e7c03fac3100d3ad51da26-Supplemental.zip | Machine learning is a powerful tool for predicting human-related outcomes, from creditworthiness to heart attack risks. But when deployed transparently, learned models also affect how users act in order to improve outcomes. The standard approach to learning predictive models is agnostic to induced user actions and provides no guarantees as to the effect of actions. We provide a framework for learning predictors that are accurate, while also considering interactions between the learned model and user decisions. For this, we introduce look-ahead regularization which, by anticipating user actions, encourages predictive models to also induce actions that improve outcomes. This regularization carefully tailors the uncertainty estimates that govern confidence in this improvement to the distribution of model-induced actions. We report the results of experiments on real and synthetic data that show the effectiveness of this approach. |
Sequential Bayesian Experimental Design with Variable Cost Structure | https://papers.nips.cc/paper_files/paper/2020/hash/2adee8815dd939548ee6b2772524b6f2-Abstract.html | Sue Zheng, David Hayden, Jason Pacheco, John W. Fisher III | https://papers.nips.cc/paper_files/paper/2020/hash/2adee8815dd939548ee6b2772524b6f2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2adee8815dd939548ee6b2772524b6f2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10071-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2adee8815dd939548ee6b2772524b6f2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2adee8815dd939548ee6b2772524b6f2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2adee8815dd939548ee6b2772524b6f2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2adee8815dd939548ee6b2772524b6f2-Supplemental.pdf | Mutual information (MI) is a commonly adopted utility function in Bayesian optimal experimental design (BOED). While theoretically appealing, MI evaluation poses a significant computational burden for most real world applications. As a result, many algorithms utilize MI bounds as proxies that lack regret-style guarantees. Here, we utilize two-sided bounds to provide such guarantees. Bounds are successively refined/tightened through additional computation until a desired guarantee is achieved. We consider the problem of adaptively allocating computational resources in BOED. Our approach achieves the same guarantee as existing methods, but with fewer evaluations of the costly MI reward. We adapt knapsack optimization of best arm identification problems, with important differences that impact overall algorithm design and performance. First, observations of MI rewards are biased. Second, evaluating experiments incurs shared costs amongst all experiments (posterior sampling) in addition to per experiment costs that may vary with increasing evaluation. We propose and demonstrate an algorithm that accounts for these variable costs in the refinement decision. |
Predictive inference is free with the jackknife+-after-bootstrap | https://papers.nips.cc/paper_files/paper/2020/hash/2b346a0aa375a07f5a90a344a61416c4-Abstract.html | Byol Kim, Chen Xu, Rina Barber | https://papers.nips.cc/paper_files/paper/2020/hash/2b346a0aa375a07f5a90a344a61416c4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2b346a0aa375a07f5a90a344a61416c4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10072-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2b346a0aa375a07f5a90a344a61416c4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2b346a0aa375a07f5a90a344a61416c4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2b346a0aa375a07f5a90a344a61416c4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2b346a0aa375a07f5a90a344a61416c4-Supplemental.zip | Ensemble learning is widely used in applications to make predictions in complex decision problems---for example, averaging models fitted to a sequence of samples bootstrapped from the available training data. While such methods offer more accurate, stable, and robust predictions and model estimates, much less is known about how to perform valid, assumption-lean inference on the output of these types of procedures. In this paper, we propose the jackknife+-after-bootstrap (J+aB), a procedure for constructing a predictive interval, which uses only the available bootstrapped samples and their corresponding fitted models, and is therefore "free" in terms of the cost of model fitting. The J+aB offers a predictive coverage guarantee that holds with no assumptions on the distribution of the data, the nature of the fitted model, or the way in which the ensemble of models are aggregated---at worst, the failure rate of the predictive interval is inflated by a factor of 2. Our numerical experiments verify the coverage and accuracy of the resulting predictive intervals on real data. |
Counterfactual Predictions under Runtime Confounding | https://papers.nips.cc/paper_files/paper/2020/hash/2b64c2f19d868305aa8bbc2d72902cc5-Abstract.html | Amanda Coston, Edward Kennedy, Alexandra Chouldechova | https://papers.nips.cc/paper_files/paper/2020/hash/2b64c2f19d868305aa8bbc2d72902cc5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2b64c2f19d868305aa8bbc2d72902cc5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10073-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2b64c2f19d868305aa8bbc2d72902cc5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2b64c2f19d868305aa8bbc2d72902cc5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2b64c2f19d868305aa8bbc2d72902cc5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2b64c2f19d868305aa8bbc2d72902cc5-Supplemental.pdf | Algorithms are commonly used to predict outcomes under a particular decision or intervention, such as predicting likelihood of default if a loan is approved. Generally, to learn such counterfactual prediction models from observational data on historical decisions and corresponding outcomes, one must measure all factors that jointly affect the outcome and the decision taken. Motivated by decision support applications, we study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data, but it is infeasible, undesirable, or impermissible to use some such factors in the prediction model. We refer to this setting as runtime confounding. We propose a doubly-robust procedure for learning counterfactual prediction models in this setting. Our theoretical analysis and experimental results suggest that our method often outperforms competing approaches. We also present a validation procedure for evaluating the performance of counterfactual prediction methods. |
Learning Loss for Test-Time Augmentation | https://papers.nips.cc/paper_files/paper/2020/hash/2ba596643cbbbc20318224181fa46b28-Abstract.html | Ildoo Kim, Younghoon Kim, Sungwoong Kim | https://papers.nips.cc/paper_files/paper/2020/hash/2ba596643cbbbc20318224181fa46b28-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2ba596643cbbbc20318224181fa46b28-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10074-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2ba596643cbbbc20318224181fa46b28-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2ba596643cbbbc20318224181fa46b28-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2ba596643cbbbc20318224181fa46b28-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2ba596643cbbbc20318224181fa46b28-Supplemental.pdf | Data augmentation has been actively studied for robust neural networks. Most of the recent data augmentation methods focus on augmenting datasets during the training phase. At the testing phase, simple transformations are still widely used for test-time augmentation. This paper proposes a novel instance-level test-time augmentation that efficiently selects suitable transformations for a test input. Our proposed method involves an auxiliary module to predict the loss of each possible transformation given the input. Then, the transformations having lower predicted losses are applied to the input. The network obtains the results by averaging the prediction results of augmented inputs. Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model’s robustness against various corruptions. |
Balanced Meta-Softmax for Long-Tailed Visual Recognition | https://papers.nips.cc/paper_files/paper/2020/hash/2ba61cc3a8f44143e1f2f13b2b729ab3-Abstract.html | Jiawei Ren, Cunjun Yu, shunan sheng, Xiao Ma, Haiyu Zhao, Shuai Yi, hongsheng Li | https://papers.nips.cc/paper_files/paper/2020/hash/2ba61cc3a8f44143e1f2f13b2b729ab3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10075-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2ba61cc3a8f44143e1f2f13b2b729ab3-Supplemental.pdf | Deep classifiers have achieved great success in visual recognition. However, real-world data is long-tailed by nature, leading to the mismatch between training and testing distributions. In this paper, we show that the Softmax function, though used in most classification tasks, gives a biased gradient estimation under the long-tailed setup. This paper presents Balanced Softmax, an elegant unbiased extension of Softmax, to accommodate the label distribution shift between training and testing. Theoretically, we derive the generalization bound for multiclass Softmax regression and show our loss minimizes the bound. In addition, we introduce Balanced Meta-Softmax, applying a complementary Meta Sampler to estimate the optimal class sample rate and further improve long-tailed learning. In our experiments, we demonstrate that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and instance segmentation tasks. |
Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/2bba9f4124283edd644799e0cecd45ca-Abstract.html | Sreejith Balakrishnan, Quoc Phong Nguyen, Bryan Kian Hsiang Low, Harold Soh | https://papers.nips.cc/paper_files/paper/2020/hash/2bba9f4124283edd644799e0cecd45ca-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2bba9f4124283edd644799e0cecd45ca-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10076-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2bba9f4124283edd644799e0cecd45ca-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2bba9f4124283edd644799e0cecd45ca-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2bba9f4124283edd644799e0cecd45ca-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2bba9f4124283edd644799e0cecd45ca-Supplemental.zip | The problem of inverse reinforcement learning (IRL) is relevant to a variety of tasks including value alignment and robot learning from demonstration. Despite significant algorithmic contributions in recent years, IRL remains an ill-posed problem at its core; multiple reward functions coincide with the observed behavior and the actual reward function is not identifiable without prior knowledge or supplementary information. This paper presents an IRL framework called Bayesian optimization-IRL (BO-IRL) which identifies multiple solutions that are consistent with the expert demonstrations by efficiently exploring the reward function space. BO-IRL achieves this by utilizing Bayesian Optimization along with our newly proposed kernel that (a) projects the parameters of policy invariant reward functions to a single point in a latent space and (b) ensures nearby points in the latent space correspond to reward functions yielding similar likelihoods. This projection allows the use of standard stationary kernels in the latent space to capture the correlations present across the reward function space. Empirical results on synthetic and real-world environments (model-free and model-based) show that BO-IRL discovers multiple reward functions while minimizing the number of expensive exact policy optimizations. |
MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/2be5f9c2e3620eb73c2972d7552b6cb5-Abstract.html | Elise van der Pol, Daniel Worrall, Herke van Hoof, Frans Oliehoek, Max Welling | https://papers.nips.cc/paper_files/paper/2020/hash/2be5f9c2e3620eb73c2972d7552b6cb5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10077-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2be5f9c2e3620eb73c2972d7552b6cb5-Supplemental.pdf | This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong. |
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods | https://papers.nips.cc/paper_files/paper/2020/hash/2c29d89cc56cdb191c60db2f0bae796b-Abstract.html | Jeya Vikranth Jeyakumar, Joseph Noor, Yu-Hsi Cheng, Luis Garcia, Mani Srivastava | https://papers.nips.cc/paper_files/paper/2020/hash/2c29d89cc56cdb191c60db2f0bae796b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10078-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2c29d89cc56cdb191c60db2f0bae796b-Supplemental.zip | Explaining the inner workings of deep neural network models have received considerable attention in recent years. Researchers have attempted to provide human parseable explanations justifying why a model performed a specific classification. Although many of these toolkits are available for use, it is unclear which style of explanation is preferred by end-users, thereby demanding investigation. We performed a cross-analysis Amazon Mechanical Turk study comparing the popular state-of-the-art explanation methods to empirically determine which are better in explaining model decisions. The participants were asked to compare explanation methods across applications spanning image, text, audio, and sensory domains. Among the surveyed methods, explanation-by-example was preferred in all domains except text sentiment classification, where LIME's method of annotating input text was preferred. We highlight qualitative aspects of employing the studied explainability methods and conclude with implications for researchers and engineers that seek to incorporate explanations into user-facing deployments. |
On the Error Resistance of Hinge-Loss Minimization | https://papers.nips.cc/paper_files/paper/2020/hash/2c5201a7391fedbc40c3cc6aa057a029-Abstract.html | Kunal Talwar | https://papers.nips.cc/paper_files/paper/2020/hash/2c5201a7391fedbc40c3cc6aa057a029-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2c5201a7391fedbc40c3cc6aa057a029-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10079-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2c5201a7391fedbc40c3cc6aa057a029-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2c5201a7391fedbc40c3cc6aa057a029-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2c5201a7391fedbc40c3cc6aa057a029-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2c5201a7391fedbc40c3cc6aa057a029-Supplemental.pdf | Commonly used classification algorithms in machine learning, such as support vector machines, minimize a convex surrogate loss on training examples. In practice, these algorithms are surprisingly robust to errors in the training data. In this work, we identify a set of conditions on the data under which such surrogate loss minimization algorithms provably learn the correct classifier. This allows us to establish, in a unified framework, the robustness of these algorithms under various models on data as well as error. In particular, we show that if the data is linearly classifiable with a slightly non-trivial margin (i.e. a margin at least $C/\sqrt{d}$ for $d$-dimensional unit vectors), and the class-conditional distributions are near isotropic and logconcave, then surrogate loss minimization has negligible error on the uncorrupted data even when a constant fraction of examples are adversarially mislabeled. |
Munchausen Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/2c6a0bae0f071cbbf0bb3d5b11d90a82-Abstract.html | Nino Vieillard, Olivier Pietquin, Matthieu Geist | https://papers.nips.cc/paper_files/paper/2020/hash/2c6a0bae0f071cbbf0bb3d5b11d90a82-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2c6a0bae0f071cbbf0bb3d5b11d90a82-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10080-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2c6a0bae0f071cbbf0bb3d5b11d90a82-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2c6a0bae0f071cbbf0bb3d5b11d90a82-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2c6a0bae0f071cbbf0bb3d5b11d90a82-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2c6a0bae0f071cbbf0bb3d5b11d90a82-Supplemental.pdf | Bootstrapping is a core mechanism in Reinforcement Learning (RL). Most algorithms, based on temporal differences, replace the true value of a transiting state by their current estimate of this value. Yet, another estimate could be leveraged to bootstrap RL: the current policy. Our core contribution lies in a very simple idea: adding the scaled log-policy to the immediate reward. We show that slightly modifying Deep Q-Network (DQN) in this way provides an agent that is competitive with the state-of-the-art Rainbow on Atari games, without making use of distributional RL, n-step returns or prioritized replay. To demonstrate the versatility of this idea, we also use it together with an Implicit Quantile Network (IQN). The resulting agent outperforms Rainbow on Atari, setting a new State of the Art with very few modifications to the original algorithm. To add to this empirical study, we provide strong theoretical insights on what happens under the hood -- implicit Kullback-Leibler regularization and increase of the action-gap. |
Object Goal Navigation using Goal-Oriented Semantic Exploration | https://papers.nips.cc/paper_files/paper/2020/hash/2c75cf2681788adaca63aa95ae028b22-Abstract.html | Devendra Singh Chaplot, Dhiraj Prakashchand Gandhi, Abhinav Gupta, Russ R. Salakhutdinov | https://papers.nips.cc/paper_files/paper/2020/hash/2c75cf2681788adaca63aa95ae028b22-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2c75cf2681788adaca63aa95ae028b22-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10081-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2c75cf2681788adaca63aa95ae028b22-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2c75cf2681788adaca63aa95ae028b22-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2c75cf2681788adaca63aa95ae028b22-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2c75cf2681788adaca63aa95ae028b22-Supplemental.zip | This work studies the problem of object goal navigation which involves navigating to an instance of the given object category in unseen environments. End-to-end learning-based navigation methods struggle at this task as they are ineffective at exploration and long-term planning. We propose a modular system called, `Goal-Oriented Semantic Exploration' which builds an episodic semantic map and uses it to explore the environment efficiently based on the goal object category. Empirical results in visually realistic simulation environments show that the proposed model outperforms a wide range of baselines including end-to-end learning-based methods as well as modular map-based methods and led to the winning entry of the CVPR-2020 Habitat ObjectNav Challenge. Ablation analysis indicates that the proposed model learns semantic priors of the relative arrangement of objects in a scene, and uses them to explore efficiently. Domain-agnostic module design allows us to transfer our model to a mobile robot platform and achieve similar performance for object goal navigation in the real-world. |
Efficient semidefinite-programming-based inference for binary and multi-class MRFs | https://papers.nips.cc/paper_files/paper/2020/hash/2cb274e6ce940f47beb8011d8ecb1462-Abstract.html | Chirag Pabbaraju, Po-Wei Wang, J. Zico Kolter | https://papers.nips.cc/paper_files/paper/2020/hash/2cb274e6ce940f47beb8011d8ecb1462-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2cb274e6ce940f47beb8011d8ecb1462-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10082-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2cb274e6ce940f47beb8011d8ecb1462-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2cb274e6ce940f47beb8011d8ecb1462-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2cb274e6ce940f47beb8011d8ecb1462-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2cb274e6ce940f47beb8011d8ecb1462-Supplemental.pdf | Probabilistic inference in pairwise Markov Random Fields (MRFs), i.e. computing the partition function or computing a MAP estimate of the variables, is a foundational problem in probabilistic graphical models. Semidefinite programming relaxations have long been a theoretically powerful tool for analyzing properties of probabilistic inference, but have not been practical owing to the high computational cost of typical solvers for solving the resulting SDPs. In this paper, we propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF by instead exploiting a recently proposed coordinate-descent-based fast semidefinite solver. We also extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver. We show that the method substantially outperforms (both in terms of solution quality and speed) the existing state of the art in approximate inference, on benchmark problems drawn from previous work. We also show that our approach can scale to large MRF domains such as fully-connected pairwise CRF models used in computer vision. |
Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing | https://papers.nips.cc/paper_files/paper/2020/hash/2cd2915e69546904e4e5d4a2ac9e1652-Abstract.html | Zihang Dai, Guokun Lai, Yiming Yang, Quoc Le | https://papers.nips.cc/paper_files/paper/2020/hash/2cd2915e69546904e4e5d4a2ac9e1652-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2cd2915e69546904e4e5d4a2ac9e1652-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10083-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2cd2915e69546904e4e5d4a2ac9e1652-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2cd2915e69546904e4e5d4a2ac9e1652-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2cd2915e69546904e4e5d4a2ac9e1652-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2cd2915e69546904e4e5d4a2ac9e1652-Supplemental.pdf | With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost.
To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level representation, especially for tasks that only require a single-vector representation of the sequence.
With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost.
More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity.
In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder.
Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension. |
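The length-compression step described above can be illustrated with a small sketch: strided mean pooling over the sequence dimension is one simple way to shorten the hidden-state sequence between blocks. The pooling operator and stride below are assumptions for illustration, not necessarily the paper's exact choice.

```python
import torch
import torch.nn.functional as F

def compress_hidden_states(hidden, stride=2):
    """Funnel-style length compression (sketch): shorten the hidden-state
    sequence by strided mean pooling so later blocks run on fewer positions.
    hidden: (batch, seq_len, d_model) -> (batch, seq_len // stride, d_model)."""
    return F.avg_pool1d(hidden.transpose(1, 2), kernel_size=stride,
                        stride=stride).transpose(1, 2)

# e.g. h = compress_hidden_states(h)  # applied between encoder blocks
```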
Semantic Visual Navigation by Watching YouTube Videos | https://papers.nips.cc/paper_files/paper/2020/hash/2cd4e8a2ce081c3d7c32c3cde4312ef7-Abstract.html | Matthew Chang, Arjun Gupta, Saurabh Gupta | https://papers.nips.cc/paper_files/paper/2020/hash/2cd4e8a2ce081c3d7c32c3cde4312ef7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2cd4e8a2ce081c3d7c32c3cde4312ef7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10084-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2cd4e8a2ce081c3d7c32c3cde4312ef7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2cd4e8a2ce081c3d7c32c3cde4312ef7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2cd4e8a2ce081c3d7c32c3cde4312ef7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2cd4e8a2ce081c3d7c32c3cde4312ef7-Supplemental.pdf | Semantic cues and statistical regularities in real-world environment layouts can improve efficiency for navigation in novel environments. This paper learns and leverages such semantic cues for navigating to objects of interest in novel environments, by simply watching YouTube videos. This is challenging because YouTube videos don't come with labels for actions or goals, and may not even showcase optimal behavior. Our method tackles these challenges through the use of Q-learning on pseudo-labeled transition quadruples (image, action, next image, reward). We show that such off-policy Q-learning from passive data is able to learn meaningful semantic cues for navigation. These cues, when used in a hierarchical navigation policy, lead to improved efficiency at the ObjectGoal task in visually realistic simulations. We observe a relative improvement of 15-83% over end-to-end RL, behavior cloning, and classical methods, while using minimal direct interaction. |
Heavy-tailed Representations, Text Polarity Classification & Data Augmentation | https://papers.nips.cc/paper_files/paper/2020/hash/2cfa3753d6a524711acb5fce38eeca1a-Abstract.html | Hamid Jalalzai, Pierre Colombo, Chloé Clavel, Eric Gaussier, Giovanna Varni, Emmanuel Vignon, Anne Sabourin | https://papers.nips.cc/paper_files/paper/2020/hash/2cfa3753d6a524711acb5fce38eeca1a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2cfa3753d6a524711acb5fce38eeca1a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10085-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2cfa3753d6a524711acb5fce38eeca1a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2cfa3753d6a524711acb5fce38eeca1a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2cfa3753d6a524711acb5fce38eeca1a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2cfa3753d6a524711acb5fce38eeca1a-Supplemental.pdf | The dominant approaches to text representation in natural language rely on learning embeddings on massive corpora which have convenient properties such as compositionality and distance preservation. In this paper, we develop a novel method to learn a heavy-tailed embedding with desirable regularity properties regarding the distributional tails, which allows us to analyze the points far away from the distribution bulk using the framework of multivariate extreme value theory. In particular, a classifier dedicated to the tails of the proposed embedding is obtained which exhibits a scale invariance property exploited in a novel text generation method for label preserving dataset augmentation. Experiments on synthetic and real text data show the relevance of the proposed framework and confirm that this method generates meaningful sentences with controllable attributes, e.g. positive or negative sentiments. |
SuperLoss: A Generic Loss for Robust Curriculum Learning | https://papers.nips.cc/paper_files/paper/2020/hash/2cfa8f9e50e0f510ede9d12338a5f564-Abstract.html | Thibault Castells, Philippe Weinzaepfel, Jerome Revaud | https://papers.nips.cc/paper_files/paper/2020/hash/2cfa8f9e50e0f510ede9d12338a5f564-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2cfa8f9e50e0f510ede9d12338a5f564-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10086-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2cfa8f9e50e0f510ede9d12338a5f564-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2cfa8f9e50e0f510ede9d12338a5f564-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2cfa8f9e50e0f510ede9d12338a5f564-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2cfa8f9e50e0f510ede9d12338a5f564-Supplemental.pdf | Curriculum learning is a technique to improve a model's performance and generalization based on the idea that easy samples should be presented before difficult ones during training. While it is generally complex to estimate a priori the difficulty of a given sample, recent works have shown that curriculum learning can be formulated dynamically in a self-supervised manner. The key idea is to somehow estimate the importance (or weight) of each sample directly during training based on the observation that easy and hard samples behave differently and can therefore be separated. However, these approaches are usually limited to a specific task (e.g., classification) and require extra data annotations, layers or parameters as well as a dedicated training procedure. We propose instead a simple and generic method that can be applied to a variety of losses and tasks without any change in the learning procedure. It consists of appending a novel loss function on top of any existing task loss, hence its name: the SuperLoss. Its main effect is to automatically downweight the contribution of samples with a large loss, i.e. hard samples, effectively mimicking the core principle of curriculum learning. As a side effect, we show that our loss prevents the memorization of noisy samples, making it possible to train from noisy data even with non-robust loss functions. Experimental results on image classification, regression, object detection and image retrieval demonstrate consistent gains, particularly in the presence of noise. |
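The loss wrapper described above admits a compact sketch. The closed-form confidence below is computed via the Lambert W function; treat the exact form, and the choices of `tau` (expected loss level) and `lam` (regularization strength), as illustrative assumptions rather than the authors' reference implementation.

```python
import numpy as np
from scipy.special import lambertw

def superloss(task_loss, tau=1.0, lam=1.0):
    """SuperLoss-style wrapper (sketch): given per-sample task losses, pick the
    confidence sigma* minimizing (l - tau) * sigma + lam * (log sigma)^2
    (closed form via the Lambert W function) and return the re-weighted loss.
    Samples with loss above tau ("hard" samples) get sigma < 1, i.e. they are
    automatically down-weighted; tau and lam are illustrative settings."""
    l = np.asarray(task_loss, dtype=float)
    beta = (l - tau) / lam
    sigma = np.exp(-np.real(lambertw(0.5 * np.maximum(beta, -2.0 / np.e))))
    return (l - tau) * sigma + lam * np.log(sigma) ** 2
```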
CogMol: Target-Specific and Selective Drug Design for COVID-19 Using Deep Generative Models | https://papers.nips.cc/paper_files/paper/2020/hash/2d16ad1968844a4300e9a490588ff9f8-Abstract.html | Vijil Chenthamarakshan, Payel Das, Samuel Hoffman, Hendrik Strobelt, Inkit Padhi, Kar Wai Lim, Benjamin Hoover, Matteo Manica, Jannis Born, Teodoro Laino, Aleksandra Mojsilovic | https://papers.nips.cc/paper_files/paper/2020/hash/2d16ad1968844a4300e9a490588ff9f8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2d16ad1968844a4300e9a490588ff9f8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10087-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2d16ad1968844a4300e9a490588ff9f8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2d16ad1968844a4300e9a490588ff9f8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2d16ad1968844a4300e9a490588ff9f8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2d16ad1968844a4300e9a490588ff9f8-Supplemental.pdf | The novel nature of SARS-CoV-2 calls for the development of efficient de novo drug design approaches. In this study, we propose an end-to-end framework, named CogMol (Controlled Generation of Molecules), for designing new drug-like small molecules targeting novel viral proteins with high affinity and off-target selectivity. CogMol combines adaptive pre-training of a molecular SMILES Variational Autoencoder (VAE) and an efficient multi-attribute controlled sampling scheme that uses guidance from attribute predictors trained on latent features. To generate novel and optimal drug-like molecules for unseen viral targets, CogMol leverages a protein-molecule binding affinity predictor that is trained using SMILES VAE embeddings and protein sequence embeddings learned unsupervised from a large corpus.
We applied the CogMol framework to three SARS-CoV-2 target proteins: main protease, receptor-binding domain of the spike protein, and non-structural protein 9 replicase. The generated candidates are novel at both the molecular and chemical scaffold levels when compared to the training data. CogMol also includes in silico screening for assessing toxicity of parent molecules and their metabolites with a multi-task toxicity classifier, synthetic feasibility with a chemical retrosynthesis predictor, and target structure binding with docking simulations.
Docking reveals favorable binding of generated molecules to the target protein structure, where 87--95\% of high affinity molecules showed docking free energy $<$ -6 kcal/mol. When compared to approved drugs, the majority of designed compounds show low predicted parent molecule and metabolite toxicity and high predicted synthetic feasibility. In summary, CogMol can handle multi-constraint design of synthesizable, low-toxic, drug-like molecules with high target specificity and selectivity, even to novel protein target sequences, and does not need target-dependent fine-tuning of the framework or target structure information. |
Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards | https://papers.nips.cc/paper_files/paper/2020/hash/2df45244f09369e16ea3f9117ca45157-Abstract.html | Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee | https://papers.nips.cc/paper_files/paper/2020/hash/2df45244f09369e16ea3f9117ca45157-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2df45244f09369e16ea3f9117ca45157-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10088-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2df45244f09369e16ea3f9117ca45157-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2df45244f09369e16ea3f9117ca45157-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2df45244f09369e16ea3f9117ca45157-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2df45244f09369e16ea3f9117ca45157-Supplemental.pdf | Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow. Recent work demonstrated that using a memory buffer of previous successful trajectories can result in more effective policies. However, existing methods may overly exploit past successful experiences, which can encourage the agent to adopt sub-optimal and myopic behaviors. In this work, instead of focusing on good experiences with limited diversity, we propose to learn a trajectory-conditioned policy to follow and expand diverse past trajectories from a memory buffer. Our method allows the agent to reach diverse regions in the state space and improve upon the past trajectories to reach new states. We empirically show that our approach significantly outperforms count-based exploration methods (parametric approach) and self-imitation learning (parametric approach with non-parametric memory) on various complex tasks with local optima. In particular, without using expert demonstrations or resetting to arbitrary states, we achieve state-of-the-art scores within five billion frames on challenging Atari games such as Montezuma’s Revenge and Pitfall. |
Liberty or Depth: Deep Bayesian Neural Nets Do Not Need Complex Weight Posterior Approximations | https://papers.nips.cc/paper_files/paper/2020/hash/2dfe1946b3003933b7f8ddd71f24dbb1-Abstract.html | Sebastian Farquhar, Lewis Smith, Yarin Gal | https://papers.nips.cc/paper_files/paper/2020/hash/2dfe1946b3003933b7f8ddd71f24dbb1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2dfe1946b3003933b7f8ddd71f24dbb1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10089-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2dfe1946b3003933b7f8ddd71f24dbb1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2dfe1946b3003933b7f8ddd71f24dbb1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2dfe1946b3003933b7f8ddd71f24dbb1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2dfe1946b3003933b7f8ddd71f24dbb1-Supplemental.pdf | We challenge the longstanding assumption that the mean-field approximation for variational inference in Bayesian neural networks is severely restrictive, and show this is not the case in deep networks. We prove several results indicating that deep mean-field variational weight posteriors can induce similar distributions in function-space to those induced by shallower networks with complex weight posteriors. We validate our theoretical contributions empirically, both through examination of the weight posterior using Hamiltonian Monte Carlo in small models and by comparing diagonal- to structured-covariance in large settings. Since complex variational posteriors are often expensive and cumbersome to implement, our results suggest that using mean-field variational inference in a deeper model is both a practical and theoretically justified alternative to structured approximations. |
Improving Sample Complexity Bounds for (Natural) Actor-Critic Algorithms | https://papers.nips.cc/paper_files/paper/2020/hash/2e1b24a664f5e9c18f407b2f9c73e821-Abstract.html | Tengyu Xu, Zhe Wang, Yingbin Liang | https://papers.nips.cc/paper_files/paper/2020/hash/2e1b24a664f5e9c18f407b2f9c73e821-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2e1b24a664f5e9c18f407b2f9c73e821-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10090-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2e1b24a664f5e9c18f407b2f9c73e821-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2e1b24a664f5e9c18f407b2f9c73e821-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2e1b24a664f5e9c18f407b2f9c73e821-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2e1b24a664f5e9c18f407b2f9c73e821-Supplemental.pdf | The actor-critic (AC) algorithm is a popular method to find an optimal policy in reinforcement learning. In the infinite horizon scenario, the finite-sample convergence rate for the AC and natural actor-critic (NAC) algorithms has been established recently, but under independent and identically distributed (i.i.d.) sampling and single-sample update at each iteration. In contrast, this paper characterizes the convergence rate and sample complexity of AC and NAC under Markovian sampling, with mini-batch data for each iteration, and with actor having general policy class approximation. We show that the overall sample complexity for a mini-batch AC to attain an $\epsilon$-accurate stationary point improves the best known sample complexity of AC by an order of $\mathcal{O}(\epsilon^{-1}\log(1/\epsilon))$, and the overall sample complexity for a mini-batch NAC to attain an $\epsilon$-accurate globally optimal point improves the existing sample complexity of NAC by an order of $\mathcal{O}(\epsilon^{-2}/\log(1/\epsilon))$. Moreover, the sample complexity of AC and NAC characterized in this work outperforms that of policy gradient (PG) and natural policy gradient (NPG) by a factor of $\mathcal{O}((1-\gamma)^{-3})$ and $\mathcal{O}((1-\gamma)^{-4}\epsilon^{-2}/\log(1/\epsilon))$, respectively. This is the first theoretical study establishing that AC and NAC attain orderwise performance improvement over PG and NPG under infinite horizon due to the incorporation of critic. |
Learning Differential Equations that are Easy to Solve | https://papers.nips.cc/paper_files/paper/2020/hash/2e255d2d6bf9bb33030246d31f1a79ca-Abstract.html | Jacob Kelly, Jesse Bettencourt, Matthew J. Johnson, David K. Duvenaud | https://papers.nips.cc/paper_files/paper/2020/hash/2e255d2d6bf9bb33030246d31f1a79ca-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2e255d2d6bf9bb33030246d31f1a79ca-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10091-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2e255d2d6bf9bb33030246d31f1a79ca-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2e255d2d6bf9bb33030246d31f1a79ca-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2e255d2d6bf9bb33030246d31f1a79ca-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2e255d2d6bf9bb33030246d31f1a79ca-Supplemental.zip | Differential equations parameterized by neural networks become expensive to solve numerically as training progresses. We propose a remedy that encourages learned dynamics to be easier to solve. Specifically, we introduce a differentiable surrogate for the time cost of standard numerical solvers, using higher-order derivatives of solution trajectories. These derivatives are efficient to compute with Taylor-mode automatic differentiation. Optimizing this additional objective trades model performance against the time cost of solving the learned dynamics. We demonstrate our approach by training substantially faster, while nearly as accurate, models in supervised classification, density estimation, and time-series modelling tasks. |
Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses | https://papers.nips.cc/paper_files/paper/2020/hash/2e2c4bf7ceaa4712a72dd5ee136dc9a8-Abstract.html | Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, Kunal Talwar | https://papers.nips.cc/paper_files/paper/2020/hash/2e2c4bf7ceaa4712a72dd5ee136dc9a8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2e2c4bf7ceaa4712a72dd5ee136dc9a8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10092-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2e2c4bf7ceaa4712a72dd5ee136dc9a8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2e2c4bf7ceaa4712a72dd5ee136dc9a8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2e2c4bf7ceaa4712a72dd5ee136dc9a8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2e2c4bf7ceaa4712a72dd5ee136dc9a8-Supplemental.pdf | Our work is the first to address uniform stability of SGD on nonsmooth convex losses. Specifically, we provide sharp upper and lower bounds for several forms of SGD and full-batch GD on arbitrary Lipschitz nonsmooth convex losses. Our lower bounds show that, in the nonsmooth case, (S)GD can be inherently less stable than in the smooth case. On the other hand, our upper bounds show that (S)GD is sufficiently stable for deriving new and useful bounds on generalization error. Most notably, we obtain the first dimension-independent generalization bounds for multi-pass SGD in the nonsmooth case. In addition, our bounds allow us to derive a new algorithm for differentially private nonsmooth stochastic convex optimization with optimal excess population risk. Our algorithm is simpler and more efficient than the best known algorithm for the nonsmooth case, due to Feldman et al. [2020]. |
Influence-Augmented Online Planning for Complex Environments | https://papers.nips.cc/paper_files/paper/2020/hash/2e6d9c6052e99fcdfa61d9b9da273ca2-Abstract.html | Jinke He, Miguel Suau de Castro, Frans Oliehoek | https://papers.nips.cc/paper_files/paper/2020/hash/2e6d9c6052e99fcdfa61d9b9da273ca2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2e6d9c6052e99fcdfa61d9b9da273ca2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10093-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2e6d9c6052e99fcdfa61d9b9da273ca2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Supplemental.pdf | How can we plan efficiently in real time to control an agent in a complex environment that may involve many other agents? While existing sample-based planners have enjoyed empirical success in large POMDPs, their performance heavily relies on a fast simulator. However, real-world scenarios are complex in nature and their simulators are often computationally demanding, which severely limits the performance of online planners. In this work, we propose influence-augmented online planning, a principled method to transform a factored simulator of the entire environment into a local simulator that samples only the state variables that are most relevant to the observation and reward of the planning agent and captures the incoming influence from the rest of the environment using machine learning methods. Our main experimental results show that planning on this less accurate but much faster local simulator with POMCP leads to higher real-time planning performance than planning on the simulator that models the entire environment. |
PAC-Bayes Learning Bounds for Sample-Dependent Priors | https://papers.nips.cc/paper_files/paper/2020/hash/2e85d72295b67c5b649290dfbf019285-Abstract.html | Pranjal Awasthi, Satyen Kale, Stefani Karp, Mehryar Mohri | https://papers.nips.cc/paper_files/paper/2020/hash/2e85d72295b67c5b649290dfbf019285-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2e85d72295b67c5b649290dfbf019285-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10094-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2e85d72295b67c5b649290dfbf019285-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2e85d72295b67c5b649290dfbf019285-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2e85d72295b67c5b649290dfbf019285-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2e85d72295b67c5b649290dfbf019285-Supplemental.pdf | We present a series of new PAC-Bayes learning guarantees for randomized algorithms with sample-dependent priors. Our most general bounds make no assumption on the priors and are given in terms of certain covering numbers under the infinite-Renyi divergence and the L1 distance. We show how to use these general bounds to derive learning bounds in the setting where the sample-dependent priors obey an infinite-Renyi divergence or L1-distance sensitivity condition. We also provide a flexible framework for computing PAC-Bayes bounds, under certain stability assumptions on the sample-dependent priors, and show how to use this framework to give more refined bounds when the priors satisfy an infinite-Renyi divergence sensitivity condition. |
Reward-rational (implicit) choice: A unifying formalism for reward learning | https://papers.nips.cc/paper_files/paper/2020/hash/2f10c1578a0706e06b6d7db6f0b4a6af-Abstract.html | Hong Jun Jeon, Smitha Milli, Anca Dragan | https://papers.nips.cc/paper_files/paper/2020/hash/2f10c1578a0706e06b6d7db6f0b4a6af-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2f10c1578a0706e06b6d7db6f0b4a6af-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10095-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2f10c1578a0706e06b6d7db6f0b4a6af-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2f10c1578a0706e06b6d7db6f0b4a6af-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2f10c1578a0706e06b6d7db6f0b4a6af-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2f10c1578a0706e06b6d7db6f0b4a6af-Supplemental.pdf | It is often difficult to hand-specify what the correct reward function is for a task, so researchers have instead aimed to learn reward functions from human behavior or feedback. The types of behavior interpreted as evidence of the reward function have expanded greatly in recent years. We've gone from demonstrations, to comparisons, to reading into the information leaked when the human is pushing the robot away or turning it off. And surely, there is more to come. How will a robot make sense of all these diverse types of behavior? Our key observation is that different types of behavior can be interpreted in a single unifying formalism - as a reward-rational choice that the human is making, often implicitly. We use this formalism to survey prior work through a unifying lens, and discuss its potential use as a recipe for interpreting new sources of information that are yet to be uncovered. |
Probabilistic Time Series Forecasting with Shape and Temporal Diversity | https://papers.nips.cc/paper_files/paper/2020/hash/2f2b265625d76a6704b08093c652fd79-Abstract.html | Vincent LE GUEN, Nicolas THOME | https://papers.nips.cc/paper_files/paper/2020/hash/2f2b265625d76a6704b08093c652fd79-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2f2b265625d76a6704b08093c652fd79-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10096-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2f2b265625d76a6704b08093c652fd79-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2f2b265625d76a6704b08093c652fd79-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2f2b265625d76a6704b08093c652fd79-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2f2b265625d76a6704b08093c652fd79-Supplemental.pdf | Probabilistic forecasting consists in predicting a distribution of possible future outcomes. In this paper, we address this problem for non-stationary time series, which is very challenging yet crucially important. We introduce the STRIPE model for representing structured diversity based on shape and time features, ensuring both probable predictions while being sharp and accurate. STRIPE is agnostic to the forecasting model, and we equip it with a diversification mechanism relying on determinantal point processes (DPP). We introduce two DPP kernels for modelling diverse trajectories in terms of shape and time, which are both differentiable and proved to be positive semi-definite. To have an explicit control on the diversity structure, we also design an iterative sampling mechanism to disentangle shape and time representations in the latent space. Experiments carried out on synthetic datasets show that STRIPE significantly outperforms baseline methods for representing diversity, while maintaining accuracy of the forecasting model. We also highlight the relevance of the iterative sampling scheme and the importance to use different criteria for measuring quality and diversity. Finally, experiments on real datasets illustrate that STRIPE is able to outperform state-of-the-art probabilistic forecasting approaches in the best sample prediction. |
Low Distortion Block-Resampling with Spatially Stochastic Networks | https://papers.nips.cc/paper_files/paper/2020/hash/2f380b99d45812a211da102c04dc1ddb-Abstract.html | Sarah Hong, Martin Arjovsky, Darryl Barnhart, Ian Thompson | https://papers.nips.cc/paper_files/paper/2020/hash/2f380b99d45812a211da102c04dc1ddb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2f380b99d45812a211da102c04dc1ddb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10097-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2f380b99d45812a211da102c04dc1ddb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2f380b99d45812a211da102c04dc1ddb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2f380b99d45812a211da102c04dc1ddb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2f380b99d45812a211da102c04dc1ddb-Supplemental.pdf | We formalize and attack the problem of generating new images from old ones that are as diverse as possible, only allowing them to change without restrictions in certain parts of the image while remaining globally consistent.
This encompasses the typical situation found in generative modelling, where we are happy with parts of the generated data, but would like to resample others (``I like this generated castle overall, but this tower looks unrealistic, I would like a new one'').
In order to attack this problem we build from the best conditional and unconditional generative models to introduce a new network architecture, training procedure, and a new algorithm for resampling parts of the image as desired. |
Continual Deep Learning by Functional Regularisation of Memorable Past | https://papers.nips.cc/paper_files/paper/2020/hash/2f3bbb9730639e9ea48f309d9a79ff01-Abstract.html | Pingbo Pan, Siddharth Swaroop, Alexander Immer, Runa Eschenhagen, Richard Turner, Mohammad Emtiyaz Khan | https://papers.nips.cc/paper_files/paper/2020/hash/2f3bbb9730639e9ea48f309d9a79ff01-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10098-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Supplemental.pdf | Continually learning new skills is important for intelligent systems, yet standard deep learning methods suffer from catastrophic forgetting of the past. Recent works address this with weight regularisation. Functional regularisation, although computationally expensive, is expected to perform better, but rarely does so in practice. In this paper, we fix this issue by using a new functional-regularisation approach that utilises a few memorable past examples crucial to avoid forgetting. By using a Gaussian Process formulation of deep networks, our approach enables training in weight-space while identifying both the memorable past and a functional prior. Our method achieves state-of-the-art performance on standard benchmarks and opens a new direction for life-long learning where regularisation and memory-based methods are naturally combined. |
Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning | https://papers.nips.cc/paper_files/paper/2020/hash/2f73168bf3656f697507752ec592c437-Abstract.html | Pan Li, Yanbang Wang, Hongwei Wang, Jure Leskovec | https://papers.nips.cc/paper_files/paper/2020/hash/2f73168bf3656f697507752ec592c437-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2f73168bf3656f697507752ec592c437-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10099-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2f73168bf3656f697507752ec592c437-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2f73168bf3656f697507752ec592c437-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2f73168bf3656f697507752ec592c437-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/2f73168bf3656f697507752ec592c437-Supplemental.pdf | Learning representations of sets of nodes in a graph is crucial for applications ranging from node-role discovery to link prediction and molecule classification. Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, expressive power of GNNs is limited by the 1-Weisfeiler-Lehman (WL) test and thus GNNs generate identical representations for graph substructures that may in fact be very different. More powerful GNNs, proposed recently by mimicking higher-order-WL tests, only focus on representing entire graphs and they are computationally inefficient as they cannot utilize sparsity of the underlying graph. Here we propose and mathematically analyze a general class of structure-related features, termed Distance Encoding (DE). DE assists GNNs in representing any set of nodes, while providing strictly more expressive power than the 1-WL test. DE captures the distance between the node set whose representation is to be learned and each node in the graph. To capture the distance DE can apply various graph-distance measures such as shortest path distance or generalized PageRank scores. We propose two ways for GNNs to use DEs (1) as extra node features, and (2) as controllers of message aggregation in GNNs. Both approaches can utilize the sparse structure of the underlying graph, which leads to computational efficiency and scalability. We also prove that DE can distinguish node sets embedded in almost all regular graphs where traditional GNNs always fail. We evaluate DE on three tasks over six real networks: structural role prediction, link prediction, and triangle prediction. Results show that our models outperform GNNs without DE by up-to 15\% in accuracy and AUROC. Furthermore, our models also significantly outperform other state-of-the-art methods especially designed for the above tasks. |
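The "DE as extra node features" variant described above can be sketched in a few lines. In the sketch below, nodes are assumed to be labeled 0..n-1, and the distance truncation and one-hot encoding are simplifying assumptions rather than the paper's exact featurization.

```python
import networkx as nx
import numpy as np

def distance_encoding(G, target_set, max_dist=5):
    """Distance Encoding as extra node features (sketch): for every node,
    one-hot encode its truncated shortest-path distance to each node of the
    target set whose representation is being learned. Nodes are assumed to be
    labeled 0..n-1; max_dist and the one-hot choice are illustrative."""
    n = G.number_of_nodes()
    feats = np.zeros((n, len(target_set) * (max_dist + 1)))
    for j, t in enumerate(target_set):
        dist = nx.single_source_shortest_path_length(G, t)
        for v in range(n):
            d = min(dist.get(v, max_dist), max_dist)  # unreachable -> last bucket
            feats[v, j * (max_dist + 1) + d] = 1.0
    return feats  # concatenate with the original node features before the GNN
```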
Fast Fourier Convolution | https://papers.nips.cc/paper_files/paper/2020/hash/2fd5d41ec6cfab47e32164d5624269b1-Abstract.html | Lu Chi, Borui Jiang, Yadong Mu | https://papers.nips.cc/paper_files/paper/2020/hash/2fd5d41ec6cfab47e32164d5624269b1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10100-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/2fd5d41ec6cfab47e32164d5624269b1-Review.html | null | Vanilla convolutions in modern deep networks are known to operate locally and at fixed scale (e.g., the widely-adopted 3*3 kernels in image-oriented tasks). This causes low efficacy in connecting two distant locations in the network. In this work, we propose a novel convolutional operator dubbed as fast Fourier convolution (FFC), which has the main hallmarks of non-local receptive fields and cross-scale fusion within the convolutional unit. According to spectral convolution theorem in Fourier theory, point-wise update in the spectral domain globally affects all input features involved in Fourier transform, which sheds light on neural architectural design with non-local receptive field. Our proposed FFC is inspired to capsulate three different kinds of computations in a single operation unit: a local branch that conducts ordinary small-kernel convolution, a semi-global branch that processes spectrally stacked image patches, and a global branch that manipulates image-level spectrum. All branches complementarily address different scales. A multi-branch aggregation step is included in FFC for cross-scale fusion. FFC is a generic operator that can directly replace vanilla convolutions in a large body of existing networks, without any adjustments and with comparable complexity metrics (e.g., FLOPs). We experimentally evaluate FFC in three major vision benchmarks (ImageNet for image recognition, Kinetics for video action recognition, MSCOCO for human keypoint detection). It consistently elevates accuracies in all above tasks by significant margins. |
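The distinctive part of the block described above is the global branch: a point-wise update in the frequency domain touches every spatial location at once. The sketch below illustrates only that spectral update; the local branch, the semi-global branch, and the cross-scale fusion of the full unit are omitted, and the simple channel-mixing matrix is an assumption for illustration.

```python
import torch

def spectral_conv(x, weight):
    """Global branch of an FFC-style block (sketch): go to the frequency
    domain, apply a point-wise channel mixing there (which couples all spatial
    locations at once), and come back. x: (B, C_in, H, W) real feature map,
    weight: (C_out, C_in). The other branches of the full unit are omitted."""
    freq = torch.fft.rfft2(x, norm="ortho")                    # complex spectrum
    real = torch.einsum("oc,bchw->bohw", weight, freq.real)    # 1x1 mixing (real part)
    imag = torch.einsum("oc,bchw->bohw", weight, freq.imag)    # 1x1 mixing (imag part)
    return torch.fft.irfft2(torch.complex(real, imag), s=x.shape[-2:], norm="ortho")
```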
Unsupervised Learning of Dense Visual Representations | https://papers.nips.cc/paper_files/paper/2020/hash/3000311ca56a1cb93397bc676c0b7fff-Abstract.html | Pedro O. O. Pinheiro, Amjad Almahairi, Ryan Benmalek, Florian Golemo, Aaron C. Courville | https://papers.nips.cc/paper_files/paper/2020/hash/3000311ca56a1cb93397bc676c0b7fff-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3000311ca56a1cb93397bc676c0b7fff-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10101-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3000311ca56a1cb93397bc676c0b7fff-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3000311ca56a1cb93397bc676c0b7fff-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3000311ca56a1cb93397bc676c0b7fff-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3000311ca56a1cb93397bc676c0b7fff-Supplemental.pdf | Contrastive self-supervised learning has emerged as a promising approach to unsupervised visual representation learning. In general, these methods learn global (image-level) representations that are invariant to different views (i.e., compositions of data augmentation) of the same image. However, many visual understanding tasks require dense (pixel-level) representations. In this paper, we propose View-Agnostic Dense Representation (VADeR) for unsupervised learning of dense representations. VADeR learns pixelwise representations by forcing local features to remain constant over different viewing conditions. Specifically, this is achieved through pixel-level contrastive learning: matching features (that is, features that describes the same location of the scene on different views) should be close in an embedding space, while non-matching features should be apart. VADeR provides a natural representation for dense prediction tasks and transfers well to downstream tasks. Our method outperforms ImageNet supervised pretraining (and strong unsupervised baselines) in multiple dense prediction tasks. |
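For aligned views, pixel-level contrastive learning as described above reduces to an InfoNCE loss over matched pixel embeddings. The sketch below assumes the two views are already pixel-aligned (row i of one feature matrix matches row i of the other); that alignment step, and the temperature value, are illustrative simplifications.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(feat_a, feat_b, temperature=0.07):
    """Pixel-level InfoNCE (sketch): matching pixel embeddings from two views
    should be close, non-matching ones far apart. Views are assumed already
    aligned so row i of feat_a matches row i of feat_b. feat_*: (N, D)."""
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    logits = a @ b.t() / temperature                 # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```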
Higher-Order Certification For Randomized Smoothing | https://papers.nips.cc/paper_files/paper/2020/hash/300891a62162b960cf02ce3827bb363c-Abstract.html | Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel | https://papers.nips.cc/paper_files/paper/2020/hash/300891a62162b960cf02ce3827bb363c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/300891a62162b960cf02ce3827bb363c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10102-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/300891a62162b960cf02ce3827bb363c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/300891a62162b960cf02ce3827bb363c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/300891a62162b960cf02ce3827bb363c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/300891a62162b960cf02ce3827bb363c-Supplemental.pdf | Randomized smoothing is a recently proposed defense against adversarial attacks that has achieved state-of-the-art provable robustness against $\ell_2$ perturbations. A number of works have extended the guarantees to other metrics, such as $\ell_1$ or $\ell_\infty$, by using different smoothing measures. Although the current framework has been shown to yield near-optimal $\ell_p$ radii, the total safety region certified by the current framework can be arbitrarily small compared to the optimal.
In this work, we propose a framework to improve the certified safety region for these smoothed classifiers without changing the underlying smoothing scheme. The theoretical contributions are as follows: 1) We generalize the certification for randomized smoothing by reformulating certified radius calculation as a nested optimization problem over a class of functions. 2) We provide a method to calculate the certified safety region using zeroth-order and first-order information for Gaussian-smoothed classifiers. We also provide a framework that generalizes the calculation for certification using higher-order information. 3) We design efficient, high-confidence estimators for the relevant statistics of the first-order information. Combining theoretical contributions 2) and 3) allows us to certify safety regions that are significantly larger than the ones provided by current methods. On CIFAR and Imagenet, the new regions achieve significant improvements on general $\ell_1$ certified radii and on the $\ell_2$ certified radii for color-space attacks ($\ell_2$ perturbation restricted to only one color/channel) while also achieving smaller improvements on the general $\ell_2$ certified radii.
As discussed in the future works section, our framework can also provide a way to circumvent the current impossibility results on achieving higher magnitudes of certified radii without requiring the use of data-dependent smoothing techniques. |
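For context, the certified region this framework enlarges starts from the standard zeroth-order $\ell_2$ certificate: a lower confidence bound on the top-class probability under Gaussian noise, mapped through the Gaussian quantile. The sketch below shows only that well-known baseline computation with illustrative argument names; the paper's first- and higher-order certificates are not reproduced here.

```python
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certified_l2_radius(n_correct, n_total, sigma, alpha=0.001):
    """Zeroth-order randomized-smoothing certificate (baseline sketch):
    lower-bound p_A, the probability that the base classifier returns the top
    class under N(0, sigma^2 I) noise, then the smoothed classifier is
    certified within l2 radius sigma * Phi^{-1}(p_A)."""
    p_a = proportion_confint(n_correct, n_total, alpha=2 * alpha, method="beta")[0]
    if p_a <= 0.5:
        return 0.0  # abstain: no certificate at this confidence level
    return sigma * norm.ppf(p_a)
```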
Learning Structured Distributions From Untrusted Batches: Faster and Simpler | https://papers.nips.cc/paper_files/paper/2020/hash/305ddad049f65a2c241dbb6e6f746c54-Abstract.html | Sitan Chen, Jerry Li, Ankur Moitra | https://papers.nips.cc/paper_files/paper/2020/hash/305ddad049f65a2c241dbb6e6f746c54-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/305ddad049f65a2c241dbb6e6f746c54-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10103-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/305ddad049f65a2c241dbb6e6f746c54-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/305ddad049f65a2c241dbb6e6f746c54-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/305ddad049f65a2c241dbb6e6f746c54-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/305ddad049f65a2c241dbb6e6f746c54-Supplemental.zip | In this paper, we find an appealing way to synthesize the techniques of [JO19] and [CLM19] to give the best of both worlds: an algorithm which runs in polynomial time and can exploit structure in the underlying distribution to achieve sublinear sample complexity. Along the way, we simplify the approach of [JO19] by avoiding the need for SDP rounding and giving a more direct interpretation of it through the lens of soft filtering, a powerful recent technique in high-dimensional robust estimation. We validate the usefulness of our algorithms in preliminary experimental evaluations. |
Hierarchical Quantized Autoencoders | https://papers.nips.cc/paper_files/paper/2020/hash/309fee4e541e51de2e41f21bebb342aa-Abstract.html | Will Williams, Sam Ringer, Tom Ash, David MacLeod, Jamie Dougherty, John Hughes | https://papers.nips.cc/paper_files/paper/2020/hash/309fee4e541e51de2e41f21bebb342aa-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/309fee4e541e51de2e41f21bebb342aa-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10104-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/309fee4e541e51de2e41f21bebb342aa-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/309fee4e541e51de2e41f21bebb342aa-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/309fee4e541e51de2e41f21bebb342aa-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/309fee4e541e51de2e41f21bebb342aa-Supplemental.pdf | Despite progress in training neural networks for lossy image compression, current approaches fail to maintain both perceptual quality and abstract features at very low bitrates. Encouraged by recent success in learning discrete representations with Vector Quantized Variational Autoencoders (VQ-VAEs), we motivate the use of a hierarchy of VQ-VAEs to attain high factors of compression. We show that the combination of stochastic quantization and hierarchical latent structure aids likelihood-based image compression. This leads us to introduce a novel objective for training hierarchical VQ-VAEs. Our resulting scheme produces a Markovian series of latent variables that reconstruct images of high-perceptual quality which retain semantically meaningful features. We provide qualitative and quantitative evaluations on the CelebA and MNIST datasets. |
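The basic quantization step inside each layer of such a hierarchy can be sketched briefly. The straight-through gradient trick and the stochastic quantization emphasized in the abstract are omitted, so the fragment below is illustrative only.

```python
import torch

def vector_quantize(z, codebook):
    """Quantization step of one VQ-VAE layer in a hierarchy (sketch): map each
    continuous latent to its nearest codebook entry. z: (N, D), codebook: (K, D).
    Straight-through gradients and stochastic quantization are omitted."""
    idx = torch.cdist(z, codebook).argmin(dim=1)   # nearest-neighbour lookup
    return codebook[idx], idx
```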
Diversity can be Transferred: Output Diversification for White- and Black-box Attacks | https://papers.nips.cc/paper_files/paper/2020/hash/30da227c6b5b9e2482b6b221c711edfd-Abstract.html | Yusuke Tashiro, Yang Song, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2020/hash/30da227c6b5b9e2482b6b221c711edfd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/30da227c6b5b9e2482b6b221c711edfd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10105-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/30da227c6b5b9e2482b6b221c711edfd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/30da227c6b5b9e2482b6b221c711edfd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/30da227c6b5b9e2482b6b221c711edfd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/30da227c6b5b9e2482b6b221c711edfd-Supplemental.zip | Adversarial attacks often involve random perturbations of the inputs drawn from uniform or Gaussian distributions, e.g. to initialize optimization-based white-box attacks or generate update directions in black-box attacks. These simple perturbations, however, could be sub-optimal as they are agnostic to the model being attacked. To improve the efficiency of these attacks, we propose Output Diversified Sampling (ODS), a novel sampling strategy that attempts to maximize diversity in the target model's outputs among the generated samples. While ODS is a gradient-based strategy, the diversity offered by ODS is transferable and can be helpful for both white-box and black-box attacks via surrogate models. Empirically, we demonstrate that ODS significantly improves the performance of existing white-box and black-box attacks. In particular, ODS reduces the number of queries needed for state-of-the-art black-box attacks on ImageNet by a factor of two. |
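The sampling strategy described above has a very small core, sketched below: perturb along the input gradient of a randomly weighted combination of the model's outputs. The surrogate `model`, the uniform weighting range, and the normalization are illustrative details, not the authors' exact implementation.

```python
import torch

def ods_perturbation(model, x):
    """Output Diversified Sampling (sketch): instead of uniform or Gaussian
    input noise, move along the input gradient of a randomly weighted
    combination of the model's outputs, which pushes the resulting outputs
    apart. Weighting range and normalization are illustrative details."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    w = torch.empty_like(logits).uniform_(-1.0, 1.0)   # random output direction
    (logits * w).sum().backward()
    g = x.grad
    norm = g.flatten(1).norm(dim=1).view(-1, *([1] * (g.dim() - 1)))
    return g / (norm + 1e-12)
```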
POLY-HOOT: Monte-Carlo Planning in Continuous Space MDPs with Non-Asymptotic Analysis | https://papers.nips.cc/paper_files/paper/2020/hash/30de24287a6d8f07b37c716ad51623a7-Abstract.html | Weichao Mao, Kaiqing Zhang, Qiaomin Xie, Tamer Basar | https://papers.nips.cc/paper_files/paper/2020/hash/30de24287a6d8f07b37c716ad51623a7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/30de24287a6d8f07b37c716ad51623a7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10106-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/30de24287a6d8f07b37c716ad51623a7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/30de24287a6d8f07b37c716ad51623a7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/30de24287a6d8f07b37c716ad51623a7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/30de24287a6d8f07b37c716ad51623a7-Supplemental.pdf | Monte-Carlo planning, as exemplified by Monte-Carlo Tree Search (MCTS), has demonstrated remarkable performance in applications with finite spaces. In this paper, we consider Monte-Carlo planning in an environment with continuous state-action spaces, a much less understood problem with important applications in control and robotics. We introduce POLY-HOOT, an algorithm that augments MCTS with a continuous armed bandit strategy named Hierarchical Optimistic Optimization (HOO) (Bubeck et al., 2011). Specifically, we enhance HOO by using an appropriate polynomial, rather than logarithmic, bonus term in the upper confidence bounds. Such a polynomial bonus is motivated by its empirical successes in AlphaGo Zero (Silver et al., 2017b), as well as its significant role in achieving theoretical guarantees of finite space MCTS (Shah et al., 2019). We investigate, for the first time, the regret of the enhanced HOO algorithm in non-stationary bandit problems. Using this result as a building block, we establish non-asymptotic convergence guarantees for POLY-HOOT: the value estimate converges to an arbitrarily small neighborhood of the optimal value function at a polynomial rate. We further provide experimental results that corroborate our theoretical findings. |
AvE: Assistance via Empowerment | https://papers.nips.cc/paper_files/paper/2020/hash/30de9ece7cf3790c8c39ccff1a044209-Abstract.html | Yuqing Du, Stas Tiomkin, Emre Kiciman, Daniel Polani, Pieter Abbeel, Anca Dragan | https://papers.nips.cc/paper_files/paper/2020/hash/30de9ece7cf3790c8c39ccff1a044209-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/30de9ece7cf3790c8c39ccff1a044209-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10107-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/30de9ece7cf3790c8c39ccff1a044209-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/30de9ece7cf3790c8c39ccff1a044209-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/30de9ece7cf3790c8c39ccff1a044209-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/30de9ece7cf3790c8c39ccff1a044209-Supplemental.zip | One difficulty in using artificial agents for human-assistive applications lies in the challenge of accurately assisting with a person's goal(s). Existing methods tend to rely on inferring the human's goal, which is challenging when there are many potential goals or when the set of candidate goals is difficult to identify. We propose a new paradigm for assistance by instead increasing the human's ability to control their environment, and formalize this approach by augmenting reinforcement learning with human empowerment. This task-agnostic objective increases the person's autonomy and ability to achieve any eventual state. We test our approach against assistance based on goal inference, highlighting scenarios where our method overcomes failure modes stemming from goal ambiguity or misspecification. As existing methods for estimating empowerment in continuous domains are computationally hard, precluding its use in real time learned assistance, we also propose an efficient empowerment-inspired proxy metric. Using this, we are able to successfully demonstrate our method in a shared autonomy user study for a challenging simulated teleoperation task with human-in-the-loop training. |
Variational Policy Gradient Method for Reinforcement Learning with General Utilities | https://papers.nips.cc/paper_files/paper/2020/hash/30ee748d38e21392de740e2f9dc686b6-Abstract.html | Junyu Zhang, Alec Koppel, Amrit Singh Bedi, Csaba Szepesvari, Mengdi Wang | https://papers.nips.cc/paper_files/paper/2020/hash/30ee748d38e21392de740e2f9dc686b6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/30ee748d38e21392de740e2f9dc686b6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10108-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/30ee748d38e21392de740e2f9dc686b6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/30ee748d38e21392de740e2f9dc686b6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/30ee748d38e21392de740e2f9dc686b6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/30ee748d38e21392de740e2f9dc686b6-Supplemental.pdf | In recent years, reinforcement learning systems with general goals beyond a cumulative sum of rewards have gained traction, such as in constrained problems, exploration, and acting upon prior experiences. In this paper, we consider policy optimization in Markov Decision Problems, where the objective is a general utility function of the state-action occupancy measure, which subsumes several of the aforementioned examples as special cases. Such generality invalidates the Bellman equation. As this means that dynamic programming no longer works, we focus on direct policy search. Analogously to the Policy Gradient Theorem \cite{sutton2000policy} available for RL with cumulative rewards, we derive a new Variational Policy Gradient Theorem for RL with general utilities, which establishes that the gradient may be obtained as the solution of a stochastic saddle point problem involving the Fenchel dual of the utility function. We develop a variational Monte Carlo gradient estimation algorithm to compute the policy gradient based on sample paths. Further, we prove that the variational policy gradient scheme converges globally to the optimal policy for the general objective, and we also establish its rate of convergence that matches or improves the convergence rate available in the case of RL with cumulative rewards. |
Reverse-engineering recurrent neural network solutions to a hierarchical inference task for mice | https://papers.nips.cc/paper_files/paper/2020/hash/30f0641c041f03d94e95a76b9d8bd58f-Abstract.html | Rylan Schaeffer, Mikail Khona, Leenoy Meshulam, Brain Laboratory International, Ila Fiete | https://papers.nips.cc/paper_files/paper/2020/hash/30f0641c041f03d94e95a76b9d8bd58f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/30f0641c041f03d94e95a76b9d8bd58f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10109-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/30f0641c041f03d94e95a76b9d8bd58f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/30f0641c041f03d94e95a76b9d8bd58f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/30f0641c041f03d94e95a76b9d8bd58f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/30f0641c041f03d94e95a76b9d8bd58f-Supplemental.pdf | We study how recurrent neural networks (RNNs) solve a hierarchical inference task involving two latent variables and disparate timescales separated by 1-2 orders of magnitude. The task is of interest to the International Brain Laboratory, a global collaboration of experimental and theoretical neuroscientists studying how the mammalian brain generates behavior. We make four discoveries. First, RNNs learn behavior that is quantitatively similar to ideal Bayesian baselines. Second, RNNs perform inference by learning a two-dimensional subspace defining beliefs about the latent variables. Third, the geometry of RNN dynamics reflects an induced coupling between the two separate inference processes necessary to solve the task. Fourth, we perform model compression through a novel form of knowledge distillation on hidden representations -- Representations and Dynamics Distillation (RADD) -- to reduce the RNN dynamics to a low-dimensional, highly interpretable model. This technique promises a useful tool for interpretability of high dimensional nonlinear dynamical systems. Altogether, this work yields predictions to guide exploration and analysis of mouse neural data and circuitry. |
Temporal Positive-unlabeled Learning for Biomedical Hypothesis Generation via Risk Estimation | https://papers.nips.cc/paper_files/paper/2020/hash/310614fca8fb8e5491295336298c340f-Abstract.html | Uchenna Akujuobi, Jun Chen, Mohamed Elhoseiny, Michael Spranger, Xiangliang Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/310614fca8fb8e5491295336298c340f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/310614fca8fb8e5491295336298c340f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10110-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/310614fca8fb8e5491295336298c340f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/310614fca8fb8e5491295336298c340f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/310614fca8fb8e5491295336298c340f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/310614fca8fb8e5491295336298c340f-Supplemental.pdf | Understanding the relationships between biomedical terms like viruses, drugs, and symptoms is essential in the fight against diseases. Many attempts have been made to introduce the use of machine learning to the scientific process of hypothesis generation (HG), which refers to the discovery of meaningful implicit connections between biomedical terms. However, most existing methods fail to truly capture the temporal dynamics of scientific term relations and also assume unobserved connections to be irrelevant (i.e., in a positive-negative (PN) learning setting). To break these limits, we formulate this HG problem as future connectivity prediction task on a dynamic attributed graph via positive-unlabeled (PU) learning. Then, the key is to capture the temporal evolution of node pair (term pair) relations from just the positive and unlabeled data. We propose a variational inference model to estimate the positive prior, and incorporate it in the learning of node pair embeddings, which are then used for link prediction. Experiment results on real-world biomedical term relationship datasets and case study analyses on a COVID-19 dataset validate the effectiveness of the proposed model. |
Efficient Low Rank Gaussian Variational Inference for Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/310cc7ca5a76a446f85c1a0d641ba96d-Abstract.html | Marcin Tomczak, Siddharth Swaroop, Richard Turner | https://papers.nips.cc/paper_files/paper/2020/hash/310cc7ca5a76a446f85c1a0d641ba96d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/310cc7ca5a76a446f85c1a0d641ba96d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10111-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/310cc7ca5a76a446f85c1a0d641ba96d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/310cc7ca5a76a446f85c1a0d641ba96d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/310cc7ca5a76a446f85c1a0d641ba96d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/310cc7ca5a76a446f85c1a0d641ba96d-Supplemental.zip | Bayesian neural networks are enjoying a renaissance driven in part by recent advances in variational inference (VI). The most common form of VI employs a fully factorized or mean-field distribution, but this is known to suffer from several pathologies, especially as we expect posterior distributions with highly correlated parameters. Current algorithms that capture these correlations with a Gaussian approximating family are difficult to scale to large models due to computational costs and high variance of gradient updates. By using a new form of the reparametrization trick, we derive a computationally efficient algorithm for performing VI with a Gaussian family with a low-rank plus diagonal covariance structure. We scale to deep feed-forward and convolutional architectures. We find that adding low-rank terms to parametrized diagonal covariance does not improve predictive performance except on small networks, but low-rank terms added to a constant diagonal covariance improves performance on small and large-scale network architectures. |
Privacy Amplification via Random Check-Ins | https://papers.nips.cc/paper_files/paper/2020/hash/313f422ac583444ba6045cd122653b0e-Abstract.html | Borja Balle, Peter Kairouz, Brendan McMahan, Om Thakkar, Abhradeep Guha Thakurta | https://papers.nips.cc/paper_files/paper/2020/hash/313f422ac583444ba6045cd122653b0e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/313f422ac583444ba6045cd122653b0e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10112-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/313f422ac583444ba6045cd122653b0e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/313f422ac583444ba6045cd122653b0e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/313f422ac583444ba6045cd122653b0e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/313f422ac583444ba6045cd122653b0e-Supplemental.zip | Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data. Two standard approaches, privacy amplification by subsampling, and privacy amplification by shuffling, permit adding lower noise in DP-SGD than via na\"{\i}ve schemes. A key assumption in both these approaches is that the elements in the data set can be uniformly sampled, or be uniformly permuted --- constraints that may become prohibitive when the data is processed in a decentralized or distributed fashion. In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL) wherein the data is distributed among many devices (clients). Our main contribution is the \emph{random check-in} distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client. It has privacy/accuracy trade-offs similar to privacy amplification by subsampling/shuffling. However, our method does not require server-initiated communication, or even knowledge of the population size. To our knowledge, this is the first privacy amplification tailored for a distributed learning framework, and it may have broader applicability beyond FL. Along the way, we improve the privacy guarantees of amplification by shuffling and show that, in practical regimes, this improvement allows for similar privacy and utility using data from an order of magnitude fewer users. |
Probabilistic Circuits for Variational Inference in Discrete Graphical Models | https://papers.nips.cc/paper_files/paper/2020/hash/31784d9fc1fa0d25d04eae50ac9bf787-Abstract.html | Andy Shih, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2020/hash/31784d9fc1fa0d25d04eae50ac9bf787-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/31784d9fc1fa0d25d04eae50ac9bf787-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10113-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/31784d9fc1fa0d25d04eae50ac9bf787-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/31784d9fc1fa0d25d04eae50ac9bf787-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/31784d9fc1fa0d25d04eae50ac9bf787-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/31784d9fc1fa0d25d04eae50ac9bf787-Supplemental.pdf | Inference in discrete graphical models with variational methods is difficult because of the inability to re-parameterize gradients of the Evidence Lower Bound (ELBO). Many sampling-based methods have been proposed for estimating these gradients, but they suffer from high bias or variance. In this paper, we propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPN), to compute ELBO gradients exactly (without sampling) for a certain class of densities. In particular, we show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial the corresponding ELBO can be computed analytically. To scale to graphical models with thousands of variables, we develop an efficient and effective construction of selective-SPNs with size (O(kn)), where (n) is the number of variables and (k) is an adjustable hyperparameter. We demonstrate our approach on three types of graphical models -- Ising models, Latent Dirichlet Allocation, and factor graphs from the UAI Inference Competition. Selective-SPNs give a better lower bound than mean-field and structured mean-field, and is competitive with approximations that do not provide a lower bound, such as Loopy Belief Propagation and Tree-Reweighted Belief Propagation. Our results show that probabilistic circuits are promising tools for variational inference in discrete graphical models as they combine tractability and expressivity. |
Your Classifier can Secretly Suffice Multi-Source Domain Adaptation | https://papers.nips.cc/paper_files/paper/2020/hash/3181d59d19e76e902666df5c7821259a-Abstract.html | Naveen Venkat, Jogendra Nath Kundu, Durgesh Singh, Ambareesh Revanur, Venkatesh Babu R | https://papers.nips.cc/paper_files/paper/2020/hash/3181d59d19e76e902666df5c7821259a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3181d59d19e76e902666df5c7821259a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10114-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3181d59d19e76e902666df5c7821259a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3181d59d19e76e902666df5c7821259a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3181d59d19e76e902666df5c7821259a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3181d59d19e76e902666df5c7821259a-Supplemental.pdf | Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain, under a domain-shift. Existing methods aim to minimize this domain-shift using auxiliary distribution alignment objectives. In this work, we present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision. Thus, we aim to utilize implicit alignment without additional training objectives to perform adaptation. To this end, we use pseudo-labeled target samples and enforce a classifier agreement on the pseudo-labels, a process called Self-supervised Implicit Alignment (SImpAl). We find that SImpAl readily works even under category-shift among the source domains. Further, we propose classifier agreement as a cue to determine the training convergence, resulting in a simple training algorithm. We provide a thorough evaluation of our approach on five benchmarks, along with detailed insights into each component of our approach. |
Labelling unlabelled videos from scratch with multi-modal self-supervision | https://papers.nips.cc/paper_files/paper/2020/hash/31fefc0e570cb3860f2a6d4b38c6490d-Abstract.html | Yuki Asano, Mandela Patrick, Christian Rupprecht, Andrea Vedaldi | https://papers.nips.cc/paper_files/paper/2020/hash/31fefc0e570cb3860f2a6d4b38c6490d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/31fefc0e570cb3860f2a6d4b38c6490d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10115-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/31fefc0e570cb3860f2a6d4b38c6490d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/31fefc0e570cb3860f2a6d4b38c6490d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/31fefc0e570cb3860f2a6d4b38c6490d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/31fefc0e570cb3860f2a6d4b38c6490d-Supplemental.zip | A large part of the current success of deep learning lies in the effectiveness of data -- more precisely: of labeled data. Yet, labelling a dataset with human annotation continues to carry high costs, especially for videos. While in the image domain, recent methods have allowed to generate meaningful (pseudo-) labels for unlabelled datasets without supervision, this development is missing for the video domain where learning feature representations is the current focus. In this work, we a) show that unsupervised labelling of a video dataset does not come for free from strong feature encoders and b) propose a novel clustering method that allows pseudo-labelling of a video dataset without any human annotations, by leveraging the natural correspondence between audio and visual modalities. An extensive analysis shows that the resulting clusters have high semantic overlap to ground truth human labels. We further introduce the first benchmarking results on unsupervised labelling of common video datasets. |
A Non-Asymptotic Analysis for Stein Variational Gradient Descent | https://papers.nips.cc/paper_files/paper/2020/hash/3202111cf90e7c816a472aaceb72b0df-Abstract.html | Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, Arthur Gretton | https://papers.nips.cc/paper_files/paper/2020/hash/3202111cf90e7c816a472aaceb72b0df-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3202111cf90e7c816a472aaceb72b0df-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10116-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3202111cf90e7c816a472aaceb72b0df-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3202111cf90e7c816a472aaceb72b0df-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3202111cf90e7c816a472aaceb72b0df-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3202111cf90e7c816a472aaceb72b0df-Supplemental.pdf | We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\R^d$. In the population limit, SVGD performs gradient descent in the space of probability distributions on the KL divergence with respect to $\pi$, where the gradient is smoothed through a kernel integral operator. In this paper, we provide a novel finite time analysis for the SVGD algorithm. We provide a descent lemma establishing that the algorithm decreases the objective at each iteration, and rates of convergence. We also provide a convergence result of the finite particle system corresponding to the practical implementation of SVGD to its population version. |
Robust Meta-learning for Mixed Linear Regression with Small Batches | https://papers.nips.cc/paper_files/paper/2020/hash/3214a6d842cc69597f9edf26df552e43-Abstract.html | Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh | https://papers.nips.cc/paper_files/paper/2020/hash/3214a6d842cc69597f9edf26df552e43-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3214a6d842cc69597f9edf26df552e43-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10117-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3214a6d842cc69597f9edf26df552e43-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3214a6d842cc69597f9edf26df552e43-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3214a6d842cc69597f9edf26df552e43-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3214a6d842cc69597f9edf26df552e43-Supplemental.pdf | A common challenge faced in practical supervised learning, such as medical image processing and robotic interactions, is that there are plenty of tasks but each task cannot afford to collect enough labeled examples to be learned in isolation. However, by exploiting the similarities across those tasks, one can hope to overcome such data scarcity. Under a canonical scenario where each task is drawn from a mixture of $k$ linear regressions, we study a fundamental question: can abundant small-data tasks compensate for the lack of big-data tasks? Existing second moment based approaches of \cite{2020arXiv200208936K} show that such a trade-off is efficiently achievable, with the help of medium-sized tasks with $\Omega(k^{1/2})$ examples each. However, this algorithm is brittle in two important scenarios. The predictions can be arbitrarily bad $(i)$ even with only a few outliers in the dataset; or $(ii)$ even if the medium-sized tasks are slightly smaller with $o(k^{1/2})$ examples each. We introduce a spectral approach that is simultaneously robust under both scenarios. To this end, we first design a novel outlier-robust principal component analysis algorithm that achieves an optimal accuracy. This is followed by a sum-of-squares algorithm to exploit the information from higher order moments. Together, this approach is robust against outliers and achieves a graceful statistical trade-off; the lack of $\Omega(k^{1/2})$-size tasks can be compensated for with smaller tasks, which can now be as small as ${\cal O}(\log k)$. |
Bayesian Deep Learning and a Probabilistic Perspective of Generalization | https://papers.nips.cc/paper_files/paper/2020/hash/322f62469c5e3c7dc3e58f5a4d1ea399-Abstract.html | Andrew G. Wilson, Pavel Izmailov | https://papers.nips.cc/paper_files/paper/2020/hash/322f62469c5e3c7dc3e58f5a4d1ea399-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/322f62469c5e3c7dc3e58f5a4d1ea399-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10118-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/322f62469c5e3c7dc3e58f5a4d1ea399-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/322f62469c5e3c7dc3e58f5a4d1ea399-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/322f62469c5e3c7dc3e58f5a4d1ea399-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/322f62469c5e3c7dc3e58f5a4d1ea399-Supplemental.pdf | The key distinguishing property of a Bayesian approach is marginalization, rather than using a single setting of weights. Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically underspecified by the data, and can represent many compelling but different solutions. We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization, and propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction, without significant overhead. We also investigate the prior over functions implied by a vague distribution over neural network weights, explaining the generalization properties of such models from a probabilistic perspective. From this perspective, we explain results that have been presented as mysterious and distinct to neural network generalization, such as the ability to fit images with random labels, and show that these results can be reproduced with Gaussian processes. We also show that Bayesian model averaging alleviates double descent, resulting in monotonic performance improvements with increased flexibility. |
Unsupervised Learning of Object Landmarks via Self-Training Correspondence | https://papers.nips.cc/paper_files/paper/2020/hash/32508f53f24c46f685870a075eaaa29c-Abstract.html | Dimitrios Mallis, Enrique Sanchez, Matthew Bell, Georgios Tzimiropoulos | https://papers.nips.cc/paper_files/paper/2020/hash/32508f53f24c46f685870a075eaaa29c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/32508f53f24c46f685870a075eaaa29c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10119-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/32508f53f24c46f685870a075eaaa29c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/32508f53f24c46f685870a075eaaa29c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/32508f53f24c46f685870a075eaaa29c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/32508f53f24c46f685870a075eaaa29c-Supplemental.pdf | This paper addresses the problem of unsupervised discovery of object landmarks. We take a different path compared to that of existing works, based on 2 novel perspectives: (1) Self-training: starting from generic keypoints, we propose a self-training approach where the goal is to learn a detector that improves itself becoming more and more tuned to object landmarks. (2) Correspondence: we identify correspondence as a key objective for unsupervised landmark discovery and propose an optimization scheme which alternates between recovering object landmark correspondence across different images via clustering and learning an object landmark descriptor without labels. Compared to previous works, our approach can learn landmarks that are more flexible in terms of capturing large changes in viewpoint. We show the favourable properties of our method on a variety of difficult datasets including LS3D, BBCPose and Human3.6M. Code is available at https://github.com/malldimi1/UnsupervisedLandmarks |
Randomized tests for high-dimensional regression: A more efficient and powerful solution | https://papers.nips.cc/paper_files/paper/2020/hash/3261769be720b0fefbfffec05e9d9202-Abstract.html | Yue Li, Ilmun Kim, Yuting Wei | https://papers.nips.cc/paper_files/paper/2020/hash/3261769be720b0fefbfffec05e9d9202-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3261769be720b0fefbfffec05e9d9202-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10120-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3261769be720b0fefbfffec05e9d9202-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3261769be720b0fefbfffec05e9d9202-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3261769be720b0fefbfffec05e9d9202-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3261769be720b0fefbfffec05e9d9202-Supplemental.zip | We investigate the problem of testing the global null in the high-dimensional regression models when the feature dimension $p$ grows proportionally to the number of observations $n$. Despite a number of prior work studying this problem, whether there exists a test that is model-agnostic, efficient to compute and enjoys a high power, still remains unsettled. In this paper, we answer this question in the affirmative by leveraging the random projection techniques, and propose a testing procedure that blends the classical $F$-test with a random projection step. When combined with a systematic choice of the projection dimension, the proposed procedure is proved to be minimax optimal and, meanwhile, reduces the computation and data storage requirements. We illustrate our results in various scenarios when the underlying feature matrix exhibits an intrinsic lower dimensional structure (such as approximate low-rank or has exponential/polynomial eigen-decay), and it turns out that the proposed test achieves sharp adaptive rates. Our theoretical findings are further validated by comparisons to other state-of-the-art tests on synthetic data. |
Learning Representations from Audio-Visual Spatial Alignment | https://papers.nips.cc/paper_files/paper/2020/hash/328e5d4c166bb340b314d457a208dc83-Abstract.html | Pedro Morgado, Yi Li, Nuno Nvasconcelos | https://papers.nips.cc/paper_files/paper/2020/hash/328e5d4c166bb340b314d457a208dc83-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/328e5d4c166bb340b314d457a208dc83-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10121-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/328e5d4c166bb340b314d457a208dc83-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/328e5d4c166bb340b314d457a208dc83-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/328e5d4c166bb340b314d457a208dc83-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/328e5d4c166bb340b314d457a208dc83-Supplemental.pdf | We introduce a novel self-supervised pretext task for learning representations from audio-visual content. Prior work on audio-visual representation learning leverages correspondences at the video level. Approaches based on audio-visual correspondence (AVC) predict whether audio and video clips originate from the same or different video instances. Audio-visual temporal synchronization (AVTS) further discriminates negative pairs originated from the same video instance but at different moments in time. While these approaches learn high-quality representations for downstream tasks such as action recognition, they completely disregard the spatial cues of audio and visual signals naturally occurring in the real world. To learn from these spatial cues, we tasked a network to perform contrastive audio-visual spatial alignment of 360\degree video and spatial audio. The ability to perform spatial alignment is enhanced by reasoning over the full spatial content of the 360\degree video using a transformer architecture to combine representations from multiple viewpoints. The advantages of the proposed pretext task are demonstrated on a variety of audio and visual downstream tasks, including audio-visual correspondence, spatial alignment, action recognition and video semantic segmentation. Dataset and code are available at https://github.com/pedro-morgado/AVSpatialAlignment. |
Generative View Synthesis: From Single-view Semantics to Novel-view Images | https://papers.nips.cc/paper_files/paper/2020/hash/3295c76acbf4caaed33c36b1b5fc2cb1-Abstract.html | Tewodros Amberbir Habtegebrial, Varun Jampani, Orazio Gallo, Didier Stricker | https://papers.nips.cc/paper_files/paper/2020/hash/3295c76acbf4caaed33c36b1b5fc2cb1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3295c76acbf4caaed33c36b1b5fc2cb1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10122-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3295c76acbf4caaed33c36b1b5fc2cb1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3295c76acbf4caaed33c36b1b5fc2cb1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3295c76acbf4caaed33c36b1b5fc2cb1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3295c76acbf4caaed33c36b1b5fc2cb1-Supplemental.zip | Content creation, central to applications such as virtual reality, can be tedious and time-consuming. Recent image synthesis methods simplify this task by offering tools to generate new views from as little as a single input image, or by converting a semantic map into a photorealistic image. We propose to push the envelope further, and introduce Generative View Synthesis (GVS) that can synthesize multiple photorealistic views of a scene given a single semantic map. We show that the sequential application of existing techniques, e.g., semantics-to-image translation followed by monocular view synthesis, fail at capturing the scene's structure. In contrast, we solve the semantics-to-image translation in concert with the estimation of the 3D layout of the scene, thus producing geometrically consistent novel views that preserve semantic structures. We first lift the input 2D semantic map onto a 3D layered representation of the scene in feature space, thereby preserving the semantic labels of 3D geometric structures. We then project the layered features onto the target views to generate the final novel-view images. We verify the strengths of our method and compare it with several advanced baselines on three different datasets. Our approach also allows for style manipulation and image editing operations, such as the addition or removal of objects, with simple manipulations of the input style images and semantic maps respectively. For code and additional results, visit the project page at https://gvsnet.github.io |
Towards More Practical Adversarial Attacks on Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/32bb90e8976aab5298d5da10fe66f21d-Abstract.html | Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei | https://papers.nips.cc/paper_files/paper/2020/hash/32bb90e8976aab5298d5da10fe66f21d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/32bb90e8976aab5298d5da10fe66f21d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10123-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/32bb90e8976aab5298d5da10fe66f21d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/32bb90e8976aab5298d5da10fe66f21d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/32bb90e8976aab5298d5da10fe66f21d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/32bb90e8976aab5298d5da10fe66f21d-Supplemental.pdf | We study the black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can only attack a small number of them. A node selection step is essential under this setup. We demonstrate that the structural inductive biases of GNN models can be an effective source for this type of attacks. Specifically, by exploiting the connection between the backward propagation of GNNs and random walks, we show that the common gradient-based white-box attacks can be generalized to the black-box setting via the connection between the gradient and an importance score similar to PageRank. In practice, we find attacks based on this importance score indeed increase the classification loss by a large margin, but they fail to significantly increase the mis-classification rate. Our theoretical and empirical analyses suggest that there is a discrepancy between the loss and mis-classification rate, as the latter presents a diminishing-return pattern when the number of attacked nodes increases. Therefore, we propose a greedy procedure to correct the importance score that takes into account of the diminishing-return pattern. Experimental results show that the proposed procedure can significantly increase the mis-classification rate of common GNNs on real-world data without access to model parameters nor predictions. |
Multi-Task Reinforcement Learning with Soft Modularization | https://papers.nips.cc/paper_files/paper/2020/hash/32cfdce9631d8c7906e8e9d6e68b514b-Abstract.html | Ruihan Yang, Huazhe Xu, YI WU, Xiaolong Wang | https://papers.nips.cc/paper_files/paper/2020/hash/32cfdce9631d8c7906e8e9d6e68b514b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/32cfdce9631d8c7906e8e9d6e68b514b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10124-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/32cfdce9631d8c7906e8e9d6e68b514b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/32cfdce9631d8c7906e8e9d6e68b514b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/32cfdce9631d8c7906e8e9d6e68b514b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/32cfdce9631d8c7906e8e9d6e68b514b-Supplemental.zip | Multi-task learning is a very challenging problem in reinforcement learning. While training multiple tasks jointly allow the policies to share parameters across different tasks, the optimization problem becomes non-trivial: It remains unclear what parameters in the network should be reused across tasks, and how the gradients from different tasks may interfere with each other. Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on policy representation to alleviate this optimization issue. Given a base policy network, we design a routing network which estimates different routing strategies to reconfigure the base network for each task. Instead of directly selecting routes for each task, our task-specific policy uses a method called soft modularization to softly combine all the possible routes, which makes it suitable for sequential tasks. We experiment with various robotics manipulation tasks in simulation and show our method improves both sample efficiency and performance over strong baselines by a large margin. |