title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract |
---|---|---|---|---|---|---|---|---|---|---|---|
Latent Template Induction with Gumbel-CRFs | https://papers.nips.cc/paper_files/paper/2020/hash/ea119a40c1592979f51819b0bd38d39d-Abstract.html | Yao Fu, Chuanqi Tan, Bin Bi, Mosha Chen, Yansong Feng, Alexander Rush | https://papers.nips.cc/paper_files/paper/2020/hash/ea119a40c1592979f51819b0bd38d39d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea119a40c1592979f51819b0bd38d39d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11425-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea119a40c1592979f51819b0bd38d39d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea119a40c1592979f51819b0bd38d39d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea119a40c1592979f51819b0bd38d39d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea119a40c1592979f51819b0bd38d39d-Supplemental.pdf | Learning to control the structure of sentences is a challenging problem in text generation. Existing work either relies on simple deterministic approaches or RL-based hard structures. We explore the use of structured variational autoencoders to infer latent templates for sentence generation using a soft, continuous relaxation in order to utilize reparameterization for training. Specifically, we propose a Gumbel-CRF, a continuous relaxation of the CRF sampling algorithm using a relaxed Forward-Filtering Backward-Sampling (FFBS) approach. As a reparameterized gradient estimator, the Gumbel-CRF gives more stable gradients than score-function based estimators. As a structured inference network, we show that it learns interpretable templates during training, which allows us to control the decoder during testing. We demonstrate the effectiveness of our methods with experiments on data-to-text generation and unsupervised paraphrase generation. |
Instance Based Approximations to Profile Maximum Likelihood | https://papers.nips.cc/paper_files/paper/2020/hash/ea33b4fd0fc1ea0a40344be8a8641123-Abstract.html | Nima Anari, Moses Charikar, Kirankumar Shiragur, Aaron Sidford | https://papers.nips.cc/paper_files/paper/2020/hash/ea33b4fd0fc1ea0a40344be8a8641123-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea33b4fd0fc1ea0a40344be8a8641123-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11426-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea33b4fd0fc1ea0a40344be8a8641123-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea33b4fd0fc1ea0a40344be8a8641123-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea33b4fd0fc1ea0a40344be8a8641123-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea33b4fd0fc1ea0a40344be8a8641123-Supplemental.pdf | In this paper we provide a new efficient algorithm for approximately computing the profile maximum likelihood (PML) distribution, a prominent quantity in symmetric property estimation. We provide an algorithm which matches the previous best known efficient algorithms for computing approximate PML distributions and improves when the number of distinct observed frequencies in the given instance is small. We achieve this result by exploiting new sparsity structure in approximate PML distributions and providing a new matrix rounding algorithm, of independent interest. Leveraging this result, we obtain the first provable computationally efficient implementation of PseudoPML, a general framework for estimating a broad class of symmetric properties. Additionally, we obtain efficient PML-based estimators for distributions with small profile entropy, a natural instance-based complexity measure. Further, we provide a simpler and more practical PseudoPML implementation that matches the best-known theoretical guarantees of such an estimator and evaluate this method empirically. |
Factorizable Graph Convolutional Networks | https://papers.nips.cc/paper_files/paper/2020/hash/ea3502c3594588f0e9d5142f99c66627-Abstract.html | Yiding Yang, Zunlei Feng, Mingli Song, Xinchao Wang | https://papers.nips.cc/paper_files/paper/2020/hash/ea3502c3594588f0e9d5142f99c66627-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea3502c3594588f0e9d5142f99c66627-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11427-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea3502c3594588f0e9d5142f99c66627-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea3502c3594588f0e9d5142f99c66627-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea3502c3594588f0e9d5142f99c66627-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea3502c3594588f0e9d5142f99c66627-Supplemental.pdf | Graphs have been widely adopted to denote structural connections between entities. The relations are in many cases heterogeneous, but entangled together and denoted merely as a single edge between a pair of nodes. For example, in a social network graph, users in different latent relationships like friends and colleagues, are usually connected via a bare edge that conceals such intrinsic connections. In this paper, we introduce a novel graph convolutional network (GCN), termed as factorizable graph convolutional network (FactorGCN), that explicitly disentangles such intertwined relations encoded in a graph. FactorGCN takes a simple graph as input, and disentangles it into several factorized graphs, each of which represents a latent and disentangled relation among nodes. The features of the nodes are then aggregated separately in each factorized latent space to produce disentangled features, which further leads to better performances for downstream tasks. We evaluate the proposed FactorGCN both qualitatively and quantitatively on the synthetic and real-world datasets, and demonstrate that it yields truly encouraging results in terms of both disentangling and feature aggregation. Code is publicly available at https://github.com/ihollywhy/FactorGCN.PyTorch. |
Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses | https://papers.nips.cc/paper_files/paper/2020/hash/ea3ed20b6b101a09085ef09c97da1597-Abstract.html | Gaurang Sriramanan, Sravanti Addepalli, Arya Baburaj, Venkatesh Babu R | https://papers.nips.cc/paper_files/paper/2020/hash/ea3ed20b6b101a09085ef09c97da1597-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea3ed20b6b101a09085ef09c97da1597-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11428-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea3ed20b6b101a09085ef09c97da1597-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea3ed20b6b101a09085ef09c97da1597-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea3ed20b6b101a09085ef09c97da1597-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea3ed20b6b101a09085ef09c97da1597-Supplemental.pdf | Advances in the development of adversarial attacks have been fundamental to the progress of adversarial defense research. Efficient and effective attacks are crucial for reliable evaluation of defenses, and also for developing robust models. Adversarial attacks are often generated by maximizing standard losses such as the cross-entropy loss or maximum-margin loss within a constraint set using Projected Gradient Descent (PGD). In this work, we introduce a relaxation term to the standard loss, that finds more suitable gradient-directions, increases attack efficacy and leads to more efficient adversarial training. We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries, thereby resulting in stronger attacks. We evaluate our attack against multiple defenses and show improved performance when compared to existing attacks. Further, we propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses by utilizing the proposed relaxation term for both attack generation and training. |
A Study on Encodings for Neural Architecture Search | https://papers.nips.cc/paper_files/paper/2020/hash/ea4eb49329550caaa1d2044105223721-Abstract.html | Colin White, Willie Neiswanger, Sam Nolen, Yash Savani | https://papers.nips.cc/paper_files/paper/2020/hash/ea4eb49329550caaa1d2044105223721-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea4eb49329550caaa1d2044105223721-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11429-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea4eb49329550caaa1d2044105223721-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea4eb49329550caaa1d2044105223721-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea4eb49329550caaa1d2044105223721-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea4eb49329550caaa1d2044105223721-Supplemental.zip | In this work, we present the first formal study on the effect of architecture encodings for NAS, including a theoretical grounding and an empirical study. First we formally define architecture encodings and give a theoretical characterization on the scalability of the encodings we study. Then we identify the main encoding-dependent subroutines which NAS algorithms employ, running experiments to show which encodings work best with each subroutine for many popular algorithms. The experiments act as an ablation study for prior work, disentangling the algorithmic and encoding-based contributions, as well as a guideline for future work. Our results demonstrate that NAS encodings are an important design decision which can have a significant impact on overall performance. Our code is available at https://github.com/naszilla/naszilla. |
Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising | https://papers.nips.cc/paper_files/paper/2020/hash/ea6b2efbdd4255a9f1b3bbc6399b58f4-Abstract.html | Yaochen Xie, Zhengyang Wang, Shuiwang Ji | https://papers.nips.cc/paper_files/paper/2020/hash/ea6b2efbdd4255a9f1b3bbc6399b58f4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea6b2efbdd4255a9f1b3bbc6399b58f4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11430-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea6b2efbdd4255a9f1b3bbc6399b58f4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea6b2efbdd4255a9f1b3bbc6399b58f4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea6b2efbdd4255a9f1b3bbc6399b58f4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea6b2efbdd4255a9f1b3bbc6399b58f4-Supplemental.pdf | Self-supervised frameworks that learn denoising models with merely individual noisy images have shown strong capability and promising performance in various image denoising tasks. Existing self-supervised denoising frameworks are mostly built upon the same theoretical foundation, where the denoising models are required to be J-invariant. However, our analyses indicate that the current theory and the J-invariance may lead to denoising models with reduced performance. In this work, we introduce Noise2Same, a novel self-supervised denoising framework. In Noise2Same, a new self-supervised loss is proposed by deriving a self-supervised upper bound of the typical supervised loss. In particular, Noise2Same requires neither J-invariance nor extra information about the noise model and can be used in a wider range of denoising applications. We analyze our proposed Noise2Same both theoretically and experimentally. The experimental results show that our Noise2Same remarkably outperforms previous self-supervised denoising methods in terms of denoising performance and training efficiency. |
Early-Learning Regularization Prevents Memorization of Noisy Labels | https://papers.nips.cc/paper_files/paper/2020/hash/ea89621bee7c88b2c5be6681c8ef4906-Abstract.html | Sheng Liu, Jonathan Niles-Weed, Narges Razavian, Carlos Fernandez-Granda | https://papers.nips.cc/paper_files/paper/2020/hash/ea89621bee7c88b2c5be6681c8ef4906-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ea89621bee7c88b2c5be6681c8ef4906-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11431-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ea89621bee7c88b2c5be6681c8ef4906-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ea89621bee7c88b2c5be6681c8ef4906-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ea89621bee7c88b2c5be6681c8ef4906-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ea89621bee7c88b2c5be6681c8ef4906-Supplemental.pdf | We propose a novel framework to perform classification via deep learning in the presence of noisy annotations. When trained on noisy labels, deep neural networks have been observed to first fit the training data with clean labels during an "early learning" phase, before eventually memorizing the examples with false labels. We prove that early learning and memorization are fundamental phenomena in high-dimensional classification tasks, even in simple linear models, and give a theoretical explanation in this setting. Motivated by these findings, we develop a new technique for noisy classification tasks, which exploits the progress of the early learning phase. In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization. There are two key elements to our approach. First, we leverage semi-supervised learning techniques to produce target probabilities based on the model outputs. Second, we design a regularization term that steers the model towards these targets, implicitly preventing memorization of the false labels. The resulting framework is shown to provide robustness to noisy annotations on several standard benchmarks and real-world datasets, where it achieves results comparable to the state of the art. |
LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-resolution and Beyond | https://papers.nips.cc/paper_files/paper/2020/hash/eaae339c4d89fc102edd9dbdb6a28915-Abstract.html | Wenbo Li, Kun Zhou, Lu Qi, Nianjuan Jiang, Jiangbo Lu, Jiaya Jia | https://papers.nips.cc/paper_files/paper/2020/hash/eaae339c4d89fc102edd9dbdb6a28915-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eaae339c4d89fc102edd9dbdb6a28915-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11432-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eaae339c4d89fc102edd9dbdb6a28915-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eaae339c4d89fc102edd9dbdb6a28915-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eaae339c4d89fc102edd9dbdb6a28915-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eaae339c4d89fc102edd9dbdb6a28915-Supplemental.pdf | Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version. The last few years have witnessed impressive progress propelled by deep learning methods. However, one critical challenge faced by existing methods is to strike a sweet spot of deep model complexity and resulting SISR quality. This paper addresses this pain point by proposing a linearly-assembled pixel-adaptive regression network (LAPAR), which casts the direct LR to HR mapping learning into a linear coefficient regression task over a dictionary of multiple predefined filter bases. Such a parametric representation renders our model highly lightweight and easy to optimize while achieving state-of-the-art results on SISR benchmarks. Moreover, based on the same idea, LAPAR is extended to tackle other restoration tasks, e.g., image denoising and JPEG image deblocking, and again, yields strong performance. |
Learning Parities with Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/eaae5e04a259d09af85c108fe4d7dd0c-Abstract.html | Amit Daniely, Eran Malach | https://papers.nips.cc/paper_files/paper/2020/hash/eaae5e04a259d09af85c108fe4d7dd0c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eaae5e04a259d09af85c108fe4d7dd0c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11433-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eaae5e04a259d09af85c108fe4d7dd0c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eaae5e04a259d09af85c108fe4d7dd0c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eaae5e04a259d09af85c108fe4d7dd0c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eaae5e04a259d09af85c108fe4d7dd0c-Supplemental.pdf | In recent years we have seen a rapidly growing line of research which shows learnability of various models via common neural network algorithms. Yet, besides a very few outliers, these results show learnability of models that can be learned using linear methods. Namely, such results show that learning neural networks with gradient descent is competitive with learning a linear classifier on top of a data-independent representation of the examples. This leaves much to be desired, as neural networks are far more successful than linear methods. Furthermore, on the more conceptual level, linear models don't seem to capture the ``deepness'' of deep networks. In this paper we make a step towards showing learnability of models that are inherently non-linear. We show that under certain distributions, sparse parities are learnable via gradient descent on a depth-two network. On the other hand, under the same distributions, these parities cannot be learned efficiently by linear methods. |
Consistent Plug-in Classifiers for Complex Objectives and Constraints | https://papers.nips.cc/paper_files/paper/2020/hash/eab1bceaa6c5823d7ed86cfc7a8bd824-Abstract.html | Shiv Kumar Tavker, Harish Guruprasad Ramaswamy, Harikrishna Narasimhan | https://papers.nips.cc/paper_files/paper/2020/hash/eab1bceaa6c5823d7ed86cfc7a8bd824-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eab1bceaa6c5823d7ed86cfc7a8bd824-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11434-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eab1bceaa6c5823d7ed86cfc7a8bd824-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eab1bceaa6c5823d7ed86cfc7a8bd824-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eab1bceaa6c5823d7ed86cfc7a8bd824-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eab1bceaa6c5823d7ed86cfc7a8bd824-Supplemental.pdf | We present a statistically consistent algorithm for constrained classification problems where the objective (e.g. F-measure, G-mean) and the constraints (e.g. demographic parity, coverage) are defined by general functions of the confusion matrix. The key idea is to reduce the problem into a sequence of plug-in classifier learning problems, which is done by formulating an optimization problem over the intersection of the set of achievable confusion matrices and the set of feasible matrices. For objective and constraints that are convex functions of the confusion matrix, our algorithm requires $O(1/\epsilon^2)$ calls to the plug-in routine, which improves on the $O(1/\epsilon^3)$ rate achieved by Narasimhan (2018). We demonstrate empirically that our algorithm performs at least as well as the state-of-the-art methods for these problems. |
Movement Pruning: Adaptive Sparsity by Fine-Tuning | https://papers.nips.cc/paper_files/paper/2020/hash/eae15aabaa768ae4a5993a8a4f4fa6e4-Abstract.html | Victor Sanh, Thomas Wolf, Alexander Rush | https://papers.nips.cc/paper_files/paper/2020/hash/eae15aabaa768ae4a5993a8a4f4fa6e4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eae15aabaa768ae4a5993a8a4f4fa6e4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11435-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eae15aabaa768ae4a5993a8a4f4fa6e4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eae15aabaa768ae4a5993a8a4f4fa6e4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eae15aabaa768ae4a5993a8a4f4fa6e4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eae15aabaa768ae4a5993a8a4f4fa6e4-Supplemental.pdf | Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications. We propose the use of movement pruning, a simple, deterministic first-order weight pruning method that is more adaptive to pretrained model fine-tuning. We give mathematical foundations to the method and compare it to existing zeroth- and first-order pruning methods. Experiments show that when pruning large pretrained language models, movement pruning shows significant improvements in high-sparsity regimes. When combined with distillation, the approach achieves minimal accuracy loss with down to only 3% of the model parameters. |
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | https://papers.nips.cc/paper_files/paper/2020/hash/eae27d77ca20db309e056e3d2dcd7d69-Abstract.html | Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee | https://papers.nips.cc/paper_files/paper/2020/hash/eae27d77ca20db309e056e3d2dcd7d69-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eae27d77ca20db309e056e3d2dcd7d69-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11436-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eae27d77ca20db309e056e3d2dcd7d69-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eae27d77ca20db309e056e3d2dcd7d69-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eae27d77ca20db309e056e3d2dcd7d69-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eae27d77ca20db309e056e3d2dcd7d69-Supplemental.pdf | Network pruning is a method for reducing test-time computational resource requirements with minimal performance degradation. Conventional wisdom of pruning algorithms suggests that: (1) Pruning methods exploit information from training data to find good subnetworks; (2) The architecture of the pruned network is crucial for good performance. In this paper, we conduct sanity checks for the above beliefs on several recent unstructured pruning methods and surprisingly find that: (1) A set of methods which aims to find good subnetworks of the randomly-initialized network (which we call ``initial tickets''), hardly exploits any information from the training data; (2) For the pruned networks obtained by these methods, randomly changing the preserved weights in each layer, while keeping the total number of preserved weights unchanged per layer, does not affect the final performance. These findings inspire us to choose a series of simple \emph{data-independent} prune ratios for each layer, and randomly prune each layer accordingly to get a subnetwork (which we call ``random tickets''). Experimental results show that our zero-shot random tickets outperform or attain similar performance compared to existing ``initial tickets''. In addition, we identify one existing pruning method that passes our sanity checks. We hybridize the ratios in our random tickets with this method and propose a new method called ``hybrid tickets'', which achieves further improvement. |
Online Matrix Completion with Side Information | https://papers.nips.cc/paper_files/paper/2020/hash/eb06b9db06012a7a4179b8f3cb5384d3-Abstract.html | Mark Herbster, Stephen Pasteris, Lisa Tse | https://papers.nips.cc/paper_files/paper/2020/hash/eb06b9db06012a7a4179b8f3cb5384d3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eb06b9db06012a7a4179b8f3cb5384d3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11437-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eb06b9db06012a7a4179b8f3cb5384d3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eb06b9db06012a7a4179b8f3cb5384d3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eb06b9db06012a7a4179b8f3cb5384d3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eb06b9db06012a7a4179b8f3cb5384d3-Supplemental.pdf | We give an online algorithm and prove novel mistake and regret bounds for online binary matrix completion with side information. The mistake bounds we prove are of the form $\tilde{O}(D/\gamma^2)$. The term $1/\gamma^2$ is analogous to the usual margin term in SVM (perceptron) bounds. More specifically, if we assume that there is some factorization of the underlying $m \times n$ matrix into $PQ^T$, where the rows of $P$ are interpreted as "classifiers" in $\mathbb{R}^d$ and the rows of $Q$ as "instances" in $\mathbb{R}^d$, then $\gamma$ is the maximum (normalized) margin over all factorizations $PQ^T$ consistent with the observed matrix. The quasi-dimension term $D$ measures the quality of side information. In the presence of vacuous side information, $D = m+n$. However, if the side information is predictive of the underlying factorization of the matrix, then in an ideal case, $D \in O(k + l)$ where $k$ is the number of distinct row factors and $l$ is the number of distinct column factors. We additionally provide a generalization of our algorithm to the inductive setting. In this setting, we provide an example where the side information is not directly specified in advance. For this example, the quasi-dimension $D$ is now bounded by $O(k^2 + l^2)$. |
Position-based Scaled Gradient for Model Quantization and Pruning | https://papers.nips.cc/paper_files/paper/2020/hash/eb1e78328c46506b46a4ac4a1e378b91-Abstract.html | Jangho Kim, KiYoon Yoo, Nojun Kwak | https://papers.nips.cc/paper_files/paper/2020/hash/eb1e78328c46506b46a4ac4a1e378b91-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eb1e78328c46506b46a4ac4a1e378b91-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11438-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eb1e78328c46506b46a4ac4a1e378b91-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eb1e78328c46506b46a4ac4a1e378b91-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eb1e78328c46506b46a4ac4a1e378b91-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eb1e78328c46506b46a4ac4a1e378b91-Supplemental.pdf | We propose the position-based scaled gradient (PSG) that scales the gradient depending on the position of a weight vector to make it more compression-friendly. First, we theoretically show that applying PSG to the standard gradient descent (GD), which is called PSGD, is equivalent to the GD in the warped weight space, a space made by warping the original weight space via an appropriately designed invertible function. Second, we empirically show that PSG acting as a regularizer to a weight vector is favorable for model compression domains such as quantization and pruning. PSG reduces the gap between the weight distributions of a full-precision model and its compressed counterpart. This enables the versatile deployment of a model either in an uncompressed mode or in a compressed mode depending on the availability of resources. The experimental results on CIFAR-10/100 and ImageNet datasets show the effectiveness of the proposed PSG in both domains of pruning and quantization even for extremely low bits. The code is released on GitHub. |
Online Learning with Primary and Secondary Losses | https://papers.nips.cc/paper_files/paper/2020/hash/eb2e9dffe58d635b7d72e99c8e61b5f2-Abstract.html | Avrim Blum, Han Shao | https://papers.nips.cc/paper_files/paper/2020/hash/eb2e9dffe58d635b7d72e99c8e61b5f2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eb2e9dffe58d635b7d72e99c8e61b5f2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11439-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eb2e9dffe58d635b7d72e99c8e61b5f2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eb2e9dffe58d635b7d72e99c8e61b5f2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eb2e9dffe58d635b7d72e99c8e61b5f2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eb2e9dffe58d635b7d72e99c8e61b5f2-Supplemental.pdf | We study the problem of online learning with primary and secondary losses. For example, a recruiter making decisions of which job applicants to hire might weigh false positives and false negatives equally (the primary loss) but the applicants might weigh false negatives much higher (the secondary loss). We consider the following question: Can we combine ``expert advice'' to achieve low regret with respect to the primary loss, while at the same time performing {\em not much worse than the worst expert} with respect to the secondary loss? Unfortunately, we show that this goal is unachievable without any bounded variance assumption on the secondary loss. More generally, we consider the goal of minimizing the regret with respect to the primary loss and bounding the secondary loss by a linear threshold. On the positive side, we show that running any switching-limited algorithm can achieve this goal if all experts satisfy the assumption that the secondary loss does not exceed the linear threshold by $o(T)$ for any time interval. If not all experts satisfy this assumption, our algorithms can achieve this goal given access to some external oracles which determine when to deactivate and reactivate experts. |
Graph Information Bottleneck | https://papers.nips.cc/paper_files/paper/2020/hash/ebc2aa04e75e3caabda543a1317160c0-Abstract.html | Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec | https://papers.nips.cc/paper_files/paper/2020/hash/ebc2aa04e75e3caabda543a1317160c0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ebc2aa04e75e3caabda543a1317160c0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11440-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ebc2aa04e75e3caabda543a1317160c0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ebc2aa04e75e3caabda543a1317160c0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ebc2aa04e75e3caabda543a1317160c0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ebc2aa04e75e3caabda543a1317160c0-Supplemental.pdf | Representation learning of graph-structured data is challenging because both graph structure and node features carry important information. Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features. However, GNNs are prone to adversarial attacks. Here we introduce Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances expressiveness and robustness of the learned representation of graph-structured data. Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task by maximizing the mutual information between the representation and the target, and simultaneously constraining the mutual information between the representation and the input data. Different from the general IB, GIB regularizes the structural as well as the feature information. We design two sampling algorithms for structural regularization and instantiate the GIB principle with two new models: GIB-Cat and GIB-Bern, and demonstrate the benefits by evaluating the resilience to adversarial attacks. We show that our proposed models are more robust than state-of-the-art graph defense models. GIB-based models empirically achieve up to 31% improvement with adversarial perturbation of the graph structure as well as node features. |
The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise | https://papers.nips.cc/paper_files/paper/2020/hash/ebd64e2bf193fc8c658af2b91952ce8d-Abstract.html | Ilias Diakonikolas, Daniel M. Kane, Pasin Manurangsi | https://papers.nips.cc/paper_files/paper/2020/hash/ebd64e2bf193fc8c658af2b91952ce8d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ebd64e2bf193fc8c658af2b91952ce8d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11441-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ebd64e2bf193fc8c658af2b91952ce8d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ebd64e2bf193fc8c658af2b91952ce8d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ebd64e2bf193fc8c658af2b91952ce8d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ebd64e2bf193fc8c658af2b91952ce8d-Supplemental.pdf | We study the computational complexity of adversarially robust proper learning of halfspaces in the distribution-independent agnostic PAC model, with a focus on $L_p$ perturbations. We give a computationally efficient learning algorithm and a nearly matching computational hardness result for this problem. An interesting implication of our findings is that the $L_{\infty}$ perturbations case is provably computationally harder than the case $2 \leq p < \infty$. |
Adaptive Online Estimation of Piecewise Polynomial Trends | https://papers.nips.cc/paper_files/paper/2020/hash/ebd6d2f5d60ff9afaeda1a81fc53e2d0-Abstract.html | Dheeraj Baby, Yu-Xiang Wang | https://papers.nips.cc/paper_files/paper/2020/hash/ebd6d2f5d60ff9afaeda1a81fc53e2d0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ebd6d2f5d60ff9afaeda1a81fc53e2d0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11442-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ebd6d2f5d60ff9afaeda1a81fc53e2d0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ebd6d2f5d60ff9afaeda1a81fc53e2d0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ebd6d2f5d60ff9afaeda1a81fc53e2d0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ebd6d2f5d60ff9afaeda1a81fc53e2d0-Supplemental.pdf | We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback where the dynamic regret of an online learner against a time varying comparator sequence is studied. Motivated by the theory of non-parametric regression, we introduce a \emph{new variational constraint} that enforces the comparator sequence to belong to a discrete $k^{th}$ order Total Variation ball of radius $C_n$. This variational constraint models comparators that have piece-wise polynomial structure which has many relevant practical applications [Tibshirani, 2015]. By establishing connections to the theory of wavelet based non-parametric regression, we design a \emph{polynomial time} algorithm that achieves the nearly \emph{optimal dynamic regret} of $\tilde{O}(n^{\frac{1}{2k+3}}C_n^{\frac{2}{2k+3}})$. The proposed policy is \emph{adaptive to the unknown radius} $C_n$. Further, we show that the same policy is minimax optimal for several other non-parametric families of interest. |
RNNPool: Efficient Non-linear Pooling for RAM Constrained Inference | https://papers.nips.cc/paper_files/paper/2020/hash/ebd9629fc3ae5e9f6611e2ee05a31cef-Abstract.html | Oindrila Saha, Aditya Kusupati, Harsha Vardhan Simhadri, Manik Varma, Prateek Jain | https://papers.nips.cc/paper_files/paper/2020/hash/ebd9629fc3ae5e9f6611e2ee05a31cef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ebd9629fc3ae5e9f6611e2ee05a31cef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11443-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ebd9629fc3ae5e9f6611e2ee05a31cef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ebd9629fc3ae5e9f6611e2ee05a31cef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ebd9629fc3ae5e9f6611e2ee05a31cef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ebd9629fc3ae5e9f6611e2ee05a31cef-Supplemental.pdf | Standard Convolutional Neural Networks (CNNs) designed for computer vision tasks tend to have large intermediate activation maps. These require large working memory and are thus unsuitable for deployment on resource-constrained devices typically used for inference on the edge. Aggressively downsampling the images via pooling or strided convolutions can address the problem but leads to a significant decrease in accuracy due to gross aggregation of the feature map by standard pooling operators. In this paper, we introduce RNNPool, a novel pooling operator based on Recurrent Neural Networks (RNNs), that efficiently aggregates features over large patches of an image and rapidly downsamples activation maps. Empirical evaluation indicates that an RNNPool layer can effectively replace multiple blocks in a variety of architectures such as MobileNets, DenseNet when applied to standard vision tasks like image classification and face detection. That is, RNNPool can significantly decrease computational complexity and peak memory usage for inference while retaining comparable accuracy. We use RNNPool with the standard S3FD architecture to construct a face detection method that achieves state-of-the-art MAP for tiny ARM Cortex-M4 class microcontrollers with under 256 KB of RAM. Code is released at https://github.com/Microsoft/EdgeML. |
Agnostic Learning with Multiple Objectives | https://papers.nips.cc/paper_files/paper/2020/hash/ebea2325dc670423afe9a1f4d9d1aef5-Abstract.html | Corinna Cortes, Mehryar Mohri, Javier Gonzalvo, Dmitry Storcheus | https://papers.nips.cc/paper_files/paper/2020/hash/ebea2325dc670423afe9a1f4d9d1aef5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ebea2325dc670423afe9a1f4d9d1aef5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11444-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ebea2325dc670423afe9a1f4d9d1aef5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ebea2325dc670423afe9a1f4d9d1aef5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ebea2325dc670423afe9a1f4d9d1aef5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ebea2325dc670423afe9a1f4d9d1aef5-Supplemental.pdf | Most machine learning tasks are inherently multi-objective. This means that the learner has to come up with a model that performs well across a number of base objectives $\mathcal{L}_{1}, \ldots, \mathcal{L}_{p}$, as opposed to a single one. Since optimizing with respect to multiple objectives at the same time is often computationally expensive, the base objectives are often combined in an ensemble $\sum_{k=1}^{p}\lambda_{k}\mathcal{L}_{k}$, thereby reducing the problem to scalar optimization. The mixture weights $\lambda_{k}$ are set to uniform or some other fixed distribution, based on the learner's preferences. We argue that learning with a fixed distribution on the mixture weights runs the risk of overfitting to some individual objectives and significantly harming others, despite performing well on the entire ensemble. Moreover, in reality, the true preferences of a learner across multiple objectives are often unknown or hard to express as a specific distribution. Instead, we propose a new framework of \emph{Agnostic Learning with Multiple Objectives} (ALMO), where a model is optimized for \emph{any} weights in the mixture of base objectives. We present data-dependent Rademacher complexity guarantees for learning in the ALMO framework, which are used to guide a scalable optimization algorithm and the corresponding regularization. We present convergence guarantees for this algorithm, assuming convexity of the loss functions and the underlying hypothesis space. We further implement the algorithm in a popular symbolic gradient computation framework and empirically demonstrate on a number of datasets the benefits of the ALMO framework versus learning with a fixed mixture-weights distribution. |
3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous Image Data | https://papers.nips.cc/paper_files/paper/2020/hash/ebf99bb5df6533b6dd9180a59034698d-Abstract.html | Benjamin Biggs, David Novotny, Sebastien Ehrhardt, Hanbyul Joo, Ben Graham, Andrea Vedaldi | https://papers.nips.cc/paper_files/paper/2020/hash/ebf99bb5df6533b6dd9180a59034698d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ebf99bb5df6533b6dd9180a59034698d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11445-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ebf99bb5df6533b6dd9180a59034698d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ebf99bb5df6533b6dd9180a59034698d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ebf99bb5df6533b6dd9180a59034698d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ebf99bb5df6533b6dd9180a59034698d-Supplemental.zip | We consider the problem of obtaining dense 3D reconstructions of deformable objects from single and partially occluded views. In such cases, the visual evidence is usually insufficient to identify a 3D reconstruction uniquely, so we aim at recovering several plausible reconstructions compatible with the input data. We suggest that ambiguities can be modeled more effectively by parametrizing the possible body shapes and poses via a suitable 3D model, such as SMPL for humans. We propose to learn a multi-hypothesis neural network regressor using a best-of-M loss, where each of the M hypotheses is constrained to lie on a manifold of plausible human poses by means of a generative model. We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans, and in heavily occluded versions of these benchmarks. |
Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation | https://papers.nips.cc/paper_files/paper/2020/hash/ec1f764517b7ffb52057af6df18142b7-Abstract.html | Yangxin Wu, Gengwei Zhang, Hang Xu, Xiaodan Liang, Liang Lin | https://papers.nips.cc/paper_files/paper/2020/hash/ec1f764517b7ffb52057af6df18142b7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11446-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-Supplemental.pdf | Panoptic segmentation is posed as a new popular test-bed for state-of-the-art holistic scene understanding methods, with the requirement of simultaneously segmenting both foreground things and background stuff. The state-of-the-art panoptic segmentation network exhibits high structural complexity in different network components, i.e. backbone, proposal-based foreground branch, segmentation-based background branch, and feature fusion module across branches, which heavily relies on expert knowledge and tedious trials. In this work, we propose an efficient, cooperative and highly automated framework to simultaneously search for all main components including backbone, segmentation branches, and feature fusion module in a unified panoptic segmentation pipeline based on the prevailing one-shot Network Architecture Search (NAS) paradigm. Notably, we extend the common single-task NAS into the multi-component scenario by taking advantage of the newly proposed intra-modular search space and problem-oriented inter-modular search space, which helps us to obtain an optimal network architecture that not only performs well in both instance segmentation and semantic segmentation tasks but is also aware of the reciprocal relations between foreground things and background stuff classes. To relieve the vast computation burden incurred by applying NAS to complicated network architectures, we present a novel path-priority greedy search policy to find a robust, transferable architecture with significantly reduced searching overhead. Our searched architecture, namely Auto-Panoptic, achieves the new state-of-the-art on the challenging COCO and ADE20K benchmarks. Moreover, extensive experiments are conducted to demonstrate the effectiveness of the path-priority policy and the transferability of Auto-Panoptic across different datasets. |
Differentiable Top-k with Optimal Transport | https://papers.nips.cc/paper_files/paper/2020/hash/ec24a54d62ce57ba93a531b460fa8d18-Abstract.html | Yujia Xie, Hanjun Dai, Minshuo Chen, Bo Dai, Tuo Zhao, Hongyuan Zha, Wei Wei, Tomas Pfister | https://papers.nips.cc/paper_files/paper/2020/hash/ec24a54d62ce57ba93a531b460fa8d18-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ec24a54d62ce57ba93a531b460fa8d18-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11447-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ec24a54d62ce57ba93a531b460fa8d18-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ec24a54d62ce57ba93a531b460fa8d18-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ec24a54d62ce57ba93a531b460fa8d18-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ec24a54d62ce57ba93a531b460fa8d18-Supplemental.zip | Finding the k largest or smallest elements from a collection of scores, i.e., the top-k operation, is an important model component widely used in information retrieval, machine learning, and data mining. However, if the top-k operation is implemented in an algorithmic way, e.g., using a bubble sort algorithm, the resulting model cannot be trained in an end-to-end way using prevalent gradient descent algorithms. This is because these implementations typically involve swapping indices, whose gradient cannot be computed. Moreover, the corresponding mapping from the input scores to the indicator vector of whether an element belongs to the top-k set is essentially discontinuous. To address the issue, we propose a smoothed approximation, namely the SOFT (Scalable Optimal transport-based diFferenTiable) top-k operator. Specifically, our SOFT top-k operator approximates the output of the top-k operation as the solution of an Entropic Optimal Transport (EOT) problem. The gradient of the SOFT operator can then be efficiently approximated based on the optimality conditions of the EOT problem. We then apply the proposed operator to the k-nearest neighbors algorithm and the beam search algorithm. The numerical experiments demonstrate that they achieve improved performance. |
Information-theoretic Task Selection for Meta-Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ec3183a7f107d1b8dbb90cb3c01ea7d5-Abstract.html | Ricardo Luna Gutierrez, Matteo Leonetti | https://papers.nips.cc/paper_files/paper/2020/hash/ec3183a7f107d1b8dbb90cb3c01ea7d5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ec3183a7f107d1b8dbb90cb3c01ea7d5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11448-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ec3183a7f107d1b8dbb90cb3c01ea7d5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ec3183a7f107d1b8dbb90cb3c01ea7d5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ec3183a7f107d1b8dbb90cb3c01ea7d5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ec3183a7f107d1b8dbb90cb3c01ea7d5-Supplemental.zip | In Meta-Reinforcement Learning (meta-RL) an agent is trained on a set of tasks to prepare for and learn faster in new, unseen, but related tasks. The training tasks are usually hand-crafted to be representative of the expected distribution of target tasks and hence all used in training. We show that given a set of training tasks, learning can be both faster and more effective (leading to better performance in the target tasks), if the training tasks are appropriately selected. We propose a task selection algorithm based on information theory, which optimizes the set of tasks used for training in meta-RL, irrespective of how they are generated. The algorithm establishes which training tasks are both sufficiently relevant for the target tasks, and different enough from one another. We reproduce different meta-RL experiments from the literature and show that our task selection algorithm improves the final performance in all of them. |
A Limitation of the PAC-Bayes Framework | https://papers.nips.cc/paper_files/paper/2020/hash/ec79d4bed810ed64267d169b0d37373e-Abstract.html | Roi Livni, Shay Moran | https://papers.nips.cc/paper_files/paper/2020/hash/ec79d4bed810ed64267d169b0d37373e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ec79d4bed810ed64267d169b0d37373e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11449-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ec79d4bed810ed64267d169b0d37373e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ec79d4bed810ed64267d169b0d37373e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ec79d4bed810ed64267d169b0d37373e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ec79d4bed810ed64267d169b0d37373e-Supplemental.pdf | PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98). This framework has the flexibility of deriving distribution- and algorithm-dependent bounds, which are often tighter than VC-related uniform convergence bounds. In this manuscript we present a limitation of the PAC-Bayes framework. We demonstrate an easy learning task which is not amenable to a PAC-Bayes analysis. Specifically, we consider the task of linear classification in 1D; it is well-known that this task is learnable using just $O(\log(1/\delta)/\epsilon)$ examples. On the other hand, we show that this fact cannot be proved using a PAC-Bayes analysis: for any algorithm that learns 1-dimensional linear classifiers there exists a (realizable) distribution for which the PAC-Bayes bound is arbitrarily large. |
On Completeness-aware Concept-Based Explanations in Deep Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/ecb287ff763c169694f682af52c1f309-Abstract.html | Chih-Kuan Yeh, Been Kim, Sercan Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar | https://papers.nips.cc/paper_files/paper/2020/hash/ecb287ff763c169694f682af52c1f309-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ecb287ff763c169694f682af52c1f309-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11450-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ecb287ff763c169694f682af52c1f309-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ecb287ff763c169694f682af52c1f309-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ecb287ff763c169694f682af52c1f309-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ecb287ff763c169694f682af52c1f309-Supplemental.pdf | Human explanations of high-level decisions are often expressed in terms of key concepts the decisions are based on. In this paper, we study such concept-based explainability for Deep Neural Networks (DNNs). First, we define the notion of \emph{completeness}, which quantifies how sufficient a particular set of concepts is in explaining a model's prediction behavior based on the assumption that complete concept scores are sufficient statistics of the model prediction. Next, we propose a concept discovery method that aims to infer a complete set of concepts that are additionally encouraged to be interpretable, which addresses the limitations of existing methods on concept explanations. To define an importance score for each discovered concept, we adapt game-theoretic notions to aggregate over sets and propose \emph{ConceptSHAP}. Via proposed metrics and user studies, on a synthetic dataset with apriori-known concept explanations, as well as on real-world image and language datasets, we validate the effectiveness of our method in finding concepts that are both complete in explaining the decisions and interpretable. |
Stochastic Recursive Gradient Descent Ascent for Stochastic Nonconvex-Strongly-Concave Minimax Problems | https://papers.nips.cc/paper_files/paper/2020/hash/ecb47fbb07a752413640f82a945530f8-Abstract.html | Luo Luo, Haishan Ye, Zhichao Huang, Tong Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/ecb47fbb07a752413640f82a945530f8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ecb47fbb07a752413640f82a945530f8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11451-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ecb47fbb07a752413640f82a945530f8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ecb47fbb07a752413640f82a945530f8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ecb47fbb07a752413640f82a945530f8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ecb47fbb07a752413640f82a945530f8-Supplemental.pdf | We consider nonconvex-concave minimax optimization problems of the form $\min_{\bf x}\max_{\bf y\in{\mathcal Y}} f({\bf x},{\bf y})$, where $f$ is strongly-concave in $\bf y$ but possibly nonconvex in $\bf x$ and ${\mathcal Y}$ is a convex and compact set. We focus on the stochastic setting, where we can only access an unbiased stochastic gradient estimate of $f$ at each iteration. This formulation includes many machine learning applications as special cases, such as robust optimization and adversarial training. We are interested in finding an ${\mathcal O}(\varepsilon)$-stationary point of the function $\Phi(\cdot)=\max_{\bf y\in{\mathcal Y}} f(\cdot, {\bf y})$. The most popular algorithm to solve this problem is stochastic gradient descent ascent, which requires $\mathcal O(\kappa^3\varepsilon^{-4})$ stochastic gradient evaluations, where $\kappa$ is the condition number. In this paper, we propose a novel method called Stochastic Recursive gradiEnt Descent Ascent (SREDA), which estimates gradients more efficiently using variance reduction. This method achieves the best known stochastic gradient complexity of ${\mathcal O}(\kappa^3\varepsilon^{-3})$, and its dependency on $\varepsilon$ is optimal for this problem. |
Why Normalizing Flows Fail to Detect Out-of-Distribution Data | https://papers.nips.cc/paper_files/paper/2020/hash/ecb9fe2fbb99c31f567e9823e884dbec-Abstract.html | Polina Kirichenko, Pavel Izmailov, Andrew G. Wilson | https://papers.nips.cc/paper_files/paper/2020/hash/ecb9fe2fbb99c31f567e9823e884dbec-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ecb9fe2fbb99c31f567e9823e884dbec-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11452-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ecb9fe2fbb99c31f567e9823e884dbec-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ecb9fe2fbb99c31f567e9823e884dbec-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ecb9fe2fbb99c31f567e9823e884dbec-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ecb9fe2fbb99c31f567e9823e884dbec-Supplemental.pdf | Detecting out-of-distribution (OOD) data is crucial for robust machine learning systems. Normalizing flows are flexible deep generative models that often surprisingly fail to distinguish between in- and out-of-distribution data: a flow trained on pictures of clothing assigns higher likelihood to handwritten digits. We investigate why normalizing flows perform poorly for OOD detection. We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations which are not specific to the target image datasets, focusing on flows based on coupling layers. We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data, improving OOD detection. Our investigation reveals that properties that enable flows to generate high-fidelity images can have a detrimental effect on OOD detection. |
Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay | https://papers.nips.cc/paper_files/paper/2020/hash/eccd2a86bae4728b38627162ba297828-Abstract.html | Joao Marques-Silva, Thomas Gerspacher, Martin Cooper, Alexey Ignatiev, Nina Narodytska | https://papers.nips.cc/paper_files/paper/2020/hash/eccd2a86bae4728b38627162ba297828-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eccd2a86bae4728b38627162ba297828-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11453-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eccd2a86bae4728b38627162ba297828-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eccd2a86bae4728b38627162ba297828-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eccd2a86bae4728b38627162ba297828-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eccd2a86bae4728b38627162ba297828-Supplemental.pdf | Recent work proposed the computation of so-called PI-explanations of Naive Bayes Classifiers (NBCs). PI-explanations are subset-minimal sets of feature-value pairs that are sufficient for the prediction, and have been computed with state-of-the-art exact algorithms that are worst-case exponential in time and space. In contrast, we show that the computation of one PI-explanation for an NBC can be achieved in log-linear time, and that the same result also applies to the more general class of linear classifiers. Furthermore, we show that the enumeration of PI-explanations can be obtained with polynomial delay. Experimental results demonstrate the performance gains of the new algorithms when compared with earlier work. The experimental results also investigate ways to measure the quality of heuristic explanations. |
Unsupervised Translation of Programming Languages | https://papers.nips.cc/paper_files/paper/2020/hash/ed23fbf18c2cd35f8c7f8de44f85c08d-Abstract.html | Baptiste Roziere, Marie-Anne Lachaux, Lowik Chanussot, Guillaume Lample | https://papers.nips.cc/paper_files/paper/2020/hash/ed23fbf18c2cd35f8c7f8de44f85c08d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ed23fbf18c2cd35f8c7f8de44f85c08d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11454-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ed23fbf18c2cd35f8c7f8de44f85c08d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ed23fbf18c2cd35f8c7f8de44f85c08d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ed23fbf18c2cd35f8c7f8de44f85c08d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ed23fbf18c2cd35f8c7f8de44f85c08d-Supplemental.pdf | A transcompiler, also known as source-to-source translator, is a system that converts source code from a high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modifications in order to work properly. The overall translation process is time-consuming and requires expertise in both the source and target languages, making code-translation projects expensive. Although neural models significantly outperform their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy. Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a significant margin. |
Adversarial Style Mining for One-Shot Unsupervised Domain Adaptation | https://papers.nips.cc/paper_files/paper/2020/hash/ed265bc903a5a097f61d3ec064d96d2e-Abstract.html | Yawei Luo, Ping Liu, Tao Guan, Junqing Yu, Yi Yang | https://papers.nips.cc/paper_files/paper/2020/hash/ed265bc903a5a097f61d3ec064d96d2e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ed265bc903a5a097f61d3ec064d96d2e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11455-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ed265bc903a5a097f61d3ec064d96d2e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ed265bc903a5a097f61d3ec064d96d2e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ed265bc903a5a097f61d3ec064d96d2e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ed265bc903a5a097f61d3ec064d96d2e-Supplemental.pdf | We address the problem of One-Shot Unsupervised Domain Adaptation. Unlike traditional Unsupervised Domain Adaptation, it assumes that only one unlabeled target sample is available when learning to adapt. This setting is realistic but more challenging: conventional adaptation approaches are prone to failure due to the scarcity of unlabeled target data. To this end, we propose a novel Adversarial Style Mining approach, which combines the style transfer module and the task-specific module in an adversarial manner. Specifically, the style transfer module iteratively searches for harder stylized images around the one-shot target sample according to the current learning state, leading the task model to explore potential styles that are difficult to solve in the almost unseen target domain, thus boosting the adaptation performance in a data-scarce scenario. The adversarial learning framework makes the style transfer module and the task-specific module benefit each other during the competition. Extensive experiments on both cross-domain classification and segmentation benchmarks verify that ASM achieves state-of-the-art adaptation performance under the challenging one-shot setting. |
Optimally Deceiving a Learning Leader in Stackelberg Games | https://papers.nips.cc/paper_files/paper/2020/hash/ed383ec94720d62a939bfb6bdd98f50c-Abstract.html | Georgios Birmpas, Jiarui Gan, Alexandros Hollender, Francisco Marmolejo, Ninad Rajgopal, Alexandros Voudouris | https://papers.nips.cc/paper_files/paper/2020/hash/ed383ec94720d62a939bfb6bdd98f50c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ed383ec94720d62a939bfb6bdd98f50c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11456-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ed383ec94720d62a939bfb6bdd98f50c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ed383ec94720d62a939bfb6bdd98f50c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ed383ec94720d62a939bfb6bdd98f50c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ed383ec94720d62a939bfb6bdd98f50c-Supplemental.pdf | Recent results in the ML community have revealed that learning algorithms used to compute the optimal strategy for the leader to commit to in a Stackelberg game are susceptible to manipulation by the follower. Such a learning algorithm operates by querying the best responses or the payoffs of the follower, who consequently can deceive the algorithm by responding as if their payoffs were very different from what they actually are. For this strategic behavior to be successful, the main challenge faced by the follower is to pinpoint the payoffs that would make the learning algorithm compute a commitment so that best responding to it maximizes the follower's utility, according to the true payoffs. While this problem has been considered before, the related literature has only focused on the simplified scenario in which the payoff space is finite, thus leaving the general version of the problem unanswered. In this paper, we fill this gap by showing that it is always possible for the follower to efficiently compute (near-)optimal payoffs for various scenarios of learning interaction between the leader and the follower. |
Online Optimization with Memory and Competitive Control | https://papers.nips.cc/paper_files/paper/2020/hash/ed46558a56a4a26b96a68738a0d28273-Abstract.html | Guanya Shi, Yiheng Lin, Soon-Jo Chung, Yisong Yue, Adam Wierman | https://papers.nips.cc/paper_files/paper/2020/hash/ed46558a56a4a26b96a68738a0d28273-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ed46558a56a4a26b96a68738a0d28273-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11457-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ed46558a56a4a26b96a68738a0d28273-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ed46558a56a4a26b96a68738a0d28273-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ed46558a56a4a26b96a68738a0d28273-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ed46558a56a4a26b96a68738a0d28273-Supplemental.zip | This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous $p$ decisions. This setting generalizes Smoothed Online Convex Optimization. The proposed approach, Optimistic Regularized Online Balanced Descent, achieves a constant, dimension-free competitive ratio. Further, we show a connection between online optimization with memory and online control with adversarial disturbances. This connection, in turn, leads to a new constant-competitive policy for a rich class of online control problems. |
IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method | https://papers.nips.cc/paper_files/paper/2020/hash/ed77eab0b8ff85d0a6a8365df1846978-Abstract.html | Yossi Arjevani, Joan Bruna, Bugra Can, Mert Gurbuzbalaban, Stefanie Jegelka, Hongzhou Lin | https://papers.nips.cc/paper_files/paper/2020/hash/ed77eab0b8ff85d0a6a8365df1846978-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ed77eab0b8ff85d0a6a8365df1846978-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11458-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ed77eab0b8ff85d0a6a8365df1846978-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ed77eab0b8ff85d0a6a8365df1846978-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ed77eab0b8ff85d0a6a8365df1846978-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ed77eab0b8ff85d0a6a8365df1846978-Supplemental.zip | We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex. Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method, thereby providing a systematic way for deriving several well-known decentralized algorithms including EXTRA and SSDA. When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds. We provide experimental results that demonstrate the effectiveness of the proposed algorithm on highly ill-conditioned problems. |
Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation | https://papers.nips.cc/paper_files/paper/2020/hash/eddb904a6db773755d2857aacadb1cb0-Abstract.html | Zhiwei Deng, Karthik Narasimhan, Olga Russakovsky | https://papers.nips.cc/paper_files/paper/2020/hash/eddb904a6db773755d2857aacadb1cb0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eddb904a6db773755d2857aacadb1cb0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11459-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eddb904a6db773755d2857aacadb1cb0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eddb904a6db773755d2857aacadb1cb0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eddb904a6db773755d2857aacadb1cb0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eddb904a6db773755d2857aacadb1cb0-Supplemental.pdf | The ability to perform effective planning is crucial for building an instruction-following agent. When navigating through a new environment, an agent is challenged with (1) connecting the natural language instructions with its progressively growing knowledge of the world; and (2) performing long-range planning and decision making in the form of effective exploration and error correction. Current methods are still limited on both fronts despite extensive efforts. In this paper, we introduce Evolving Graphical Planner (EGP), a module that allows global planning for navigation based on raw sensory input. The module dynamically constructs a graphical representation, generalizes the local action space to allow for more flexible decision making, and performs efficient planning on a proxy representation. We demonstrate our model on a challenging Vision-and-Language Navigation (VLN) task with photorealistic images, and achieve superior performance compared to previous navigation architectures. Concretely, we achieve 53% success rate on the test split of Room-to-Room navigation task (Anderson et al.) through pure imitation learning, outperforming previous architectures by up to 5%. |
Learning from Failure: De-biasing Classifier from Biased Classifier | https://papers.nips.cc/paper_files/paper/2020/hash/eddc3427c5d77843c2253f1e799fe933-Abstract.html | Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin | https://papers.nips.cc/paper_files/paper/2020/hash/eddc3427c5d77843c2253f1e799fe933-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eddc3427c5d77843c2253f1e799fe933-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11460-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eddc3427c5d77843c2253f1e799fe933-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eddc3427c5d77843c2253f1e799fe933-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eddc3427c5d77843c2253f1e799fe933-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eddc3427c5d77843c2253f1e799fe933-Supplemental.pdf | Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased. While previous work tackles this issue by using explicit labeling on the spuriously correlated attributes or presuming a particular bias type, we instead utilize a cheaper, yet generic form of human knowledge, which can be widely applicable to various types of bias. We first observe that neural networks learn to rely on the spurious correlation only when it is “easier” to learn than the desired knowledge, and such reliance is most prominent during the early phase of training. Based on these observations, we propose a failure-based debiasing scheme by training a pair of neural networks simultaneously. Our main idea is twofold: (a) we intentionally train the first network to be biased by repeatedly amplifying its “prejudice”, and (b) we debias the training of the second network by focusing on samples that go against the prejudice of the biased network in (a). Extensive experiments demonstrate that our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets. Surprisingly, our framework even occasionally outperforms the debiasing methods requiring explicit supervision of the spuriously correlated attributes. |
Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder | https://papers.nips.cc/paper_files/paper/2020/hash/eddea82ad2755b24c4e168c5fc2ebd40-Abstract.html | Zhisheng Xiao, Qing Yan, Yali Amit | https://papers.nips.cc/paper_files/paper/2020/hash/eddea82ad2755b24c4e168c5fc2ebd40-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eddea82ad2755b24c4e168c5fc2ebd40-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11461-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eddea82ad2755b24c4e168c5fc2ebd40-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eddea82ad2755b24c4e168c5fc2ebd40-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eddea82ad2755b24c4e168c5fc2ebd40-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eddea82ad2755b24c4e168c5fc2ebd40-Supplemental.pdf | Deep probabilistic generative models enable modeling the likelihoods of very high dimensional data. An important application of generative modeling should be the ability to detect out-of-distribution (OOD) samples by setting a threshold on the likelihood. However, a recent study shows that probabilistic generative models can, in some cases, assign higher likelihoods to certain types of OOD samples, making OOD detection rules based on a likelihood threshold problematic. To address this issue, several OOD detection methods have been proposed for deep generative models. In this paper, we make the observation that some of these methods fail when applied to generative models based on Variational Auto-encoders (VAE). As an alternative, we propose Likelihood Regret, an efficient OOD score for VAEs. We benchmark our proposed method against existing approaches, and empirical results suggest that our method obtains the best overall OOD detection performance compared with other OOD methods applied to VAEs. |
Deep Diffusion-Invariant Wasserstein Distributional Classification | https://papers.nips.cc/paper_files/paper/2020/hash/ede7e2b6d13a41ddf9f4bdef84fdc737-Abstract.html | Sung Woo Park, Dong Wook Shu, Junseok Kwon | https://papers.nips.cc/paper_files/paper/2020/hash/ede7e2b6d13a41ddf9f4bdef84fdc737-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ede7e2b6d13a41ddf9f4bdef84fdc737-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11462-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ede7e2b6d13a41ddf9f4bdef84fdc737-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ede7e2b6d13a41ddf9f4bdef84fdc737-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ede7e2b6d13a41ddf9f4bdef84fdc737-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ede7e2b6d13a41ddf9f4bdef84fdc737-Supplemental.pdf | In this paper, we present a novel classification method called deep diffusion-invariant Wasserstein distributional classification (DeepWDC). DeepWDC represents input data and labels as probability measures to address severe perturbations in input data. It can output the optimal label measure in terms of diffusion invariance, where the label measure is stationary over time and becomes equivalent to a Gaussian measure. Furthermore, DeepWDC minimizes the 2-Wasserstein distance between the optimal label measure and the Gaussian measure, which reduces the Wasserstein uncertainty. Experimental results demonstrate that DeepWDC can substantially enhance the accuracy of several baseline deterministic classification methods and outperform state-of-the-art methods on 2D and 3D data containing various types of perturbations (e.g., rotations, impulse noise, and down-scaling). |
Finding All $\epsilon$-Good Arms in Stochastic Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/edf0320adc8658b25ca26be5351b6c4a-Abstract.html | Blake Mason, Lalit Jain, Ardhendu Tripathy, Robert Nowak | https://papers.nips.cc/paper_files/paper/2020/hash/edf0320adc8658b25ca26be5351b6c4a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/edf0320adc8658b25ca26be5351b6c4a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11463-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/edf0320adc8658b25ca26be5351b6c4a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/edf0320adc8658b25ca26be5351b6c4a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/edf0320adc8658b25ca26be5351b6c4a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/edf0320adc8658b25ca26be5351b6c4a-Supplemental.zip | The pure-exploration problem in stochastic multi-armed bandits aims to find one or more arms with the largest (or near largest) means. Examples include finding an $\epsilon$-good arm, best-arm identification, top-$k$ arm identification, and finding all arms with means above a specified threshold. However, the problem of finding \emph{all} $\epsilon$-good arms has been overlooked in past work, although arguably this may be the most natural objective in many applications. For example, a virologist may conduct preliminary laboratory experiments on a large candidate set of treatments and move all $\epsilon$-good treatments into more expensive clinical trials. Since the ultimate clinical efficacy is uncertain, it is important to identify all $\epsilon$-good candidates. Mathematically, the all-$\epsilon$-good arm identification problem presents significant new challenges and surprises that do not arise in the pure-exploration objectives studied in the past. We introduce two algorithms to overcome these challenges and demonstrate their strong empirical performance on a large-scale crowd-sourced dataset of $2.2$M ratings collected by the New Yorker Caption Contest as well as a dataset testing hundreds of possible cancer drugs. |
Meta-Learning through Hebbian Plasticity in Random Networks | https://papers.nips.cc/paper_files/paper/2020/hash/ee23e7ad9b473ad072d57aaa9b2a5222-Abstract.html | Elias Najarro, Sebastian Risi | https://papers.nips.cc/paper_files/paper/2020/hash/ee23e7ad9b473ad072d57aaa9b2a5222-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ee23e7ad9b473ad072d57aaa9b2a5222-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11464-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ee23e7ad9b473ad072d57aaa9b2a5222-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ee23e7ad9b473ad072d57aaa9b2a5222-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ee23e7ad9b473ad072d57aaa9b2a5222-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ee23e7ad9b473ad072d57aaa9b2a5222-Supplemental.zip | Lifelong learning and adaptability are two defining aspects of biological agents. Modern reinforcement learning (RL) approaches have shown significant progress in solving complex tasks; however, once training is concluded, the solutions found are typically static and incapable of adapting to new information or perturbations. While it is still not completely understood how biological brains learn and adapt so efficiently from experience, it is believed that synaptic plasticity plays a prominent role in this process. Inspired by this biological mechanism, we propose a search method that, instead of optimizing the weight parameters of neural networks directly, only searches for synapse-specific Hebbian learning rules that allow the network to continuously self-organize its weights during the lifetime of the agent. We demonstrate our approach on several reinforcement learning tasks with different sensory modalities and more than 450K trainable plasticity parameters. We find that starting from completely random weights, the discovered Hebbian rules enable an agent to navigate a dynamical 2D-pixel environment; likewise, they allow a simulated 3D quadrupedal robot to learn how to walk while adapting to morphological damage not seen during training, in the absence of any explicit reward or error signal, in less than 100 timesteps. |
A Computational Separation between Private Learning and Online Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ee715daa76f1b51d80343f45547be570-Abstract.html | Mark Bun | https://papers.nips.cc/paper_files/paper/2020/hash/ee715daa76f1b51d80343f45547be570-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ee715daa76f1b51d80343f45547be570-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11465-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ee715daa76f1b51d80343f45547be570-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ee715daa76f1b51d80343f45547be570-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ee715daa76f1b51d80343f45547be570-Review.html | null | A recent line of work has shown a qualitative equivalence between differentially private PAC learning and online learning: A concept class is privately learnable if and only if it is online learnable with a finite mistake bound. However, both directions of this equivalence incur significant losses in both sample and computational efficiency. Studying a special case of this connection, Gonen, Hazan, and Moran (NeurIPS 2019) showed that uniform or highly sample-efficient pure-private learners can be time-efficiently compiled into online learners. We show that, assuming the existence of one-way functions, such an efficient conversion is impossible even for general pure-private learners with polynomial sample complexity. This resolves a question of Neel, Roth, and Wu (FOCS 2019). |
Top-KAST: Top-K Always Sparse Training | https://papers.nips.cc/paper_files/paper/2020/hash/ee76626ee11ada502d5dbf1fb5aae4d2-Abstract.html | Siddhant Jayakumar, Razvan Pascanu, Jack Rae, Simon Osindero, Erich Elsen | https://papers.nips.cc/paper_files/paper/2020/hash/ee76626ee11ada502d5dbf1fb5aae4d2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ee76626ee11ada502d5dbf1fb5aae4d2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11466-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ee76626ee11ada502d5dbf1fb5aae4d2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ee76626ee11ada502d5dbf1fb5aae4d2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ee76626ee11ada502d5dbf1fb5aae4d2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ee76626ee11ada502d5dbf1fb5aae4d2-Supplemental.pdf | Sparse neural networks are becoming increasingly important as the field seeks to improve the performance of existing models by scaling them up, while simultaneously trying to reduce power consumption and computational footprint. Unfortunately, most existing methods for inducing performant sparse models still entail the instantiation of dense parameters, or dense gradients in the backward-pass, during training. For very large models this requirement can be prohibitive. In this work we propose Top-KAST, a method that preserves constant sparsity throughout training (in both the forward and backward-passes). We demonstrate the efficacy of our approach by showing that it performs comparably to or better than previous works when training models on the established ImageNet benchmark, whilst fully maintaining sparsity. In addition to our ImageNet results, we also demonstrate our approach in the domain of language modeling where the current best performing architectures tend to have tens of billions of parameters and scaling up does not yet seem to have saturated performance. Sparse versions of these architectures can be run with significantly fewer resources, making them more widely accessible and applicable. Furthermore, in addition to being effective, our approach is straightforward and can easily be implemented in a wide range of existing machine learning frameworks with only a few additional lines of code. We therefore hope that our contribution will help enable the broader community to explore the potential held by massive models, without incurring massive computational cost. |
Meta-Learning with Adaptive Hyperparameters | https://papers.nips.cc/paper_files/paper/2020/hash/ee89223a2b625b5152132ed77abbcc79-Abstract.html | Sungyong Baik, Myungsub Choi, Janghoon Choi, Heewon Kim, Kyoung Mu Lee | https://papers.nips.cc/paper_files/paper/2020/hash/ee89223a2b625b5152132ed77abbcc79-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ee89223a2b625b5152132ed77abbcc79-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11467-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ee89223a2b625b5152132ed77abbcc79-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ee89223a2b625b5152132ed77abbcc79-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ee89223a2b625b5152132ed77abbcc79-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ee89223a2b625b5152132ed77abbcc79-Supplemental.pdf | Despite its popularity, several recent works question the effectiveness of MAML when test tasks are different from training tasks, thus suggesting various task-conditioned methodology to improve the initialization. Instead of searching for better task-aware initialization, we focus on a complementary factor in MAML framework, inner-loop optimization (or fast adaptation). Consequently, we propose a new weight update rule that greatly enhances the fast adaptation process. Specifically, we introduce a small meta-network that can adaptively generate per-step hyperparameters: learning rate and weight decay coefficients. The experimental results validate that the Adaptive Learning of hyperparameters for Fast Adaptation (ALFA) is the equally important ingredient that was often neglected in the recent few-shot learning approaches. Surprisingly, fast adaptation from random initialization with ALFA can already outperform MAML. |
Tight last-iterate convergence rates for no-regret learning in multi-player games | https://papers.nips.cc/paper_files/paper/2020/hash/eea5d933e9dce59c7dd0f6532f9ea81b-Abstract.html | Noah Golowich, Sarath Pattathil, Constantinos Daskalakis | https://papers.nips.cc/paper_files/paper/2020/hash/eea5d933e9dce59c7dd0f6532f9ea81b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eea5d933e9dce59c7dd0f6532f9ea81b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11468-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eea5d933e9dce59c7dd0f6532f9ea81b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eea5d933e9dce59c7dd0f6532f9ea81b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eea5d933e9dce59c7dd0f6532f9ea81b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eea5d933e9dce59c7dd0f6532f9ea81b-Supplemental.pdf | We study the question of obtaining last-iterate convergence rates for no-regret learning algorithms in multi-player games. We show that the optimistic gradient (OG) algorithm with a constant step-size, which is no-regret, achieves a last-iterate rate of O(1/√T) with respect to the gap function in smooth monotone games. This result addresses a question of Mertikopoulos & Zhou (2018), who asked whether extra-gradient approaches (such as OG) can be applied to achieve improved guarantees in the multi-agent learning setting. The proof of our upper bound uses a new technique centered around an adaptive choice of potential function at each iteration. We also show that the O(1/√T) rate is tight for all p-SCLI algorithms, which includes OG as a special case. As a byproduct of our lower bound analysis we additionally present a proof of a conjecture of Arjevani et al. (2015) which is more direct than previous approaches. |
Curvature Regularization to Prevent Distortion in Graph Embedding | https://papers.nips.cc/paper_files/paper/2020/hash/eeb29740e8e9bcf14dc26c2fff8cca81-Abstract.html | Hongbin Pei, Bingzhe Wei, Kevin Chang, Chunxu Zhang, Bo Yang | https://papers.nips.cc/paper_files/paper/2020/hash/eeb29740e8e9bcf14dc26c2fff8cca81-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eeb29740e8e9bcf14dc26c2fff8cca81-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11469-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eeb29740e8e9bcf14dc26c2fff8cca81-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eeb29740e8e9bcf14dc26c2fff8cca81-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eeb29740e8e9bcf14dc26c2fff8cca81-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eeb29740e8e9bcf14dc26c2fff8cca81-Supplemental.pdf | Recent research on graph embedding has achieved success in various applications. Most graph embedding methods preserve the proximity in a graph into a manifold in an embedding space. We argue an important but neglected problem about this proximity-preserving strategy: Graph topology patterns, while preserved well into an embedding manifold by preserving proximity, may distort in the ambient embedding Euclidean space, and hence to detect them becomes difficult for machine learning models. To address the problem, we propose curvature regularization, to enforce flatness for embedding manifolds, thereby preventing the distortion. We present a novel angle-based sectional curvature, termed ABS curvature, and accordingly three kinds of curvature regularization to induce flat embedding manifolds during graph embedding. We integrate curvature regularization into five popular proximity-preserving embedding methods, and empirical results in two applications show significant improvements on a wide range of open graph datasets. |
Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability | https://papers.nips.cc/paper_files/paper/2020/hash/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-Abstract.html | Nathan Inkawhich, Kevin Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, Yiran Chen | https://papers.nips.cc/paper_files/paper/2020/hash/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11470-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eefc7bfe8fd6e2c8c01aa6ca7b1aab1a-Supplemental.pdf | We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers. Rather than focusing on crossing decision boundaries at the output layer of the source model, our method perturbs representations throughout the extracted feature hierarchy to resemble other classes. We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance between ImageNet DNNs. We also show the superiority of our feature space methods under a relaxation of the common assumption that the source and target models are trained on the same dataset and label space, in some instances achieving a $10\times$ increase in targeted success rate relative to other blackbox transfer methods. Finally, we analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed. |
Statistical and Topological Properties of Sliced Probability Divergences | https://papers.nips.cc/paper_files/paper/2020/hash/eefc9e10ebdc4a2333b42b2dbb8f27b6-Abstract.html | Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Simsekli | https://papers.nips.cc/paper_files/paper/2020/hash/eefc9e10ebdc4a2333b42b2dbb8f27b6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/eefc9e10ebdc4a2333b42b2dbb8f27b6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11471-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/eefc9e10ebdc4a2333b42b2dbb8f27b6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/eefc9e10ebdc4a2333b42b2dbb8f27b6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/eefc9e10ebdc4a2333b42b2dbb8f27b6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/eefc9e10ebdc4a2333b42b2dbb8f27b6-Supplemental.pdf | The idea of slicing divergences has been proven to be successful when comparing two probability measures in various machine learning applications including generative modeling, and consists in computing the expected value of a `base divergence' between \emph{one-dimensional random projections} of the two measures. However, the topological, statistical, and computational consequences of this technique have not yet been well-established. In this paper, we aim at bridging this gap and derive various theoretical properties of sliced probability divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, implying that the sliced divergence will share similar topological properties. We then make these results more precise in the case where the base divergence belongs to the class of integral probability metrics. On the other hand, we establish that, under mild conditions, the sample complexity of a sliced divergence does not depend on the problem dimension. We finally apply our general results to several base divergences, and illustrate our theory on both synthetic and real data experiments. |
Probabilistic Active Meta-Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ef0d17b3bdb4ee2aa741ba28c7255c53-Abstract.html | Jean Kaddour, Steindor Saemundsson, Marc Deisenroth (he/him) | https://papers.nips.cc/paper_files/paper/2020/hash/ef0d17b3bdb4ee2aa741ba28c7255c53-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef0d17b3bdb4ee2aa741ba28c7255c53-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11472-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef0d17b3bdb4ee2aa741ba28c7255c53-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef0d17b3bdb4ee2aa741ba28c7255c53-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef0d17b3bdb4ee2aa741ba28c7255c53-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef0d17b3bdb4ee2aa741ba28c7255c53-Supplemental.pdf | Data-efficient learning algorithms are essential in many practical applications where data collection is expensive, e.g., in robotics due to the wear and tear. To address this problem, meta-learning algorithms use prior experience about tasks to learn new, related tasks efficiently. Typically, a set of training tasks is assumed given or randomly chosen. However, this setting does not take into account the sequential nature that naturally arises when training a model from scratch in real-life: how do we collect a set of training tasks in a data-efficient manner? In this work, we introduce task selection based on prior experience into a meta-learning algorithm by conceptualizing the learner and the active meta-learning setting using a probabilistic latent variable model. We provide empirical evidence that our approach improves data-efficiency when compared to strong baselines on simulated robotic experiments. |
Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher | https://papers.nips.cc/paper_files/paper/2020/hash/ef0d3930a7b6c95bd2b32ed45989c61f-Abstract.html | Guangda Ji, Zhanxing Zhu | https://papers.nips.cc/paper_files/paper/2020/hash/ef0d3930a7b6c95bd2b32ed45989c61f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef0d3930a7b6c95bd2b32ed45989c61f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11473-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef0d3930a7b6c95bd2b32ed45989c61f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef0d3930a7b6c95bd2b32ed45989c61f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef0d3930a7b6c95bd2b32ed45989c61f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef0d3930a7b6c95bd2b32ed45989c61f-Supplemental.zip | Knowledge distillation is a strategy of training a student network with the guidance of the soft output from a teacher network. It has been a successful method of model compression and knowledge transfer. However, knowledge distillation currently lacks a convincing theoretical understanding. On the other hand, recent findings on the neural tangent kernel enable us to approximate a wide neural network with a linear model of the network's random features. In this paper, we theoretically analyze the knowledge distillation of a wide neural network. First, we provide a transfer risk bound for the linearized model of the network. Then we propose a metric of the task's training difficulty, called data inefficiency. Based on this metric, we show that for a perfect teacher, a high ratio of teacher's soft labels can be beneficial. Finally, for the case of an imperfect teacher, we find that hard labels can correct the teacher's wrong predictions, which explains the practice of mixing hard and soft labels. |
Adversarial Attacks on Deep Graph Matching | https://papers.nips.cc/paper_files/paper/2020/hash/ef126722e64e98d1c33933783e52eafc-Abstract.html | Zijie Zhang, Zeru Zhang, Yang Zhou, Yelong Shen, Ruoming Jin, Dejing Dou | https://papers.nips.cc/paper_files/paper/2020/hash/ef126722e64e98d1c33933783e52eafc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef126722e64e98d1c33933783e52eafc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11474-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef126722e64e98d1c33933783e52eafc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef126722e64e98d1c33933783e52eafc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef126722e64e98d1c33933783e52eafc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef126722e64e98d1c33933783e52eafc-Supplemental.pdf | Despite achieving remarkable performance, deep graph learning models, such as node classification and network embedding, are vulnerable to small adversarial perturbations. However, the vulnerability of graph matching under adversarial attacks has not been fully investigated yet. This paper proposes an adversarial attack model with two novel attack techniques to perturb the graph structure and degrade the quality of deep graph matching: (1) a kernel density estimation approach is utilized to estimate and maximize node densities to derive imperceptible perturbations, by pushing attacked nodes to dense regions in two graphs, such that they are indistinguishable from many neighbors; and (2) a meta learning-based projected gradient descent method is developed to choose good attack starting points and to improve the search performance for producing effective perturbations. We evaluate the effectiveness of the attack model on real datasets and validate that the attacks can be transferable to other graph learning models. |
The Generalization-Stability Tradeoff In Neural Network Pruning | https://papers.nips.cc/paper_files/paper/2020/hash/ef2ee09ea9551de88bc11fd7eeea93b0-Abstract.html | Brian Bartoldson, Ari Morcos, Adrian Barbu, Gordon Erlebacher | https://papers.nips.cc/paper_files/paper/2020/hash/ef2ee09ea9551de88bc11fd7eeea93b0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef2ee09ea9551de88bc11fd7eeea93b0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11475-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef2ee09ea9551de88bc11fd7eeea93b0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef2ee09ea9551de88bc11fd7eeea93b0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef2ee09ea9551de88bc11fd7eeea93b0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef2ee09ea9551de88bc11fd7eeea93b0-Supplemental.zip | Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting. This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts. To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's benefit to generalization increases with pruning's instability (defined as the drop in test accuracy immediately following pruning). We demonstrate that this "generalization-stability tradeoff" is present across a wide variety of pruning settings and propose a mechanism for its cause: pruning regularizes similarly to noise injection. Supporting this, we find that less pruning stability leads to more model flatness and that the benefits of pruning do not depend on permanent parameter removal. These results explain the compatibility of pruning-based generalization improvements and the high generalization recently observed in overparameterized networks. |
Gradient-EM Bayesian Meta-Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ef48e3ef07e359006f7869b04fa07f5e-Abstract.html | Yayi Zou, Xiaoqi Lu | https://papers.nips.cc/paper_files/paper/2020/hash/ef48e3ef07e359006f7869b04fa07f5e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef48e3ef07e359006f7869b04fa07f5e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11476-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef48e3ef07e359006f7869b04fa07f5e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef48e3ef07e359006f7869b04fa07f5e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef48e3ef07e359006f7869b04fa07f5e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef48e3ef07e359006f7869b04fa07f5e-Supplemental.pdf | Bayesian meta-learning enables robust and fast adaptation to new tasks with uncertainty assessment. The key idea behind Bayesian meta-learning is empirical Bayes inference of hierarchical model. In this work, we extend this framework to include a variety of existing methods, before proposing our variant based on gradient-EM algorithm. Our method improves computational efficiency by avoiding back-propagation computation in the meta-update step, which is exhausting for deep neural networks. Furthermore, it provides flexibility to the inner-update optimization procedure by decoupling it from meta-update. Experiments on sinusoidal regression, few-shot image classification, and policy-based reinforcement learning show that our method not only achieves better accuracy with less computation cost, but is also more robust to uncertainty. |
Logarithmic Regret Bound in Partially Observable Linear Dynamical Systems | https://papers.nips.cc/paper_files/paper/2020/hash/ef8b5fcc338e003145ac9c134754db71-Abstract.html | Sahin Lale, Kamyar Azizzadenesheli, Babak Hassibi, Anima Anandkumar | https://papers.nips.cc/paper_files/paper/2020/hash/ef8b5fcc338e003145ac9c134754db71-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef8b5fcc338e003145ac9c134754db71-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11477-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef8b5fcc338e003145ac9c134754db71-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef8b5fcc338e003145ac9c134754db71-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef8b5fcc338e003145ac9c134754db71-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef8b5fcc338e003145ac9c134754db71-Supplemental.pdf | We study the problem of system identification and adaptive control in partially observable linear dynamical systems. Adaptive and closed-loop system identification is a challenging problem due to correlations introduced in data collection. In this paper, we present the first model estimation method with finite-time guarantees in both open and closed-loop system identification. Deploying this estimation method, we propose adaptive control online learning (AdapOn), an efficient reinforcement learning algorithm that adaptively learns the system dynamics and continuously updates its controller through online learning steps. AdapOn estimates the model dynamics by occasionally solving a linear regression problem through interactions with the environment. Using policy re-parameterization and the estimated model, AdapOn constructs counterfactual loss functions to be used for updating the controller through online gradient descent. Over time, AdapOn improves its model estimates and obtains more accurate gradient updates to improve the controller. We show that AdapOn achieves a regret upper bound of $\text{polylog}\left(T\right)$, after $T$ time steps of agent-environment interaction. To the best of our knowledge, AdapOn is the first algorithm that achieves $\text{polylog}\left(T\right)$ regret in adaptive control of \textit{unknown} partially observable linear dynamical systems which includes linear quadratic Gaussian (LQG) control. |
Linearly Converging Error Compensated SGD | https://papers.nips.cc/paper_files/paper/2020/hash/ef9280fbc5317f17d480e4d4f61b3751-Abstract.html | Eduard Gorbunov, Dmitry Kovalev, Dmitry Makarenko, Peter Richtarik | https://papers.nips.cc/paper_files/paper/2020/hash/ef9280fbc5317f17d480e4d4f61b3751-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ef9280fbc5317f17d480e4d4f61b3751-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11478-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ef9280fbc5317f17d480e4d4f61b3751-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ef9280fbc5317f17d480e4d4f61b3751-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ef9280fbc5317f17d480e4d4f61b3751-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ef9280fbc5317f17d480e4d4f61b3751-Supplemental.pdf | In this paper, we propose a unified analysis of variants of distributed SGD with arbitrary compressions and delayed updates. Our framework is general enough to cover different variants of quantized SGD, Error-Compensated SGD (EC-SGD), and SGD with delayed updates (D-SGD). Via a single theorem, we derive the complexity results for all the methods that fit our framework. For the existing methods, this theorem gives the best-known complexity results. Moreover, using our general scheme, we develop new variants of SGD that combine variance reduction or arbitrary sampling with error feedback and quantization, and derive convergence rates for these methods that beat the state-of-the-art results. To illustrate the strength of our framework, we develop 16 new methods that fit it. In particular, we propose the first method, called EC-SGD-DIANA, that is based on error feedback for a biased compression operator and quantization of gradient differences, and we prove convergence guarantees showing that EC-SGD-DIANA converges to the exact optimum asymptotically in expectation with a constant learning rate for both convex and strongly convex objectives when workers compute full gradients of their loss functions. Moreover, for the case when the loss function of the worker has the form of a finite sum, we modify the method and obtain a new one, called EC-LSVRG-DIANA, which is the first distributed stochastic method with error feedback and variance reduction that converges to the exact optimum asymptotically in expectation with a constant learning rate. |
Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction | https://papers.nips.cc/paper_files/paper/2020/hash/efe34c4e2190e97d1adc625902822b13-Abstract.html | David Novotny, Roman Shapovalov, Andrea Vedaldi | https://papers.nips.cc/paper_files/paper/2020/hash/efe34c4e2190e97d1adc625902822b13-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/efe34c4e2190e97d1adc625902822b13-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11479-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/efe34c4e2190e97d1adc625902822b13-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/efe34c4e2190e97d1adc625902822b13-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/efe34c4e2190e97d1adc625902822b13-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/efe34c4e2190e97d1adc625902822b13-Supplemental.pdf | We propose the Canonical 3D Deformer Map, a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects. Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings, combining their individual advantages. In particular, it learns to associate each image pixel with a deformation model of the corresponding 3D object point which is canonical, i.e. intrinsic to the identity of the point and shared across objects of the category. The result is a method that, given only sparse 2D supervision at training time, can, at test time, reconstruct the 3D shape and texture of objects from single views, while establishing meaningful dense correspondences between object instances. It also achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds. |
A Self-Tuning Actor-Critic Algorithm | https://papers.nips.cc/paper_files/paper/2020/hash/f02208a057804ee16ac72ff4d3cec53b-Abstract.html | Tom Zahavy, Zhongwen Xu, Vivek Veeriah, Matteo Hessel, Junhyuk Oh, Hado P. van Hasselt, David Silver, Satinder Singh | https://papers.nips.cc/paper_files/paper/2020/hash/f02208a057804ee16ac72ff4d3cec53b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f02208a057804ee16ac72ff4d3cec53b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11480-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f02208a057804ee16ac72ff4d3cec53b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f02208a057804ee16ac72ff4d3cec53b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f02208a057804ee16ac72ff4d3cec53b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f02208a057804ee16ac72ff4d3cec53b-Supplemental.pdf | Reinforcement learning algorithms are highly sensitive to the choice of hyperparameters, typically requiring significant manual effort to identify hyperparameters that perform well on a new domain. In this paper, we take a step towards addressing this issue by using metagradients to automatically adapt hyperparameters online by meta-gradient descent (Xu et al., 2018). We apply our algorithm, Self-Tuning Actor-Critic (STAC), to self-tune all the differentiable hyperparameters of an actor-critic loss function, to discover auxiliary tasks, and to improve off-policy learning using a novel leaky V-trace operator. STAC is simple to use, sample efficient and does not require a significant increase in compute. Ablative studies show that the overall performance of STAC improved as we adapt more hyperparameters. When applied to the Arcade Learning Environment (Bellemare et al. 2012), STAC improved the median human normalized score in 200M steps from 243% to 364%. When applied to the DM Control suite (Tassa et al., 2018), STAC improved the mean score in 30M steps from 217 to 389 when learning with features, from 108 to 202 when learning from pixels, and from 195 to 295 in the Real-World Reinforcement Learning Challenge (Dulac-Arnold et al., 2020). |
The Cone of Silence: Speech Separation by Localization | https://papers.nips.cc/paper_files/paper/2020/hash/f056bfa71038e04a2400266027c169f9-Abstract.html | Teerapat Jenrungrot, Vivek Jayaram, Steve Seitz, Ira Kemelmacher-Shlizerman | https://papers.nips.cc/paper_files/paper/2020/hash/f056bfa71038e04a2400266027c169f9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f056bfa71038e04a2400266027c169f9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11481-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f056bfa71038e04a2400266027c169f9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f056bfa71038e04a2400266027c169f9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f056bfa71038e04a2400266027c169f9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f056bfa71038e04a2400266027c169f9-Supplemental.zip | Given a multi-microphone recording of an unknown number of speakers talking concurrently, we simultaneously localize the sources and separate the individual speakers. At the core of our method is a deep network, in the waveform domain, which isolates sources within an angular region $\theta \pm w/2$, given an angle of interest $\theta$ and angular window size $w$. By exponentially decreasing $w$, we can perform a binary search to localize and separate all sources in logarithmic time. Our algorithm also allows for an arbitrary number of potentially moving speakers at test time, including more speakers than seen during training. Experiments demonstrate state of the art performance for both source separation and source localization, particularly in high levels of background noise. |
High-Dimensional Bayesian Optimization via Nested Riemannian Manifolds | https://papers.nips.cc/paper_files/paper/2020/hash/f05da679342107f92111ad9d65959cd3-Abstract.html | Noémie Jaquier, Leonel Rozo | https://papers.nips.cc/paper_files/paper/2020/hash/f05da679342107f92111ad9d65959cd3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f05da679342107f92111ad9d65959cd3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11482-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f05da679342107f92111ad9d65959cd3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f05da679342107f92111ad9d65959cd3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f05da679342107f92111ad9d65959cd3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f05da679342107f92111ad9d65959cd3-Supplemental.pdf | Despite the recent success of Bayesian optimization (BO) in a variety of applications where sample efficiency is imperative, its performance may be seriously compromised in settings characterized by high-dimensional parameter spaces. A solution to preserve the sample efficiency of BO in such problems is to introduce domain knowledge into its formulation. In this paper, we propose to exploit the geometry of non-Euclidean search spaces, which often arise in a variety of domains, to learn structure-preserving mappings and optimize the acquisition function of BO in low-dimensional latent spaces. Our approach, built on Riemannian manifolds theory, features geometry-aware Gaussian processes that jointly learn a nested-manifolds embedding and a representation of the objective function in the latent space. We test our approach in several benchmark artificial landscapes and report that it not only outperforms other high-dimensional BO approaches in several settings, but consistently optimizes the objective functions, as opposed to geometry-unaware BO methods. |
Train-by-Reconnect: Decoupling Locations of Weights from Their Values | https://papers.nips.cc/paper_files/paper/2020/hash/f0682320ccbbb1f1fb1e795de5e5639a-Abstract.html | Yushi Qiu, Reiji Suda | https://papers.nips.cc/paper_files/paper/2020/hash/f0682320ccbbb1f1fb1e795de5e5639a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f0682320ccbbb1f1fb1e795de5e5639a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11483-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f0682320ccbbb1f1fb1e795de5e5639a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f0682320ccbbb1f1fb1e795de5e5639a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f0682320ccbbb1f1fb1e795de5e5639a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f0682320ccbbb1f1fb1e795de5e5639a-Supplemental.zip | What makes untrained deep neural networks (DNNs) different from the trained performant ones? By zooming into the weights in well-trained DNNs, we found that it is the location of weights that holds most of the information encoded by the training. Motivated by this observation, we hypothesized that weights in DNNs trained using stochastic gradient-based methods can be separated into two dimensions: the location of weights, and their exact values. To assess our hypothesis, we propose a novel method called lookahead permutation (LaPerm) to train DNNs by reconnecting the weights. We empirically demonstrate LaPerm's versatility while producing extensive evidence to support our hypothesis: when the initial weights are random and dense, our method demonstrates speed and performance similar to or better than that of regular optimizers, e.g., Adam. When the initial weights are random and sparse (many zeros), our method changes the way neurons connect, achieving accuracy comparable to that of a well-trained dense network. When the initial weights share a single value, our method finds a weight agnostic neural network with far-better-than-chance accuracy. |
Learning discrete distributions: user vs item-level privacy | https://papers.nips.cc/paper_files/paper/2020/hash/f06edc8ab534b2c7ecbd4c2051d9cb1e-Abstract.html | Yuhan Liu, Ananda Theertha Suresh, Felix Xinnan X. Yu, Sanjiv Kumar, Michael Riley | https://papers.nips.cc/paper_files/paper/2020/hash/f06edc8ab534b2c7ecbd4c2051d9cb1e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f06edc8ab534b2c7ecbd4c2051d9cb1e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11484-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f06edc8ab534b2c7ecbd4c2051d9cb1e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f06edc8ab534b2c7ecbd4c2051d9cb1e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f06edc8ab534b2c7ecbd4c2051d9cb1e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f06edc8ab534b2c7ecbd4c2051d9cb1e-Supplemental.pdf | Much of the literature on differential privacy focuses on item-level privacy, where loosely speaking, the goal is to provide privacy per item or training example. However, recently many practical applications such as federated learning require preserving privacy for all items of a single user, which is much harder to achieve. Therefore understanding the theoretical limit of user-level privacy becomes crucial.
We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy. If each user has $m$ samples, we show that straightforward applications of Laplace or Gaussian mechanisms require the number of users to be $\mathcal{O}(k/(m\alpha^2) + k/\epsilon\alpha)$ to achieve an $\ell_1$ distance of $\alpha$ between the true and estimated distributions, with the privacy-induced penalty $k/\epsilon\alpha$ independent of the number of samples per user $m$. Moreover, we show that any mechanism that only operates on the final aggregate should require a user complexity of the same order. We then propose a mechanism such that the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) + k/\sqrt{m}\epsilon\alpha)$ and further show that it is nearly-optimal under certain regimes. Thus the privacy penalty is $\tilde{\Theta}(\sqrt{m})$ times smaller compared to the standard mechanisms.
We also propose general techniques for obtaining lower bounds on restricted differentially private estimators and a lower bound on the total variation between binomial distributions, both of which might be of independent interest. |
Matrix Completion with Quantified Uncertainty through Low Rank Gaussian Copula | https://papers.nips.cc/paper_files/paper/2020/hash/f076073b2082f8741a9cd07b789c77a0-Abstract.html | Yuxuan Zhao, Madeleine Udell | https://papers.nips.cc/paper_files/paper/2020/hash/f076073b2082f8741a9cd07b789c77a0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f076073b2082f8741a9cd07b789c77a0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11485-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f076073b2082f8741a9cd07b789c77a0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f076073b2082f8741a9cd07b789c77a0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f076073b2082f8741a9cd07b789c77a0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f076073b2082f8741a9cd07b789c77a0-Supplemental.pdf | Modern large scale datasets are often plagued with missing entries. For tabular data with missing values, a flurry of imputation algorithms solve for a complete matrix which minimizes some penalized reconstruction error. However, almost none of them can estimate the uncertainty of its imputations. This paper proposes a probabilistic and scalable framework for missing value imputation with quantified uncertainty. Our model, the Low Rank Gaussian Copula, augments a standard probabilistic model, Probabilistic Principal Component Analysis, with marginal transformations for each column that allow the model to better match the distribution of the data. It naturally handles Boolean, ordinal, and real-valued observations and quantifies the uncertainty in each imputation. The time required to fit the model scales linearly with the number of rows and the number of columns in the dataset. Empirical results show the method yields state-of-the-art imputation accuracy across a wide range of data types, including those with high rank. Our uncertainty measure predicts imputation error well: entries with lower uncertainty do have lower imputation error (on average). Moreover, for real-valued data, the resulting confidence intervals are well-calibrated. |
Sparse and Continuous Attention Mechanisms | https://papers.nips.cc/paper_files/paper/2020/hash/f0b76267fbe12b936bd65e203dc675c1-Abstract.html | André Martins, António Farinhas, Marcos Treviso, Vlad Niculae, Pedro Aguiar, Mario Figueiredo | https://papers.nips.cc/paper_files/paper/2020/hash/f0b76267fbe12b936bd65e203dc675c1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f0b76267fbe12b936bd65e203dc675c1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11486-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f0b76267fbe12b936bd65e203dc675c1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f0b76267fbe12b936bd65e203dc675c1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f0b76267fbe12b936bd65e203dc675c1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f0b76267fbe12b936bd65e203dc675c1-Supplemental.pdf | Exponential families are widely used in machine learning; they include many distributions in continuous and discrete domains (e.g., Gaussian, Dirichlet, Poisson, and categorical distributions via the softmax transformation). Distributions in each of these families have fixed support. In contrast, for finite domains, there has been recent work on sparse alternatives to softmax (e.g., sparsemax and alpha-entmax), which have varying support, being able to assign zero probability to irrelevant categories. These discrete sparse mappings have been used for improving interpretability of neural attention mechanisms. This paper expands that work in two directions: first, we extend alpha-entmax to continuous domains, revealing a link with Tsallis statistics and deformed exponential families. Second, we introduce continuous-domain attention mechanisms, deriving efficient gradient backpropagation algorithms for alpha in {1,2}. Experiments on attention-based text classification, machine translation, and visual question answering illustrate the use of continuous attention in 1D and 2D, showing that it allows attending to time intervals and compact regions. |
Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection | https://papers.nips.cc/paper_files/paper/2020/hash/f0bda020d2470f2e74990a07a607ebd9-Abstract.html | Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, Jian Yang | https://papers.nips.cc/paper_files/paper/2020/hash/f0bda020d2470f2e74990a07a607ebd9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f0bda020d2470f2e74990a07a607ebd9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11487-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f0bda020d2470f2e74990a07a607ebd9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f0bda020d2470f2e74990a07a607ebd9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f0bda020d2470f2e74990a07a607ebd9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f0bda020d2470f2e74990a07a607ebd9-Supplemental.zip | One-stage detector basically formulates object detection as dense classification and localization (i.e., bounding box regression). The classification is usually optimized by Focal Loss and the box location is commonly learned under Dirac delta distribution. A recent trend for one-stage detectors is to introduce an \emph{individual} prediction branch to estimate the quality of localization, where the predicted quality facilitates the classification to improve detection performance. This paper delves into the \emph{representations} of the above three fundamental elements: quality estimation, classification and localization. Two problems are discovered in existing practices, including (1) the inconsistent usage of the quality estimation and classification between training and inference, and (2) the inflexible Dirac delta distribution for localization. To address the problems, we design new representations for these elements. Specifically, we merge the quality estimation into the class prediction vector to form a joint representation, and use a vector to represent arbitrary distribution of box locations. The improved representations eliminate the inconsistency risk and accurately depict the flexible distribution in real data, but contain \emph{continuous} labels, which is beyond the scope of Focal Loss. We then propose Generalized Focal Loss (GFL) that generalizes Focal Loss from its discrete form to the \emph{continuous} version for successful optimization. On COCO {\tt test-dev}, GFL achieves 45.0\% AP using ResNet-101 backbone, surpassing state-of-the-art SAPD (43.5\%) and ATSS (43.6\%) with higher or comparable inference speed. |
Learning by Minimizing the Sum of Ranked Range | https://papers.nips.cc/paper_files/paper/2020/hash/f0d7053396e765bf52de12133cf1afe8-Abstract.html | Shu Hu, Yiming Ying, xin wang, Siwei Lyu | https://papers.nips.cc/paper_files/paper/2020/hash/f0d7053396e765bf52de12133cf1afe8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f0d7053396e765bf52de12133cf1afe8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11488-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f0d7053396e765bf52de12133cf1afe8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f0d7053396e765bf52de12133cf1afe8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f0d7053396e765bf52de12133cf1afe8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f0d7053396e765bf52de12133cf1afe8-Supplemental.zip | In forming learning objectives, one oftentimes needs to aggregate a set of individual values to a single output. Such cases occur in the aggregate loss, which combines individual losses of a learning model over each training sample, and in the individual loss for multi-label learning, which combines prediction scores over all class labels. In this work, we introduce the sum of ranked range (SoRR) as a general approach to form learning objectives. A ranked range is a consecutive sequence of sorted values of a set of real numbers. The minimization of SoRR is solved with the difference of convex algorithm (DCA). We explore two applications in machine learning of the minimization of the SoRR framework, namely the AoRR aggregate loss for binary classification and the TKML individual loss for multi-label/multi-class classification. Our empirical results highlight the effectiveness of the proposed optimization framework and demonstrate the applicability of proposed losses using synthetic and real datasets. |
Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations | https://papers.nips.cc/paper_files/paper/2020/hash/f0eb6568ea114ba6e293f903c34d7488-Abstract.html | Huan Zhang, Hongge Chen, Chaowei Xiao, Bo Li, Mingyan Liu, Duane Boning, Cho-Jui Hsieh | https://papers.nips.cc/paper_files/paper/2020/hash/f0eb6568ea114ba6e293f903c34d7488-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f0eb6568ea114ba6e293f903c34d7488-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11489-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f0eb6568ea114ba6e293f903c34d7488-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f0eb6568ea114ba6e293f903c34d7488-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f0eb6568ea114ba6e293f903c34d7488-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f0eb6568ea114ba6e293f903c34d7488-Supplemental.pdf | A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises. Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions. Several works have shown this vulnerability via adversarial attacks, but how to improve the robustness of DRL under this setting has not been well studied. We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, are ineffective for many RL tasks. We propose the state-adversarial Markov decision process (SA-MDP) to study the fundamental properties of this problem, and develop a theoretically principled policy regularization which can be applied to a large family of DRL algorithms, including deep deterministic policy gradient (DDPG), proximal policy optimization (PPO) and deep Q networks (DQN), for both discrete and continuous action control problems. We significantly improve the robustness of DDPG, PPO and DQN agents under a suite of strong white box adversarial attacks, including two new attacks of our own. Additionally, we find that a robust policy noticeably improves DRL performance in a number of environments. |
Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features | https://papers.nips.cc/paper_files/paper/2020/hash/f106b7f99d2cb30c3db1c3cc0fde9ccb-Abstract.html | Robin Schirrmeister, Yuxuan Zhou, Tonio Ball, Dan Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/f106b7f99d2cb30c3db1c3cc0fde9ccb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f106b7f99d2cb30c3db1c3cc0fde9ccb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11490-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f106b7f99d2cb30c3db1c3cc0fde9ccb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f106b7f99d2cb30c3db1c3cc0fde9ccb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f106b7f99d2cb30c3db1c3cc0fde9ccb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f106b7f99d2cb30c3db1c3cc0fde9ccb-Supplemental.pdf | Deep generative networks trained via maximum likelihood on a natural image dataset like CIFAR10 often assign high likelihoods to images from datasets with different objects (e.g., SVHN). We refine previous investigations of this failure at anomaly detection for invertible generative networks and provide a clear explanation of it as a combination of model bias and domain prior: Convolutional networks learn similar low-level feature distributions when trained on any natural image dataset and these low-level features dominate the likelihood. Hence, when the discriminative features between inliers and outliers are on a high-level, e.g., object shapes, anomaly detection becomes particularly challenging. To remove the negative impact of model bias and domain prior on detecting high-level differences, we propose two methods, first, using the log likelihood ratios of two identical models, one trained on the in-distribution data (e.g., CIFAR10) and the other one on a more general distribution of images (e.g., 80 Million Tiny Images). We also derive a novel outlier loss for the in-distribution network on samples from the more general distribution to further improve the performance. Secondly, using a multi-scale model like Glow, we show that low-level features are mainly captured at early scales. Therefore, using only the likelihood contribution of the final scale performs remarkably well for detecting high-level feature differences of the out-of-distribution and the in-distribution. This method is especially useful if one does not have access to a suitable general distribution. Overall, our methods achieve strong anomaly detection performance in the unsupervised setting, and only slightly underperform state-of-the-art classifier-based methods in the supervised setting. Code can be found at https://github.com/boschresearch/hierarchicalanomalydetection. |
Fair Hierarchical Clustering | https://papers.nips.cc/paper_files/paper/2020/hash/f10f2da9a238b746d2bac55759915f0d-Abstract.html | Sara Ahmadian, Alessandro Epasto, Marina Knittel, Ravi Kumar, Mohammad Mahdian, Benjamin Moseley, Philip Pham, Sergei Vassilvitskii, Yuyan Wang | https://papers.nips.cc/paper_files/paper/2020/hash/f10f2da9a238b746d2bac55759915f0d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f10f2da9a238b746d2bac55759915f0d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11491-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f10f2da9a238b746d2bac55759915f0d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f10f2da9a238b746d2bac55759915f0d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f10f2da9a238b746d2bac55759915f0d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f10f2da9a238b746d2bac55759915f0d-Supplemental.pdf | In this paper we extend this notion to hierarchical clustering, where the goal is to recursively partition the data to optimize a specific objective. For various natural objectives, we obtain simple, efficient algorithms to find a provably good fair hierarchical clustering. Empirically, we show that our algorithms can find a fair hierarchical clustering, with only a negligible loss in the objective. |
Self-training Avoids Using Spurious Features Under Domain Shift | https://papers.nips.cc/paper_files/paper/2020/hash/f1298750ed09618717f9c10ea8d1d3b0-Abstract.html | Yining Chen, Colin Wei, Ananya Kumar, Tengyu Ma | https://papers.nips.cc/paper_files/paper/2020/hash/f1298750ed09618717f9c10ea8d1d3b0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1298750ed09618717f9c10ea8d1d3b0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11492-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1298750ed09618717f9c10ea8d1d3b0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1298750ed09618717f9c10ea8d1d3b0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1298750ed09618717f9c10ea8d1d3b0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1298750ed09618717f9c10ea8d1d3b0-Supplemental.zip | In unsupervised domain adaptation, existing theory focuses on situations where the source and target domains are close. In practice, conditional entropy minimization and pseudo-labeling work even when the domain shifts are much larger than those analyzed by existing theory. We identify and analyze one particular setting where the domain shift can be large, but these algorithms provably work: certain spurious features correlate with the label in the source domain but are independent of the label in the target. Our analysis considers linear classification where the spurious features are Gaussian and the non-spurious features are a mixture of log-concave distributions. For this setting, we prove that entropy minimization on unlabeled target data will avoid using the spurious feature if initialized with a decently accurate source classifier, even though the objective is non-convex and contains multiple bad local minima using the spurious features. We verify our theory for spurious domain shift tasks on semi-synthetic Celeb-A and MNIST datasets. Our results suggest that practitioners collect and self-train on large, diverse datasets to reduce biases in classifiers even if labeling is impractical. |
Improving Online Rent-or-Buy Algorithms with Sequential Decision Making and ML Predictions | https://papers.nips.cc/paper_files/paper/2020/hash/f12a6a7477077af66212ef0813bcf332-Abstract.html | Shom Banerjee | https://papers.nips.cc/paper_files/paper/2020/hash/f12a6a7477077af66212ef0813bcf332-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f12a6a7477077af66212ef0813bcf332-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11493-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f12a6a7477077af66212ef0813bcf332-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f12a6a7477077af66212ef0813bcf332-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f12a6a7477077af66212ef0813bcf332-Review.html | null | In this work we study online rent-or-buy problems as a sequential decision making problem. We show how one can integrate predictions, typically coming from a machine learning (ML) setup, into this framework. Specifically, we consider the ski-rental problem and the dynamic TCP acknowledgment problem. We present new online algorithms and obtain explicit performance bounds in-terms of the accuracy of the prediction. Our algorithms are close to optimal with accurate predictions while hedging against less accurate predictions. |
CircleGAN: Generative Adversarial Learning across Spherical Circles | https://papers.nips.cc/paper_files/paper/2020/hash/f14bc21be7eaeed046fed206a492e652-Abstract.html | Woohyeon Shim, Minsu Cho | https://papers.nips.cc/paper_files/paper/2020/hash/f14bc21be7eaeed046fed206a492e652-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f14bc21be7eaeed046fed206a492e652-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11494-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f14bc21be7eaeed046fed206a492e652-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f14bc21be7eaeed046fed206a492e652-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f14bc21be7eaeed046fed206a492e652-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f14bc21be7eaeed046fed206a492e652-Supplemental.pdf | We present a novel discriminator for GANs that improves realness and diversity of generated samples by learning a structured hypersphere embedding space using spherical circles.
The proposed discriminator learns to populate realistic samples around the longest spherical circle, i.e., a great circle, while pushing unrealistic samples toward the poles perpendicular to the great circle. Since longer circles occupy larger area on the hypersphere, they encourage more diversity in representation learning, and vice versa. Discriminating samples based on their corresponding spherical circles can thus naturally induce diversity to generated samples.
We also extend the proposed method for conditional settings with class labels by creating a hypersphere for each category and performing class-wise discrimination and update. In experiments, we validate the effectiveness for both unconditional and conditional generation on standard benchmarks, achieving the state of the art. |
WOR and $p$'s: Sketches for $\ell_p$-Sampling Without Replacement | https://papers.nips.cc/paper_files/paper/2020/hash/f1507aba9fc82ffa7cc7373c58f8a613-Abstract.html | Edith Cohen, Rasmus Pagh, David Woodruff | https://papers.nips.cc/paper_files/paper/2020/hash/f1507aba9fc82ffa7cc7373c58f8a613-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1507aba9fc82ffa7cc7373c58f8a613-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11495-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1507aba9fc82ffa7cc7373c58f8a613-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1507aba9fc82ffa7cc7373c58f8a613-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1507aba9fc82ffa7cc7373c58f8a613-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1507aba9fc82ffa7cc7373c58f8a613-Supplemental.pdf | Weighted sampling is a fundamental tool in data analysis and machine learning pipelines. Samples are used for efficient estimation of statistics or as sparse representations of the data. When weight distributions are skewed, as is often the case in practice, without-replacement (WOR) sampling is much more effective than with-replacement (WR) sampling: It provides a broader representation and higher accuracy for the same number of samples. We design novel composable sketches for WOR {\em $\ell_p$ sampling}, weighted sampling of keys according to a power $p\in[0,2]$ of their frequency (or for signed data, sum of updates). Our sketches have size that grows only linearly with sample size. Our design is simple and practical, despite intricate analysis, and based on off-the-shelf use of widely implemented heavy hitters sketches such as \texttt{CountSketch}. Our method is the first to provide WOR sampling in the important regime of $p>1$ and the first to handle signed updates for $p>0$. |
Hypersolvers: Toward Fast Continuous-Depth Models | https://papers.nips.cc/paper_files/paper/2020/hash/f1686b4badcf28d33ed632036c7ab0b8-Abstract.html | Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park | https://papers.nips.cc/paper_files/paper/2020/hash/f1686b4badcf28d33ed632036c7ab0b8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1686b4badcf28d33ed632036c7ab0b8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11496-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1686b4badcf28d33ed632036c7ab0b8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1686b4badcf28d33ed632036c7ab0b8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1686b4badcf28d33ed632036c7ab0b8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1686b4badcf28d33ed632036c7ab0b8-Supplemental.zip | The infinite-depth paradigm pioneered by Neural ODEs has launched a renaissance in the search for novel dynamical system-inspired deep learning primitives; however, their utilization in problems of non-trivial size has often proved impossible due to poor computational scalability. This work paves the way for scalable Neural ODEs with time-to-prediction comparable to traditional discrete networks. We introduce hypersolvers, neural networks designed to solve ODEs with low overhead and theoretical guarantees on accuracy. The synergistic combination of hypersolvers and Neural ODEs allows for cheap inference and unlocks a new frontier for practical application of continuous-depth models. Experimental evaluations on standard benchmarks, such as sampling for continuous normalizing flows, reveal consistent Pareto efficiency over classical numerical methods. |
Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment | https://papers.nips.cc/paper_files/paper/2020/hash/f169b1a771215329737c91f70b5bf05c-Abstract.html | Ben Usman, Avneesh Sud, Nick Dufour, Kate Saenko | https://papers.nips.cc/paper_files/paper/2020/hash/f169b1a771215329737c91f70b5bf05c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f169b1a771215329737c91f70b5bf05c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11497-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f169b1a771215329737c91f70b5bf05c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f169b1a771215329737c91f70b5bf05c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f169b1a771215329737c91f70b5bf05c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f169b1a771215329737c91f70b5bf05c-Supplemental.pdf | Distribution alignment has many applications in deep learning, including domain adaptation and unsupervised image-to-image translation. Most prior work on unsupervised distribution alignment relies either on minimizing simple non-parametric statistical distances such as maximum mean discrepancy or on adversarial alignment. However, the former fails to capture the structure of complex real-world distributions, while the latter is difficult to train and does not provide any universal convergence guarantees or automatic quantitative validation procedures. In this paper, we propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows. We show that, under certain assumptions, this combination yields a deep neural likelihood-based minimization objective that attains a known lower bound upon convergence. We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains. |
Escaping the Gravitational Pull of Softmax | https://papers.nips.cc/paper_files/paper/2020/hash/f1cf2a082126bf02de0b307778ce73a7-Abstract.html | Jincheng Mei, Chenjun Xiao, Bo Dai, Lihong Li, Csaba Szepesvari, Dale Schuurmans | https://papers.nips.cc/paper_files/paper/2020/hash/f1cf2a082126bf02de0b307778ce73a7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1cf2a082126bf02de0b307778ce73a7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11498-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1cf2a082126bf02de0b307778ce73a7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1cf2a082126bf02de0b307778ce73a7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1cf2a082126bf02de0b307778ce73a7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1cf2a082126bf02de0b307778ce73a7-Supplemental.pdf | The softmax is the standard transformation used in machine learning to map real-valued vectors to categorical distributions. Unfortunately, this transform poses serious drawbacks for gradient descent (ascent) optimization. We reveal this difficulty by establishing two negative results: (1) optimizing any expectation with respect to the softmax must exhibit sensitivity to parameter initialization ("softmax gravity well"), and (2) optimizing log-probabilities under the softmax must exhibit slow convergence ("softmax damping"). Both findings are based on an analysis of convergence rates using the Non-uniform \L{}ojasiewicz (N\L{}) inequalities. To circumvent these shortcomings we investigate an alternative transformation, the \emph{escort} mapping, that demonstrates better optimization properties. The disadvantages of the softmax and the effectiveness of the escort transformation are further explained using the concept of N\L{} coefficient. In addition to proving bounds on convergence rates to firmly establish these results, we also provide experimental evidence for the superiority of the escort transformation. |
Regret in Online Recommendation Systems | https://papers.nips.cc/paper_files/paper/2020/hash/f1daf122cde863010844459363cd31db-Abstract.html | Kaito Ariu, Narae Ryu, Se-Young Yun, Alexandre Proutiere | https://papers.nips.cc/paper_files/paper/2020/hash/f1daf122cde863010844459363cd31db-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1daf122cde863010844459363cd31db-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11499-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1daf122cde863010844459363cd31db-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1daf122cde863010844459363cd31db-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1daf122cde863010844459363cd31db-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1daf122cde863010844459363cd31db-Supplemental.pdf | This paper proposes a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time. In each round, a user, randomly picked from a population of $m$ users, arrives. The decision-maker observes the user and selects an item from a catalogue of $n$ items. Importantly, an item cannot be recommended twice to the same user. The probabilities that a user likes each item are unknown, and the performance of the recommendation algorithm is captured through its regret, considering as a reference an Oracle algorithm aware of these probabilities. We investigate various structural assumptions on these probabilities: we derive for each of them regret lower bounds, and devise algorithms achieving these limits. Interestingly, our analysis reveals the relative weights of the different components of regret: the component due to the constraint of not presenting the same item twice to the same user, that due to learning the chances users like items, and finally that arising when learning the underlying structure. |
On Convergence and Generalization of Dropout Training | https://papers.nips.cc/paper_files/paper/2020/hash/f1de5100906f31712aaa5166689bfdf4-Abstract.html | Poorya Mianjy, Raman Arora | https://papers.nips.cc/paper_files/paper/2020/hash/f1de5100906f31712aaa5166689bfdf4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1de5100906f31712aaa5166689bfdf4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11500-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1de5100906f31712aaa5166689bfdf4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1de5100906f31712aaa5166689bfdf4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1de5100906f31712aaa5166689bfdf4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1de5100906f31712aaa5166689bfdf4-Supplemental.pdf | We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations. Under mild overparametrization and assuming that the limiting kernel can separate the data distribution with a positive margin, we show that the dropout training with logistic loss achieves $\epsilon$-suboptimality in the test error in $O(1/\epsilon)$ iterations. |
Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking | https://papers.nips.cc/paper_files/paper/2020/hash/f1ea154c843f7cf3677db7ce922a2d17-Abstract.html | Isidoros Tziotis, Constantine Caramanis, Aryan Mokhtari | https://papers.nips.cc/paper_files/paper/2020/hash/f1ea154c843f7cf3677db7ce922a2d17-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f1ea154c843f7cf3677db7ce922a2d17-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11501-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f1ea154c843f7cf3677db7ce922a2d17-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f1ea154c843f7cf3677db7ce922a2d17-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f1ea154c843f7cf3677db7ce922a2d17-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f1ea154c843f7cf3677db7ce922a2d17-Supplemental.pdf | In this paper we study the problem of escaping from saddle points and achieving second-order optimality in a decentralized setting where a group of agents collaborate to minimize their aggregate objective function. We provide a non-asymptotic (finite-time) analysis and show that by following the idea of perturbed gradient descent, it is possible to converge to a second-order stationary point in a number of iterations which depends linearly on dimension and polynomially on the accuracy of second-order stationary point. Doing this in a communication-efficient manner requires overcoming several challenges, from identifying (first order) stationary points in a distributed manner, to adapting the perturbed gradient framework without prohibitive communication complexity. Our proposed Perturbed Decentralized Gradient Tracking (PDGT) method consists of two major stages: (i) a gradient-based step to find a first-order stationary point and (ii) a perturbed gradient descent step to escape from a first-order stationary point, if it is a saddle point with sufficient curvature. As a side benefit of our result, in the case that all saddle points are non-degenerate (strict), the proposed PDGT method finds a local minimum of the considered decentralized optimization problem in a finite number of iterations. |
Implicit Regularization in Deep Learning May Not Be Explainable by Norms | https://papers.nips.cc/paper_files/paper/2020/hash/f21e255f89e0f258accbe4e984eef486-Abstract.html | Noam Razin, Nadav Cohen | https://papers.nips.cc/paper_files/paper/2020/hash/f21e255f89e0f258accbe4e984eef486-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f21e255f89e0f258accbe4e984eef486-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11502-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f21e255f89e0f258accbe4e984eef486-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f21e255f89e0f258accbe4e984eef486-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f21e255f89e0f258accbe4e984eef486-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f21e255f89e0f258accbe4e984eef486-Supplemental.pdf | Mathematically characterizing the implicit regularization induced by gradient-based optimization is a longstanding pursuit in the theory of deep learning. A widespread hope is that a characterization based on minimization of norms may apply, and a standard test-bed for studying this prospect is matrix factorization (matrix completion via linear neural networks). It is an open question whether norms can explain the implicit regularization in matrix factorization. The current paper resolves this open question in the negative, by proving that there exist natural matrix factorization problems on which the implicit regularization drives all norms (and quasi-norms) towards infinity. Our results suggest that, rather than perceiving the implicit regularization via norms, a potentially more useful interpretation is minimization of rank. We demonstrate empirically that this interpretation extends to a certain class of non-linear neural networks, and hypothesize that it may be key to explaining generalization in deep learning. |
POMO: Policy Optimization with Multiple Optima for Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/f231f2107df69eab0a3862d50018a9b2-Abstract.html | Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, Seungjai Min | https://papers.nips.cc/paper_files/paper/2020/hash/f231f2107df69eab0a3862d50018a9b2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f231f2107df69eab0a3862d50018a9b2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11503-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f231f2107df69eab0a3862d50018a9b2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f231f2107df69eab0a3862d50018a9b2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f231f2107df69eab0a3862d50018a9b2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f231f2107df69eab0a3862d50018a9b2-Supplemental.pdf | In neural combinatorial optimization (CO), reinforcement learning (RL) can turn a deep neural net into a fast, powerful heuristic solver of NP-hard problems. This approach has a great potential in practical applications because it allows near-optimal solutions to be found without expert guides armed with substantial domain knowledge. We introduce Policy Optimization with Multiple Optima (POMO), an end-to-end approach for building such a heuristic solver. POMO is applicable to a wide range of CO problems. It is designed to exploit the symmetries in the representation of a CO solution. POMO uses a modified REINFORCE algorithm that forces diverse rollouts towards all optimal solutions. Empirically, the low-variance baseline of POMO makes RL training fast and stable, and it is more resistant to local minima compared to previous approaches. We also introduce a new augmentation-based inference method, which accompanies POMO nicely. We demonstrate the effectiveness of POMO by solving three popular NP-hard problems, namely, traveling salesman (TSP), capacitated vehicle routing (CVRP), and 0-1 knapsack (KP). For all three, our solver based on POMO shows a significant improvement in performance over all recent learned heuristics. In particular, we achieve the optimality gap of 0.14% with TSP100 while reducing inference time by more than an order of magnitude. |
Uncertainty-aware Self-training for Few-shot Text Classification | https://papers.nips.cc/paper_files/paper/2020/hash/f23d125da1e29e34c552f448610ff25f-Abstract.html | Subhabrata Mukherjee, Ahmed Awadallah | https://papers.nips.cc/paper_files/paper/2020/hash/f23d125da1e29e34c552f448610ff25f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f23d125da1e29e34c552f448610ff25f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11504-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f23d125da1e29e34c552f448610ff25f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f23d125da1e29e34c552f448610ff25f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f23d125da1e29e34c552f448610ff25f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f23d125da1e29e34c552f448610ff25f-Supplemental.zip | Recent success of pre-trained language models crucially hinges on fine-tuning them on large amounts of labeled data for the downstream task, that are typically expensive to acquire or difficult to access for many applications. We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. Standard self-training mechanism randomly samples instances from the unlabeled pool to generate pseudo-labels and augment labeled data. We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification with five benchmark datasets. We show our methods leveraging only 20-30 labeled samples per class for each task for training and for validation perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labels with an aggregate accuracy of 91% and improvement of up to 12% over baselines. |
Learning to Learn with Feedback and Local Plasticity | https://papers.nips.cc/paper_files/paper/2020/hash/f291e10ec3263bd7724556d62e70e25d-Abstract.html | Jack Lindsey, Ashok Litwin-Kumar | https://papers.nips.cc/paper_files/paper/2020/hash/f291e10ec3263bd7724556d62e70e25d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f291e10ec3263bd7724556d62e70e25d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11505-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f291e10ec3263bd7724556d62e70e25d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f291e10ec3263bd7724556d62e70e25d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f291e10ec3263bd7724556d62e70e25d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f291e10ec3263bd7724556d62e70e25d-Supplemental.pdf | Interest in biologically inspired alternatives to backpropagation is driven by the desire to both advance connections between deep learning and neuroscience and address backpropagation's shortcomings on tasks such as online, continual learning. However, local synaptic learning rules like those employed by the brain have so far failed to match the performance of backpropagation in deep networks. In this study, we employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules. Importantly, the feedback connections are not tied to the feedforward weights, avoiding biologically implausible weight transport. Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures. Surprisingly, this approach matches or exceeds a state-of-the-art gradient-based online meta-learning algorithm on regression and classification tasks, excelling in particular at continual learning. Analysis of the weight updates employed by these models reveals that they differ qualitatively from gradient descent in a way that reduces interference between updates. Our results suggest the existence of a class of biologically plausible learning mechanisms that not only match gradient descent-based learning, but also overcome its limitations. |
Every View Counts: Cross-View Consistency in 3D Object Detection with Hybrid-Cylindrical-Spherical Voxelization | https://papers.nips.cc/paper_files/paper/2020/hash/f2fc990265c712c49d51a18a32b39f0c-Abstract.html | Qi Chen, Lin Sun, Ernest Cheung, Alan L. Yuille | https://papers.nips.cc/paper_files/paper/2020/hash/f2fc990265c712c49d51a18a32b39f0c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f2fc990265c712c49d51a18a32b39f0c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11506-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f2fc990265c712c49d51a18a32b39f0c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f2fc990265c712c49d51a18a32b39f0c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f2fc990265c712c49d51a18a32b39f0c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f2fc990265c712c49d51a18a32b39f0c-Supplemental.pdf | Recent voxel-based 3D object detectors for autonomous vehicles learn point cloud representations either from bird eye view (BEV) or range view (RV, a.k.a. the perspective view). However, each view has its own strengths and weaknesses. In this paper, we present a novel framework to unify and leverage the benefits from both BEV and RV. The widely-used cuboid-shaped voxels in Cartesian coordinate system only benefit learning BEV feature map. Therefore, to enable learning both BEV and RV feature maps, we introduce Hybrid-Cylindrical-Spherical voxelization. Our findings show that simply adding detection on another view as auxiliary supervision will lead to poor performance. We proposed a pair of cross-view transformers to transform the feature maps into the other view and introduce cross-view consistency loss on them. Comprehensive experiments on the challenging NuScenes Dataset validate the effectiveness of our proposed method by virtue of joint optimization and complementary information on both views. Remarkably, our approach achieved mAP of 55.8%, outperforming all published approaches by at least 3% in overall performance and up to 16.5% in safety-crucial categories like cyclist. |
Sharper Generalization Bounds for Pairwise Learning | https://papers.nips.cc/paper_files/paper/2020/hash/f3173935ed8ac4bf073c1bcd63171f8a-Abstract.html | Yunwen Lei, Antoine Ledent, Marius Kloft | https://papers.nips.cc/paper_files/paper/2020/hash/f3173935ed8ac4bf073c1bcd63171f8a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f3173935ed8ac4bf073c1bcd63171f8a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11507-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f3173935ed8ac4bf073c1bcd63171f8a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f3173935ed8ac4bf073c1bcd63171f8a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f3173935ed8ac4bf073c1bcd63171f8a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f3173935ed8ac4bf073c1bcd63171f8a-Supplemental.pdf | Pairwise learning refers to learning tasks with loss functions depending on a pair of training examples, which includes ranking and metric learning as specific examples. Recently, there has been an increasing amount of attention on the generalization analysis of pairwise learning to understand its practical behavior. However, the existing stability analysis provides suboptimal high-probability generalization bounds. In this paper, we provide a refined stability analysis by developing generalization bounds which can be $\sqrt{n}$-times faster than the existing results, where $n$ is the sample size. This implies excess risk bounds of the order $O(n^{-1/2})$ (up to a logarithmic factor) for both regularized risk minimization and stochastic gradient descent. We also introduce a new on-average stability measure to develop optimistic bounds in a low noise setting. We apply our results to ranking and metric learning, and clearly show the advantage of our generalization bounds over the existing analysis. |
A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings | https://papers.nips.cc/paper_files/paper/2020/hash/f340f1b1f65b6df5b5e3f94d95b11daf-Abstract.html | Junhyung Park, Krikamol Muandet | https://papers.nips.cc/paper_files/paper/2020/hash/f340f1b1f65b6df5b5e3f94d95b11daf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f340f1b1f65b6df5b5e3f94d95b11daf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11508-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f340f1b1f65b6df5b5e3f94d95b11daf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f340f1b1f65b6df5b5e3f94d95b11daf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f340f1b1f65b6df5b5e3f94d95b11daf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f340f1b1f65b6df5b5e3f94d95b11daf-Supplemental.pdf | We present a new operator-free, measure-theoretic approach to the conditional mean embedding as a random variable taking values in a reproducing kernel Hilbert space. While the kernel mean embedding of marginal distributions has been defined rigorously, the existing operator-based approach of the conditional version lacks a rigorous treatment, and depends on strong assumptions that hinder its analysis. Our approach does not impose any of the assumptions that the operator-based counterpart requires. We derive a natural regression interpretation to obtain empirical estimates, and provide a thorough analysis of its properties, including universal consistency with improved convergence rates. As natural by-products, we obtain the conditional analogues of the Maximum Mean Discrepancy and Hilbert-Schmidt Independence Criterion, and demonstrate their behaviour via simulations. |
Quantifying the Empirical Wasserstein Distance to a Set of Measures: Beating the Curse of Dimensionality | https://papers.nips.cc/paper_files/paper/2020/hash/f3507289cfdc8c9ae93f4098111a13f9-Abstract.html | Nian Si, Jose Blanchet, Soumyadip Ghosh, Mark Squillante | https://papers.nips.cc/paper_files/paper/2020/hash/f3507289cfdc8c9ae93f4098111a13f9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f3507289cfdc8c9ae93f4098111a13f9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11509-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f3507289cfdc8c9ae93f4098111a13f9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f3507289cfdc8c9ae93f4098111a13f9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f3507289cfdc8c9ae93f4098111a13f9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f3507289cfdc8c9ae93f4098111a13f9-Supplemental.zip | We consider the problem of estimating the Wasserstein distance between the empirical measure and a set of probability measures whose expectations over a class of functions (hypothesis class) are constrained. If this class is sufficiently rich to characterize a particular distribution (e.g., all Lipschitz functions), then our formulation recovers the Wasserstein distance to such a distribution. We establish a strong duality result that generalizes the celebrated Kantorovich-Rubinstein duality. We also show that our formulation can be used to beat the curse of dimensionality, which is well known to affect the rates of statistical convergence of the empirical Wasserstein distance. In particular, examples of infinite-dimensional hypothesis classes are presented, informed by a complex correlation structure, for which it is shown that the empirical Wasserstein distance to such classes converges to zero at the standard parametric rate. Our formulation provides insights that help clarify why, despite the curse of dimensionality, the Wasserstein distance enjoys favorable empirical performance across a wide range of statistical applications. |
Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning | https://papers.nips.cc/paper_files/paper/2020/hash/f3ada80d5c4ee70142b17b8192b2958e-Abstract.html | Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, koray kavukcuoglu, Remi Munos, Michal Valko | https://papers.nips.cc/paper_files/paper/2020/hash/f3ada80d5c4ee70142b17b8192b2958e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f3ada80d5c4ee70142b17b8192b2958e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11510-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f3ada80d5c4ee70142b17b8192b2958e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f3ada80d5c4ee70142b17b8192b2958e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f3ada80d5c4ee70142b17b8192b2958e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f3ada80d5c4ee70142b17b8192b2958e-Supplemental.pdf | We introduce Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning. BYOL relies on two neural networks, referred to as online and target networks, that interact and learn from each other. From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view. At the same time, we update the target network with a slow-moving average of the online network. While state-of-the-art methods intrinsically rely on negative pairs, BYOL achieves a new state of the art without them. BYOL reaches 74.3% top-1 classification accuracy on ImageNet using the standard linear evaluation protocol with a standard ResNet-50 architecture and 79.6% with a larger ResNet. We also show that BYOL performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks. |
Towards Theoretically Understanding Why SGD Generalizes Better Than Adam in Deep Learning | https://papers.nips.cc/paper_files/paper/2020/hash/f3f27a324736617f20abbf2ffd806f6d-Abstract.html | Pan Zhou, Jiashi Feng, Chao Ma, Caiming Xiong, Steven Chu Hong Hoi, Weinan E | https://papers.nips.cc/paper_files/paper/2020/hash/f3f27a324736617f20abbf2ffd806f6d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f3f27a324736617f20abbf2ffd806f6d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11511-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f3f27a324736617f20abbf2ffd806f6d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f3f27a324736617f20abbf2ffd806f6d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f3f27a324736617f20abbf2ffd806f6d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f3f27a324736617f20abbf2ffd806f6d-Supplemental.pdf | It is not clear yet why ADAM-alike adaptive gradient algorithms suffer from worse generalization performance than SGD despite their faster training speed. This work aims to provide understandings on this generalization gap by analyzing their local convergence behaviors. Specifically, we observe the heavy tails of gradient noise in these algorithms. This motivates us to analyze these algorithms through their Levy-driven stochastic differential equations (SDEs) because of the similar convergence behaviors of an algorithm and its SDE. Then we establish the escaping time of these SDEs from a local basin. The result shows that (1) the escaping time of both SGD and ADAM depends on the Radon measure of the basin positively and the heaviness of gradient noise negatively; (2) for the same basin, SGD enjoys smaller escaping time than ADAM, mainly because (a) the geometry adaptation in ADAM via adaptively scaling each gradient coordinate well diminishes the anisotropic structure in gradient noise and results in larger Radon measure of a basin; (b) the exponential gradient average in ADAM smooths its gradient and leads to lighter gradient noise tails than SGD. So SGD is more locally unstable than ADAM at sharp minima defined as the minima whose local basins have small Radon measure, and can better escape from them to flatter ones with larger Radon measure. As flat minima here which often refer to the minima at flat or asymmetric basins/valleys often generalize better than sharp ones \cite{keskar2016large,he2019asymmetric}, our result explains the better generalization performance of SGD over ADAM. Finally, experimental results confirm our heavy-tailed gradient noise assumption and theoretical affirmation. |
RSKDD-Net: Random Sample-based Keypoint Detector and Descriptor | https://papers.nips.cc/paper_files/paper/2020/hash/f40723ed94042ea9ea36bfb5ad4157b2-Abstract.html | Fan Lu, Guang Chen, Yinlong Liu, Zhongnan Qu, Alois Knoll | https://papers.nips.cc/paper_files/paper/2020/hash/f40723ed94042ea9ea36bfb5ad4157b2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f40723ed94042ea9ea36bfb5ad4157b2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11512-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f40723ed94042ea9ea36bfb5ad4157b2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f40723ed94042ea9ea36bfb5ad4157b2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f40723ed94042ea9ea36bfb5ad4157b2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f40723ed94042ea9ea36bfb5ad4157b2-Supplemental.zip | Keypoint detector and descriptor are two main components of point cloud registration. Previous learning-based keypoint detectors rely on saliency estimation for each point or farthest point sampling (FPS) for candidate point selection, which are inefficient and not applicable in large scale scenes. This paper proposes Random Sample-based Keypoint Detector and Descriptor Network (RSKDD-Net) for large scale point cloud registration. The key idea is to use random sampling to efficiently select candidate points and a learning-based method to jointly generate keypoints and corresponding descriptors. To tackle the information loss of random sampling, we exploit a novel random dilation cluster strategy to enlarge the receptive field of each sampled point and an attention mechanism to aggregate the positions and features of neighbor points. Furthermore, we propose a matching loss to train the descriptor in a weakly supervised manner. Extensive experiments on two large scale outdoor LiDAR datasets show that the proposed RSKDD-Net achieves state-of-the-art performance while being more than 15 times faster than existing methods. Our code is available at https://github.com/ispc-lab/RSKDD-Net. |
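
The random dilation cluster idea lends itself to a short sketch: sample candidate points at random, gather each candidate's k·dilation nearest neighbours, and randomly keep k of them to enlarge the receptive field. Brute-force distances and all sizes below are assumptions for illustration, not the released implementation.

```python
# Hedged sketch of random-sample candidates with a "random dilation cluster".
import numpy as np

rng = np.random.default_rng(0)
points = rng.standard_normal((5000, 3))           # toy point cloud (N x 3)

def random_dilation_clusters(pts, n_samples=128, k=16, dilation=4):
    cand_idx = rng.choice(len(pts), size=n_samples, replace=False)    # random sampling
    clusters = []
    for ci in cand_idx:
        d = np.linalg.norm(pts - pts[ci], axis=1)
        nearest = np.argpartition(d, k * dilation)[:k * dilation]     # dilated neighbourhood
        keep = rng.choice(nearest, size=k, replace=False)             # random subset of it
        clusters.append(pts[keep])
    return pts[cand_idx], np.stack(clusters)       # candidates and their neighbourhoods

candidates, neighbourhoods = random_dilation_clusters(points)
print(candidates.shape, neighbourhoods.shape)      # (128, 3) (128, 16, 3)
```
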
Efficient Clustering for Stretched Mixtures: Landscape and Optimality | https://papers.nips.cc/paper_files/paper/2020/hash/f40ee694989b3e2161be989e7b9907fc-Abstract.html | Kaizheng Wang, Yuling Yan, Mateo Diaz | https://papers.nips.cc/paper_files/paper/2020/hash/f40ee694989b3e2161be989e7b9907fc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f40ee694989b3e2161be989e7b9907fc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11513-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f40ee694989b3e2161be989e7b9907fc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f40ee694989b3e2161be989e7b9907fc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f40ee694989b3e2161be989e7b9907fc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f40ee694989b3e2161be989e7b9907fc-Supplemental.pdf | This paper considers a canonical clustering problem where one receives unlabeled samples drawn from a balanced mixture of two elliptical distributions and aims to find a classifier that estimates the labels. Many popular methods including PCA and k-means require individual components of the mixture to be somewhat spherical, and perform poorly when they are stretched. To overcome this issue, we propose a non-convex program that seeks an affine transform turning the data into a one-dimensional point cloud concentrating around -1 and 1, after which clustering becomes easy. Our theoretical contributions are two-fold: (1) we show that the non-convex loss function exhibits desirable geometric properties when the sample size exceeds some constant multiple of the dimension, and (2) we leverage this to prove that an efficient first-order algorithm achieves near-optimal statistical precision without good initialization. We also propose a general methodology for clustering with flexible choices of feature transforms and loss objectives. |
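
A hedged sketch of the overall recipe follows: whiten the data, run plain gradient descent on a loss that pushes the one-dimensional projections toward -1 and 1, and cluster by sign. The quartic penalty used here is an illustrative stand-in and may differ from the paper's exact non-convex objective.

```python
# Hedged sketch: drive projections of whitened data toward +/-1, cluster by sign.
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 4000
stretch = np.diag(np.linspace(0.2, 5.0, d))       # elongated (stretched) components
labels = rng.integers(0, 2, n)
mu = np.zeros(d); mu[0] = 3.0
X = (2 * labels[:, None] - 1) * mu + rng.standard_normal((n, d)) @ stretch

X = X - X.mean(0)                                  # center, then whiten
L = np.linalg.cholesky(np.cov(X.T))
X = X @ np.linalg.inv(L).T

beta = rng.standard_normal(d) / np.sqrt(d)         # random initialization
lr = 0.05
for _ in range(500):
    z = X @ beta
    grad = (4 * (z**2 - 1) * z) @ X / n            # gradient of mean((z^2 - 1)^2)
    beta -= lr * grad

pred = (X @ beta > 0).astype(int)
acc = max((pred == labels).mean(), (pred != labels).mean())  # labels known only up to sign
print(f"clustering accuracy (toy run): {acc:.3f}")
```
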
A Group-Theoretic Framework for Data Augmentation | https://papers.nips.cc/paper_files/paper/2020/hash/f4573fc71c731d5c362f0d7860945b88-Abstract.html | Shuxiao Chen, Edgar Dobriban, Jane Lee | https://papers.nips.cc/paper_files/paper/2020/hash/f4573fc71c731d5c362f0d7860945b88-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f4573fc71c731d5c362f0d7860945b88-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11514-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f4573fc71c731d5c362f0d7860945b88-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f4573fc71c731d5c362f0d7860945b88-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f4573fc71c731d5c362f0d7860945b88-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f4573fc71c731d5c362f0d7860945b88-Supplemental.pdf | Data augmentation has become an important part of modern deep learning pipelines and is typically needed to achieve state of the art performance for many learning tasks. It utilizes invariant transformations of the data, such as rotation, scale, and color shift, and the transformed images are added to the training set. However, these transformations are often chosen heuristically and a clear theoretical framework to explain the performance benefits of data augmentation is not available. In this paper, we develop such a framework to explain data augmentation as averaging over the orbits of the group that keeps the data distribution approximately invariant, and show that it leads to variance reduction. We study finite-sample and asymptotic empirical risk minimization and work out as examples the variance reduction in certain two-layer neural networks. We further propose a strategy to exploit the benefits of data augmentation for general learning tasks. |
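
The "averaging over orbits" view admits a direct sketch: replace each example's loss by its average over a finite group of transformations, here the four 90-degree image rotations. The model, data and choice of group are placeholders.

```python
# Hedged sketch of data augmentation as averaging the loss over a group orbit.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def orbit_averaged_loss(images, targets):
    # Average over the orbit {identity, rot90, rot180, rot270}.
    losses = [loss_fn(model(torch.rot90(images, k, dims=(-2, -1))), targets)
              for k in range(4)]
    return torch.stack(losses).mean()

images, targets = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt.zero_grad()
orbit_averaged_loss(images, targets).backward()
opt.step()
```
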
The Statistical Cost of Robust Kernel Hyperparameter Tuning | https://papers.nips.cc/paper_files/paper/2020/hash/f4661398cb1a3abd3ffe58600bf11322-Abstract.html | Raphael Meyer, Christopher Musco | https://papers.nips.cc/paper_files/paper/2020/hash/f4661398cb1a3abd3ffe58600bf11322-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f4661398cb1a3abd3ffe58600bf11322-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11515-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f4661398cb1a3abd3ffe58600bf11322-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f4661398cb1a3abd3ffe58600bf11322-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f4661398cb1a3abd3ffe58600bf11322-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f4661398cb1a3abd3ffe58600bf11322-Supplemental.pdf | This paper studies the statistical complexity of kernel hyperparameter tuning in the setting of active regression under adversarial noise. We consider the problem of finding the best interpolant from a class of kernels with unknown hyperparameters, assuming only that the noise is square-integrable. We provide finite-sample guarantees for the problem, characterizing how increasing the complexity of the kernel class increases the complexity of learning kernel hyperparameters. For common kernel classes (e.g. squared-exponential kernels with unknown lengthscale), our results show that hyperparameter optimization increases sample complexity by just a logarithmic factor, in comparison to the setting where optimal parameters are known in advance. Our result is based on a subsampling guarantee for linear regression under multiple design matrices which may be of independent interest. |
How does Weight Correlation Affect Generalisation Ability of Deep Neural Networks? | https://papers.nips.cc/paper_files/paper/2020/hash/f48c04ffab49ff0e5d1176244fdfb65c-Abstract.html | Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang | https://papers.nips.cc/paper_files/paper/2020/hash/f48c04ffab49ff0e5d1176244fdfb65c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f48c04ffab49ff0e5d1176244fdfb65c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11516-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f48c04ffab49ff0e5d1176244fdfb65c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f48c04ffab49ff0e5d1176244fdfb65c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f48c04ffab49ff0e5d1176244fdfb65c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f48c04ffab49ff0e5d1176244fdfb65c-Supplemental.pdf | This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability. For fully-connected layers, the weight correlation is defined as the average cosine similarity between weight vectors of neurons, and for convolutional layers, the weight correlation is defined as the cosine similarity between filter matrices. Theoretically, we show that weight correlation can, and should, be incorporated into the PAC Bayesian framework for the generalisation of neural networks, and the resulting generalisation bound is monotonic with respect to the weight correlation. We formulate a new complexity measure, which lifts the PAC Bayes measure with weight correlation, and experimentally confirm that it is able to rank the generalisation errors of a set of networks more precisely than existing measures. More importantly, we develop a new regulariser for training, and provide extensive experiments that show that the generalisation error can be greatly reduced with our novel approach. |
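
The definitions quoted above are concrete enough for a direct sketch: average absolute cosine similarity between neurons' weight vectors for fully connected layers and between flattened filters for convolutional layers. Treating rows of each weight matrix as the neuron/filter vectors, and averaging absolute similarities over distinct pairs, is my reading rather than the paper's exact formula.

```python
# Hedged sketch of the weight-correlation measure described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_weight_correlation(weight):
    w = F.normalize(weight.flatten(start_dim=1), dim=1)   # one row per neuron / filter
    sim = (w @ w.t()).abs()
    n = sim.size(0)
    return ((sim.sum() - n) / (n * (n - 1))).item()        # mean |cosine| over distinct pairs

def network_weight_correlation(model):
    corrs = [layer_weight_correlation(m.weight)
             for m in model.modules() if isinstance(m, (nn.Linear, nn.Conv2d))]
    return sum(corrs) / len(corrs)

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 10))
print(network_weight_correlation(model))
```
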
ContraGAN: Contrastive Learning for Conditional Image Generation | https://papers.nips.cc/paper_files/paper/2020/hash/f490c742cd8318b8ee6dca10af2a163f-Abstract.html | Minguk Kang, Jaesik Park | https://papers.nips.cc/paper_files/paper/2020/hash/f490c742cd8318b8ee6dca10af2a163f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f490c742cd8318b8ee6dca10af2a163f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11517-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f490c742cd8318b8ee6dca10af2a163f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f490c742cd8318b8ee6dca10af2a163f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f490c742cd8318b8ee6dca10af2a163f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f490c742cd8318b8ee6dca10af2a163f-Supplemental.pdf | Conditional image generation is the task of generating diverse images using class label information. Although many conditional Generative Adversarial Networks (GANs) have shown realistic results, such methods consider pairwise relations between the embedding of an image and the embedding of the corresponding label (data-to-class relations) as the conditioning losses. In this paper, we propose ContraGAN that considers relations between multiple image embeddings in the same batch (data-to-data relations) as well as the data-to-class relations by using a conditional contrastive loss. The discriminator of ContraGAN discriminates the authenticity of given samples and minimizes a contrastive objective to learn the relations between training images. Simultaneously, the generator tries to generate realistic images that deceive the authenticity discrimination and have a low contrastive loss. The experimental results show that ContraGAN outperforms state-of-the-art models by 7.3% and 7.7% on Tiny ImageNet and ImageNet datasets, respectively. In addition, we experimentally demonstrate that ContraGAN helps to relieve the overfitting of the discriminator. For a fair comparison, we re-implement twelve state-of-the-art GANs using the PyTorch library. The software package is available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN. |
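
Below is a hedged, simplified version of a conditional contrastive loss in this spirit: for each image embedding, its class embedding and the same-class embeddings in the batch act as positives, everything else as negatives. The temperature, the absence of projection heads, and the exact positive/negative bookkeeping are assumptions, not the paper's 2C loss verbatim.

```python
# Simplified conditional contrastive loss with data-to-data and data-to-class terms.
import torch
import torch.nn.functional as F

def conditional_contrastive_loss(img_emb, labels, class_emb, temperature=0.1):
    # img_emb: (B, D) image embeddings, class_emb: (C, D) learnable class embeddings.
    z = F.normalize(img_emb, dim=1)
    e = F.normalize(class_emb, dim=1)
    sim_dd = z @ z.t() / temperature                    # data-to-data similarities
    sim_dc = (z * e[labels]).sum(dim=1) / temperature   # data-to-class similarities
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_dd = (same & ~eye).float()                      # same-class pairs, excluding self
    exp_dd = sim_dd.exp().masked_fill(eye, 0.0)
    numerator = sim_dc.exp() + (exp_dd * pos_dd).sum(dim=1)
    denominator = sim_dc.exp() + exp_dd.sum(dim=1)
    return -(numerator / denominator).log().mean()

img_emb = torch.randn(8, 128, requires_grad=True)
class_emb = torch.randn(10, 128, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = conditional_contrastive_loss(img_emb, labels, class_emb)
loss.backward()
print(loss.item())
```
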
On the distance between two neural networks and the stability of learning | https://papers.nips.cc/paper_files/paper/2020/hash/f4b31bee138ff5f7b84ce1575a738f95-Abstract.html | Jeremy Bernstein, Arash Vahdat, Yisong Yue, Ming-Yu Liu | https://papers.nips.cc/paper_files/paper/2020/hash/f4b31bee138ff5f7b84ce1575a738f95-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f4b31bee138ff5f7b84ce1575a738f95-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11518-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f4b31bee138ff5f7b84ce1575a738f95-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f4b31bee138ff5f7b84ce1575a738f95-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f4b31bee138ff5f7b84ce1575a738f95-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f4b31bee138ff5f7b84ce1575a738f95-Supplemental.pdf | This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions. The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks. Since the resulting learning rule seems to require little to no learning rate tuning, it may unlock a simpler workflow for training deeper and more complex neural networks. The Python code used in this paper is here: https://github.com/jxbz/fromage. |
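
A minimal sketch of the kind of update the descent lemma suggests: scale each layer's gradient step by the ratio of weight norm to gradient norm so the relative change per layer is roughly the learning rate. The 1/sqrt(1 + lr^2) correction follows my reading of the released fromage code and may differ in detail.

```python
# Hedged sketch of a "relative" layerwise update inspired by deep relative trust.
import math
import torch

@torch.no_grad()
def fromage_like_step(params, lr=0.01, eps=1e-12):
    for p in params:
        if p.grad is None:
            continue
        w_norm, g_norm = p.norm(), p.grad.norm()
        p.add_(p.grad, alpha=-lr * (w_norm / (g_norm + eps)).item())
        p.div_(math.sqrt(1 + lr ** 2))                  # keeps weight norms from growing

# Toy usage on a small regression problem.
w = torch.randn(20, 1, requires_grad=True)
X, y = torch.randn(256, 20), torch.randn(256, 1)
for _ in range(100):
    loss = ((X @ w - y) ** 2).mean()
    loss.backward()
    fromage_like_step([w], lr=0.01)
    w.grad = None
print(loss.item())
```
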
A Topological Filter for Learning with Label Noise | https://papers.nips.cc/paper_files/paper/2020/hash/f4e3ce3e7b581ff32e40968298ba013d-Abstract.html | Pengxiang Wu, Songzhu Zheng, Mayank Goswami, Dimitris Metaxas, Chao Chen | https://papers.nips.cc/paper_files/paper/2020/hash/f4e3ce3e7b581ff32e40968298ba013d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f4e3ce3e7b581ff32e40968298ba013d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11519-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f4e3ce3e7b581ff32e40968298ba013d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f4e3ce3e7b581ff32e40968298ba013d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f4e3ce3e7b581ff32e40968298ba013d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f4e3ce3e7b581ff32e40968298ba013d-Supplemental.pdf | Noisy labels can impair the performance of deep neural networks. To tackle this problem, in this paper, we propose a new method for filtering label noise. Unlike most existing methods relying on the posterior probability of a noisy classifier, we focus on the much richer spatial behavior of data in the latent representational space. By leveraging the high-order topological information of data, we are able to collect most of the clean data and train a high-quality model. Theoretically we prove that this topological approach is guaranteed to collect the clean data with high probability. Empirical results show that our method outperforms the state-of-the-arts and is robust to a broad spectrum of noise types and levels. |
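
As a loose, hedged stand-in for the topological filter, the sketch below builds a k-nearest-neighbour graph on latent features, keeps only same-label edges, and retains the points in each class's largest connected component. This connected-component heuristic is a simplification for illustration, not the paper's actual procedure or its theoretical guarantee.

```python
# Hedged, simplified kNN-graph filter for noisy labels (not the paper's algorithm).
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import kneighbors_graph

def filter_clean_indices(features, noisy_labels, k=10):
    n = len(features)
    knn = kneighbors_graph(features, k, mode="connectivity")         # sparse n x n
    rows, cols = knn.nonzero()
    same = noisy_labels[rows] == noisy_labels[cols]                   # keep same-label edges
    graph = csr_matrix((np.ones(same.sum()), (rows[same], cols[same])), shape=(n, n))
    _, comp = connected_components(graph, directed=False)
    keep = np.zeros(n, dtype=bool)
    for c in np.unique(noisy_labels):
        idx = np.where(noisy_labels == c)[0]
        largest = np.bincount(comp[idx]).argmax()                     # biggest component of class c
        keep[idx[comp[idx] == largest]] = True
    return np.where(keep)[0]

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(i * 4, 1, size=(200, 2)) for i in range(3)])
labels = np.repeat([0, 1, 2], 200)
labels[rng.choice(600, 60, replace=False)] = rng.integers(0, 3, 60)   # inject label noise
print(len(filter_clean_indices(feats, labels)), "of", len(labels), "kept")
```
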
Personalized Federated Learning with Moreau Envelopes | https://papers.nips.cc/paper_files/paper/2020/hash/f4f1f13c8289ac1b1ee0ff176b56fc60-Abstract.html | Canh T. Dinh, Nguyen Tran, Josh Nguyen | https://papers.nips.cc/paper_files/paper/2020/hash/f4f1f13c8289ac1b1ee0ff176b56fc60-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11520-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-Supplemental.pdf | Federated learning (FL) is a decentralized and privacy-preserving machine learning technique in which a group of clients collaborate with a server to learn a global model without sharing clients' data. One challenge associated with FL is statistical diversity among clients, which restricts the global model from delivering good performance on each client's task. To address this, we propose an algorithm for personalized FL (pFedMe) using Moreau envelopes as clients' regularized loss functions, which help decouple personalized model optimization from the global model learning in a bi-level problem stylized for personalized FL. Theoretically, we show that pFedMe convergence rate is state-of-the-art: achieving quadratic speedup for strongly convex and sublinear speedup of order 2/3 for smooth nonconvex objectives. Experimentally, we verify that pFedMe excels at empirical performance compared with the vanilla FedAvg and Per-FedAvg, a meta-learning based personalized FL algorithm. |
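
The Moreau-envelope personalization step can be sketched directly: each client approximately solves theta = argmin f_i(theta) + (lambda/2)||theta - w||^2 with a few gradient steps, then nudges its copy of the global model toward theta. The quadratic local loss, inner-loop length and step sizes below are illustrative assumptions.

```python
# Hedged sketch of one client's Moreau-envelope personalization step.
import torch

def client_update(w, local_X, local_y, lam=15.0, inner_lr=0.05, eta=0.05, inner_steps=20):
    theta = w.clone().requires_grad_(True)            # personalized model
    for _ in range(inner_steps):
        local_loss = ((local_X @ theta - local_y) ** 2).mean()
        reg = 0.5 * lam * (theta - w).pow(2).sum()
        grad, = torch.autograd.grad(local_loss + reg, theta)
        with torch.no_grad():
            theta -= inner_lr * grad
    with torch.no_grad():
        w_new = w - eta * lam * (w - theta)            # move the local global-model copy
    return theta.detach(), w_new

w = torch.zeros(5)
X, y = torch.randn(50, 5), torch.randn(50)
theta_i, w_i = client_update(w, X, y)
print(theta_i.norm().item(), w_i.norm().item())
```
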
Avoiding Side Effects in Complex Environments | https://papers.nips.cc/paper_files/paper/2020/hash/f50a6c02a3fc5a3a5d4d9391f05f3efc-Abstract.html | Alex Turner, Neale Ratzlaff, Prasad Tadepalli | https://papers.nips.cc/paper_files/paper/2020/hash/f50a6c02a3fc5a3a5d4d9391f05f3efc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f50a6c02a3fc5a3a5d4d9391f05f3efc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11521-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f50a6c02a3fc5a3a5d4d9391f05f3efc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f50a6c02a3fc5a3a5d4d9391f05f3efc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f50a6c02a3fc5a3a5d4d9391f05f3efc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f50a6c02a3fc5a3a5d4d9391f05f3efc-Supplemental.pdf | Reward function specification can be difficult. Rewarding the agent for making a widget may be easy, but penalizing the multitude of possible negative side effects is hard. In toy environments, Attainable Utility Preservation (AUP) avoided side effects by penalizing shifts in the ability to achieve randomly generated goals. We scale this approach to large, randomly generated environments based on Conway's Game of Life. By preserving optimal value for a single randomly generated reward function, AUP incurs modest overhead while leading the agent to complete the specified task and avoid many side effects. Videos and code are available at https://avoiding-side-effects.github.io/. |
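
A hedged sketch of the AUP-style penalty described above: shape the task reward by how much an action changes an auxiliary action-value relative to doing nothing, R'(s, a) = R(s, a) - lambda * |Q_aux(s, a) - Q_aux(s, noop)|. The tabular Q_aux and the environment interface are placeholders.

```python
# Hedged sketch of an AUP-style shaped reward (tabular placeholder for Q_aux).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, NOOP = 10, 4, 0
q_aux = rng.random((n_states, n_actions))        # pretend this was learned for a random reward

def aup_reward(task_reward, state, action, lam=0.1):
    penalty = abs(q_aux[state, action] - q_aux[state, NOOP])
    return task_reward - lam * penalty

# Example: compare the shaped reward of the no-op with that of another action.
s = 3
print(aup_reward(1.0, s, NOOP), aup_reward(1.0, s, 2))
```
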
No-regret Learning in Price Competitions under Consumer Reference Effects | https://papers.nips.cc/paper_files/paper/2020/hash/f51238cd02c93b89d8fbee5667d077fc-Abstract.html | Negin Golrezaei, Patrick Jaillet, Jason Cheuk Nam Liang | https://papers.nips.cc/paper_files/paper/2020/hash/f51238cd02c93b89d8fbee5667d077fc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f51238cd02c93b89d8fbee5667d077fc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11522-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f51238cd02c93b89d8fbee5667d077fc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f51238cd02c93b89d8fbee5667d077fc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f51238cd02c93b89d8fbee5667d077fc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f51238cd02c93b89d8fbee5667d077fc-Supplemental.pdf | We study long-run market stability for repeated price competitions between two firms, where consumer demand depends on firms' posted prices and consumers’ price expectations called reference prices. Consumers' reference prices vary over time according to a memory-based dynamic, which is a weighted average of all historical prices. We focus on the setting where firms are not aware of demand functions and how reference prices are formed but have access to an oracle that provides a measure of consumers' responsiveness to the current posted prices. We show that if the firms run no-regret algorithms, in particular, online mirror descent (OMD), with decreasing step sizes, the market stabilizes in the sense that firms' prices and reference prices converge to a stable Nash Equilibrium (SNE). Interestingly, we also show that there exist constant step sizes under which the market stabilizes. We further characterize the rate of convergence to the SNE for both decreasing and constant OMD step sizes. |
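
The sketch below runs two firms' prices through projected online (mirror) descent with decreasing step sizes on a toy linear demand with a memory-based reference price. The demand coefficients, the Euclidean mirror map and the price box are my assumptions, not the paper's model calibration.

```python
# Hedged toy simulation of two price-setting firms running projected online descent.
import numpy as np

a, b, c, g = 10.0, 2.0, 0.5, 1.0         # demand: d_i = a - b*p_i + c*p_j + g*(r - p_i)
alpha = 0.7                               # reference-price memory weight
prices = np.array([4.0, 6.0])
ref = prices.mean()

for t in range(1, 2001):
    step = 1.0 / np.sqrt(t)               # decreasing step sizes
    demand = a - b * prices + c * prices[::-1] + g * (ref - prices)
    # Each firm ascends its own revenue p_i * d_i using only local derivative information.
    grad = demand + prices * (-b - g)
    prices = np.clip(prices + step * grad, 0.0, 20.0)
    ref = alpha * ref + (1 - alpha) * prices.mean()   # memory-based reference update

print("long-run prices:", prices.round(3), "reference price:", round(float(ref), 3))
```
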
Geometric Dataset Distances via Optimal Transport | https://papers.nips.cc/paper_files/paper/2020/hash/f52a7b2610fb4d3f74b4106fb80b233d-Abstract.html | David Alvarez-Melis, Nicolo Fusi | https://papers.nips.cc/paper_files/paper/2020/hash/f52a7b2610fb4d3f74b4106fb80b233d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f52a7b2610fb4d3f74b4106fb80b233d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11523-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f52a7b2610fb4d3f74b4106fb80b233d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f52a7b2610fb4d3f74b4106fb80b233d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f52a7b2610fb4d3f74b4106fb80b233d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f52a7b2610fb4d3f74b4106fb80b233d-Supplemental.pdf | The notion of task similarity is at the core of various machine learning paradigms, such as domain adaptation and meta-learning. Current methods to quantify it are often heuristic, make strong assumptions on the label sets across the tasks, and many are architecture-dependent, relying on task-specific optimal parameters (e.g., require training a model on each dataset). In this work we propose an alternative notion of distance between datasets that (i) is model-agnostic, (ii) does not involve training, (iii) can compare datasets even if their label sets are completely disjoint and (iv) has solid theoretical footing. This distance relies on optimal transport, which provides it with rich geometry awareness, interpretable correspondences and well-understood properties. Our results show that this novel distance provides meaningful comparison of datasets, and correlates well with transfer learning hardness across various experimental settings and datasets. |
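
Using the POT library, here is a hedged sketch of such a dataset distance: the ground cost mixes squared feature distance with a label-to-label cost, itself an optimal-transport distance between class-conditional feature clouds. The equal weighting of the two terms and the toy datasets are illustrative choices, not the paper's exact construction.

```python
# Hedged sketch of an optimal-transport dataset distance using POT.
import numpy as np
import ot

rng = np.random.default_rng(0)

def make_dataset(shift):
    X = np.vstack([rng.normal(shift + 3 * c, 1.0, size=(50, 2)) for c in range(3)])
    y = np.repeat(np.arange(3), 50)
    return X, y

(X1, y1), (X2, y2) = make_dataset(0.0), make_dataset(2.0)

def class_conditional_cost(X1, y1, X2, y2):
    # OT cost between the feature clouds of class c1 (dataset 1) and c2 (dataset 2).
    classes1, classes2 = np.unique(y1), np.unique(y2)
    W = np.zeros((len(classes1), len(classes2)))
    for i, c1 in enumerate(classes1):
        for j, c2 in enumerate(classes2):
            A, B = X1[y1 == c1], X2[y2 == c2]
            M = ot.dist(A, B)                       # squared Euclidean cost matrix
            W[i, j] = ot.emd2(ot.unif(len(A)), ot.unif(len(B)), M)
    return W

Wlabel = class_conditional_cost(X1, y1, X2, y2)
M = ot.dist(X1, X2) + Wlabel[y1][:, y2]             # feature cost + label-to-label cost
dist = ot.emd2(ot.unif(len(X1)), ot.unif(len(X2)), M)
print("dataset distance:", round(float(dist), 3))
```
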
Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters | https://papers.nips.cc/paper_files/paper/2020/hash/f52db9f7c0ae7017ee41f63c2a7353bc-Abstract.html | Sulin Liu, Xingyuan Sun, Peter J. Ramadge, Ryan P. Adams | https://papers.nips.cc/paper_files/paper/2020/hash/f52db9f7c0ae7017ee41f63c2a7353bc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/f52db9f7c0ae7017ee41f63c2a7353bc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11524-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/f52db9f7c0ae7017ee41f63c2a7353bc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/f52db9f7c0ae7017ee41f63c2a7353bc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/f52db9f7c0ae7017ee41f63c2a7353bc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/f52db9f7c0ae7017ee41f63c2a7353bc-Supplemental.pdf | Gaussian processes (GPs) are flexible priors for modeling functions. However, their success depends on the kernel accurately reflecting the properties of the data. One of the appeals of the GP framework is that the marginal likelihood of the kernel hyperparameters is often available in closed form, enabling optimization and sampling procedures to fit these hyperparameters to data. Unfortunately, point-wise evaluation of the marginal likelihood is expensive due to the need to solve a linear system; searching or sampling the space of hyperparameters thus often dominates the practical cost of using GPs. We introduce an approach to the identification of kernel hyperparameters in GP regression and related problems that sidesteps the need for costly marginal likelihoods. Our strategy is to "amortize" inference over hyperparameters by training a single neural network, which consumes a set of regression data and produces an estimate of the kernel function, useful across different tasks. To accommodate the varying dimension and cardinality of different regression problems, we use a hierarchical self-attention-based neural network that produces estimates of the hyperparameters which are invariant to the order of the input data points and data dimensions. We show that a single neural model trained on synthetic data is able to generalize directly to several different unseen real-world GP use cases. Our experiments demonstrate that the estimated hyperparameters are comparable in quality to those from the conventional model selection procedures, while being much faster to obtain, significantly accelerating GP regression and its related applications such as Bayesian optimization and Bayesian quadrature. The code and pre-trained model are available at https://github.com/PrincetonLIPS/AHGP. |
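
The amortization idea can be sketched with a small permutation-invariant set encoder that consumes (x, y) pairs and emits RBF-kernel hyperparameters in one forward pass. Mean pooling stands in for the paper's hierarchical self-attention network, and training on synthetic tasks is omitted here.

```python
# Hedged sketch of amortized GP hyperparameter prediction with a set encoder.
import torch
import torch.nn as nn

class AmortizedHyperNet(nn.Module):
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.point_net = nn.Sequential(nn.Linear(x_dim + 1, hidden), nn.ReLU(),
                                       nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 3)             # lengthscale, outputscale, noise

    def forward(self, X, y):
        h = self.point_net(torch.cat([X, y.unsqueeze(-1)], dim=-1))
        pooled = h.mean(dim=0)                        # invariant to the order of points
        return torch.nn.functional.softplus(self.head(pooled))  # positive hyperparameters

X, y = torch.randn(30, 2), torch.randn(30)
net = AmortizedHyperNet(x_dim=2)
lengthscale, outputscale, noise = net(X, y)
print(lengthscale.item(), outputscale.item(), noise.item())
```
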