Each record below lists the following fields in order: title, url, authors, detail_url, tags (all "NIPS 2020"), AuthorFeedback, Bibtex, MetaReview, Paper, Review, Supplemental, abstract.
Handling Missing Data with Graph Representation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/dc36f18a9a0a776671d4879cae69b551-Abstract.html
Jiaxuan You, Xiaobai Ma, Yi Ding, Mykel J. Kochenderfer, Jure Leskovec
https://papers.nips.cc/paper_files/paper/2020/hash/dc36f18a9a0a776671d4879cae69b551-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dc36f18a9a0a776671d4879cae69b551-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11325-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dc36f18a9a0a776671d4879cae69b551-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dc36f18a9a0a776671d4879cae69b551-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dc36f18a9a0a776671d4879cae69b551-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dc36f18a9a0a776671d4879cae69b551-Supplemental.pdf
Machine learning with missing data has been approached in many different ways, including feature imputation where missing feature values are estimated based on observed values and label prediction where downstream labels are learned directly from incomplete data. However, existing imputation models tend to have strong prior assumptions and cannot learn from downstream tasks, while models targeting label predictions often involve heuristics and can encounter scalability issues. Here we propose GRAPE, a framework for feature imputation as well as label prediction. GRAPE tackles the missing data problem using graph representation, where the observations and features are viewed as two types of nodes in a bipartite graph, and the observed feature values as edges. Under the GRAPE framework, the feature imputation is formulated as an edge-level prediction task and the label prediction as a node-level prediction task. These tasks are then solved with Graph Neural Networks. Experimental results on nine benchmark datasets show that GRAPE yields 20% lower mean absolute error for imputation tasks and 10% lower for label prediction tasks, compared with existing state-of-the-art methods.
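The bipartite-graph formulation above maps directly onto a data matrix with missing entries. The following minimal sketch (illustrative code, not the authors' GRAPE implementation; the function name and node-indexing convention are my own) shows how observation nodes, feature nodes, and value-carrying edges could be derived from such a matrix.

```python
# Illustrative sketch (not the authors' code): converting a data matrix with
# missing entries into the bipartite graph described in the GRAPE abstract.
# Observation rows and feature columns become two node types; each observed
# entry (i, j) becomes an edge carrying that value as its attribute.
import numpy as np

def build_bipartite_edges(X):
    """Return edge list and edge values for observed entries of X.

    X : 2-D array where missing entries are np.nan.
    Observation node i keeps index i; feature node j is indexed n_obs + j.
    """
    n_obs, n_feat = X.shape
    rows, cols = np.where(~np.isnan(X))
    edges = np.stack([rows, n_obs + cols], axis=0)   # shape (2, n_edges)
    edge_values = X[rows, cols]                       # observed feature values
    return edges, edge_values

# Tiny usage example with two missing entries.
X = np.array([[1.0, np.nan, 3.0],
              [np.nan, 5.0, 6.0]])
edges, vals = build_bipartite_edges(X)
print(edges)   # source (observation) and target (feature) node indices
print(vals)    # the observed values attached to each edge
```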
Improving Auto-Augment via Augmentation-Wise Weight Sharing
https://papers.nips.cc/paper_files/paper/2020/hash/dc49dfebb0b00fd44aeff5c60cc1f825-Abstract.html
Keyu Tian, Chen Lin, Ming Sun, Luping Zhou, Junjie Yan, Wanli Ouyang
https://papers.nips.cc/paper_files/paper/2020/hash/dc49dfebb0b00fd44aeff5c60cc1f825-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dc49dfebb0b00fd44aeff5c60cc1f825-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11326-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dc49dfebb0b00fd44aeff5c60cc1f825-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dc49dfebb0b00fd44aeff5c60cc1f825-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dc49dfebb0b00fd44aeff5c60cc1f825-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dc49dfebb0b00fd44aeff5c60cc1f825-Supplemental.pdf
The recent progress on automatically searching augmentation policies has boosted the performance substantially for various tasks. A key component of automatic augmentation search is the evaluation process for a particular augmentation policy, which is utilized to return reward and usually runs thousands of times. A plain evaluation process, which includes full model training and validation, would be time-consuming. To achieve efficiency, many choose to sacrifice evaluation reliability for speed. In this paper, we dive into the dynamics of augmented training of the model. This inspires us to design a powerful and efficient proxy task based on the Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process in an elegant way. Comprehensive analysis verifies the superiority of this approach in terms of effectiveness and efficiency. The augmentation policies found by our method achieve superior accuracies compared with existing auto-augmentation search methods. On CIFAR-10, we achieve a top-1 error rate of 1.24%, which is currently the best performing single model without extra training data. On ImageNet, we get a top-1 error rate of 20.36% for ResNet-50, which leads to 3.34% absolute error rate reduction over the baseline augmentation.
MMA Regularization: Decorrelating Weights of Neural Networks by Maximizing the Minimal Angles
https://papers.nips.cc/paper_files/paper/2020/hash/dcd2f3f312b6705fb06f4f9f1b55b55c-Abstract.html
Zhennan Wang, Canqun Xiang, Wenbin Zou, Chen Xu
https://papers.nips.cc/paper_files/paper/2020/hash/dcd2f3f312b6705fb06f4f9f1b55b55c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dcd2f3f312b6705fb06f4f9f1b55b55c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11327-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dcd2f3f312b6705fb06f4f9f1b55b55c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dcd2f3f312b6705fb06f4f9f1b55b55c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dcd2f3f312b6705fb06f4f9f1b55b55c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dcd2f3f312b6705fb06f4f9f1b55b55c-Supplemental.zip
The strong correlation between neurons or filters can significantly weaken the generalization ability of neural networks. Inspired by the well-known Tammes problem, we propose a novel diversity regularization method to address this issue, which makes the normalized weight vectors of neurons or filters distributed on a hypersphere as uniformly as possible, through maximizing the minimal pairwise angles (MMA). This method can easily exert its effect by plugging the MMA regularization term into the loss function with negligible computational overhead. The MMA regularization is simple, efficient, and effective. Therefore, it can be used as a basic regularization method in neural network training. Extensive experiments demonstrate that MMA regularization is able to enhance the generalization ability of various modern models and achieves considerable performance improvements on CIFAR100 and TinyImageNet datasets. In addition, experiments on face verification show that MMA regularization is also effective for feature learning. Code is available at: https://github.com/wznpub/MMA_Regularization.
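As a rough illustration of the idea, the snippet below implements an MMA-style penalty as read from the abstract: normalize each weight vector, compute pairwise cosine similarities, and penalize each unit's largest similarity (equivalently, its smallest pairwise angle). The scaling and choice of layers are assumptions for illustration; the official code at the linked repository may differ.

```python
# Hedged sketch of an MMA-style diversity penalty: for each neuron's weight
# vector, penalize its largest cosine similarity to any other vector, which
# pushes the minimal pairwise angle on the hypersphere to grow.
import torch

def mma_penalty(weight: torch.Tensor) -> torch.Tensor:
    """weight: (num_units, fan_in) matrix of neuron/filter weights."""
    w = torch.nn.functional.normalize(weight, dim=1)   # unit vectors
    cos = w @ w.t()                                     # pairwise cosines
    cos.fill_diagonal_(-1.0)                            # ignore self-pairs
    # Largest cosine per unit == smallest pairwise angle per unit.
    return cos.max(dim=1).values.mean()

# Usage: add the penalty for chosen layers to the task loss.
layer = torch.nn.Linear(128, 64)
task_loss = torch.tensor(0.0)                # stand-in for the real task loss
total_loss = task_loss + 0.1 * mma_penalty(layer.weight)
print(total_loss.item())
```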
HRN: A Holistic Approach to One Class Learning
https://papers.nips.cc/paper_files/paper/2020/hash/dd1970fb03877a235d530476eb727dab-Abstract.html
Wenpeng Hu, Mengyu Wang, Qi Qin, Jinwen Ma, Bing Liu
https://papers.nips.cc/paper_files/paper/2020/hash/dd1970fb03877a235d530476eb727dab-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dd1970fb03877a235d530476eb727dab-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11328-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dd1970fb03877a235d530476eb727dab-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dd1970fb03877a235d530476eb727dab-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dd1970fb03877a235d530476eb727dab-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dd1970fb03877a235d530476eb727dab-Supplemental.zip
Existing neural-network-based one-class learning methods mainly use various forms of auto-encoders or GAN-style adversarial training to learn a latent representation of the given one class of data. This paper proposes an entirely different approach based on a novel regularization, called holistic regularization (or H-regularization), which enables the system to consider the data holistically rather than producing a model biased towards some features. Combined with a proposed 2-norm instance-level data normalization, we obtain an effective one-class learning method, called HRN. To our knowledge, the proposed regularization and the normalization method have not been reported before. Experimental evaluation using both benchmark image classification and traditional anomaly detection datasets shows that HRN markedly outperforms existing state-of-the-art deep and non-deep learning models.
The Generalized Lasso with Nonlinear Observations and Generative Priors
https://papers.nips.cc/paper_files/paper/2020/hash/dd45045f8c68db9f54e70c67048d32e8-Abstract.html
Zhaoqiang Liu, Jonathan Scarlett
https://papers.nips.cc/paper_files/paper/2020/hash/dd45045f8c68db9f54e70c67048d32e8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dd45045f8c68db9f54e70c67048d32e8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11329-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dd45045f8c68db9f54e70c67048d32e8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dd45045f8c68db9f54e70c67048d32e8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dd45045f8c68db9f54e70c67048d32e8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dd45045f8c68db9f54e70c67048d32e8-Supplemental.pdf
In this paper, we study the problem of signal estimation from noisy non-linear measurements when the unknown $n$-dimensional signal is in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs. We make the assumption of sub-Gaussian measurements, which is satisfied by a wide range of measurement models, such as linear, logistic, 1-bit, and other quantized models. In addition, we consider the impact of adversarial corruptions on these measurements. Our analysis is based on a generalized Lasso approach (Plan and Vershynin, 2016). We first provide a non-uniform recovery guarantee, which states that under i.i.d.~Gaussian measurements, roughly $O\left(\frac{k}{\epsilon^2}\log L\right)$ samples suffice for recovery with an $\ell_2$-error of $\epsilon$, and that this scheme is robust to adversarial noise. Then, we apply this result to neural network generative models, and discuss various extensions to other models and non-i.i.d.~measurements. Moreover, we show that our result can be extended to the uniform recovery guarantee under the assumption of a so-called local embedding property, which is satisfied by the 1-bit and censored Tobit models.
Fair regression via plug-in estimator and recalibration with statistical guarantees
https://papers.nips.cc/paper_files/paper/2020/hash/ddd808772c035aed516d42ad3559be5f-Abstract.html
Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
https://papers.nips.cc/paper_files/paper/2020/hash/ddd808772c035aed516d42ad3559be5f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/ddd808772c035aed516d42ad3559be5f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11330-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/ddd808772c035aed516d42ad3559be5f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/ddd808772c035aed516d42ad3559be5f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/ddd808772c035aed516d42ad3559be5f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/ddd808772c035aed516d42ad3559be5f-Supplemental.pdf
We study the problem of learning an optimal regression function subject to a fairness constraint. The constraint requires that, conditionally on the sensitive feature, the distribution of the function output remains the same. It naturally extends the notion of demographic parity, often used in classification, to the regression setting. We tackle this problem by leveraging a proxy-discretized version, for which we derive an explicit expression of the optimal fair predictor. This result naturally suggests a two-stage approach, in which we first estimate the (unconstrained) regression function from a set of labeled data and then recalibrate it with another set of unlabeled data. The recalibration step can be efficiently performed via smooth optimization. We derive rates of convergence of the proposed estimator to the optimal fair predictor in terms of both the risk and the fairness constraint. Finally, we present numerical experiments illustrating that the proposed method is often superior to, or competitive with, state-of-the-art methods.
Modeling Shared responses in Neuroimaging Studies through MultiView ICA
https://papers.nips.cc/paper_files/paper/2020/hash/de03beffeed9da5f3639a621bcab5dd4-Abstract.html
Hugo Richard, Luigi Gresele, Aapo Hyvarinen, Bertrand Thirion, Alexandre Gramfort, Pierre Ablin
https://papers.nips.cc/paper_files/paper/2020/hash/de03beffeed9da5f3639a621bcab5dd4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/de03beffeed9da5f3639a621bcab5dd4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11331-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/de03beffeed9da5f3639a621bcab5dd4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/de03beffeed9da5f3639a621bcab5dd4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/de03beffeed9da5f3639a621bcab5dd4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/de03beffeed9da5f3639a621bcab5dd4-Supplemental.pdf
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization. However, the aggregation of data coming from multiple subjects is challenging, since it requires accounting for large variability in anatomy, functional topography and stimulus response across individuals. Data modeling is especially hard for ecologically relevant conditions such as movie watching, where the experimental setup does not imply well-defined cognitive operations. We propose a novel MultiView Independent Component Analysis (ICA) model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise. Contrary to most group-ICA procedures, the likelihood of the model is available in closed form. We develop an alternate quasi-Newton method for maximizing the likelihood, which is robust and converges quickly. We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects. Moreover, the sources recovered by our model exhibit lower between-session variability than other methods. On magnetoencephalography (MEG) data, our method yields more accurate source localization on phantom data. Applied to 200 subjects from the Cam-CAN dataset, it reveals a clear sequence of evoked activity in sensor and source space.
Efficient Planning in Large MDPs with Weak Linear Function Approximation
https://papers.nips.cc/paper_files/paper/2020/hash/de07edeeba9f475c9395959494cd8f64-Abstract.html
Roshan Shariff, Csaba Szepesvari
https://papers.nips.cc/paper_files/paper/2020/hash/de07edeeba9f475c9395959494cd8f64-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/de07edeeba9f475c9395959494cd8f64-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11332-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/de07edeeba9f475c9395959494cd8f64-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/de07edeeba9f475c9395959494cd8f64-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/de07edeeba9f475c9395959494cd8f64-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/de07edeeba9f475c9395959494cd8f64-Supplemental.pdf
Large-scale Markov decision processes (MDPs) require planning algorithms with runtime independent of the number of states of the MDP. We consider the planning problem in MDPs using linear value function approximation with only weak requirements: low approximation error for the optimal value function, and a small set of “core” states whose features span those of other states. In particular, we make no assumptions about the representability of policies or value functions of non-optimal policies. Our algorithm produces almost-optimal actions for any state using a generative oracle (simulator) for the MDP, while its computation time scales polynomially with the number of features, core states, and actions and the effective horizon.
Efficient Learning of Generative Models via Finite-Difference Score Matching
https://papers.nips.cc/paper_files/paper/2020/hash/de6b1cf3fb0a3aa1244d30f7b8c29c41-Abstract.html
Tianyu Pang, Kun Xu, Chongxuan LI, Yang Song, Stefano Ermon, Jun Zhu
https://papers.nips.cc/paper_files/paper/2020/hash/de6b1cf3fb0a3aa1244d30f7b8c29c41-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/de6b1cf3fb0a3aa1244d30f7b8c29c41-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11333-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/de6b1cf3fb0a3aa1244d30f7b8c29c41-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/de6b1cf3fb0a3aa1244d30f7b8c29c41-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/de6b1cf3fb0a3aa1244d30f7b8c29c41-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/de6b1cf3fb0a3aa1244d30f7b8c29c41-Supplemental.pdf
Several machine learning applications involve the optimization of higher-order derivatives (e.g., gradients of gradients) during training, which can be expensive with respect to memory and computation even with automatic differentiation. As a typical example in generative modeling, score matching~(SM) involves the optimization of the trace of a Hessian. To improve computing efficiency, we rewrite the SM objective and its variants in terms of directional derivatives, and present a generic strategy to efficiently approximate any-order directional derivative with finite difference~(FD). Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations. Thus, it reduces the total computational cost while also improving numerical stability. We provide two instantiations by reformulating variants of SM objectives into the FD forms. Empirically, we demonstrate that our methods produce results comparable to the gradient-based counterparts while being much more computationally efficient.
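The central trick the abstract describes, replacing directional derivatives with finite differences that need only function evaluations, can be illustrated in a few lines. This is a generic sketch of the FD approximation, not the paper's reformulated score-matching objectives.

```python
# Minimal sketch of the core trick: approximating a directional derivative with
# a central finite difference, using only two function evaluations and no
# gradient computation.  The full FD score-matching objectives are in the paper.
import numpy as np

def fd_directional_derivative(f, x, v, eps=1e-4):
    """Approximate  d/dt f(x + t v) |_{t=0}  with a central difference."""
    v = v / np.linalg.norm(v)
    return (f(x + eps * v) - f(x - eps * v)) / (2.0 * eps)

# Check on a simple quadratic, where the exact answer is 2 x . v.
f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
v = np.array([0.0, 1.0, 0.0])
print(fd_directional_derivative(f, x, v))   # ~ -4.0
print(2 * x @ v)                             # exact: -4.0
```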
Semialgebraic Optimization for Lipschitz Constants of ReLU Networks
https://papers.nips.cc/paper_files/paper/2020/hash/dea9ddb25cbf2352cf4dec30222a02a5-Abstract.html
Tong Chen, Jean B. Lasserre, Victor Magron, Edouard Pauwels
https://papers.nips.cc/paper_files/paper/2020/hash/dea9ddb25cbf2352cf4dec30222a02a5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dea9ddb25cbf2352cf4dec30222a02a5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11334-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dea9ddb25cbf2352cf4dec30222a02a5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dea9ddb25cbf2352cf4dec30222a02a5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dea9ddb25cbf2352cf4dec30222a02a5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dea9ddb25cbf2352cf4dec30222a02a5-Supplemental.pdf
The Lipschitz constant of a network plays an important role in many applications of deep learning, such as robustness certification and Wasserstein Generative Adversarial Networks. We introduce a semidefinite programming hierarchy to estimate the global and local Lipschitz constant of a multi-layer deep neural network. The novelty is to combine a polynomial lifting for the derivatives of ReLU functions with a weak generalization of Putinar's positivity certificate. This idea could also apply to other, nearly sparse, polynomial optimization problems in machine learning. We empirically demonstrate that our method provides a trade-off with respect to the state-of-the-art linear programming approach, and in some cases we obtain better bounds in less time.
Linear-Sample Learning of Low-Rank Distributions
https://papers.nips.cc/paper_files/paper/2020/hash/df0b8fb21c53254b7afa62e020447c81-Abstract.html
Ayush Jain, Alon Orlitsky
https://papers.nips.cc/paper_files/paper/2020/hash/df0b8fb21c53254b7afa62e020447c81-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/df0b8fb21c53254b7afa62e020447c81-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11335-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/df0b8fb21c53254b7afa62e020447c81-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/df0b8fb21c53254b7afa62e020447c81-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/df0b8fb21c53254b7afa62e020447c81-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/df0b8fb21c53254b7afa62e020447c81-Supplemental.pdf
Many latent-variable applications, including community detection, collaborative filtering, genomic analysis, and NLP, model data as generated by low-rank matrices. Yet despite considerable research, except for very special cases, the number of samples required to efficiently recover the underlying matrices has not been known. We determine the onset of learning in several common latent-variable settings. For all of them, we show that learning $k\times k$, rank-$r$, matrices to normalized $L_1$ distance $\epsilon$ requires $\Omega(\frac{kr}{\epsilon^2})$ samples, and propose an algorithm that uses ${\cal O}(\frac{kr}{\epsilon^2}\log^2\frac r\epsilon)$ samples, a number linear in the high dimension, and nearly linear in the, typically low, rank. The algorithm improves on existing spectral techniques and runs in polynomial time. The proofs establish new results on the rapid convergence of the spectral distance between the model and observation matrices, and may be of independent interest.
Transferable Calibration with Lower Bias and Variance in Domain Adaptation
https://papers.nips.cc/paper_files/paper/2020/hash/df12ecd077efc8c23881028604dbb8cc-Abstract.html
Ximei Wang, Mingsheng Long, Jianmin Wang, Michael Jordan
https://papers.nips.cc/paper_files/paper/2020/hash/df12ecd077efc8c23881028604dbb8cc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/df12ecd077efc8c23881028604dbb8cc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11336-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/df12ecd077efc8c23881028604dbb8cc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/df12ecd077efc8c23881028604dbb8cc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/df12ecd077efc8c23881028604dbb8cc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/df12ecd077efc8c23881028604dbb8cc-Supplemental.zip
Domain Adaptation (DA) enables transferring a learning machine from a labeled source domain to an unlabeled target one. While remarkable advances have been made, most existing DA methods focus on improving the target accuracy at inference. How to estimate the predictive uncertainty of DA models is vital for decision-making in safety-critical scenarios, but remains largely unexplored. In this paper, we delve into the open problem of Calibration in DA, which is extremely challenging due to the coexistence of domain shift and the lack of target labels. We first reveal the dilemma that DA models learn higher accuracy at the expense of well-calibrated probabilities. Driven by this finding, we propose Transferable Calibration (TransCal) to achieve more accurate calibration with lower bias and variance in a unified hyperparameter-free optimization framework. As a general post-hoc calibration method, TransCal can be easily applied to recalibrate existing DA methods. Its efficacy has been justified both theoretically and empirically.
Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics
https://papers.nips.cc/paper_files/paper/2020/hash/df1a336b7e0b0cb186de6e66800c43a9-Abstract.html
Taiji Suzuki
https://papers.nips.cc/paper_files/paper/2020/hash/df1a336b7e0b0cb186de6e66800c43a9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/df1a336b7e0b0cb186de6e66800c43a9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11337-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/df1a336b7e0b0cb186de6e66800c43a9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/df1a336b7e0b0cb186de6e66800c43a9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/df1a336b7e0b0cb186de6e66800c43a9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/df1a336b7e0b0cb186de6e66800c43a9-Supplemental.pdf
We introduce a new theoretical framework to analyze deep learning optimization with connection to its generalization error. Existing frameworks such as mean field theory and neural tangent kernel theory for neural network optimization analysis typically require taking the limit of infinite network width to show global convergence. This potentially makes it difficult to directly deal with finite-width networks; in particular, in the neural tangent kernel regime, we cannot reveal favorable properties of neural networks {\it beyond kernel methods}. To realize a more natural analysis, we consider a completely different approach in which we formulate the parameter training as a transportation map estimation and show its global convergence via the theory of the {\it infinite dimensional Langevin dynamics}. This enables us to analyze narrow and wide networks in a unifying manner. Moreover, we give generalization gap and excess risk bounds for the solution obtained by the dynamics. The excess risk bound achieves the so-called fast learning rate. In particular, we show an exponential convergence for a classification problem and a minimax optimal rate for a regression problem.
Online Bayesian Goal Inference for Boundedly Rational Planning Agents
https://papers.nips.cc/paper_files/paper/2020/hash/df3aebc649f9e3b674eeb790a4da224e-Abstract.html
Tan Zhi-Xuan, Jordyn Mann, Tom Silver, Josh Tenenbaum, Vikash Mansinghka
https://papers.nips.cc/paper_files/paper/2020/hash/df3aebc649f9e3b674eeb790a4da224e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/df3aebc649f9e3b674eeb790a4da224e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11338-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/df3aebc649f9e3b674eeb790a4da224e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/df3aebc649f9e3b674eeb790a4da224e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/df3aebc649f9e3b674eeb790a4da224e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/df3aebc649f9e3b674eeb790a4da224e-Supplemental.zip
People routinely infer the goals of others by observing their actions over time. Remarkably, we can do so even when those actions lead to failure, enabling us to assist others when we detect that they might not achieve their goals. How might we endow machines with similar capabilities? Here we present an architecture capable of inferring an agent’s goals online from both optimal and non-optimal sequences of actions. Our architecture models agents as boundedly-rational planners that interleave search with execution by replanning, thereby accounting for sub-optimal behavior. These models are specified as probabilistic programs, allowing us to represent and perform efficient Bayesian inference over an agent's goals and internal planning processes. To perform such inference, we develop Sequential Inverse Plan Search (SIPS), a sequential Monte Carlo algorithm that exploits the online replanning assumption of these models, limiting computation by incrementally extending inferred plans as new actions are observed. We present experiments showing that this modeling and inference architecture outperforms Bayesian inverse reinforcement learning baselines, accurately inferring goals from both optimal and non-optimal trajectories involving failure and back-tracking, while generalizing across domains with compositional structure and sparse rewards.
BayReL: Bayesian Relational Learning for Multi-omics Data Integration
https://papers.nips.cc/paper_files/paper/2020/hash/df5511886da327a5e2877c3cd733d9d7-Abstract.html
Ehsan Hajiramezanali, Arman Hasanzadeh, Nick Duffield, Krishna Narayanan, Xiaoning Qian
https://papers.nips.cc/paper_files/paper/2020/hash/df5511886da327a5e2877c3cd733d9d7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/df5511886da327a5e2877c3cd733d9d7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11339-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/df5511886da327a5e2877c3cd733d9d7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/df5511886da327a5e2877c3cd733d9d7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/df5511886da327a5e2877c3cd733d9d7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/df5511886da327a5e2877c3cd733d9d7-Supplemental.pdf
High-throughput molecular profiling technologies have produced high-dimensional multi-omics data, enabling systematic understanding of living systems at the genome scale. Studying molecular interactions across different data types helps reveal signal transduction mechanisms across different classes of molecules. In this paper, we develop a novel Bayesian representation learning method that infers the relational interactions across multi-omics data types. Our method, Bayesian Relational Learning (BayReL) for multi-omics data integration, takes advantage of a priori known relationships among the same class of molecules, modeled as a graph at each corresponding view, to learn view-specific latent variables as well as a multi-partite graph that encodes the interactions across views. Our experiments on several real-world datasets demonstrate enhanced performance of BayReL in inferring meaningful interactions compared to existing baselines.
Weakly Supervised Deep Functional Maps for Shape Matching
https://papers.nips.cc/paper_files/paper/2020/hash/dfb84a11f431c62436cfb760e30a34fe-Abstract.html
Abhishek Sharma, Maks Ovsjanikov
https://papers.nips.cc/paper_files/paper/2020/hash/dfb84a11f431c62436cfb760e30a34fe-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dfb84a11f431c62436cfb760e30a34fe-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11340-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dfb84a11f431c62436cfb760e30a34fe-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dfb84a11f431c62436cfb760e30a34fe-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dfb84a11f431c62436cfb760e30a34fe-Review.html
null
A variety of deep functional maps have been proposed recently, from fully supervised to totally unsupervised, with a range of loss functions as well as different regularization terms. However, it is still not clear what the minimum ingredients of a deep functional map pipeline are, and whether such ingredients unify or generalize all recent work on deep functional maps. We empirically identify the minimum components for obtaining state-of-the-art results with different loss functions, supervised as well as unsupervised. Furthermore, we propose a novel framework designed for both full-to-full and partial-to-full shape matching that achieves state-of-the-art results on several benchmark datasets, outperforming even the fully supervised methods. Our code is publicly available at \url{https://github.com/Not-IITian/Weakly-supervised-Functional-map}.
Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift
https://papers.nips.cc/paper_files/paper/2020/hash/dfbfa7ddcfffeb581f50edcf9a0204bb-Abstract.html
Remi Tachet des Combes, Han Zhao, Yu-Xiang Wang, Geoffrey J. Gordon
https://papers.nips.cc/paper_files/paper/2020/hash/dfbfa7ddcfffeb581f50edcf9a0204bb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/dfbfa7ddcfffeb581f50edcf9a0204bb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11341-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/dfbfa7ddcfffeb581f50edcf9a0204bb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/dfbfa7ddcfffeb581f50edcf9a0204bb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/dfbfa7ddcfffeb581f50edcf9a0204bb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/dfbfa7ddcfffeb581f50edcf9a0204bb-Supplemental.pdf
Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting, by learning domain-invariant representations. However, recent work has shown limitations of this approach when label distributions differ between the source and target domains. In this paper, we propose a new assumption, \textit{generalized label shift} ($\glsa$), to improve robustness against mismatched label distributions. $\glsa$ states that, conditioned on the label, there exists a representation of the input that is invariant between the source and target domains. Under $\glsa$, we provide theoretical guarantees on the transfer performance of any classifier. We also devise necessary and sufficient conditions for $\glsa$ to hold, by using an estimation of the relative class weights between domains and an appropriate reweighting of samples. Our weight estimation method could be straightforwardly and generically applied in existing domain adaptation (DA) algorithms that learn domain-invariant representations, with small computational overhead. In particular, we modify three DA algorithms, JAN, DANN and CDAN, and evaluate their performance on standard and artificial DA tasks. Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches. Our code is available at \url{https://tinyurl.com/y585xt6j}.
Rethinking the Value of Labels for Improving Class-Imbalanced Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e025b6279c1b88d3ec0eca6fcb6e6280-Abstract.html
Yuzhe Yang, Zhi Xu
https://papers.nips.cc/paper_files/paper/2020/hash/e025b6279c1b88d3ec0eca6fcb6e6280-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e025b6279c1b88d3ec0eca6fcb6e6280-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11342-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e025b6279c1b88d3ec0eca6fcb6e6280-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e025b6279c1b88d3ec0eca6fcb6e6280-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e025b6279c1b88d3ec0eca6fcb6e6280-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e025b6279c1b88d3ec0eca6fcb6e6280-Supplemental.pdf
Real-world data often exhibits long-tailed distributions with heavy class imbalance, posing great challenges for deep recognition models. We identify a persisting dilemma on the value of labels in the context of imbalanced learning: on the one hand, supervision from labels typically leads to better results than its unsupervised counterparts; on the other hand, heavily imbalanced data naturally incurs ''label bias'' in the classifier, where the decision boundary can be drastically altered by the majority classes. In this work, we systematically investigate these two facets of labels. We demonstrate, theoretically and empirically, that class-imbalanced learning can significantly benefit in both semi-supervised and self-supervised manners. Specifically, we confirm that (1) positively, imbalanced labels are valuable: given more unlabeled data, the original labels can be leveraged with the extra data to reduce label bias in a semi-supervised manner, which greatly improves the final classifier; (2) negatively, however, imbalanced labels are not always useful: classifiers that are first pre-trained in a self-supervised manner consistently outperform their corresponding baselines. Extensive experiments on large-scale imbalanced datasets verify our theoretically grounded strategies, showing superior performance over the previous state of the art. Our intriguing findings highlight the need to rethink the usage of imbalanced labels in realistic long-tailed tasks. Code is available at https://github.com/YyzHarry/imbalanced-semi-self.
Provably Robust Metric Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e038453073d221a4f32d0bab94ca7cee-Abstract.html
Lu Wang, Xuanqing Liu, Jinfeng Yi, Yuan Jiang, Cho-Jui Hsieh
https://papers.nips.cc/paper_files/paper/2020/hash/e038453073d221a4f32d0bab94ca7cee-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e038453073d221a4f32d0bab94ca7cee-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11343-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e038453073d221a4f32d0bab94ca7cee-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e038453073d221a4f32d0bab94ca7cee-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e038453073d221a4f32d0bab94ca7cee-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e038453073d221a4f32d0bab94ca7cee-Supplemental.pdf
Metric learning is an important family of algorithms for classification and similarity search, but the robustness of learned metrics against small adversarial perturbations is less studied. In this paper, we show that existing metric learning algorithms, which focus on boosting the clean accuracy, can result in metrics that are less robust than the Euclidean distance. To overcome this problem, we propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations, and the robustness of the resulting model is certifiable. Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors (errors under adversarial attacks). Furthermore, unlike neural network defenses which usually encounter a trade-off between clean and robust errors, our method does not sacrifice clean errors compared with previous metric learning methods.
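For context, the object being learned here is a Mahalanobis distance, which can be written with a matrix L so that the metric is the Euclidean distance after the linear map L. The sketch below only shows how such a metric is evaluated; the certified-robust training procedure is the paper's contribution and is not reproduced.

```python
# Sketch of the object being learned: a Mahalanobis distance parameterized by
# a PSD matrix M = L^T L.  Only the metric evaluation is shown here.
import numpy as np

def mahalanobis(x, y, L):
    """Distance sqrt((x - y)^T L^T L (x - y)) induced by the learned L."""
    d = L @ (x - y)
    return float(np.sqrt(d @ d))

rng = np.random.default_rng(0)
L = rng.standard_normal((2, 4))   # a (possibly low-rank) linear map
x, y = rng.standard_normal(4), rng.standard_normal(4)
print(mahalanobis(x, y, L))
print(np.linalg.norm(L @ x - L @ y))   # equivalent: Euclidean in mapped space
```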
Iterative Deep Graph Learning for Graph Neural Networks: Better and Robust Node Embeddings
https://papers.nips.cc/paper_files/paper/2020/hash/e05c7ba4e087beea9410929698dc41a6-Abstract.html
Yu Chen, Lingfei Wu, Mohammed Zaki
https://papers.nips.cc/paper_files/paper/2020/hash/e05c7ba4e087beea9410929698dc41a6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e05c7ba4e087beea9410929698dc41a6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11344-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e05c7ba4e087beea9410929698dc41a6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e05c7ba4e087beea9410929698dc41a6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e05c7ba4e087beea9410929698dc41a6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e05c7ba4e087beea9410929698dc41a6-Supplemental.pdf
In this paper, we propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL), for jointly and iteratively learning graph structure and graph embedding. The key rationale of IDGL is to learn a better graph structure based on better node embeddings, and vice versa (i.e., better node embeddings based on a better graph structure). Our iterative method dynamically stops when the learned graph structure comes close enough to the graph optimized for the downstream prediction task. In addition, we cast the graph learning problem as a similarity metric learning problem and leverage adaptive graph regularization for controlling the quality of the learned graph. Finally, by combining an anchor-based approximation technique, we further propose a scalable version of IDGL, namely IDGL-Anch, which significantly reduces the time and space complexity of IDGL without compromising the performance. Our extensive experiments on nine benchmarks show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines. Furthermore, IDGL can be more robust to adversarial graphs and cope with both transductive and inductive learning.
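One ingredient the abstract describes, constructing a graph from node embeddings via a similarity metric and then controlling its quality, can be caricatured as follows. This is a hedged illustration using plain cosine similarity and a fixed kNN sparsification; IDGL itself learns the metric and alternates between graph and embedding updates.

```python
# Hedged sketch: build a graph from node embeddings with a similarity metric
# and sparsify it.  The metric here is plain cosine similarity, for illustration.
import numpy as np

def graph_from_embeddings(Z, k=3):
    """Cosine-similarity adjacency, keeping the top-k neighbors per node."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T
    np.fill_diagonal(S, -np.inf)                 # exclude self-similarity
    A = np.zeros_like(S)
    topk = np.argpartition(-S, k, axis=1)[:, :k]  # k most similar neighbors
    rows = np.arange(S.shape[0])[:, None]
    A[rows, topk] = S[rows, topk]
    return np.maximum(A, A.T)                     # symmetrize

Z = np.random.default_rng(1).standard_normal((6, 8))
print(graph_from_embeddings(Z, k=2).round(2))
```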
COPT: Coordinated Optimal Transport on Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/e0640c93b05097a9380870aa06aa0df4-Abstract.html
Yihe Dong, Will Sawin
https://papers.nips.cc/paper_files/paper/2020/hash/e0640c93b05097a9380870aa06aa0df4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e0640c93b05097a9380870aa06aa0df4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11345-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e0640c93b05097a9380870aa06aa0df4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e0640c93b05097a9380870aa06aa0df4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e0640c93b05097a9380870aa06aa0df4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e0640c93b05097a9380870aa06aa0df4-Supplemental.pdf
We introduce COPT, a novel distance metric between graphs defined via an optimization routine, computing a coordinated pair of optimal transport maps simultaneously. This gives an unsupervised way to learn general-purpose graph representation, applicable to both graph sketching and graph comparison. COPT involves simultaneously optimizing dual transport plans, one between the vertices of two graphs, and another between graph signal probability distributions. We show theoretically that our method preserves important global structural information on graphs, in particular spectral information, and analyze connections to existing studies. Empirically, COPT outperforms state of the art methods in graph classification on both synthetic and real datasets.
No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems
https://papers.nips.cc/paper_files/paper/2020/hash/e0688d13958a19e087e123148555e4b4-Abstract.html
Nimit Sohoni, Jared Dunnmon, Geoffrey Angus, Albert Gu, Christopher Ré
https://papers.nips.cc/paper_files/paper/2020/hash/e0688d13958a19e087e123148555e4b4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e0688d13958a19e087e123148555e4b4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11346-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e0688d13958a19e087e123148555e4b4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e0688d13958a19e087e123148555e4b4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e0688d13958a19e087e123148555e4b4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e0688d13958a19e087e123148555e4b4-Supplemental.pdf
In real-world classification tasks, each class often comprises multiple finer-grained "subclasses." As the subclass labels are frequently unavailable, models trained using only the coarser-grained class labels often exhibit highly variable performance across different subclasses. This phenomenon, known as hidden stratification, has important consequences for models deployed in safety-critical applications such as medicine. We propose GEORGE, a method to both measure and mitigate hidden stratification even when subclass labels are unknown. We first observe that unlabeled subclasses are often separable in the feature space of deep models, and exploit this fact to estimate subclass labels for the training data via clustering techniques. We then use these approximate subclass labels as a form of noisy supervision in a distributionally robust optimization objective. We theoretically characterize the performance of GEORGE in terms of the worst-case generalization error across any subclass. We empirically validate GEORGE on a mix of real-world and benchmark image classification datasets, and show that our approach boosts worst-case subclass accuracy by up to 15 percentage points compared to standard training techniques, without requiring any information about the subclasses.
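A rough sketch of the two-stage recipe in the abstract: cluster each class's deep features to obtain pseudo-subclass labels, then treat those clusters as groups whose worst-case loss is monitored or optimized. The function names and the simple k-means / worst-group-loss choices below are illustrative assumptions, not the exact GEORGE pipeline, which uses a distributionally robust objective.

```python
# Hedged sketch of the two-stage idea: (1) cluster per-class deep features into
# pseudo-subclasses, (2) compute the worst per-group loss over those clusters.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_subclasses(features, labels, clusters_per_class=2):
    sub = np.empty(len(labels), dtype=int)
    offset = 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        km = KMeans(n_clusters=clusters_per_class, n_init=10, random_state=0)
        sub[idx] = offset + km.fit_predict(features[idx])
        offset += clusters_per_class
    return sub

def worst_group_loss(per_example_loss, groups):
    return max(per_example_loss[groups == g].mean() for g in np.unique(groups))

rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 16))     # stand-in for deep features
labels = rng.integers(0, 2, size=200)      # coarse class labels
groups = pseudo_subclasses(feats, labels)
losses = rng.random(200)                   # stand-in for per-example losses
print(worst_group_loss(losses, groups))
```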
Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets
https://papers.nips.cc/paper_files/paper/2020/hash/e069ea4c9c233d36ff9c7f329bc08ff1-Abstract.html
Kai Han, Yunhe Wang, Qiulin Zhang, Wei Zhang, Chunjing XU, Tong Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/e069ea4c9c233d36ff9c7f329bc08ff1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e069ea4c9c233d36ff9c7f329bc08ff1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11347-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e069ea4c9c233d36ff9c7f329bc08ff1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e069ea4c9c233d36ff9c7f329bc08ff1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e069ea4c9c233d36ff9c7f329bc08ff1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e069ea4c9c233d36ff9c7f329bc08ff1-Supplemental.pdf
To obtain excellent deep neural architectures, a series of techniques are carefully designed in EfficientNets. The giant formula for simultaneously enlarging the resolution, depth and width provides a Rubik’s cube for neural networks, so that we can find networks with high efficiency and excellent performance by twisting the three dimensions. This paper aims to explore the twisting rules for obtaining deep neural networks with minimum model sizes and computational costs. Different from network enlarging, we observe that resolution and depth are more important than width for tiny networks. Therefore, the original method, i.e., the compound scaling in EfficientNets, is no longer suitable. To this end, we summarize a tiny formula for downsizing neural architectures through a series of smaller models derived from EfficientNet-B0 under a FLOPs constraint. Experimental results on the ImageNet benchmark illustrate that our TinyNet performs much better than the smaller versions of EfficientNets obtained with the inverse giant formula. For instance, our TinyNet-E achieves a 59.9\% Top-1 accuracy with only 24M FLOPs, which is about 1.9\% higher than that of the previous best MobileNetV3 with similar computational cost. Code will be available at \url{https://github.com/huawei-noah/CV-Backbones/tree/master/tinynet} and \url{https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/tinynet}.
Self-Adaptive Training: beyond Empirical Risk Minimization
https://papers.nips.cc/paper_files/paper/2020/hash/e0ab531ec312161511493b002f9be2ee-Abstract.html
Lang Huang, Chao Zhang, Hongyang Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/e0ab531ec312161511493b002f9be2ee-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e0ab531ec312161511493b002f9be2ee-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11348-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e0ab531ec312161511493b002f9be2ee-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e0ab531ec312161511493b002f9be2ee-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e0ab531ec312161511493b002f9be2ee-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e0ab531ec312161511493b002f9be2ee-Supplemental.pdf
We propose self-adaptive training---a new training algorithm that dynamically calibrates the training process using model predictions, without incurring extra computational cost---to improve the generalization of deep learning on potentially corrupted training data. This problem is important for robust learning from data corrupted by, e.g., random noise and adversarial examples. The standard empirical risk minimization (ERM) for such data, however, may easily overfit the noise and thus suffer from sub-optimal performance. In this paper, we observe that model predictions can substantially benefit the training process: self-adaptive training significantly mitigates the overfitting issue and improves generalization over ERM under both random and adversarial noise. Besides, in sharp contrast to the recently-discovered double-descent phenomenon in ERM, self-adaptive training exhibits a single-descent error-capacity curve, indicating that the double-descent phenomenon might be a result of overfitting to noise. Experiments on the CIFAR and ImageNet datasets verify the effectiveness of our approach in two applications: classification with label noise and selective classification.
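One plausible reading of "dynamically calibrates the training process using model predictions" is to blend the given (possibly noisy) labels with the model's running predictions and train against the blended soft targets. The sketch below shows that mechanism only; the exact weighting schedule used by self-adaptive training is specified in the paper.

```python
# Hedged sketch: soft training targets maintained as an exponential moving
# average between the original one-hot label and the model's current prediction.
import torch
import torch.nn.functional as F

def update_soft_targets(soft_targets, probs, alpha=0.9):
    """soft_targets, probs: (batch, num_classes); returns calibrated targets."""
    return alpha * soft_targets + (1.0 - alpha) * probs

# Toy step: a possibly noisy one-hot label gradually absorbs the prediction.
onehot = F.one_hot(torch.tensor([2]), num_classes=4).float()
probs = torch.tensor([[0.1, 0.6, 0.2, 0.1]])     # model currently disagrees
targets = update_soft_targets(onehot, probs)
loss = -(targets * torch.log(probs)).sum(dim=1).mean()   # soft cross-entropy
print(targets, loss)
```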
Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/e105b88b3e1ac23ec811a708cd7edebf-Abstract.html
Jonathan Lacotte, Mert Pilanci
https://papers.nips.cc/paper_files/paper/2020/hash/e105b88b3e1ac23ec811a708cd7edebf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e105b88b3e1ac23ec811a708cd7edebf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11349-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e105b88b3e1ac23ec811a708cd7edebf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e105b88b3e1ac23ec811a708cd7edebf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e105b88b3e1ac23ec811a708cd7edebf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e105b88b3e1ac23ec811a708cd7edebf-Supplemental.pdf
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching. We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT). While current randomized solvers for least-squares optimization prescribe an embedding dimension at least as large as the data dimension, we show that the embedding dimension can be reduced to the effective dimension of the optimization problem while still preserving high-probability convergence guarantees. In this regard, we derive sharp matrix deviation inequalities over ellipsoids for both Gaussian and SRHT embeddings. Specifically, we improve on the constant of a classical Gaussian concentration bound whereas, for SRHT embeddings, our deviation inequality involves a novel technical approach. Leveraging these bounds, we are able to design a practical and adaptive algorithm which does not require knowing the effective dimension beforehand. Our method starts with an initial embedding dimension equal to 1 and, over iterations, increases the embedding dimension up to at most the effective one. Hence, our algorithm improves the state-of-the-art computational complexity for solving regularized least-squares problems. Further, we show numerically that it outperforms standard iterative solvers such as the conjugate gradient method and its pre-conditioned version on several standard machine learning datasets.
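For orientation, the basic sketch-and-solve step underlying this line of work looks as follows: embed the tall data matrix with a random Gaussian sketch of dimension m much smaller than n and solve the sketched ridge problem. The adaptive schedule that grows m up to the effective dimension, which is the paper's contribution, is not reproduced here.

```python
# Hedged sketch-and-solve illustration for L2-regularized least squares with a
# Gaussian embedding: replace (A, b) by (S A, S b) and solve the smaller problem.
import numpy as np

def sketched_ridge(A, b, lam, m, rng):
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian embedding
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rng = np.random.default_rng(0)
n, d = 5000, 50
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)
x_exact = np.linalg.solve(A.T @ A + 1.0 * np.eye(d), A.T @ b)
x_sk = sketched_ridge(A, b, lam=1.0, m=400, rng=rng)
print(np.linalg.norm(x_sk - x_exact) / np.linalg.norm(x_exact))  # relative error
```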
Near-Optimal Comparison Based Clustering
https://papers.nips.cc/paper_files/paper/2020/hash/e11943a6031a0e6114ae69c257617980-Abstract.html
Michaël Perrot, Pascal Esser, Debarghya Ghoshdastidar
https://papers.nips.cc/paper_files/paper/2020/hash/e11943a6031a0e6114ae69c257617980-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e11943a6031a0e6114ae69c257617980-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11350-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e11943a6031a0e6114ae69c257617980-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e11943a6031a0e6114ae69c257617980-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e11943a6031a0e6114ae69c257617980-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e11943a6031a0e6114ae69c257617980-Supplemental.pdf
The goal of clustering is to group similar objects into meaningful partitions. This process is well understood when an explicit similarity measure between the objects is given. However, far less is known when this information is not readily available and, instead, one only observes ordinal comparisons such as ``object i is more similar to j than to k.'' In this paper, we tackle this problem using a two-step procedure: we estimate a pairwise similarity matrix from the comparisons before using a clustering method based on semi-definite programming (SDP). We theoretically show that our approach can exactly recover a planted clustering using a near-optimal number of passive comparisons. We empirically validate our theoretical findings and demonstrate the good behaviour of our method on real data.
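The first step of the two-step procedure, estimating a pairwise similarity matrix from passive triplet comparisons, might be sketched as a simple vote count as below; the particular +1/-1 scoring is an illustrative choice, and the SDP clustering step and the recovery guarantees are in the paper.

```python
# Hedged sketch of similarity estimation from ordinal comparisons of the form
# "i is more similar to j than to k": reward the preferred pair, penalize the other.
import numpy as np

def similarity_from_triplets(n, triplets):
    """triplets: iterable of (i, j, k) meaning object i is closer to j than to k."""
    S = np.zeros((n, n))
    for i, j, k in triplets:
        S[i, j] += 1; S[j, i] += 1
        S[i, k] -= 1; S[k, i] -= 1
    return S

triplets = [(0, 1, 3), (1, 0, 2), (2, 3, 0), (3, 2, 1)]
print(similarity_from_triplets(4, triplets))
```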
Multi-Task Temporal Shift Attention Networks for On-Device Contactless Vitals Measurement
https://papers.nips.cc/paper_files/paper/2020/hash/e1228be46de6a0234ac22ded31417bc7-Abstract.html
Xin Liu, Josh Fromm, Shwetak Patel, Daniel McDuff
https://papers.nips.cc/paper_files/paper/2020/hash/e1228be46de6a0234ac22ded31417bc7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e1228be46de6a0234ac22ded31417bc7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11351-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e1228be46de6a0234ac22ded31417bc7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e1228be46de6a0234ac22ded31417bc7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e1228be46de6a0234ac22ded31417bc7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e1228be46de6a0234ac22ded31417bc7-Supplemental.zip
Telehealth and remote health monitoring have become increasingly important during the SARS-CoV-2 pandemic and it is widely expected that this will have a lasting impact on healthcare practices. These tools can help reduce the risk of exposing patients and medical staff to infection, make healthcare services more accessible, and allow providers to see more patients. However, objective measurement of vital signs is challenging without direct contact with a patient. We present a video-based and on-device optical cardiopulmonary vital sign measurement approach. It leverages a novel multi-task temporal shift convolutional attention network (MTTS-CAN) and enables real-time cardiovascular and respiratory measurements on mobile platforms. We evaluate our system on an Advanced RISC Machine (ARM) CPU and achieve state-of-the-art accuracy while running at over 150 frames per second which enables real-time applications. Systematic experimentation on large benchmark datasets reveals that our approach leads to substantial (20%-50%) reductions in error and generalizes well across datasets.
A new convergent variant of Q-learning with linear function approximation
https://papers.nips.cc/paper_files/paper/2020/hash/e1696007be4eefb81b1a1d39ce48681b-Abstract.html
Diogo Carvalho, Francisco S. Melo, Pedro Santos
https://papers.nips.cc/paper_files/paper/2020/hash/e1696007be4eefb81b1a1d39ce48681b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e1696007be4eefb81b1a1d39ce48681b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11352-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e1696007be4eefb81b1a1d39ce48681b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e1696007be4eefb81b1a1d39ce48681b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e1696007be4eefb81b1a1d39ce48681b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e1696007be4eefb81b1a1d39ce48681b-Supplemental.pdf
In this work, we identify a novel set of conditions that ensure convergence with probability 1 of Q-learning with linear function approximation, by proposing a two time-scale variation thereof. In the faster time scale, the algorithm features an update similar to that of DQN, where the impact of bootstrapping is attenuated by using a Q-value estimate akin to that of the target network in DQN. The slower time-scale, in turn, can be seen as a modified target network update. We establish the convergence of our algorithm, provide an error bound and discuss our results in light of existing convergence results on reinforcement learning with function approximation. Finally, we illustrate the convergent behavior of our method in domains where standard Q-learning has previously been shown to diverge.
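A minimal sketch of the structure described above, under an assumed choice of step sizes and features: a fast weight vector updated with a TD error whose bootstrap target is computed from a slowly tracking copy of the weights, mirroring a target network. The precise update rules and the conditions guaranteeing convergence are those in the paper, not this toy.

```python
# Hedged sketch of two-time-scale linear Q-learning: the bootstrap target uses a
# slowly-updated copy of the weights, akin to DQN's target network.
import numpy as np

def two_timescale_step(w, w_target, phi_sa, reward, phi_next_best,
                       alpha=0.1, beta=0.01, gamma=0.99):
    """One transition update; phi_sa / phi_next_best are feature vectors."""
    target = reward + gamma * float(phi_next_best @ w_target)   # bootstrap
    td_error = target - float(phi_sa @ w)
    w = w + alpha * td_error * phi_sa                 # fast time scale
    w_target = w_target + beta * (w - w_target)       # slow time scale
    return w, w_target

rng = np.random.default_rng(0)
dim = 4
w, w_target = np.zeros(dim), np.zeros(dim)
for _ in range(1000):
    phi, phi_next = rng.random(dim), rng.random(dim)  # toy random features
    w, w_target = two_timescale_step(w, w_target, phi, reward=1.0,
                                     phi_next_best=phi_next)
print(w, w_target)
```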
TaylorGAN: Neighbor-Augmented Policy Update Towards Sample-Efficient Natural Language Generation
https://papers.nips.cc/paper_files/paper/2020/hash/e1fc9c082df6cfff8cbcfff2b5a722ef-Abstract.html
Chun-Hsing Lin, Siang-Ruei Wu, Hung-yi Lee, Yun-Nung Chen
https://papers.nips.cc/paper_files/paper/2020/hash/e1fc9c082df6cfff8cbcfff2b5a722ef-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e1fc9c082df6cfff8cbcfff2b5a722ef-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11353-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e1fc9c082df6cfff8cbcfff2b5a722ef-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e1fc9c082df6cfff8cbcfff2b5a722ef-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e1fc9c082df6cfff8cbcfff2b5a722ef-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e1fc9c082df6cfff8cbcfff2b5a722ef-Supplemental.pdf
Score function-based natural language generation (NLG) approaches such as REINFORCE, in general, suffer from low sample efficiency and training instability problems. This is mainly due to the non-differentiable nature of the discrete space sampling and thus these methods have to treat the discriminator as a black box and ignore the gradient information. To improve the sample efficiency and reduce the variance of REINFORCE, we propose a novel approach, TaylorGAN, which augments the gradient estimation by off-policy update and the first-order Taylor expansion. This approach enables us to train NLG models from scratch with smaller batch size --- without maximum likelihood pre-training, and outperforms existing GAN-based methods on multiple metrics of quality and diversity.
Neural Networks with Small Weights and Depth-Separation Barriers
https://papers.nips.cc/paper_files/paper/2020/hash/e1fe6165cad3f7f3f57d409f78e4415f-Abstract.html
Gal Vardi, Ohad Shamir
https://papers.nips.cc/paper_files/paper/2020/hash/e1fe6165cad3f7f3f57d409f78e4415f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e1fe6165cad3f7f3f57d409f78e4415f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11354-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e1fe6165cad3f7f3f57d409f78e4415f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e1fe6165cad3f7f3f57d409f78e4415f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e1fe6165cad3f7f3f57d409f78e4415f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e1fe6165cad3f7f3f57d409f78e4415f-Supplemental.pdf
In studying the expressiveness of neural networks, an important question is whether there are functions which can only be approximated by sufficiently deep networks, assuming their size is bounded. However, for constant depths, existing results are limited to depths $2$ and $3$, and achieving results for higher depths has been an important open question. In this paper, we focus on feedforward ReLU networks, and prove fundamental barriers to proving such results beyond depth $4$, by reduction to open problems and natural-proof barriers in circuit complexity. To show this, we study a seemingly unrelated problem of independent interest: Namely, whether there are polynomially-bounded functions which require super-polynomial weights in order to approximate with constant-depth neural networks. We provide a negative and constructive answer to that question, by showing that if a function can be approximated by a polynomially-sized, constant depth $k$ network with arbitrarily large weights, it can also be approximated by a polynomially-sized, depth $3k+3$ network, whose weights are polynomially bounded.
Untangling tradeoffs between recurrence and self-attention in artificial neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/e2065cb56f5533494522c46a72f1dfb0-Abstract.html
Giancarlo Kerg, Bhargav Kanuparthi, Anirudh Goyal ALIAS PARTH GOYAL, Kyle Goyette, Yoshua Bengio, Guillaume Lajoie
https://papers.nips.cc/paper_files/paper/2020/hash/e2065cb56f5533494522c46a72f1dfb0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2065cb56f5533494522c46a72f1dfb0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11355-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2065cb56f5533494522c46a72f1dfb0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2065cb56f5533494522c46a72f1dfb0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2065cb56f5533494522c46a72f1dfb0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2065cb56f5533494522c46a72f1dfb0-Supplemental.zip
Attention and self-attention mechanisms are now central to state-of-the-art deep learning on sequential tasks. However, most recent progress hinges on heuristic approaches with limited understanding of attention's role in model optimization and computation, and relies on considerable memory and computational resources that scale poorly. In this work, we present a formal analysis of how self-attention affects gradient propagation in recurrent networks, and prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies by establishing concrete bounds for gradient norms. Building on these results, we propose a relevancy screening mechanism, inspired by the cognitive process of memory consolidation, that allows for a scalable use of sparse self-attention with recurrence. While providing guarantees to avoid vanishing gradients, we use simple numerical experiments to demonstrate the tradeoffs in performance and computational resources by efficiently balancing attention and recurrence. Based on our results, we propose a concrete direction of research to improve scalability of attentive networks.
Dual-Free Stochastic Decentralized Optimization with Variance Reduction
https://papers.nips.cc/paper_files/paper/2020/hash/e22312179bf43e61576081a2f250f845-Abstract.html
Hadrien Hendrikx, Francis Bach, Laurent Massoulié
https://papers.nips.cc/paper_files/paper/2020/hash/e22312179bf43e61576081a2f250f845-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e22312179bf43e61576081a2f250f845-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11356-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e22312179bf43e61576081a2f250f845-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e22312179bf43e61576081a2f250f845-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e22312179bf43e61576081a2f250f845-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e22312179bf43e61576081a2f250f845-Supplemental.zip
We consider the problem of training machine learning models on distributed data in a decentralized way. For finite-sum problems, fast single-machine algorithms for large datasets rely on stochastic updates combined with variance reduction. Yet, existing decentralized stochastic algorithms either do not obtain the full speedup allowed by stochastic updates, or require oracles that are more expensive than regular gradients. In this work, we introduce a Decentralized stochastic algorithm with Variance Reduction called DVR. DVR only requires computing stochastic gradients of the local functions, and is computationally as fast as a standard stochastic variance-reduced algorithm run on a $1/n$ fraction of the dataset, where $n$ is the number of nodes. To derive DVR, we use Bregman coordinate descent on a well-chosen dual problem, and obtain a dual-free algorithm using a specific Bregman divergence. We give an accelerated version of DVR based on the Catalyst framework, and illustrate its effectiveness with simulations on real data.
Online Learning in Contextual Bandits using Gated Linear Networks
https://papers.nips.cc/paper_files/paper/2020/hash/e287f0b2e730059c55d97fa92649f4f2-Abstract.html
Eren Sezener, Marcus Hutter, David Budden, Jianan Wang, Joel Veness
https://papers.nips.cc/paper_files/paper/2020/hash/e287f0b2e730059c55d97fa92649f4f2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e287f0b2e730059c55d97fa92649f4f2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11357-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e287f0b2e730059c55d97fa92649f4f2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e287f0b2e730059c55d97fa92649f4f2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e287f0b2e730059c55d97fa92649f4f2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e287f0b2e730059c55d97fa92649f4f2-Supplemental.pdf
We introduce a new and completely online contextual bandit algorithm called Gated Linear Contextual Bandits (GLCB). This algorithm is based on Gated Linear Networks (GLNs), a recently introduced deep learning architecture with properties well-suited to the online setting. Leveraging the data-dependent gating properties of the GLN, we are able to estimate prediction uncertainty with effectively zero algorithmic overhead. We empirically evaluate GLCB against 9 state-of-the-art algorithms that leverage deep neural networks, on a standard benchmark suite of discrete and continuous contextual bandit problems. GLCB ranks first on average despite being the only online method, and we further support these results with a theoretical study of its convergence properties.
Throughput-Optimal Topology Design for Cross-Silo Federated Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e29b722e35040b88678e25a1ec032a21-Abstract.html
Othmane MARFOQ, CHUAN XU, Giovanni Neglia, Richard Vidal
https://papers.nips.cc/paper_files/paper/2020/hash/e29b722e35040b88678e25a1ec032a21-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e29b722e35040b88678e25a1ec032a21-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11358-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e29b722e35040b88678e25a1ec032a21-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e29b722e35040b88678e25a1ec032a21-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e29b722e35040b88678e25a1ec032a21-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e29b722e35040b88678e25a1ec032a21-Supplemental.pdf
Federated learning usually employs a client-server architecture where an orchestrator iteratively aggregates model updates from remote clients and pushes a refined model back to them. This approach may be inefficient in cross-silo settings, as close-by data silos with high-speed access links may exchange information faster than with the orchestrator, and the orchestrator may become a communication bottleneck. In this paper we define the problem of topology design for cross-silo federated learning, using the theory of max-plus linear systems to compute the system throughput (the number of communication rounds per time unit). We also propose practical algorithms that, under the knowledge of measurable network characteristics, find a topology with the largest throughput or with provable throughput guarantees. In realistic Internet networks with 10~Gbps access links for silos, our algorithms speed up training by factors of 9 and 1.5 in comparison to the master-slave architecture and to the state-of-the-art MATCHA, respectively. Speedups are even larger with slower access links.
Quantized Variational Inference
https://papers.nips.cc/paper_files/paper/2020/hash/e2a23af417a2344fe3a23e652924091f-Abstract.html
Amir Dib
https://papers.nips.cc/paper_files/paper/2020/hash/e2a23af417a2344fe3a23e652924091f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2a23af417a2344fe3a23e652924091f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11359-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2a23af417a2344fe3a23e652924091f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2a23af417a2344fe3a23e652924091f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2a23af417a2344fe3a23e652924091f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2a23af417a2344fe3a23e652924091f-Supplemental.zip
We present Quantized Variational Inference, a new algorithm for Evidence Lower Bound (ELBO) optimization. We show how Optimal Voronoi Tessellation produces variance-free gradients for ELBO optimization at the cost of introducing an asymptotically decaying bias. Subsequently, we propose a Richardson extrapolation type method to improve this bound. We show that using the Quantized Variational Inference framework leads to fast convergence for both the score function and the reparametrized gradient estimator at a comparable computational cost. Finally, we propose several experiments to assess the performance of our method and its limitations.
Asymptotically Optimal Exact Minibatch Metropolis-Hastings
https://papers.nips.cc/paper_files/paper/2020/hash/e2a7555f7cabd6e31aef45cb8cda4999-Abstract.html
Ruqi Zhang, A. Feder Cooper, Christopher M. De Sa
https://papers.nips.cc/paper_files/paper/2020/hash/e2a7555f7cabd6e31aef45cb8cda4999-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2a7555f7cabd6e31aef45cb8cda4999-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11360-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2a7555f7cabd6e31aef45cb8cda4999-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2a7555f7cabd6e31aef45cb8cda4999-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2a7555f7cabd6e31aef45cb8cda4999-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2a7555f7cabd6e31aef45cb8cda4999-Supplemental.pdf
Metropolis-Hastings (MH) is a commonly-used MCMC algorithm, but it can be intractable on large datasets due to requiring computations over the whole dataset. In this paper, we study \emph{minibatch MH} methods, which instead use subsamples to enable scaling. We observe that most existing minibatch MH methods are inexact (i.e. they may change the target distribution), and show that this inexactness can cause arbitrarily large errors in inference. We propose a new exact minibatch MH method, \emph{TunaMH}, which exposes a tunable trade-off between its minibatch size and its theoretically guaranteed convergence rate. We prove a lower bound on the batch size that any minibatch MH method \emph{must} use to retain exactness while guaranteeing fast convergence---the first such bound for minibatch MH---and show TunaMH is asymptotically optimal in terms of the batch size. Empirically, we show TunaMH outperforms other exact minibatch MH methods on robust linear regression, truncated Gaussian mixtures, and logistic regression.
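For contrast with the minibatch methods discussed above, the sketch below shows the standard full-data Metropolis-Hastings accept/reject step, whose per-iteration cost grows with the dataset size; this is the baseline that exact minibatch methods such as TunaMH aim to accelerate. The proposal, prior, and likelihood interfaces are illustrative assumptions, and the code is not an implementation of TunaMH itself.

```python
import numpy as np

def mh_step_full_data(theta, propose, log_prior, log_lik, data, rng):
    """One standard Metropolis-Hastings step (symmetric proposal assumed).
    Note the sum over the *entire* dataset in the acceptance ratio; this
    per-step cost is what minibatch MH methods aim to reduce, ideally while
    keeping the target distribution exact."""
    theta_prop = propose(theta, rng)
    log_ratio = (log_prior(theta_prop) - log_prior(theta)
                 + sum(log_lik(theta_prop, x) - log_lik(theta, x) for x in data))
    if np.log(rng.uniform()) < log_ratio:
        return theta_prop
    return theta
```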
Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search
https://papers.nips.cc/paper_files/paper/2020/hash/e2ce14e81dba66dbff9cbc35ecfdb704-Abstract.html
Linnan Wang, Rodrigo Fonseca, Yuandong Tian
https://papers.nips.cc/paper_files/paper/2020/hash/e2ce14e81dba66dbff9cbc35ecfdb704-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2ce14e81dba66dbff9cbc35ecfdb704-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11361-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2ce14e81dba66dbff9cbc35ecfdb704-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2ce14e81dba66dbff9cbc35ecfdb704-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2ce14e81dba66dbff9cbc35ecfdb704-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2ce14e81dba66dbff9cbc35ecfdb704-Supplemental.pdf
High dimensional black-box optimization has broad applications but remains a challenging problem to solve. Given a set of samples $(x_i, y_i)$, building a global model (as in Bayesian Optimization (BO)) suffers from the curse of dimensionality in the high-dimensional search space, while a greedy search may lead to sub-optimality. By recursively splitting the search space into regions with high/low function values, recent works like LaNAS show good performance in Neural Architecture Search (NAS), reducing the sample complexity empirically. In this paper, we propose LA-MCTS, which extends LaNAS to other domains. Unlike previous approaches, LA-MCTS learns the partition of the search space using a few samples and their function values in an online fashion. While LaNAS uses a linear partition and performs uniform sampling in each region, our LA-MCTS adopts a nonlinear decision boundary and learns a local model to pick good candidates. If the nonlinear partition function and the local model fit the ground-truth black-box function well, then good partitions and candidates can be reached with much fewer samples. LA-MCTS serves as a meta-algorithm by using existing black-box optimizers (e.g., BO, TuRBO) as its local models, achieving strong performance in general black-box optimization and reinforcement learning benchmarks, in particular for high-dimensional problems.
Feature Shift Detection: Localizing Which Features Have Shifted via Conditional Distribution Tests
https://papers.nips.cc/paper_files/paper/2020/hash/e2d52448d36918c575fa79d88647ba66-Abstract.html
Sean Kulinski, Saurabh Bagchi, David I. Inouye
https://papers.nips.cc/paper_files/paper/2020/hash/e2d52448d36918c575fa79d88647ba66-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2d52448d36918c575fa79d88647ba66-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11362-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2d52448d36918c575fa79d88647ba66-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2d52448d36918c575fa79d88647ba66-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2d52448d36918c575fa79d88647ba66-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2d52448d36918c575fa79d88647ba66-Supplemental.pdf
While previous distribution shift detection approaches can identify if a shift has occurred, these approaches cannot localize which specific features have caused a distribution shift---a critical step in diagnosing or fixing any underlying issue. For example, in military sensor networks, users will want to detect when one or more of the sensors has been compromised, and critically, they will want to know which specific sensors might be compromised. Thus, we first define a formalization of this problem as multiple conditional distribution hypothesis tests and propose both non-parametric and parametric statistical tests. For both efficiency and flexibility, we then propose to use a test statistic based on the density model score function (i.e., the gradient with respect to the input), which can easily compute test statistics for all dimensions in a single forward and backward pass. Any density model could be used for computing the necessary statistics, including deep density models such as normalizing flows or autoregressive models. We additionally develop methods for identifying when and where a shift occurs in multivariate time-series data and show results for multiple scenarios using realistic attack models on both simulated and real-world data.
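As a toy illustration of the score-based statistic described above, the sketch below uses a diagonal Gaussian as the density model: the per-feature score (gradient of the log density with respect to the input) is available in closed form, and averaging it over a test window highlights which features have shifted. The statistic and the synthetic data are simplifications for intuition, not the paper's exact tests.

```python
import numpy as np

def feature_shift_scores(train, test):
    """Toy per-feature shift statistic based on the density-model score
    function. A diagonal Gaussian fit on `train` stands in for the density
    model; the score of feature j is d/dx_j log p(x) = -(x_j - mu_j) / var_j.
    Under no shift the mean score on `test` should be near zero, so a large
    absolute mean score points at a shifted feature."""
    mu = train.mean(axis=0)
    var = train.var(axis=0) + 1e-8
    scores = -(test - mu) / var             # score function, all features at once
    return np.abs(scores.mean(axis=0))      # one statistic per feature

rng = np.random.default_rng(0)
train = rng.normal(size=(2000, 5))
test = rng.normal(size=(500, 5))
test[:, 2] += 1.0                            # inject a shift into feature 2
print(feature_shift_scores(train, test).round(2))  # feature 2 stands out
```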
Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/e2e5096d574976e8f115a8f1e0ffb52b-Abstract.html
Jinseok Kim, Kyungsu Kim, Jae-Joon Kim
https://papers.nips.cc/paper_files/paper/2020/hash/e2e5096d574976e8f115a8f1e0ffb52b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2e5096d574976e8f115a8f1e0ffb52b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11363-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2e5096d574976e8f115a8f1e0ffb52b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2e5096d574976e8f115a8f1e0ffb52b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2e5096d574976e8f115a8f1e0ffb52b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2e5096d574976e8f115a8f1e0ffb52b-Supplemental.pdf
For the gradient computation across the time domain in Spiking Neural Networks (SNNs) training, two different approaches have been independently studied. The first is to compute the gradients with respect to the change in spike activation (activation-based methods), and the second is to compute the gradients with respect to the change in spike timing (timing-based methods). In this work, we present a comparative study of the two methods and propose a new supervised learning method that combines them. The proposed method utilizes each individual spike more effectively by shifting spike timings as in the timing-based methods as well as generating and removing spikes as in the activation-based methods. Experimental results showed that the proposed method achieves higher performance in terms of both accuracy and efficiency than the previous approaches.
Space-Time Correspondence as a Contrastive Random Walk
https://papers.nips.cc/paper_files/paper/2020/hash/e2ef524fbf3d9fe611d5a8e90fefdc9c-Abstract.html
Allan Jabri, Andrew Owens, Alexei Efros
https://papers.nips.cc/paper_files/paper/2020/hash/e2ef524fbf3d9fe611d5a8e90fefdc9c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e2ef524fbf3d9fe611d5a8e90fefdc9c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11364-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e2ef524fbf3d9fe611d5a8e90fefdc9c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e2ef524fbf3d9fe611d5a8e90fefdc9c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e2ef524fbf3d9fe611d5a8e90fefdc9c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e2ef524fbf3d9fe611d5a8e90fefdc9c-Supplemental.pdf
This paper proposes a simple self-supervised approach for learning a representation for visual correspondence from raw video. We cast correspondence as prediction of links in a space-time graph constructed from video. In this graph, the nodes are patches sampled from each frame, and nodes adjacent in time can share a directed edge. We learn a representation in which pairwise similarity defines transition probability of a random walk, such that prediction of long-range correspondence is computed as a walk along the graph. We optimize the representation to place high probability along paths of similarity. Targets for learning are formed without supervision, by cycle-consistency: the objective is to maximize the likelihood of returning to the initial node when walking along a graph constructed from a palindrome of frames. Thus, a single path-level constraint implicitly supervises chains of intermediate comparisons. When used as a similarity metric without adaptation, the learned representation outperforms the self-supervised state-of-the-art on label propagation tasks involving objects, semantic parts, and pose. Moreover, we demonstrate that a technique we call edge dropout, as well as self-supervised adaptation at test-time, further improve transfer for object-centric correspondence.
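A small numerical sketch of the palindrome cycle-consistency objective described above: pairwise similarities between patch embeddings of adjacent frames are turned into row-stochastic transition matrices, the walk is chained along a palindrome of frames, and the loss rewards returning to the starting node. The temperature and embedding shapes are illustrative assumptions rather than the paper's training setup.

```python
import numpy as np

def softmax_rows(a, temperature=0.07):
    """Row-wise softmax of a similarity matrix, giving transition probabilities."""
    a = a / temperature
    a = a - a.max(axis=1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

def palindrome_walk_loss(frames):
    """Toy contrastive random-walk objective. `frames` is a list of (N, d)
    patch-embedding arrays for a palindrome of frames (t0, t1, ..., t1, t0).
    Pairwise similarity defines the transition matrix between adjacent frames;
    chaining transitions along the palindrome should return each node to
    itself, so the target is the identity and the loss is the cross-entropy
    on the diagonal of the chained transition matrix."""
    n = frames[0].shape[0]
    walk = np.eye(n)
    for a, b in zip(frames[:-1], frames[1:]):
        sim = a @ b.T                       # pairwise similarity
        walk = walk @ softmax_rows(sim)     # one random-walk step
    return -np.mean(np.log(np.diag(walk) + 1e-12))
```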
The Flajolet-Martin Sketch Itself Preserves Differential Privacy: Private Counting with Minimal Space
https://papers.nips.cc/paper_files/paper/2020/hash/e3019767b1b23f82883c9850356b71d6-Abstract.html
Adam Smith, Shuang Song, Abhradeep Guha Thakurta
https://papers.nips.cc/paper_files/paper/2020/hash/e3019767b1b23f82883c9850356b71d6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3019767b1b23f82883c9850356b71d6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11365-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3019767b1b23f82883c9850356b71d6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3019767b1b23f82883c9850356b71d6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3019767b1b23f82883c9850356b71d6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3019767b1b23f82883c9850356b71d6-Supplemental.pdf
We revisit the problem of counting the number of distinct elements $\mathsf{dist}$ in a data stream $D$, over a domain $[u]$. We propose an $(\epsilon,\delta)$-differentially private algorithm that approximates $\mathsf{dist}$ within a factor of $(1\pm\gamma)$, and with additive error of $O(\sqrt{\ln(1/\delta)}/\epsilon)$, using space $O(\ln(\ln(u)/\gamma)/\gamma^2)$. We improve on the prior work at least quadratically and up to exponentially, in terms of both space and additive error. Our additive error guarantee is optimal up to a factor of $O(\sqrt{\ln(1/\delta)})$, and the space bound is optimal up to a factor of $O\left(\min\left\{\ln\left(\frac{\ln(u)}{\gamma}\right), \frac{1}{\gamma^2}\right\}\right)$. We assume the existence of an ideal uniform random hash function, and ignore the space required to store it. We later relax this requirement by assuming pseudorandom functions and appealing to a computational variant of differential privacy, SIM-CDP. Our algorithm is built on top of the celebrated Flajolet-Martin (FM) sketch. We show that the FM sketch is differentially private as is, as long as there are $\approx \sqrt{\ln(1/\delta)}/(\epsilon\gamma)$ distinct elements in the data set. Along the way, we prove a structural result showing that the maximum of $k$ i.i.d. random variables is statistically close (in the sense of $\epsilon$-differential privacy) to the maximum of $(k+1)$ i.i.d. samples from the same distribution, as long as $k=\Omega\left(\frac{1}{\epsilon}\right)$. Finally, experiments show that our algorithms introduce error within an order of magnitude of their non-private analogues for streams with thousands of distinct elements, even while providing a strong privacy guarantee ($\epsilon\leq 1$).
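For readers unfamiliar with the underlying data structure, here is a simplified single-hash Flajolet-Martin estimator of the number of distinct elements; it conveys the sketch the paper builds on but includes none of the privacy analysis, and the correction constant and hash choice are standard illustrative defaults.

```python
import hashlib

def fm_sketch_estimate(stream, correction=0.77351):
    """Simplified Flajolet-Martin sketch: hash each item, record which
    least-significant-set-bit positions occur, and estimate the distinct
    count from the lowest position never seen. This is the classic
    non-private sketch; the paper's contribution (its privacy analysis)
    is not reproduced here."""
    bitmap = 0
    for item in stream:
        h = int(hashlib.blake2b(str(item).encode(), digest_size=8).hexdigest(), 16)
        r = (h & -h).bit_length() - 1 if h else 63   # position of lowest set bit
        bitmap |= 1 << r
    r0 = 0
    while bitmap & (1 << r0):                         # lowest zero bit of the bitmap
        r0 += 1
    return (2 ** r0) / correction

# Rough, high-variance estimate of the ~1000 distinct items in the stream.
print(round(fm_sketch_estimate(i % 1000 for i in range(100000))))
```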
Exponential ergodicity of mirror-Langevin diffusions
https://papers.nips.cc/paper_files/paper/2020/hash/e3251075554389fe91d17a794861d47b-Abstract.html
Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet, Austin Stromme
https://papers.nips.cc/paper_files/paper/2020/hash/e3251075554389fe91d17a794861d47b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3251075554389fe91d17a794861d47b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11366-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3251075554389fe91d17a794861d47b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3251075554389fe91d17a794861d47b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3251075554389fe91d17a794861d47b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3251075554389fe91d17a794861d47b-Supplemental.pdf
Motivated by the problem of sampling from ill-conditioned log-concave distributions, we give a clean non-asymptotic convergence analysis of mirror-Langevin diffusions as introduced in Zhang et al. (2020). As a special case of this framework, we propose a class of diffusions called Newton-Langevin diffusions and prove that they converge to stationarity exponentially fast with a rate which not only is dimension-free, but also has no dependence on the target distribution. We give an application of this result to the problem of sampling from the uniform distribution on a convex body using a strategy inspired by interior-point methods. Our general approach follows the recent trend of linking sampling and optimization and highlights the role of the chi-squared divergence. In particular, it yields new results on the convergence of the vanilla Langevin diffusion in Wasserstein distance.
An Efficient Framework for Clustered Federated Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e32cc80bf07915058ce90722ee17bb71-Abstract.html
Avishek Ghosh, Jichan Chung, Dong Yin, Kannan Ramchandran
https://papers.nips.cc/paper_files/paper/2020/hash/e32cc80bf07915058ce90722ee17bb71-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e32cc80bf07915058ce90722ee17bb71-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11367-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e32cc80bf07915058ce90722ee17bb71-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e32cc80bf07915058ce90722ee17bb71-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e32cc80bf07915058ce90722ee17bb71-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e32cc80bf07915058ce90722ee17bb71-Supplemental.pdf
We address the problem of Federated Learning (FL) where users are distributed and partitioned into clusters. This setup captures settings where different groups of users have their own objectives (learning tasks) but by aggregating their data with others in the same cluster (same learning task), they can leverage the strength in numbers in order to perform more efficient Federated Learning. We propose a new framework dubbed the Iterative Federated Clustering Algorithm (IFCA), which alternately estimates the cluster identities of the users and optimizes model parameters for the user clusters via gradient descent. We analyze the convergence rate of this algorithm first in a linear model with squared loss and then for generic strongly convex and smooth loss functions. We show that in both settings, with good initialization, IFCA converges at an exponential rate, and discuss the optimality of the statistical error rate. When the clustering structure is ambiguous, we propose to train the models by combining IFCA with the weight sharing technique in multi-task learning. In the experiments, we show that our algorithm can succeed even if we relax the requirements on initialization with random initialization and multiple restarts. We also present experimental results showing that our algorithm is efficient in non-convex problems such as neural networks. We demonstrate the benefits of IFCA over the baselines on several clustered FL benchmarks.
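A minimal sketch of one IFCA-style round as described above, for generic parametric models: each user assigns itself to the cluster whose current model has the lowest local loss, and the server then averages gradients within each cluster. The interfaces (`loss_fn`, `grad_fn`) and the single-gradient local step are illustrative simplifications of the algorithm in the paper.

```python
import numpy as np

def ifca_round(cluster_weights, user_data, grad_fn, loss_fn, lr=0.1):
    """One illustrative IFCA-style round. `cluster_weights` is a list of
    parameter vectors (one per cluster); `user_data` is a list of (X, y)
    tuples, one per user; grad_fn/loss_fn compute the local gradient and
    loss of a parameter vector on a user's data."""
    k = len(cluster_weights)
    grad_sums = [np.zeros_like(w) for w in cluster_weights]
    counts = [0] * k
    for X, y in user_data:
        # Estimate the user's cluster identity: the best-fitting current model.
        j = int(np.argmin([loss_fn(w, X, y) for w in cluster_weights]))
        grad_sums[j] += grad_fn(cluster_weights[j], X, y)
        counts[j] += 1
    # Server update: average gradient step per cluster.
    return [w - lr * g / max(c, 1)
            for w, g, c in zip(cluster_weights, grad_sums, counts)]
```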
Autoencoders that don't overfit towards the Identity
https://papers.nips.cc/paper_files/paper/2020/hash/e33d974aae13e4d877477d51d8bafdc4-Abstract.html
Harald Steck
https://papers.nips.cc/paper_files/paper/2020/hash/e33d974aae13e4d877477d51d8bafdc4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e33d974aae13e4d877477d51d8bafdc4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11368-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e33d974aae13e4d877477d51d8bafdc4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e33d974aae13e4d877477d51d8bafdc4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e33d974aae13e4d877477d51d8bafdc4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e33d974aae13e4d877477d51d8bafdc4-Supplemental.pdf
Autoencoders (AEs) aim to reproduce their input at the output. They may hence tend to overfit towards learning the identity-function between the input and output, i.e., they may predict each feature in the output from itself in the input. This is not useful, however, when AEs are used for prediction tasks in the presence of noise in the data. It may seem intuitively evident that this kind of overfitting is prevented by training a denoising AE, as the dropped-out features have to be predicted from the other features. In this paper, we consider linear autoencoders, as they facilitate analytic solutions, and first show that denoising / dropout actually prevents the overfitting towards the identity-function only to the degree that it is penalized by the induced L2-norm regularization. In the main theorem of this paper, we show that the emphasized denoising AE is indeed capable of completely eliminating the overfitting towards the identity-function. Our derivations reveal several new insights, including the closed-form solution of the full-rank model, as well as a new (near-)orthogonality constraint in the low-rank model. While this constraint is conceptually very different from the regularizers recently proposed, their resulting effects on the learned embeddings are empirically similar. Our experiments on three well-known datasets corroborate the various theoretical insights derived in this paper.
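To illustrate the identity-overfitting issue discussed above, the toy snippet below computes the closed-form solution of a plain L2-regularized full-rank linear autoencoder; as the regularization vanishes, the learned matrix approaches the identity. This shows only the baseline behavior, not the emphasized-denoising solution the paper derives.

```python
import numpy as np

def linear_ae_ridge(X, lam):
    """Closed-form full-rank linear autoencoder: min_B ||X - XB||^2 + lam*||B||^2,
    giving B = (X^T X + lam*I)^{-1} X^T X. As lam -> 0 (and X^T X is full rank)
    the solution approaches the identity, i.e., each feature is predicted from
    itself, which is the overfitting behavior analyzed in the paper."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ X)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
for lam in (1000.0, 1.0, 1e-6):
    B = linear_ae_ridge(X, lam)
    print(lam, round(float(np.mean(np.abs(np.diag(B)))), 3))  # diagonal -> 1 as lam -> 0
```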
Polynomial-Time Computation of Optimal Correlated Equilibria in Two-Player Extensive-Form Games with Public Chance Moves and Beyond
https://papers.nips.cc/paper_files/paper/2020/hash/e366d105cfd734677897aaccf51e97a3-Abstract.html
Gabriele Farina, Tuomas Sandholm
https://papers.nips.cc/paper_files/paper/2020/hash/e366d105cfd734677897aaccf51e97a3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e366d105cfd734677897aaccf51e97a3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11369-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e366d105cfd734677897aaccf51e97a3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e366d105cfd734677897aaccf51e97a3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e366d105cfd734677897aaccf51e97a3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e366d105cfd734677897aaccf51e97a3-Supplemental.pdf
Unlike normal-form games, where correlated equilibria have been studied for more than 45 years, extensive-form correlation is still generally not well understood. Part of the reason for this gap is that the sequential nature of extensive-form games allows for a richness of behaviors and incentives that are not possible in normal-form settings. This richness translates to a significantly different complexity landscape surrounding extensive-form correlated equilibria. As of today, it is known that finding an optimal extensive-form correlated equilibrium (EFCE), extensive-form coarse correlated equilibrium (EFCCE), or normal-form coarse correlated equilibrium (NFCCE) in a two-player extensive-form game is computationally tractable when the game does not include chance moves, and intractable when the game involves chance moves. In this paper we significantly refine this complexity threshold by showing that, in two-player games, an optimal correlated equilibrium can be computed in polynomial time, provided that a certain condition is satisfied. We show that the condition holds, for example, when all chance moves are public, that is, both players observe all chance moves. This implies that an optimal EFCE, EFCCE and NFCCE can be computed in polynomial time in the game size in two-player games with public chance moves.
Parameterized Explainer for Graph Neural Network
https://papers.nips.cc/paper_files/paper/2020/hash/e37b08dd3015330dcbb5d6663667b8b8-Abstract.html
Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng Chen, Xiang Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/e37b08dd3015330dcbb5d6663667b8b8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e37b08dd3015330dcbb5d6663667b8b8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11370-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e37b08dd3015330dcbb5d6663667b8b8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e37b08dd3015330dcbb5d6663667b8b8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e37b08dd3015330dcbb5d6663667b8b8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e37b08dd3015330dcbb5d6663667b8b8-Supplemental.pdf
Despite recent progress in Graph Neural Networks (GNNs), explaining predictions made by GNNs remains a challenging open problem. The leading method mainly addresses local explanations (i.e., important subgraph structure and node features) to interpret why a GNN model makes the prediction for a single instance, e.g. a node or a graph. As a result, the explanation generated is painstakingly customized for each instance. The unique explanation interpreting each instance independently is not sufficient to provide a global understanding of the learned GNN model, leading to a lack of generalizability and hindering it from being used in the inductive setting. Besides, as it is designed for explaining a single instance, it is challenging to explain a set of instances naturally (e.g., graphs of a given class). In this study, we address these key challenges and propose PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep neural network to parameterize the generation process of explanations, which makes PGExplainer a natural approach to multi-instance explanations. Compared to the existing work, PGExplainer has better generalization power and can be utilized in an inductive setting easily. Experiments on both synthetic and real-life datasets show highly competitive performance, with up to 24.7\% relative improvement in AUC on explaining graph classification over the leading baseline.
Recursive Inference for Variational Autoencoders
https://papers.nips.cc/paper_files/paper/2020/hash/e3844e186e6eb8736e9f53c0c5889527-Abstract.html
Minyoung Kim, Vladimir Pavlovic
https://papers.nips.cc/paper_files/paper/2020/hash/e3844e186e6eb8736e9f53c0c5889527-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3844e186e6eb8736e9f53c0c5889527-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11371-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3844e186e6eb8736e9f53c0c5889527-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3844e186e6eb8736e9f53c0c5889527-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3844e186e6eb8736e9f53c0c5889527-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3844e186e6eb8736e9f53c0c5889527-Supplemental.pdf
Inference networks of traditional Variational Autoencoders (VAEs) are typically amortized, resulting in relatively inaccurate posterior approximation compared to instance-wise variational optimization. Recent semi-amortized approaches were proposed to address this drawback; however, their iterative gradient update procedures can be computationally demanding. In this paper, we consider a different approach of building a mixture inference model. We propose a novel recursive mixture estimation algorithm for VAEs that iteratively augments the current mixture with new components so as to maximally reduce the divergence between the variational and the true posteriors. Using the functional gradient approach, we devise an intuitive learning criterion for selecting a new mixture component: the new component has to improve the data likelihood (lower bound) and, at the same time, be as divergent from the current mixture distribution as possible, thus increasing representational diversity. Although there have been similar approaches recently, termed boosted variational inference (BVI), our methods differ from BVI in several aspects, most notably that ours deal with recursive inference in VAEs in the form of amortized inference, while BVI is developed within the standard VI framework, leading to a non-amortized single optimization instance, inappropriate for VAEs. A crucial benefit of our approach is that the inference at test time needs a single feed-forward pass through the mixture inference network, making it significantly faster than the semi-amortized approaches. We show that our approach yields higher test data likelihood than state-of-the-art methods on several benchmark datasets.
Flexible mean field variational inference using mixtures of non-overlapping exponential families
https://papers.nips.cc/paper_files/paper/2020/hash/e3a54649aeec04cf1c13907bc6c5c8aa-Abstract.html
Jeffrey Spence
https://papers.nips.cc/paper_files/paper/2020/hash/e3a54649aeec04cf1c13907bc6c5c8aa-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3a54649aeec04cf1c13907bc6c5c8aa-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11372-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3a54649aeec04cf1c13907bc6c5c8aa-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3a54649aeec04cf1c13907bc6c5c8aa-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3a54649aeec04cf1c13907bc6c5c8aa-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3a54649aeec04cf1c13907bc6c5c8aa-Supplemental.pdf
Sparse models are desirable for many applications across diverse domains as they can perform automatic variable selection, aid interpretability, and provide regularization. When fitting sparse models in a Bayesian framework, however, analytically obtaining a posterior distribution over the parameters of interest is intractable for all but the simplest cases. As a result, practitioners must rely on either sampling algorithms such as Markov chain Monte Carlo or variational methods to obtain an approximate posterior. Mean field variational inference is a particularly simple and popular framework that is often amenable to analytically deriving closed-form parameter updates. When all distributions in the model are members of exponential families and are conditionally conjugate, optimization schemes can often be derived by hand. Yet, I show that using standard mean field variational inference can fail to produce sensible results for models with sparsity-inducing priors, such as the spike-and-slab. Fortunately, such pathological behavior can be remedied as I show that mixtures of exponential family distributions with non-overlapping support form an exponential family. In particular, any mixture of an exponential family of diffuse distributions and a point mass at zero to model sparsity forms an exponential family. Furthermore, specific choices of these distributions maintain conditional conjugacy. I use two applications to motivate these results: one from statistical genetics that has connections to generalized least squares with a spike-and-slab prior on the regression coefficients; and sparse probabilistic principal component analysis. The theoretical results presented here are broadly applicable beyond these two examples.
HYDRA: Pruning Adversarially Robust Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/e3a72c791a69f87b05ea7742e04430ed-Abstract.html
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
https://papers.nips.cc/paper_files/paper/2020/hash/e3a72c791a69f87b05ea7742e04430ed-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3a72c791a69f87b05ea7742e04430ed-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11373-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3a72c791a69f87b05ea7742e04430ed-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3a72c791a69f87b05ea7742e04430ed-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3a72c791a69f87b05ea7742e04430ed-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3a72c791a69f87b05ea7742e04430ed-Supplemental.zip
In safety-critical but computationally resource-constrained applications, deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size (often millions of parameters). While the research community has extensively explored the use of robust training and network pruning \emph{independently} to address one of these challenges, only a few recent works have studied them jointly. However, these works inherit a heuristic pruning strategy that was developed for benign training, which performs poorly when integrated with robust training techniques, including adversarial training and verifiable robust training. To overcome this challenge, we propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune. We realize this insight by formulating the pruning objective as an empirical risk minimization problem which is solved efficiently using SGD. We demonstrate that our approach, titled HYDRA, achieves compressed networks with \textit{state-of-the-art} benign and robust accuracy, \textit{simultaneously}. We demonstrate the success of our approach across the CIFAR-10, SVHN, and ImageNet datasets with four robust training techniques: iterative adversarial training, randomized smoothing, MixTrain, and CROWN-IBP. We also demonstrate the existence of highly robust sub-networks within non-robust networks.
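The sketch below illustrates the general idea of letting a training objective select connections via learned importance scores, using a straight-through top-k mask (a common trick for this kind of formulation). The layer shape, sparsity level, and masking details are assumptions for illustration and do not reproduce HYDRA's full training recipe.

```python
import torch

class TopKMask(torch.autograd.Function):
    """Straight-through top-k mask: the forward pass keeps the k highest-scoring
    connections, while the backward pass lets gradients flow to the scores
    unchanged, so the scores can be optimized with SGD against a (robust)
    training objective."""
    @staticmethod
    def forward(ctx, scores, k):
        threshold = torch.topk(scores.flatten().abs(), k).values.min()
        return (scores.abs() >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # straight-through estimator for the scores

def pruned_linear(x, weight, scores, sparsity=0.9):
    """Apply a score-based pruning mask to a weight matrix before a linear map.
    `scores` has the same shape as `weight` and is the variable being learned
    during the pruning phase (a sketch, not the paper's full procedure)."""
    k = max(1, int(weight.numel() * (1.0 - sparsity)))
    mask = TopKMask.apply(scores, k)
    return x @ (weight * mask).t()
```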
NVAE: A Deep Hierarchical Variational Autoencoder
https://papers.nips.cc/paper_files/paper/2020/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html
Arash Vahdat, Jan Kautz
https://papers.nips.cc/paper_files/paper/2020/hash/e3b21256183cf7c2c7a66be163579d37-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3b21256183cf7c2c7a66be163579d37-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11374-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3b21256183cf7c2c7a66be163579d37-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3b21256183cf7c2c7a66be163579d37-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3b21256183cf7c2c7a66be163579d37-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3b21256183cf7c2c7a66be163579d37-Supplemental.pdf
Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and it provides a strong baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art from 2.98 to 2.91 bits per dimension, and it produces high-quality images on CelebA HQ. To the best of our knowledge, NVAE is the first successful VAE applied to natural images as large as 256x256 pixels. The source code is publicly available.
Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory
https://papers.nips.cc/paper_files/paper/2020/hash/e3bc4e7f243ebc05d66a0568a3331966-Abstract.html
Yufeng Zhang, Qi Cai, Zhuoran Yang, Yongxin Chen, Zhaoran Wang
https://papers.nips.cc/paper_files/paper/2020/hash/e3bc4e7f243ebc05d66a0568a3331966-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e3bc4e7f243ebc05d66a0568a3331966-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11375-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e3bc4e7f243ebc05d66a0568a3331966-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e3bc4e7f243ebc05d66a0568a3331966-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e3bc4e7f243ebc05d66a0568a3331966-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e3bc4e7f243ebc05d66a0568a3331966-Supplemental.zip
We prove that utilizing an overparameterized two-layer neural network, temporal-difference and Q-learning globally minimize the mean-squared projected Bellman error at a sublinear rate. Moreover, the associated feature representation converges to the optimal one, generalizing the previous analysis of Cai et al. (2019) in the neural tangent kernel regime, where the associated feature representation stabilizes at the initial one. The key to our analysis is a mean-field perspective, which connects the evolution of a finite-dimensional parameter to its limiting counterpart over an infinite-dimensional Wasserstein space. Our analysis generalizes to soft Q-learning, which is further connected to policy gradient.
What Do Neural Networks Learn When Trained With Random Labels?
https://papers.nips.cc/paper_files/paper/2020/hash/e4191d610537305de1d294adb121b513-Abstract.html
Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, Robert Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers
https://papers.nips.cc/paper_files/paper/2020/hash/e4191d610537305de1d294adb121b513-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e4191d610537305de1d294adb121b513-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11376-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e4191d610537305de1d294adb121b513-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e4191d610537305de1d294adb121b513-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e4191d610537305de1d294adb121b513-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e4191d610537305de1d294adb121b513-Supplemental.pdf
We study deep neural networks (DNNs) trained on natural image data with entirely random labels. Despite its popularity in the literature, where it is often used to study memorization, generalization, and other phenomena, little is known about what DNNs learn in this setting. In this paper, we show analytically for convolutional and fully connected networks that an alignment between the principal components of network parameters and data takes place when training with random labels. We study this alignment effect by investigating neural networks pre-trained on randomly labelled image data and subsequently fine-tuned on disjoint datasets with random or real labels. We show how this alignment produces a positive transfer: networks pre-trained with random labels train faster downstream compared to training from scratch even after accounting for simple effects, such as weight scaling. We analyze how competing effects, such as specialization at later layers, may hide the positive transfer. These effects are studied in several network architectures, including VGG16 and ResNet18, on CIFAR10 and ImageNet.
Counterfactual Prediction for Bundle Treatment
https://papers.nips.cc/paper_files/paper/2020/hash/e430ad64df3de73e6be33bcb7f6d0dac-Abstract.html
Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He
https://papers.nips.cc/paper_files/paper/2020/hash/e430ad64df3de73e6be33bcb7f6d0dac-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e430ad64df3de73e6be33bcb7f6d0dac-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11377-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e430ad64df3de73e6be33bcb7f6d0dac-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e430ad64df3de73e6be33bcb7f6d0dac-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e430ad64df3de73e6be33bcb7f6d0dac-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e430ad64df3de73e6be33bcb7f6d0dac-Supplemental.pdf
Estimating counterfactual outcome of different treatments from observational data is an important problem to assist decision making in a variety of fields. Among the various forms of treatment specification, bundle treatment has been widely adopted in many scenarios, such as recommendation systems and online marketing. The bundle treatment usually can be abstracted as a high dimensional binary vector, which makes it more challenging for researchers to remove the confounding bias in observational data. In this work, we assume the existence of low dimensional latent structure underlying bundle treatment. Via the learned latent representations of treatments, we propose a novel variational sample re-weighting (VSR) method to eliminate confounding bias by decorrelating the treatments and confounders. Finally, we conduct extensive experiments to demonstrate that the predictive model trained on this re-weighted dataset can achieve more accurate counterfactual outcome prediction.
Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/e43739bba7cdb577e9e3e4e42447f5a5-Abstract.html
Hongyu Ren, Jure Leskovec
https://papers.nips.cc/paper_files/paper/2020/hash/e43739bba7cdb577e9e3e4e42447f5a5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e43739bba7cdb577e9e3e4e42447f5a5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11378-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e43739bba7cdb577e9e3e4e42447f5a5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e43739bba7cdb577e9e3e4e42447f5a5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e43739bba7cdb577e9e3e4e42447f5a5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e43739bba7cdb577e9e3e4e42447f5a5-Supplemental.pdf
One of the fundamental problems in Artificial Intelligence is to perform complex multi-hop logical reasoning over the facts captured by a knowledge graph (KG). This problem is challenging, because KGs can be massive and incomplete. Recent approaches embed KG entities in a low dimensional space and then use these embeddings to find the answer entities. However, it has been an outstanding challenge of how to handle arbitrary first-order logic (FOL) queries as present methods are limited to only a subset of FOL operators. In particular, the negation operator is not supported. An additional limitation of present methods is also that they cannot naturally model uncertainty. Here, we present BetaE, a probabilistic embedding framework for answering arbitrary FOL queries over KGs. BetaE is the first method that can handle a complete set of first-order logical operations: conjunction ($\wedge$), disjunction ($\vee$), and negation ($\neg$). A key insight of BetaE is to use probabilistic distributions with bounded support, specifically the Beta distribution, and embed queries/entities as distributions, which as a consequence allows us to also faithfully model uncertainty. Logical operations are performed in the embedding space by neural operators over the probabilistic embeddings. We demonstrate the performance of BetaE on answering arbitrary FOL queries on three large, incomplete KGs. While being more general, BetaE also increases relative performance by up to 25.4% over the current state-of-the-art KG reasoning methods that can only handle conjunctive queries without negation.
Learning Disentangled Representations and Group Structure of Dynamical Environments
https://papers.nips.cc/paper_files/paper/2020/hash/e449b9317dad920c0dd5ad0a2a2d5e49-Abstract.html
Robin Quessard, Thomas Barrett, William Clements
https://papers.nips.cc/paper_files/paper/2020/hash/e449b9317dad920c0dd5ad0a2a2d5e49-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e449b9317dad920c0dd5ad0a2a2d5e49-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11379-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e449b9317dad920c0dd5ad0a2a2d5e49-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e449b9317dad920c0dd5ad0a2a2d5e49-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e449b9317dad920c0dd5ad0a2a2d5e49-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e449b9317dad920c0dd5ad0a2a2d5e49-Supplemental.pdf
Learning disentangled representations is a key step towards effectively discovering and modelling the underlying structure of environments. In the natural sciences, physics has found great success by describing the universe in terms of symmetry preserving transformations. Inspired by this formalism, we propose a framework, built upon the theory of group representation, for learning representations of a dynamical environment structured around the transformations that generate its evolution. Experimentally, we learn the structure of explicitly symmetric environments without supervision from observational data generated by sequential interactions. We further introduce an intuitive disentanglement regularisation to ensure the interpretability of the learnt representations. We show that our method enables accurate long-horizon predictions, and demonstrate a correlation between the quality of predictions and disentanglement in the latent space.
Learning Linear Programs from Optimal Decisions
https://papers.nips.cc/paper_files/paper/2020/hash/e44e875c12109e4fa3716c05008048b2-Abstract.html
Yingcong Tan, Daria Terekhov, Andrew Delong
https://papers.nips.cc/paper_files/paper/2020/hash/e44e875c12109e4fa3716c05008048b2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e44e875c12109e4fa3716c05008048b2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11380-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e44e875c12109e4fa3716c05008048b2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e44e875c12109e4fa3716c05008048b2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e44e875c12109e4fa3716c05008048b2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e44e875c12109e4fa3716c05008048b2-Supplemental.pdf
We propose a flexible gradient-based framework for learning linear programs from optimal decisions. Linear programs are often specified by hand, using prior knowledge of relevant costs and constraints. In some applications, linear programs must instead be learned from observations of optimal decisions. Learning from optimal decisions is a particularly challenging bilevel problem, and much of the related inverse optimization literature is dedicated to special cases. We tackle the general problem, learning all parameters jointly while allowing flexible parameterizations of costs, constraints, and loss functions. We also address challenges specific to learning linear programs, such as empty feasible regions and non-unique optimal decisions. Experiments show that our method successfully learns synthetic linear programs and minimum-cost multi-commodity flow instances for which previous methods are not directly applicable. We also provide a fast batch-mode PyTorch implementation of the homogeneous interior point algorithm, which supports gradients by implicit differentiation or backpropagation.
Wisdom of the Ensemble: Improving Consistency of Deep Learning Models
https://papers.nips.cc/paper_files/paper/2020/hash/e464656edca5e58850f8cec98cbb979b-Abstract.html
Lijing Wang, Dipanjan Ghosh, Maria Gonzalez Diaz, Ahmed Farahat, Mahbubul Alam, Chetan Gupta, Jiangzhuo Chen, Madhav Marathe
https://papers.nips.cc/paper_files/paper/2020/hash/e464656edca5e58850f8cec98cbb979b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e464656edca5e58850f8cec98cbb979b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11381-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e464656edca5e58850f8cec98cbb979b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e464656edca5e58850f8cec98cbb979b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e464656edca5e58850f8cec98cbb979b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e464656edca5e58850f8cec98cbb979b-Supplemental.pdf
Deep learning classifiers are assisting humans in making decisions, and hence the user's trust in these models is of paramount importance. Trust is often a function of constant behavior. From an AI model perspective, this means that, given the same input, the user expects the same output, especially for correct outputs; in other words, consistently correct outputs. This paper studies model behavior in the context of periodic retraining of deployed models, where the outputs from successive generations of the models might not agree on the correct labels assigned to the same input. We formally define consistency and correct-consistency of a learning model. We prove that the consistency and correct-consistency of an ensemble learner are not less than the average consistency and correct-consistency of the individual learners, and that correct-consistency can be improved with a certain probability by combining learners whose accuracy is not less than the average accuracy of the ensemble's component learners. To validate the theory, we also propose an efficient dynamic snapshot ensemble method and demonstrate its value on three datasets with two state-of-the-art deep learning classifiers. Code for our algorithm is available at https://github.com/christa60/dynens.
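A small sketch (our own illustration, not the authors' code) of the two quantities the abstract defines: consistency, the fraction of inputs on which two model generations agree, and correct-consistency, the fraction on which they agree and are also correct.

```python
import numpy as np

def consistency(pred_old, pred_new):
    """Fraction of inputs on which two model generations agree."""
    return float(np.mean(pred_old == pred_new))

def correct_consistency(pred_old, pred_new, labels):
    """Fraction of inputs on which both generations agree AND are correct."""
    return float(np.mean((pred_old == pred_new) & (pred_new == labels)))

# toy example: predictions of a deployed model before and after retraining
labels   = np.array([0, 1, 1, 0, 2, 2, 1, 0])
pred_old = np.array([0, 1, 1, 0, 2, 1, 1, 1])
pred_new = np.array([0, 1, 0, 0, 2, 2, 1, 1])

print("consistency:", consistency(pred_old, pred_new))
print("correct-consistency:", correct_consistency(pred_old, pred_new, labels))
```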
Universal Function Approximation on Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/e4acb4c86de9d2d9a41364f93951028d-Abstract.html
Rickard Brüel Gabrielsson
https://papers.nips.cc/paper_files/paper/2020/hash/e4acb4c86de9d2d9a41364f93951028d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e4acb4c86de9d2d9a41364f93951028d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11382-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e4acb4c86de9d2d9a41364f93951028d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e4acb4c86de9d2d9a41364f93951028d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e4acb4c86de9d2d9a41364f93951028d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e4acb4c86de9d2d9a41364f93951028d-Supplemental.pdf
In this work we produce a framework for constructing universal function approximators on graph isomorphism classes. We prove how this framework comes with a collection of theoretically desirable properties and enables novel analysis. We show how this allows us to achieve state-of-the-art performance on four different well-known datasets in graph classification and separate classes of graphs that other graph-learning methods cannot. Our approach is inspired by persistent homology, dependency parsing for NLP, and multivalued functions. The complexity of the underlying algorithm is O(#edges x #nodes) and code is publicly available (https://github.com/bruel-gabrielsson/universal-function-approximation-on-graphs).
Accelerating Reinforcement Learning through GPU Atari Emulation
https://papers.nips.cc/paper_files/paper/2020/hash/e4d78a6b4d93e1d79241f7b282fa3413-Abstract.html
Steven Dalton, iuri frosio
https://papers.nips.cc/paper_files/paper/2020/hash/e4d78a6b4d93e1d79241f7b282fa3413-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11383-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e4d78a6b4d93e1d79241f7b282fa3413-Supplemental.pdf
We introduce CuLE (CUDA Learning Environment), a CUDA port of the Atari Learning Environment (ALE), which is used for the development of deep reinforcement learning algorithms. CuLE overcomes many limitations of existing CPU-based emulators and scales naturally to multiple GPUs. It leverages GPU parallelization to run thousands of games simultaneously, and it renders frames directly on the GPU to avoid the bottleneck arising from the limited CPU-GPU communication bandwidth. CuLE generates up to 155M frames per hour on a single GPU, a throughput previously achieved only with a cluster of CPUs. Beyond highlighting the differences between CPU and GPU emulators in the context of reinforcement learning, we show how to leverage the high throughput of CuLE by effective batching of the training data, and show accelerated convergence for A2C+V-trace. CuLE is available at https://github.com/NVlabs/cule.
EvolveGraph: Multi-Agent Trajectory Prediction with Dynamic Relational Reasoning
https://papers.nips.cc/paper_files/paper/2020/hash/e4d8163c7a068b65a64c89bd745ec360-Abstract.html
Jiachen Li, Fan Yang, Masayoshi Tomizuka, Chiho Choi
https://papers.nips.cc/paper_files/paper/2020/hash/e4d8163c7a068b65a64c89bd745ec360-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e4d8163c7a068b65a64c89bd745ec360-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11384-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e4d8163c7a068b65a64c89bd745ec360-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e4d8163c7a068b65a64c89bd745ec360-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e4d8163c7a068b65a64c89bd745ec360-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e4d8163c7a068b65a64c89bd745ec360-Supplemental.pdf
Multi-agent interacting systems are prevalent in the world, from purely physical systems to complicated social dynamic systems. In many applications, effective understanding of the situation and accurate trajectory prediction of interactive agents play a significant role in downstream tasks, such as decision making and planning. In this paper, we propose a generic trajectory forecasting framework (named EvolveGraph) with explicit relational structure recognition and prediction via latent interaction graphs among multiple heterogeneous, interactive agents. Considering the uncertainty of future behaviors, the model is designed to provide multi-modal prediction hypotheses. Since the underlying interactions may evolve even with abrupt changes, and different modalities of evolution may lead to different outcomes, we address the necessity of dynamic relational reasoning and adaptively evolving the interaction graphs. We also introduce a double-stage training pipeline which not only improves training efficiency and accelerates convergence, but also enhances model performance. The proposed framework is evaluated on both synthetic physics simulations and multiple real-world benchmark datasets in various areas. The experimental results illustrate that our approach achieves state-of-the-art performance in terms of prediction accuracy.
Comparator-Adaptive Convex Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/e4f37b9ed429c1fe5ce61860d9902521-Abstract.html
Dirk van der Hoeven, Ashok Cutkosky, Haipeng Luo
https://papers.nips.cc/paper_files/paper/2020/hash/e4f37b9ed429c1fe5ce61860d9902521-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e4f37b9ed429c1fe5ce61860d9902521-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11385-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e4f37b9ed429c1fe5ce61860d9902521-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e4f37b9ed429c1fe5ce61860d9902521-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e4f37b9ed429c1fe5ce61860d9902521-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e4f37b9ed429c1fe5ce61860d9902521-Supplemental.pdf
We study bandit convex optimization methods that adapt to the norm of the comparator, a topic that has only been studied before for its full-information counterpart. Specifically, we develop convex bandit algorithms with regret bounds that are small whenever the norm of the comparator is small. We first use techniques from the full-information setting to develop comparator-adaptive algorithms for linear bandits. Then, we extend the ideas to convex bandits with Lipschitz or smooth loss functions, using a new single-point gradient estimator and carefully designed surrogate losses.
Model-based Reinforcement Learning for Semi-Markov Decision Processes with Neural ODEs
https://papers.nips.cc/paper_files/paper/2020/hash/e562cd9c0768d5464b64cf61da7fc6bb-Abstract.html
Jianzhun Du, Joseph Futoma, Finale Doshi-Velez
https://papers.nips.cc/paper_files/paper/2020/hash/e562cd9c0768d5464b64cf61da7fc6bb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e562cd9c0768d5464b64cf61da7fc6bb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11386-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e562cd9c0768d5464b64cf61da7fc6bb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e562cd9c0768d5464b64cf61da7fc6bb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e562cd9c0768d5464b64cf61da7fc6bb-Supplemental.pdf
We present two elegant solutions for modeling continuous-time dynamics within a novel model-based reinforcement learning (RL) framework for semi-Markov decision processes (SMDPs), using neural ordinary differential equations (ODEs). Our models accurately characterize continuous-time dynamics and enable us to develop high-performing policies using a small amount of data. We also develop a model-based approach for optimizing time schedules to reduce interaction rates with the environment while maintaining near-optimal performance, which is not possible for model-free methods. We experimentally demonstrate the efficacy of our methods across various continuous-time domains.
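A generic sketch of the core ingredient: a neural ODE dynamics model that predicts the next state after an action is held for a continuous (possibly irregular) duration. It assumes the torchdiffeq package and is our simplified illustration, not the authors' exact SMDP models.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency

class LatentDynamics(nn.Module):
    """ds/dt = f_theta(s, a); the action is held fixed over the integration."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.action = None
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, t, s):
        return self.net(torch.cat([s, self.action], dim=-1))

def predict_next_state(dynamics, s, a, dt):
    """Integrate the ODE from 0 to dt and return the terminal state."""
    dynamics.action = a
    t = torch.tensor([0.0, float(dt)])
    return odeint(dynamics, s, t)[-1]

dyn = LatentDynamics(state_dim=4, action_dim=2)
s, a = torch.randn(1, 4), torch.randn(1, 2)
s_next = predict_next_state(dyn, s, a, dt=0.37)   # irregular decision interval
print(s_next.shape)
```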
The Adaptive Complexity of Maximizing a Gross Substitutes Valuation
https://papers.nips.cc/paper_files/paper/2020/hash/e56954b4f6347e897f954495eab16a88-Abstract.html
Ron Kupfer, Sharon Qian, Eric Balkanski, Yaron Singer
https://papers.nips.cc/paper_files/paper/2020/hash/e56954b4f6347e897f954495eab16a88-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e56954b4f6347e897f954495eab16a88-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11387-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e56954b4f6347e897f954495eab16a88-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e56954b4f6347e897f954495eab16a88-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e56954b4f6347e897f954495eab16a88-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e56954b4f6347e897f954495eab16a88-Supplemental.zip
In this paper, we study the adaptive complexity of maximizing a monotone gross substitutes function under a cardinality constraint. Our main result is an algorithm that achieves a 1-epsilon approximation in O(log n) adaptive rounds for any constant epsilon > 0, which is an exponential speedup in parallel running time compared to previously studied algorithms for gross substitutes functions. We show that the algorithmic results are tight in the sense that there is no algorithm that obtains a constant factor approximation in o(log n) rounds. Both the upper and lower bounds are under the assumption that queries are only on feasible sets (i.e., of size at most k). We also show that under a stronger model, where non-feasible queries are allowed, there is no non-adaptive algorithm that obtains an approximation better than 1/2 + epsilon. Both lower bounds extend to the class of OXS functions. Additionally, we conduct experiments on synthetic and real data sets to demonstrate the near-optimal performance and efficiency of the algorithm in practice.
A Robust Functional EM Algorithm for Incomplete Panel Count Data
https://papers.nips.cc/paper_files/paper/2020/hash/e56eea9a45b153de634b23780365f976-Abstract.html
Alexander Moreno, Zhenke Wu, Jamie Roslyn Yap, Cho Lam, David Wetter, Inbal Nahum-Shani, Walter Dempsey, James M. Rehg
https://papers.nips.cc/paper_files/paper/2020/hash/e56eea9a45b153de634b23780365f976-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e56eea9a45b153de634b23780365f976-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11388-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e56eea9a45b153de634b23780365f976-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e56eea9a45b153de634b23780365f976-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e56eea9a45b153de634b23780365f976-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e56eea9a45b153de634b23780365f976-Supplemental.pdf
Panel count data describes aggregated counts of recurrent events observed at discrete time points. To understand dynamics of health behaviors and predict future negative events, the field of quantitative behavioral research has evolved to increasingly rely upon panel count data collected via multiple self reports, for example, about frequencies of smoking using in-the-moment surveys on mobile devices. However, missing reports are common and present a major barrier to downstream statistical learning. As a first step, under a missing completely at random assumption (MCAR), we propose a simple yet widely applicable functional EM algorithm to estimate the counting process mean function, which is of central interest to behavioral scientists. The proposed approach wraps several popular panel count inference methods, seamlessly deals with incomplete counts and is robust to misspecification of the Poisson process assumption. Theoretical analysis of the proposed algorithm provides finite-sample guarantees by extending parametric EM theory to the general non-parametric setting. We illustrate the utility of the proposed algorithm through numerical experiments and an analysis of smoking cessation data. We also discuss useful extensions to address deviations from the MCAR assumption and covariate effects.
Graph Stochastic Neural Networks for Semi-supervised Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e586a4f55fb43a540c2e9dab45e00f53-Abstract.html
Haibo Wang, Chuan Zhou, Xin Chen, Jia Wu, Shirui Pan, Jilong Wang
https://papers.nips.cc/paper_files/paper/2020/hash/e586a4f55fb43a540c2e9dab45e00f53-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e586a4f55fb43a540c2e9dab45e00f53-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11389-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e586a4f55fb43a540c2e9dab45e00f53-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e586a4f55fb43a540c2e9dab45e00f53-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e586a4f55fb43a540c2e9dab45e00f53-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e586a4f55fb43a540c2e9dab45e00f53-Supplemental.pdf
Graph Neural Networks (GNNs) have achieved remarkable performance on the task of semi-supervised node classification. However, most existing models learn a deterministic classification function, which lacks the flexibility to explore better choices in the presence of various kinds of imperfect observed data, such as scarce labeled nodes and noisy graph structure. To overcome the rigidness and inflexibility of deterministic classification functions, this paper proposes a novel framework named Graph Stochastic Neural Networks (GSNN), which aims to model the uncertainty of the classification function by simultaneously learning a family of functions, i.e., a stochastic function. Specifically, we introduce a learnable graph neural network coupled with a high-dimensional latent variable to model the distribution of the classification function, and further adopt amortised variational inference to approximate the intractable joint posterior over the missing labels and the latent variable. By maximizing the lower bound of the likelihood of the observed node labels, the instantiated models can be trained effectively in an end-to-end manner. Extensive experiments on three real-world datasets show that GSNN achieves substantial performance gains in different scenarios compared with state-of-the-art baselines.
Compositional Zero-Shot Learning via Fine-Grained Dense Feature Composition
https://papers.nips.cc/paper_files/paper/2020/hash/e58cc5ca94270acaceed13bc82dfedf7-Abstract.html
Dat Huynh, Ehsan Elhamifar
https://papers.nips.cc/paper_files/paper/2020/hash/e58cc5ca94270acaceed13bc82dfedf7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e58cc5ca94270acaceed13bc82dfedf7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11390-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e58cc5ca94270acaceed13bc82dfedf7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e58cc5ca94270acaceed13bc82dfedf7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e58cc5ca94270acaceed13bc82dfedf7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e58cc5ca94270acaceed13bc82dfedf7-Supplemental.pdf
We develop a novel generative model for zero-shot learning to recognize fine-grained unseen classes without training samples. Our observation is that generating holistic features of unseen classes fails to capture every attribute needed to distinguish small differences among classes. We propose a feature composition framework that learns to extract attribute-based features from training samples and combines them to construct fine-grained features for unseen classes. Feature composition allows us to not only selectively compose features of unseen classes from only relevant training samples, but also obtain diversity among composed features via changing samples used for composition. In addition, instead of building a global feature of an unseen class, we use all attribute-based features to form a dense representation consisting of fine-grained attribute details. To recognize unseen classes, we propose a novel training scheme that uses a discriminative model to construct features that are subsequently used to train itself. Therefore, we directly train the discriminative model on composed features without learning separate generative models. We conduct experiments on four popular datasets of DeepFashion, AWA2, CUB, and SUN, showing that our method significantly improves the state of the art.
A Benchmark for Systematic Generalization in Grounded Language Understanding
https://papers.nips.cc/paper_files/paper/2020/hash/e5a90182cc81e12ab5e72d66e0b46fe3-Abstract.html
Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, Brenden M. Lake
https://papers.nips.cc/paper_files/paper/2020/hash/e5a90182cc81e12ab5e72d66e0b46fe3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e5a90182cc81e12ab5e72d66e0b46fe3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11391-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e5a90182cc81e12ab5e72d66e0b46fe3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e5a90182cc81e12ab5e72d66e0b46fe3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e5a90182cc81e12ab5e72d66e0b46fe3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e5a90182cc81e12ab5e72d66e0b46fe3-Supplemental.zip
Humans easily interpret expressions that describe unfamiliar situations composed from familiar parts ("greet the pink brontosaurus by the ferris wheel"). Modern neural networks, by contrast, struggle to interpret novel compositions. In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding. Going beyond a related benchmark that focused on syntactic aspects of generalization, gSCAN defines a language grounded in the states of a grid world, facilitating novel evaluations of acquiring linguistically motivated rules. For example, agents must understand how adjectives such as 'small' are interpreted relative to the current world state or how adverbs such as 'cautiously' combine with new verbs. We test a strong multi-modal baseline model and a state-of-the-art compositional method, finding that, in most cases, they fail dramatically when generalization requires systematic compositional rules.
Weston-Watkins Hinge Loss and Ordered Partitions
https://papers.nips.cc/paper_files/paper/2020/hash/e5e6851e7f7ffd3530e7389e183aa468-Abstract.html
Yutong Wang, Clayton Scott
https://papers.nips.cc/paper_files/paper/2020/hash/e5e6851e7f7ffd3530e7389e183aa468-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e5e6851e7f7ffd3530e7389e183aa468-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11392-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e5e6851e7f7ffd3530e7389e183aa468-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e5e6851e7f7ffd3530e7389e183aa468-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e5e6851e7f7ffd3530e7389e183aa468-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e5e6851e7f7ffd3530e7389e183aa468-Supplemental.pdf
Multiclass extensions of the support vector machine (SVM) have been formulated in a variety of ways. A recent empirical comparison of nine such formulations [Doǧan et al. 2016] recommends the variant proposed by Weston and Watkins (WW), despite the fact that the WW-hinge loss is not calibrated with respect to the 0-1 loss. In this work we introduce a novel discrete loss function for multiclass classification, the ordered partition loss, and prove that the WW-hinge loss is calibrated with respect to this loss. We also argue that the ordered partition loss is minimally emblematic among discrete losses satisfying this property. Finally, we apply our theory to justify the empirical observation made by Doǧan et al. that the WW-SVM can work well even under massive label noise, a challenging setting for multiclass SVMs.
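For reference, the Weston-Watkins hinge loss discussed here is L(y, f) = sum over j != y of max(0, 1 - (f_y - f_j)). A short PyTorch sketch (our illustration, not from the paper):

```python
import torch

def ww_hinge_loss(scores, targets):
    """Weston-Watkins hinge: scores (batch, num_classes), targets (batch,) labels."""
    batch = torch.arange(scores.size(0))
    margin = scores[batch, targets].unsqueeze(1) - scores   # f_y - f_j for every j
    loss = torch.clamp(1.0 - margin, min=0.0)               # hinge per competing class
    loss[batch, targets] = 0.0                               # exclude the true class j == y
    return loss.sum(dim=1).mean()

scores = torch.tensor([[2.0, 0.5, -1.0], [0.1, 0.3, 0.2]])
targets = torch.tensor([0, 2])
print(ww_hinge_loss(scores, targets))
```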
Reinforcement Learning with Augmented Data
https://papers.nips.cc/paper_files/paper/2020/hash/e615c82aba461681ade82da2da38004a-Abstract.html
Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, Aravind Srinivas
https://papers.nips.cc/paper_files/paper/2020/hash/e615c82aba461681ade82da2da38004a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e615c82aba461681ade82da2da38004a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11393-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e615c82aba461681ade82da2da38004a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e615c82aba461681ade82da2da38004a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e615c82aba461681ade82da2da38004a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e615c82aba461681ade82da2da38004a-Supplemental.zip
Learning from visual observations is a fundamental yet challenging problem in Reinforcement Learning (RL). Although algorithmic advances combined with convolutional neural networks have proved to be a recipe for success, current methods are still lacking on two fronts: (a) data-efficiency of learning and (b) generalization to new environments. To this end, we present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms. We perform the first extensive study of general data augmentations for RL on both pixel-based and state-based inputs, and introduce two new data augmentations - random translate and random amplitude scale. We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods across common benchmarks. RAD sets a new state-of-the-art in terms of data-efficiency and final performance on the DeepMind Control Suite benchmark for pixel-based control as well as OpenAI Gym benchmark for state-based control. We further demonstrate that RAD significantly improves test-time generalization over existing methods on several OpenAI ProcGen benchmarks.
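Below is a sketch of two augmentations in the spirit of those named in the abstract, applied as plug-and-play transforms on a batch before the RL update. It is our simplified illustration (a pad-and-crop style translation and a random amplitude scale), not the released RAD implementation.

```python
import torch
import torch.nn.functional as F

def random_amplitude_scale(states, low=0.6, high=1.4):
    """State-based input: multiply each sample by a random scalar amplitude."""
    scale = torch.empty(states.size(0), 1, device=states.device).uniform_(low, high)
    return states * scale

def random_translate(images, pad=4):
    """Pixel-based input: zero-pad, then crop back at a random offset per image."""
    n, c, h, w = images.shape
    padded = F.pad(images, (pad, pad, pad, pad))
    out = torch.empty_like(images)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

obs_pixels = torch.rand(8, 3, 84, 84)   # image observations
obs_state = torch.rand(8, 17)           # proprioceptive state observations
print(random_translate(obs_pixels).shape, random_amplitude_scale(obs_state).shape)
```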
Towards Minimax Optimal Reinforcement Learning in Factored Markov Decision Processes
https://papers.nips.cc/paper_files/paper/2020/hash/e61eaa38aed621dd776d0e67cfeee366-Abstract.html
Yi Tian, Jian Qian, Suvrit Sra
https://papers.nips.cc/paper_files/paper/2020/hash/e61eaa38aed621dd776d0e67cfeee366-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e61eaa38aed621dd776d0e67cfeee366-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11394-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e61eaa38aed621dd776d0e67cfeee366-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e61eaa38aed621dd776d0e67cfeee366-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e61eaa38aed621dd776d0e67cfeee366-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e61eaa38aed621dd776d0e67cfeee366-Supplemental.pdf
We study minimax optimal reinforcement learning in episodic factored Markov decision processes (FMDPs), which are MDPs with conditionally independent transition components. Assuming the factorization is known, we propose two model-based algorithms. The first one achieves minimax optimal regret guarantees for a rich class of factored structures, while the second one enjoys better computational complexity with a slightly worse regret. A key new ingredient of our algorithms is the design of a bonus term to guide exploration. We complement our algorithms by presenting several structure dependent lower bounds on regret for FMDPs that reveal the difficulty hiding in the intricacy of the structures.
Graduated Assignment for Joint Multi-Graph Matching and Clustering with Application to Unsupervised Graph Matching Network Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e6384711491713d29bc63fc5eeb5ba4f-Abstract.html
Runzhong Wang, Junchi Yan, Xiaokang Yang
https://papers.nips.cc/paper_files/paper/2020/hash/e6384711491713d29bc63fc5eeb5ba4f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e6384711491713d29bc63fc5eeb5ba4f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11395-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e6384711491713d29bc63fc5eeb5ba4f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e6384711491713d29bc63fc5eeb5ba4f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e6384711491713d29bc63fc5eeb5ba4f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e6384711491713d29bc63fc5eeb5ba4f-Supplemental.pdf
This paper considers the setting of jointly matching and clustering multiple graphs belonging to different groups, which naturally arises in many realistic problems. Both graph matching and clustering are challenging (NP-hard) and a joint solution is appealing due to the natural connection of the two tasks. In this paper, we resort to a graduated assignment procedure for soft matching and clustering over iterations, whereby the two-way constraint and clustering confidence are modulated by two separate annealing parameters, respectively. Our technique can be further utilized for end-to-end learning, where the loss is the cross-entropy between two lines of matching pipelines, so that the keypoint feature extraction CNNs can be learned without ground-truth supervision. Experimental results on real-world benchmarks show our method outperforms learning-free algorithms and performs comparably against two-graph-based supervised graph matching approaches.
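A hedged sketch of the generic graduated-assignment ingredient: turning a matching affinity matrix into a doubly-stochastic soft assignment with Sinkhorn normalization, sharpened as an annealing temperature decreases. This illustrates the general idea only, not the paper's joint matching-and-clustering algorithm.

```python
import torch

def sinkhorn(log_affinity, tau, n_iters=20):
    """Softmax with temperature tau, then alternating row/column normalization."""
    log_alpha = log_affinity / tau
    for _ in range(n_iters):
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=1, keepdim=True)
        log_alpha = log_alpha - torch.logsumexp(log_alpha, dim=0, keepdim=True)
    return log_alpha.exp()

affinity = torch.randn(5, 5)
for tau in [1.0, 0.5, 0.1, 0.02]:        # annealing: soft assignment -> near-permutation
    soft_assignment = sinkhorn(affinity, tau)
print(soft_assignment)
```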
Estimating Training Data Influence by Tracing Gradient Descent
https://papers.nips.cc/paper_files/paper/2020/hash/e6385d39ec9394f2f3a354d9d2b88eec-Abstract.html
Garima Pruthi, Frederick Liu, Satyen Kale, Mukund Sundararajan
https://papers.nips.cc/paper_files/paper/2020/hash/e6385d39ec9394f2f3a354d9d2b88eec-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11396-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e6385d39ec9394f2f3a354d9d2b88eec-Supplemental.pdf
We introduce a method called TracIn that computes the influence of a training example on a prediction made by the model. The idea is to trace how the loss on the test point changes during the training process whenever the training example of interest was utilized. We provide a scalable implementation of TracIn via: (a) a first-order gradient approximation to the exact computation, (b) saved checkpoints of standard training procedures, and (c) cherry-picking layers of a deep neural network. In contrast with previously proposed methods, TracIn is simple to implement; all it needs is the ability to work with gradients, checkpoints, and loss functions. The method is general. It applies to any machine learning model trained using stochastic gradient descent or a variant of it, agnostic of architecture, domain and task. We expect the method to be widely useful within processes that study and improve training data.
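A minimal sketch of the first-order TracIn approximation over saved checkpoints, influence(z_train, z_test) ~= sum_i eta_i * <grad loss(w_i, z_train), grad loss(w_i, z_test)>. This is our toy illustration; the released code adds batching and layer selection for scalability.

```python
import torch
import torch.nn as nn

def grad_vector(model, loss_fn, x, y):
    """Flattened gradient of the loss on a single example."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def tracin_influence(checkpoints, lrs, loss_fn, x_train, y_train, x_test, y_test):
    """Sum of learning-rate-weighted gradient dot products across checkpoints."""
    score = 0.0
    for model, eta in zip(checkpoints, lrs):
        g_train = grad_vector(model, loss_fn, x_train, y_train)
        g_test = grad_vector(model, loss_fn, x_test, y_test)
        score += eta * torch.dot(g_train, g_test).item()
    return score

# toy usage with two "checkpoints" of a small classifier
ckpts = [nn.Sequential(nn.Linear(4, 3)) for _ in range(2)]
x_tr, y_tr = torch.randn(1, 4), torch.tensor([1])
x_te, y_te = torch.randn(1, 4), torch.tensor([2])
print(tracin_influence(ckpts, [0.1, 0.05], nn.CrossEntropyLoss(), x_tr, y_tr, x_te, y_te))
```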
Joint Policy Search for Multi-agent Collaboration with Imperfect Information
https://papers.nips.cc/paper_files/paper/2020/hash/e64f346817ce0c93d7166546ac8ce683-Abstract.html
Yuandong Tian, Qucheng Gong, Yu Jiang
https://papers.nips.cc/paper_files/paper/2020/hash/e64f346817ce0c93d7166546ac8ce683-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e64f346817ce0c93d7166546ac8ce683-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11397-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e64f346817ce0c93d7166546ac8ce683-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e64f346817ce0c93d7166546ac8ce683-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e64f346817ce0c93d7166546ac8ce683-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e64f346817ce0c93d7166546ac8ce683-Supplemental.pdf
Learning good joint policies for multi-agent collaboration with incomplete information remains a fundamental challenge. While for two-player zero-sum games, coordinate-ascent approaches (optimizing one agent's policy at a time, e.g., self-play) work with guarantees, in the multi-agent cooperative setting they often converge to sub-optimal Nash equilibria. On the other hand, directly modeling joint policy changes in incomplete-information games is nontrivial due to the complicated interplay of policies (e.g., upstream updates affect downstream state reachability). In this paper, we show that global changes of game values can be decomposed into policy changes localized at each information set, with a novel term named \emph{policy-change density}. Based on this, we propose \emph{Joint Policy Search} (JPS) that iteratively improves joint policies of collaborative agents in incomplete-information games, without re-evaluating the entire game. On multiple collaborative tabular games, JPS is proven to never worsen performance and can improve solutions provided by unilateral approaches (e.g., CFR), outperforming algorithms designed for collaborative policy learning (e.g., BAD). Furthermore, for real-world games whose states are too many to enumerate, JPS has an online form that naturally links with gradient updates. We apply it to Contract Bridge, a 4-player imperfect-information game where a team of $2$ collaborates to compete against the other team. In its bidding phase, players bid in turn to find a good contract through a limited information channel. Based on a strong baseline agent that bids competitive bridge purely through domain-agnostic self-play, JPS improves collaboration of team players and outperforms WBridge5, a championship-winning software, by $+0.63$ IMPs (International Matching Points) per board over $1000$ games, substantially better than the previous SoTA ($+0.41$ IMPs/b against WBridge5). Note that $+0.1$ IMPs/b is regarded as a nontrivial improvement in Computer Bridge.
Adversarial Bandits with Corruptions: Regret Lower Bound and No-regret Algorithm
https://papers.nips.cc/paper_files/paper/2020/hash/e655c7716a4b3ea67f48c6322fc42ed6-Abstract.html
lin yang, Mohammad Hajiesmaili, Mohammad Sadegh Talebi, John C. S. Lui, Wing Shing Wong
https://papers.nips.cc/paper_files/paper/2020/hash/e655c7716a4b3ea67f48c6322fc42ed6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e655c7716a4b3ea67f48c6322fc42ed6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11398-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e655c7716a4b3ea67f48c6322fc42ed6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e655c7716a4b3ea67f48c6322fc42ed6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e655c7716a4b3ea67f48c6322fc42ed6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e655c7716a4b3ea67f48c6322fc42ed6-Supplemental.pdf
This paper studies adversarial bandits with corruptions. In the basic adversarial bandit setting, the rewards of the arms are predetermined by an adversary who is oblivious to the learner’s policy. In this paper, we consider an extended setting in which an attacker sits in-between the environment and the learner, and is endowed with a limited budget to corrupt the reward of the selected arm. We have two main results. First, we derive a lower bound on the regret of any bandit algorithm that is aware of the budget of the attacker. Also, for budget-agnostic algorithms, we characterize an impossibility result demonstrating that even when the attacker has a sublinear budget, i.e., a budget growing sublinearly with the time horizon T, such algorithms fail to achieve sublinear regret. Second, we propose ExpRb, a bandit algorithm that incorporates a biased estimator and a robustness parameter to deal with corruption. We characterize the regret of ExpRb as a function of the corruption budget and show that for the case of a known corruption budget, the regret of ExpRb is tight.
Beta R-CNN: Looking into Pedestrian Detection from Another Perspective
https://papers.nips.cc/paper_files/paper/2020/hash/e6b4b2a746ed40e1af829d1fa82daa10-Abstract.html
Zixuan Xu, Banghuai Li, Ye Yuan, Anhong Dang
https://papers.nips.cc/paper_files/paper/2020/hash/e6b4b2a746ed40e1af829d1fa82daa10-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e6b4b2a746ed40e1af829d1fa82daa10-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11399-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e6b4b2a746ed40e1af829d1fa82daa10-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e6b4b2a746ed40e1af829d1fa82daa10-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e6b4b2a746ed40e1af829d1fa82daa10-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e6b4b2a746ed40e1af829d1fa82daa10-Supplemental.pdf
Recently significant progress has been made in pedestrian detection, but it remains challenging to achieve high performance in occluded and crowded scenes. It could be mostly attributed to the widely used representation of pedestrians, i.e., 2D axis-aligned bounding box, which just describes the approximate location and size of the object. Bounding box models the object as a uniform distribution within the boundary, making pedestrians indistinguishable in occluded and crowded scenes due to much noise. To eliminate the problem, we propose a novel representation based on 2D beta distribution, named Beta Representation. It pictures a pedestrian by explicitly constructing the relationship between full-body and visible boxes, and emphasizes the center of visual mass by assigning different probability values to pixels. As a result, Beta Representation is much better for distinguishing highly-overlapped instances in crowded scenes with a new NMS strategy named BetaNMS. What’s more, to fully exploit Beta Representation, a novel pipeline Beta R-CNN equipped with BetaHead and BetaMask is proposed, leading to high detection performance in occluded and crowded scenes.
Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks
https://papers.nips.cc/paper_files/paper/2020/hash/e6b738eca0e6792ba8a9cbcba6c1881d-Abstract.html
Soham De, Sam Smith
https://papers.nips.cc/paper_files/paper/2020/hash/e6b738eca0e6792ba8a9cbcba6c1881d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e6b738eca0e6792ba8a9cbcba6c1881d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11400-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e6b738eca0e6792ba8a9cbcba6c1881d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e6b738eca0e6792ba8a9cbcba6c1881d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e6b738eca0e6792ba8a9cbcba6c1881d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e6b738eca0e6792ba8a9cbcba6c1881d-Supplemental.pdf
Batch normalization dramatically increases the largest trainable depth of residual networks, and this benefit has been crucial to the empirical success of deep residual networks on a wide range of benchmarks. We show that this key benefit arises because, at initialization, batch normalization downscales the residual branch relative to the skip connection, by a normalizing factor on the order of the square root of the network depth. This ensures that, early in training, the function computed by normalized residual blocks in deep networks is close to the identity function (on average). We use this insight to develop a simple initialization scheme that can train deep residual networks without normalization. We also provide a detailed empirical study of residual networks, which clarifies that, although batch normalized networks can be trained with larger learning rates, this effect is only beneficial in specific compute regimes, and has minimal benefits when the batch size is small.
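One simple way to realize the idea described in the abstract is to place a learnable scalar on the residual branch and initialize it at zero, so every block starts as the identity. The block below is our simplified, normalizer-free sketch; see the paper for the exact initialization scheme and the accompanying empirical study.

```python
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block whose branch is scaled by a learnable scalar, zero at init."""
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.alpha = nn.Parameter(torch.zeros(1))   # block equals the identity at init

    def forward(self, x):
        return x + self.alpha * self.branch(x)

block = ScaledResidualBlock(16)
x = torch.randn(2, 16, 8, 8)
assert torch.allclose(block(x), x)                  # identity at initialization
```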
Learning Retrospective Knowledge with Reverse Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/e6cbc650cd5798a05dfd0f51d14cde5c-Abstract.html
Shangtong Zhang, Vivek Veeriah, Shimon Whiteson
https://papers.nips.cc/paper_files/paper/2020/hash/e6cbc650cd5798a05dfd0f51d14cde5c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e6cbc650cd5798a05dfd0f51d14cde5c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11401-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e6cbc650cd5798a05dfd0f51d14cde5c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e6cbc650cd5798a05dfd0f51d14cde5c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e6cbc650cd5798a05dfd0f51d14cde5c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e6cbc650cd5798a05dfd0f51d14cde5c-Supplemental.pdf
We present a Reverse Reinforcement Learning (Reverse RL) approach for representing retrospective knowledge. General Value Functions (GVFs) have enjoyed great success in representing predictive knowledge, i.e., answering questions about possible future outcomes such as “how much fuel will be consumed in expectation if we drive from A to B?”. GVFs, however, cannot answer questions like “how much fuel do we expect a car to have given it is at B at time t?”. To answer this question, we need to know when that car had a full tank and how that car came to B. Since such questions emphasize the influence of possible past events on the present, we refer to their answers as retrospective knowledge. In this paper, we show how to represent retrospective knowledge with Reverse GVFs, which are trained via Reverse RL. We demonstrate empirically the utility of Reverse GVFs in both representation learning and anomaly detection.
Dialog without Dialog Data: Learning Visual Dialog Agents from VQA Data
https://papers.nips.cc/paper_files/paper/2020/hash/e7023ba77a45f7e84c5ee8a28dd63585-Abstract.html
Michael Cogswell, Jiasen Lu, Rishabh Jain, Stefan Lee, Devi Parikh, Dhruv Batra
https://papers.nips.cc/paper_files/paper/2020/hash/e7023ba77a45f7e84c5ee8a28dd63585-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e7023ba77a45f7e84c5ee8a28dd63585-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11402-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e7023ba77a45f7e84c5ee8a28dd63585-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e7023ba77a45f7e84c5ee8a28dd63585-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e7023ba77a45f7e84c5ee8a28dd63585-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e7023ba77a45f7e84c5ee8a28dd63585-Supplemental.pdf
Can we develop visually grounded dialog agents that can efficiently adapt to new tasks without forgetting how to talk to people? Such agents could leverage a larger variety of existing data to generalize to a new task, minimizing expensive data collection and annotation. In this work, we study a setting we call "Dialog without Dialog", which requires agents to develop visually grounded dialog models that can adapt to new tasks without language level supervision. By factorizing intention and language, our model minimizes linguistic drift after fine-tuning for new tasks. We present qualitative results, automated metrics, and human studies that all show our model can adapt to new tasks and maintain language quality. Baselines either fail to perform well at new tasks or experience language drift, becoming unintelligible to humans. Code has been made available at: https://github.com/mcogswell/dialogwithoutdialog.
GCOMB: Learning Budget-constrained Combinatorial Algorithms over Billion-sized Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/e7532dbeff7ef901f2e70daacb3f452d-Abstract.html
Sahil Manchanda, AKASH MITTAL, Anuj Dhawan, Sourav Medya, Sayan Ranu, Ambuj Singh
https://papers.nips.cc/paper_files/paper/2020/hash/e7532dbeff7ef901f2e70daacb3f452d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e7532dbeff7ef901f2e70daacb3f452d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11403-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e7532dbeff7ef901f2e70daacb3f452d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e7532dbeff7ef901f2e70daacb3f452d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e7532dbeff7ef901f2e70daacb3f452d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e7532dbeff7ef901f2e70daacb3f452d-Supplemental.pdf
There has been an increased interest in discovering heuristics for combinatorial problems on graphs through machine learning. While existing techniques have primarily focused on obtaining high-quality solutions, scalability to billion-sized graphs has not been adequately addressed. In addition, the impact of a budget constraint, which is necessary for many practical scenarios, remains to be studied. In this paper, we propose a framework called GCOMB to bridge these gaps. GCOMB trains a Graph Convolutional Network (GCN) using a novel probabilistic greedy mechanism to predict the quality of a node. To further address the combinatorial nature of the problem, GCOMB utilizes a Q-learning framework, which is made efficient through importance sampling. We perform extensive experiments on real graphs to benchmark the efficiency and efficacy of GCOMB. Our results establish that GCOMB is 100 times faster and marginally better in quality than state-of-the-art algorithms for learning combinatorial algorithms. Additionally, a case study on the practical combinatorial problem of Influence Maximization (IM) shows GCOMB is 150 times faster than the specialized IM algorithm IMM with similar quality.
A General Large Neighborhood Search Framework for Solving Integer Linear Programs
https://papers.nips.cc/paper_files/paper/2020/hash/e769e03a9d329b2e864b4bf4ff54ff39-Abstract.html
Jialin Song, ravi lanka, Yisong Yue, Bistra Dilkina
https://papers.nips.cc/paper_files/paper/2020/hash/e769e03a9d329b2e864b4bf4ff54ff39-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e769e03a9d329b2e864b4bf4ff54ff39-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11404-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e769e03a9d329b2e864b4bf4ff54ff39-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e769e03a9d329b2e864b4bf4ff54ff39-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e769e03a9d329b2e864b4bf4ff54ff39-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e769e03a9d329b2e864b4bf4ff54ff39-Supplemental.pdf
This paper studies how to design abstractions of large-scale combinatorial optimization problems that can leverage existing state-of-the-art solvers in general-purpose ways, and that are amenable to data-driven design. The goal is to arrive at new approaches that can reliably outperform existing solvers in wall-clock time. We focus on solving integer programs and ground our approach in the large neighborhood search (LNS) paradigm, which iteratively chooses a subset of variables to optimize while leaving the remainder fixed. The appeal of LNS is that it can easily use any existing solver as a subroutine, and thus can inherit the benefits of carefully engineered heuristic approaches and their software implementations. We also show that one can learn a good neighborhood selector from training data. Through an extensive empirical validation, we demonstrate that our LNS framework can significantly outperform state-of-the-art commercial solvers such as Gurobi in wall-clock time.
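A hedged sketch of the generic LNS loop this framework builds on: repeatedly pick a subset of integer variables to unfix, keep the rest fixed at the incumbent, and let an off-the-shelf ILP solver repair the subproblem. Here `solve_ilp` is a hypothetical wrapper around your solver of choice, and the neighborhood selector is random, whereas the paper learns it from data.

```python
import random

def large_neighborhood_search(variables, incumbent, incumbent_obj, solve_ilp,
                              destroy_fraction=0.2, n_rounds=50, seed=0):
    """variables: list of variable names; incumbent: dict var -> value.
    solve_ilp(fixed) -> (assignment, objective) or None; it optimizes only
    the variables absent from `fixed` (hypothetical solver interface)."""
    rng = random.Random(seed)
    best, best_obj = dict(incumbent), incumbent_obj
    for _ in range(n_rounds):
        k = max(1, int(destroy_fraction * len(variables)))
        free_vars = set(rng.sample(variables, k))          # "destroy" a neighborhood
        fixed = {v: best[v] for v in variables if v not in free_vars}
        result = solve_ilp(fixed)                          # re-optimize freed variables
        if result is not None:
            assignment, obj = result
            if obj < best_obj:                             # minimization: keep improvements
                best, best_obj = assignment, obj
    return best, best_obj
```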
A Theoretical Framework for Target Propagation
https://papers.nips.cc/paper_files/paper/2020/hash/e7a425c6ece20cbc9056f98699b53c6f-Abstract.html
Alexander Meulemans, Francesco Carzaniga, Johan Suykens, João Sacramento, Benjamin F. Grewe
https://papers.nips.cc/paper_files/paper/2020/hash/e7a425c6ece20cbc9056f98699b53c6f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e7a425c6ece20cbc9056f98699b53c6f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11405-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e7a425c6ece20cbc9056f98699b53c6f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e7a425c6ece20cbc9056f98699b53c6f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e7a425c6ece20cbc9056f98699b53c6f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e7a425c6ece20cbc9056f98699b53c6f-Supplemental.pdf
The success of deep learning, a brain-inspired form of AI, has sparked interest in understanding how the brain could similarly learn across multiple layers of neurons. However, the majority of biologically-plausible learning algorithms have not yet reached the performance of backpropagation (BP), nor are they built on strong theoretical foundations. Here, we analyze target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization. Our theory shows that TP is closely related to Gauss-Newton optimization and thus substantially differs from BP. Furthermore, our analysis reveals a fundamental limitation of difference target propagation (DTP), a well-known variant of TP, in the realistic scenario of non-invertible neural networks. We provide a first solution to this problem through a novel reconstruction loss that improves feedback weight training, while simultaneously introducing architectural flexibility by allowing for direct feedback connections from the output to each hidden layer. Our theory is corroborated by experimental results that show significant improvements in performance and in the alignment of forward weight updates with loss gradients, compared to DTP.
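For context, the baseline analyzed here is difference target propagation, whose target computation propagates targets backwards through learned feedback mappings g_l with a difference correction: h_hat_l = h_l + g_l(h_hat_{l+1}) - g_l(h_{l+1}). The sketch below illustrates that baseline only, not the paper's improved reconstruction loss or direct feedback connections.

```python
import torch
import torch.nn as nn

def dtp_targets(activations, output_target, feedback_nets):
    """activations: [h_1, ..., h_L]; feedback_nets[l] maps layer l+1 back to layer l."""
    targets = [None] * len(activations)
    targets[-1] = output_target                       # target for the output layer
    for l in range(len(activations) - 2, -1, -1):
        g = feedback_nets[l]
        targets[l] = activations[l] + g(targets[l + 1]) - g(activations[l + 1])
    return targets

dims = [8, 6, 4]                                      # hidden and output widths (toy)
acts = [torch.randn(1, d) for d in dims]
feedbacks = [nn.Linear(dims[l + 1], dims[l]) for l in range(len(dims) - 1)]
out_target = torch.randn(1, dims[-1])
print([t.shape for t in dtp_targets(acts, out_target, feedbacks)])
```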
OrganITE: Optimal transplant donor organ offering using an individual treatment effect
https://papers.nips.cc/paper_files/paper/2020/hash/e7c573c14a09b84f6b7782ce3965f335-Abstract.html
Jeroen Berrevoets, James Jordon, Ioana Bica, alexander gimson, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/e7c573c14a09b84f6b7782ce3965f335-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e7c573c14a09b84f6b7782ce3965f335-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11406-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e7c573c14a09b84f6b7782ce3965f335-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e7c573c14a09b84f6b7782ce3965f335-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e7c573c14a09b84f6b7782ce3965f335-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e7c573c14a09b84f6b7782ce3965f335-Supplemental.pdf
Transplant-organs are a scarce medical resource. The uniqueness of each organ and the patients' heterogeneous responses to the organs present a unique and challenging machine learning problem. In this problem there are two key challenges: (i) assigning each organ "optimally" to a patient in the queue; (ii) accurately estimating the potential outcomes associated with each patient and each possible organ. In this paper, we introduce OrganITE, an organ-to-patient assignment methodology that assigns organs based not only on its own estimates of the potential outcomes but also on organ scarcity. By modelling and accounting for organ scarcity we significantly increase total life years across the population, compared to the existing greedy approaches that simply optimise life years for the current organ available. Moreover, we propose an individualised treatment effect model capable of addressing the high dimensionality of the organ space. We test our method on real and simulated data, resulting in as much as an additional year of life expectancy as compared to existing organ-to-patient policies.
The Complete Lasso Tradeoff Diagram
https://papers.nips.cc/paper_files/paper/2020/hash/e7db14e12fb49c1d78a573e6e5f542c2-Abstract.html
Hua Wang, Yachong Yang, Zhiqi Bu, Weijie Su
https://papers.nips.cc/paper_files/paper/2020/hash/e7db14e12fb49c1d78a573e6e5f542c2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e7db14e12fb49c1d78a573e6e5f542c2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11407-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e7db14e12fb49c1d78a573e6e5f542c2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e7db14e12fb49c1d78a573e6e5f542c2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e7db14e12fb49c1d78a573e6e5f542c2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e7db14e12fb49c1d78a573e6e5f542c2-Supplemental.pdf
A fundamental problem in high-dimensional regression is to understand the tradeoff between type I and type II errors or, equivalently, false discovery rate (FDR) and power in variable selection. To address this important problem, we offer the first complete diagram that distinguishes all pairs of FDR and power that can be asymptotically realized by the Lasso from the remaining pairs, in a regime of linear sparsity under random designs. The tradeoff between the FDR and power characterized by our diagram holds no matter how strong the signals are. In particular, our results complete the earlier Lasso tradeoff diagram in previous literature by recognizing two simple constraints on the pairs of FDR and power. The improvement is more substantial when the regression problem is above the Donoho-Tanner phase transition. Finally, we present extensive simulation studies to confirm the sharpness of the complete Lasso tradeoff diagram.
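A small empirical sketch of the two quantities the tradeoff diagram concerns: for each point on a Lasso path, the false discovery proportion (selected nulls over selected variables) and the power (selected true signals over true signals). This is our illustration under a toy linear-sparsity design, not the paper's asymptotic analysis.

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 250, 500, 50                       # linear sparsity: k/p held fixed
X = rng.normal(size=(n, p)) / np.sqrt(n)
beta = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
beta[support] = 4.0
y = X @ beta + rng.normal(size=n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
is_signal = np.zeros(p, dtype=bool)
is_signal[support] = True

for j, alpha in enumerate(alphas):
    selected = coefs[:, j] != 0
    n_sel = selected.sum()
    fdp = (selected & ~is_signal).sum() / max(n_sel, 1)   # false discovery proportion
    power = (selected & is_signal).sum() / k              # fraction of signals found
    print(f"lambda={alpha:.4f}  selected={n_sel:3d}  FDP={fdp:.2f}  power={power:.2f}")
```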
On the universality of deep learning
https://papers.nips.cc/paper_files/paper/2020/hash/e7e8f8e5982b3298c8addedf6811d500-Abstract.html
Emmanuel Abbe, Colin Sandon
https://papers.nips.cc/paper_files/paper/2020/hash/e7e8f8e5982b3298c8addedf6811d500-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e7e8f8e5982b3298c8addedf6811d500-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11408-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e7e8f8e5982b3298c8addedf6811d500-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e7e8f8e5982b3298c8addedf6811d500-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e7e8f8e5982b3298c8addedf6811d500-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e7e8f8e5982b3298c8addedf6811d500-Supplemental.pdf
This paper shows that deep learning, i.e., neural networks trained by SGD, can learn in polytime any function class that can be learned in polytime by some algorithm, including parities. This universal result is further shown to be robust, i.e., it holds under possibly poly-noise on the gradients, which gives a separation between deep learning and statistical query algorithms, as the latter are not comparably universal due to cases like parities. This also shows that SGD-based deep learning does not suffer from the limitations of the perceptron discussed by Minsky-Papert '69. The paper further complements this result with a lower bound on the generalization error of descent algorithms, which implies in particular that the robust universality breaks down if the gradients are averaged over large enough batches of samples as in full-GD, rather than fewer samples as in SGD.
Regression with reject option and application to kNN
https://papers.nips.cc/paper_files/paper/2020/hash/e8219d4c93f6c55c6b10fe6bfe997c6c-Abstract.html
Ahmed Zaoui, Christophe Denis, Mohamed Hebiri
https://papers.nips.cc/paper_files/paper/2020/hash/e8219d4c93f6c55c6b10fe6bfe997c6c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e8219d4c93f6c55c6b10fe6bfe997c6c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11409-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e8219d4c93f6c55c6b10fe6bfe997c6c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e8219d4c93f6c55c6b10fe6bfe997c6c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e8219d4c93f6c55c6b10fe6bfe997c6c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e8219d4c93f6c55c6b10fe6bfe997c6c-Supplemental.pdf
We investigate the problem of regression where one is allowed to abstain from predicting. We refer to this framework as regression with reject option, an extension of classification with reject option. In this context, we focus on the case where the rejection rate is fixed and derive the optimal rule, which relies on thresholding the conditional variance function. We provide a semi-supervised estimation procedure of the optimal rule involving two datasets: a first labeled dataset is used to estimate both the regression function and the conditional variance function, while a second unlabeled dataset is exploited to calibrate the desired rejection rate. The resulting predictor with reject option is shown to be almost as good as the optimal predictor with reject option both in terms of risk and rejection rate. We additionally apply our methodology with the kNN algorithm and establish rates of convergence for the resulting kNN predictor under mild conditions. Finally, a numerical study is performed to illustrate the benefit of using the proposed procedure.
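The following sketch mirrors the semi-supervised procedure described in the abstract using kNN estimators: fit the regression function and the conditional variance on labeled data, then calibrate the variance threshold on unlabeled data so that a target fraction of points is rejected. It is our illustration, not the authors' code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
def sample(n):
    x = rng.uniform(-2, 2, size=(n, 1))
    noise_sd = 0.1 + np.abs(x[:, 0])              # heteroscedastic noise
    return x, np.sin(3 * x[:, 0]) + noise_sd * rng.normal(size=n)

x_lab, y_lab = sample(2000)                       # labeled set
x_unlab, _ = sample(2000)                         # unlabeled set (labels unused)

mean_hat = KNeighborsRegressor(n_neighbors=25).fit(x_lab, y_lab)
residual2 = (y_lab - mean_hat.predict(x_lab)) ** 2
var_hat = KNeighborsRegressor(n_neighbors=25).fit(x_lab, residual2)

reject_rate = 0.2                                 # desired rejection rate
threshold = np.quantile(var_hat.predict(x_unlab), 1 - reject_rate)

def predict_with_reject(x_new):
    """Predict, or abstain (None) where the estimated conditional variance is large."""
    keep = var_hat.predict(x_new) <= threshold
    preds = mean_hat.predict(x_new)
    return [p if k else None for p, k in zip(preds, keep)]

print(predict_with_reject(np.array([[0.0], [1.9]])))
```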
The Primal-Dual method for Learning Augmented Algorithms
https://papers.nips.cc/paper_files/paper/2020/hash/e834cb114d33f729dbc9c7fb0c6bb607-Abstract.html
Etienne Bamas, Andreas Maggiori, Ola Svensson
https://papers.nips.cc/paper_files/paper/2020/hash/e834cb114d33f729dbc9c7fb0c6bb607-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e834cb114d33f729dbc9c7fb0c6bb607-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11410-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e834cb114d33f729dbc9c7fb0c6bb607-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e834cb114d33f729dbc9c7fb0c6bb607-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e834cb114d33f729dbc9c7fb0c6bb607-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e834cb114d33f729dbc9c7fb0c6bb607-Supplemental.pdf
The extension of classical online algorithms when provided with predictions is a new and active research area. In this paper, we extend the primal-dual method for online algorithms in order to incorporate predictions that advise the online algorithm about the next action to take. We use this framework to obtain novel algorithms for a variety of online covering problems. We compare our algorithms to the cost of the true and predicted offline optimal solutions and show that these algorithms outperform any online algorithm when the prediction is accurate while maintaining good guarantees when the prediction is misleading.
FLAMBE: Structural Complexity and Representation Learning of Low Rank MDPs
https://papers.nips.cc/paper_files/paper/2020/hash/e894d787e2fd6c133af47140aa156f00-Abstract.html
Alekh Agarwal, Sham Kakade, Akshay Krishnamurthy, Wen Sun
https://papers.nips.cc/paper_files/paper/2020/hash/e894d787e2fd6c133af47140aa156f00-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e894d787e2fd6c133af47140aa156f00-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11411-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e894d787e2fd6c133af47140aa156f00-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e894d787e2fd6c133af47140aa156f00-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e894d787e2fd6c133af47140aa156f00-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e894d787e2fd6c133af47140aa156f00-Supplemental.pdf
In order to deal with the curse of dimensionality in reinforcement learning (RL), it is common practice to make parametric assumptions where values or policies are functions of some low dimensional feature space. This work focuses on the representation learning question: how can we learn such features? Under the assumption that the underlying (unknown) dynamics correspond to a low rank transition matrix, we show how the representation learning question is related to a particular non-linear matrix decomposition problem. Structurally, we make precise connections between these low rank MDPs and latent variable models, showing how they significantly generalize prior formulations, such as block MDPs, for representation learning in RL. Algorithmically, we develop FLAMBE, which engages in exploration and representation learning for provably efficient RL in low rank transition models. On a technical level, our analysis eliminates reachability assumptions that appear in prior results on the simpler block MDP model and may be of independent interest.
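For readers unfamiliar with the model class, the low-rank transition assumption the abstract refers to is the standard factorization shown below; the symbols φ, μ, d are the usual notation for low-rank MDPs rather than anything taken from this listing. Representation learning in this setting amounts to recovering the feature map φ (and μ) from data.

```latex
% Standard low-rank MDP factorization (background notation, not the paper's text):
% the unknown dynamics factor through d-dimensional feature maps phi and mu.
\[
  P(s' \mid s, a) \;=\; \big\langle \phi(s, a),\, \mu(s') \big\rangle
  \;=\; \sum_{i=1}^{d} \phi_i(s, a)\, \mu_i(s'),
  \qquad \phi(s, a) \in \mathbb{R}^d, \ \mu(s') \in \mathbb{R}^d .
\]
```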
A Class of Algorithms for General Instrumental Variable Models
https://papers.nips.cc/paper_files/paper/2020/hash/e8b1cbd05f6e6a358a81dee52493dd06-Abstract.html
Niki Kilbertus, Matt J. Kusner, Ricardo Silva
https://papers.nips.cc/paper_files/paper/2020/hash/e8b1cbd05f6e6a358a81dee52493dd06-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e8b1cbd05f6e6a358a81dee52493dd06-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11412-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e8b1cbd05f6e6a358a81dee52493dd06-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e8b1cbd05f6e6a358a81dee52493dd06-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e8b1cbd05f6e6a358a81dee52493dd06-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e8b1cbd05f6e6a358a81dee52493dd06-Supplemental.pdf
Causal treatment effect estimation is a key problem that arises in a variety of real-world settings, from personalized medicine to governmental policy making. There has been a flurry of recent work in machine learning on estimating causal effects when one has access to an instrument. However, to achieve identifiability, they in general require one-size-fits-all assumptions such as an additive error model for the outcome. An alternative is partial identification, which provides bounds on the causal effect. Little exists in terms of bounding methods that can deal with the most general case, where the treatment itself can be continuous. Moreover, bounding methods generally do not allow for a continuum of assumptions on the shape of the causal effect that can smoothly trade off stronger background knowledge for more informative bounds. In this work, we provide a method for causal effect bounding in continuous distributions, leveraging recent advances in gradient-based methods for the optimization of computationally intractable objective functions. We demonstrate on a set of synthetic and real-world data that our bounds capture the causal effect when additive methods fail, providing a useful range of answers compatible with observation as opposed to relying on unwarranted structural assumptions.
Black-Box Ripper: Copying black-box models using generative evolutionary algorithms
https://papers.nips.cc/paper_files/paper/2020/hash/e8d66338fab3727e34a9179ed8804f64-Abstract.html
Antonio Barbalau, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu
https://papers.nips.cc/paper_files/paper/2020/hash/e8d66338fab3727e34a9179ed8804f64-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e8d66338fab3727e34a9179ed8804f64-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11413-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e8d66338fab3727e34a9179ed8804f64-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e8d66338fab3727e34a9179ed8804f64-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e8d66338fab3727e34a9179ed8804f64-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e8d66338fab3727e34a9179ed8804f64-Supplemental.zip
We study the task of replicating the functionality of black-box neural models, for which we only know the output class probabilities provided for a set of input images. We assume back-propagation through the black-box model is not possible and its training images are not available, e.g. the model could be exposed only through an API. In this context, we present a teacher-student framework that can distill the black-box (teacher) model into a student model with minimal accuracy loss. To generate useful data samples for training the student, our framework (i) learns to generate images on a proxy data set (with images and classes different from those used to train the black-box) and (ii) applies an evolutionary strategy to make sure that each generated data sample exhibits a high response for a specific class when given as input to the black box. Our framework is compared with several baseline and state-of-the-art methods on three benchmark data sets. The empirical evidence indicates that our model is superior to the considered baselines. Although our method does not back-propagate through the black-box network, it generally surpasses state-of-the-art methods that regard the teacher as a glass-box model. Our code is available at: https://github.com/antoniobarbalau/black-box-ripper.
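A minimal sketch of the sample-generation idea, under the assumption of a pre-trained proxy generator and a probability-only black box (both replaced by stand-in stubs here): evolve latent codes with a simple truncation-selection strategy so that the teacher assigns high probability to a chosen class, then keep the surviving samples together with the teacher's soft labels as distillation data for the student. The selection scheme and hyperparameters below are illustrative and are not the authors' exact evolutionary algorithm.

```python
# Hedged, simplified sketch of evolving generator latents toward high teacher
# response for a target class; generator and teacher are stand-in stubs.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, N_CLASSES = 16, 10
W_G = rng.standard_normal((LATENT_DIM, 32))     # fixed stub generator weights
W_T = rng.standard_normal((32, N_CLASSES))      # fixed stub teacher weights

def generator(z):                 # stand-in for a generator trained on proxy data
    return np.tanh(z @ W_G)

def black_box_teacher(x):         # stand-in black box: class probabilities only
    logits = x @ W_T
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def evolve_samples(target_class, pop=64, elite=16, steps=30, sigma=0.3):
    z = rng.standard_normal((pop, LATENT_DIM))
    for _ in range(steps):
        fitness = black_box_teacher(generator(z))[:, target_class]
        parents = z[np.argsort(-fitness)[:elite]]                 # truncation selection
        offspring = parents[rng.integers(0, elite, pop - elite)]  # resample parents
        z = np.vstack([parents,
                       offspring + sigma * rng.standard_normal((pop - elite, LATENT_DIM))])
    x = generator(z)
    return x, black_box_teacher(x)    # samples plus soft labels for the student

x_student, soft_labels = evolve_samples(target_class=3)
```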
Bayesian Optimization of Risk Measures
https://papers.nips.cc/paper_files/paper/2020/hash/e8f2779682fd11fa2067beffc27a9192-Abstract.html
Sait Cakmak, Raul Astudillo Marban, Peter Frazier, Enlu Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/e8f2779682fd11fa2067beffc27a9192-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e8f2779682fd11fa2067beffc27a9192-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11414-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e8f2779682fd11fa2067beffc27a9192-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e8f2779682fd11fa2067beffc27a9192-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e8f2779682fd11fa2067beffc27a9192-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e8f2779682fd11fa2067beffc27a9192-Supplemental.pdf
We consider Bayesian optimization of objective functions of the form $\rho[ F(x, W) ]$, where $F$ is a black-box expensive-to-evaluate function and $\rho$ denotes either the VaR or CVaR risk measure, computed with respect to the randomness induced by the environmental random variable $W$. Such problems arise in decision making under uncertainty, such as in portfolio optimization and robust systems design. We propose a family of novel Bayesian optimization algorithms that exploit the structure of the objective function to substantially improve sampling efficiency. Instead of modeling the objective function directly as is typical in Bayesian optimization, these algorithms model $F$ as a Gaussian process, and use the implied posterior on the objective function to decide which points to evaluate. We demonstrate the effectiveness of our approach in a variety of numerical experiments.
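To make the objective concrete, the snippet below computes VaR and CVaR of $F(x, W)$ from Monte Carlo samples over the environmental variable $W$; such samples could, for instance, be drawn from a GP posterior on $F$, as the abstract describes. It illustrates only the quantity being optimized, not the paper's acquisition strategies; the toy $F$ and the variable names are my own.

```python
# Hedged illustration of the risk-measure objective rho[F(x, W)].
import numpy as np

def var_cvar(samples, alpha=0.9):
    """VaR_alpha and CVaR_alpha of a loss-like quantity from i.i.d. samples."""
    samples = np.asarray(samples, dtype=float)
    var = np.quantile(samples, alpha)        # value-at-risk: the alpha-quantile
    cvar = samples[samples >= var].mean()    # expected loss beyond the VaR
    return var, cvar

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)                  # environmental randomness W
f_of_x_w = (0.3 - w) ** 2                    # toy F(x, W) at a fixed design x
print(var_cvar(f_of_x_w, alpha=0.9))
```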
TorsionNet: A Reinforcement Learning Approach to Sequential Conformer Search
https://papers.nips.cc/paper_files/paper/2020/hash/e904831f48e729f9ad8355a894334700-Abstract.html
Tarun Gogineni, Ziping Xu, Exequiel Punzalan, Runxuan Jiang, Joshua Kammeraad, Ambuj Tewari, Paul Zimmerman
https://papers.nips.cc/paper_files/paper/2020/hash/e904831f48e729f9ad8355a894334700-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e904831f48e729f9ad8355a894334700-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11415-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e904831f48e729f9ad8355a894334700-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e904831f48e729f9ad8355a894334700-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e904831f48e729f9ad8355a894334700-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e904831f48e729f9ad8355a894334700-Supplemental.pdf
Molecular geometry prediction of flexible molecules, or conformer search, is a long-standing challenge in computational chemistry. This task is of great importance for predicting structure-activity relationships for a wide variety of substances ranging from biomolecules to ubiquitous materials. Substantial computational resources are invested in Monte Carlo and Molecular Dynamics methods to generate diverse and representative conformer sets for medium to large molecules, which remain intractable for chemoinformatic conformer search methods. We present TorsionNet, an efficient sequential conformer search technique based on reinforcement learning under the rigid rotor approximation. The model is trained via curriculum learning, whose theoretical benefit is explored in detail, to maximize a novel metric grounded in thermodynamics called the Gibbs Score. Our experimental results show that TorsionNet outperforms the highest-scoring chemoinformatics method by 4x on large branched alkanes, and by several orders of magnitude on the previously unexplored biopolymer lignin, with applications in renewable energy. TorsionNet also outperforms the far more exhaustive but computationally intensive Self-Guided Molecular Dynamics sampling method.
GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis
https://papers.nips.cc/paper_files/paper/2020/hash/e92e1b476bb5262d793fd40931e0ed53-Abstract.html
Katja Schwarz, Yiyi Liao, Michael Niemeyer, Andreas Geiger
https://papers.nips.cc/paper_files/paper/2020/hash/e92e1b476bb5262d793fd40931e0ed53-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e92e1b476bb5262d793fd40931e0ed53-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11416-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e92e1b476bb5262d793fd40931e0ed53-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e92e1b476bb5262d793fd40931e0ed53-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e92e1b476bb5262d793fd40931e0ed53-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e92e1b476bb5262d793fd40931e0ed53-Supplemental.zip
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
PIE-NET: Parametric Inference of Point Cloud Edges
https://papers.nips.cc/paper_files/paper/2020/hash/e94550c93cd70fe748e6982b3439ad3b-Abstract.html
Xiaogang Wang, Yuelang Xu, Kai Xu, Andrea Tagliasacchi, Bin Zhou, Ali Mahdavi-Amiri, Hao Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/e94550c93cd70fe748e6982b3439ad3b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e94550c93cd70fe748e6982b3439ad3b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11417-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e94550c93cd70fe748e6982b3439ad3b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e94550c93cd70fe748e6982b3439ad3b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e94550c93cd70fe748e6982b3439ad3b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e94550c93cd70fe748e6982b3439ad3b-Supplemental.zip
We introduce an end-to-end learnable technique to robustly identify feature edges in 3D point cloud data. We represent these edges as a collection of parametric curves (i.e., lines, circles, and B-splines). Accordingly, our deep neural network, coined PIE-NET, is trained for parametric inference of edges. The network relies on a "region proposal" architecture, where a first module proposes an over-complete collection of edge and corner points, and a second module ranks each proposal to decide whether it should be considered. We train and evaluate our method on the ABC dataset, a large dataset of CAD models, and compare our results to those produced by traditional (non-learning) processing pipelines, as well as a recent deep learning based edge detector (EC-NET). Our results significantly improve over the state-of-the-art from both a quantitative and qualitative standpoint.
A Simple Language Model for Task-Oriented Dialogue
https://papers.nips.cc/paper_files/paper/2020/hash/e946209592563be0f01c844ab2170f0c-Abstract.html
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, Richard Socher
https://papers.nips.cc/paper_files/paper/2020/hash/e946209592563be0f01c844ab2170f0c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e946209592563be0f01c844ab2170f0c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11418-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e946209592563be0f01c844ab2170f0c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e946209592563be0f01c844ab2170f0c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e946209592563be0f01c844ab2170f0c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e946209592563be0f01c844ab2170f0c-Supplemental.zip
Task-oriented dialogue is often decomposed into three tasks: understanding user input, deciding actions, and generating a response. While such decomposition might suggest a dedicated model for each sub-task, we find a simple, unified approach leads to state-of-the-art performance on the MultiWOZ dataset. SimpleTOD is a simple approach to task-oriented dialogue that uses a single, causal language model trained on all sub-tasks recast as a single sequence prediction problem. This allows SimpleTOD to fully leverage transfer learning from pre-trained, open domain, causal language models such as GPT-2. SimpleTOD improves over the prior state-of-the-art in joint goal accuracy for dialogue state tracking, and our analysis reveals robustness to noisy annotations in this setting. SimpleTOD also improves the main metrics used to evaluate action decisions and response generation in an end-to-end setting: inform rate by 8.1 points, success rate by 9.7 points, and combined score by 7.2 points.
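The single-sequence recast can be pictured as follows: dialogue context, belief state, system actions, and response are serialized into one string on which a causal LM is trained left to right. In the hypothetical sketch below, the delimiter tokens and slot formatting are placeholders of my own and are not necessarily SimpleTOD's exact special tokens.

```python
# Hedged sketch of recasting the sub-tasks as one training sequence for a
# causal LM; the delimiter tokens here are illustrative placeholders.
def to_training_sequence(context_turns, belief_state, actions, response):
    context = " ".join(context_turns)
    belief = ", ".join(f"{dom} {slot} {val}" for dom, slot, val in belief_state)
    acts = ", ".join(actions)
    return (f"<context> {context} <belief> {belief} "
            f"<action> {acts} <response> {response}")

seq = to_training_sequence(
    context_turns=["user: i need a cheap hotel in the north"],
    belief_state=[("hotel", "pricerange", "cheap"), ("hotel", "area", "north")],
    actions=["hotel-inform name", "hotel-offer-booking"],
    response="the lovell lodge is a cheap hotel in the north . shall i book it ?",
)
print(seq)
```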
A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval
https://papers.nips.cc/paper_files/paper/2020/hash/e9470886ecab9743fb7ea59420c245d2-Abstract.html
Fan Wu, Patrick Rebeschini
https://papers.nips.cc/paper_files/paper/2020/hash/e9470886ecab9743fb7ea59420c245d2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e9470886ecab9743fb7ea59420c245d2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11419-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e9470886ecab9743fb7ea59420c245d2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e9470886ecab9743fb7ea59420c245d2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e9470886ecab9743fb7ea59420c245d2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e9470886ecab9743fb7ea59420c245d2-Supplemental.zip
We analyze continuous-time mirror descent applied to sparse phase retrieval, which is the problem of recovering sparse signals from a set of magnitude-only measurements. We apply mirror descent to the unconstrained empirical risk minimization problem (batch setting), using the square loss and square measurements. We provide a full convergence analysis of the algorithm in this non-convex setting and prove that, with the hypentropy mirror map, mirror descent recovers any $k$-sparse vector $\mathbf{x}^\star\in\mathbb{R}^n$ with minimum (in modulus) non-zero entry on the order of $\| \mathbf{x}^\star \|_2/\sqrt{k}$ from $k^2$ Gaussian measurements, modulo logarithmic terms. This yields a simple algorithm which, unlike most existing approaches to sparse phase retrieval, adapts to the sparsity level, without including thresholding steps or adding regularization terms. Our results also provide a principled theoretical understanding for Hadamard Wirtinger flow [54], as Euclidean gradient descent applied to the empirical risk problem with Hadamard parametrization can be recovered as a first-order approximation to mirror descent in discrete time.
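Spelled out, the dynamic being analyzed is continuous-time mirror descent with the hypentropy mirror map. The display below writes it in the standard notation for that map (β > 0 is the map's scale parameter and L is the empirical risk built from the square loss and squared measurements); this is background material rather than a statement taken from the paper itself.

```latex
% Continuous-time mirror descent with the hypentropy mirror map Phi_beta
% (standard background notation, not the paper's own display):
\[
  \Phi_\beta(x) \;=\; \sum_{i=1}^{n} \Big( x_i \,\operatorname{arcsinh}\!\big(\tfrac{x_i}{\beta}\big) \;-\; \sqrt{x_i^2 + \beta^2} \Big),
  \qquad
  \frac{\mathrm{d}}{\mathrm{d}t}\,\nabla \Phi_\beta\big(x(t)\big) \;=\; -\,\nabla L\big(x(t)\big).
\]
% Since grad Phi_beta(x) = arcsinh(x / beta) coordinate-wise, this is the flow
\[
  \dot{x}_i(t) \;=\; -\,\sqrt{x_i(t)^2 + \beta^2}\;\partial_i L\big(x(t)\big), \qquad i = 1, \dots, n .
\]
```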
Confidence sequences for sampling without replacement
https://papers.nips.cc/paper_files/paper/2020/hash/e96c7de8f6390b1e6c71556e4e0a4959-Abstract.html
Ian Waudby-Smith, Aaditya Ramdas
https://papers.nips.cc/paper_files/paper/2020/hash/e96c7de8f6390b1e6c71556e4e0a4959-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e96c7de8f6390b1e6c71556e4e0a4959-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11420-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e96c7de8f6390b1e6c71556e4e0a4959-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e96c7de8f6390b1e6c71556e4e0a4959-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e96c7de8f6390b1e6c71556e4e0a4959-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e96c7de8f6390b1e6c71556e4e0a4959-Supplemental.pdf
Many practical tasks involve sampling sequentially without replacement (WoR) from a finite population of size $N$, in an attempt to estimate some parameter $\theta^\star$. Accurately quantifying uncertainty throughout this process is a nontrivial task, but is necessary because it often determines when we stop collecting samples and confidently report a result. We present a suite of tools for designing confidence sequences (CS) for $\theta^\star$. A CS is a sequence of confidence sets $(C_n)_{n=1}^N$ that shrink in size and all contain $\theta^\star$ simultaneously with high probability. We first exploit a relationship between Bayesian posteriors and martingales to construct a (frequentist) CS for the parameters of a hypergeometric distribution. We then present Hoeffding- and empirical-Bernstein-type time-uniform CSs and fixed-time confidence intervals for sampling WoR, which improve on previous bounds in the literature.
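For orientation, the snippet below computes the classical fixed-time Hoeffding-Serfling interval for the mean of a bounded finite population sampled WoR, i.e., one of the baseline bounds this line of work improves on; it is not the paper's time-uniform confidence sequence. The finite-population factor 1 - (n-1)/N follows Serfling's (1974) inequality, and the function and variable names are my own.

```python
# Hedged sketch of a classical Hoeffding-Serfling fixed-time interval for
# sampling without replacement (a baseline, not the paper's improved CS).
import numpy as np

def hoeffding_serfling_ci(samples, N, lo=0.0, hi=1.0, delta=0.05):
    """Two-sided CI for the mean of a size-N population with values in [lo, hi],
    based on n draws made without replacement."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    shrink = 1.0 - (n - 1) / N               # finite-population correction
    half_width = (hi - lo) * np.sqrt(shrink * np.log(2.0 / delta) / (2.0 * n))
    return x.mean() - half_width, x.mean() + half_width

rng = np.random.default_rng(0)
population = rng.uniform(size=1000)           # values in [0, 1]
sample = rng.choice(population, size=200, replace=False)
print(hoeffding_serfling_ci(sample, N=1000))
```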
A mean-field analysis of two-player zero-sum games
https://papers.nips.cc/paper_files/paper/2020/hash/e97c864e8ac67f7aed5ce53ec28638f5-Abstract.html
Carles Domingo-Enrich, Samy Jelassi, Arthur Mensch, Grant Rotskoff, Joan Bruna
https://papers.nips.cc/paper_files/paper/2020/hash/e97c864e8ac67f7aed5ce53ec28638f5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e97c864e8ac67f7aed5ce53ec28638f5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11421-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e97c864e8ac67f7aed5ce53ec28638f5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e97c864e8ac67f7aed5ce53ec28638f5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e97c864e8ac67f7aed5ce53ec28638f5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e97c864e8ac67f7aed5ce53ec28638f5-Supplemental.zip
Finding Nash equilibria in two-player zero-sum continuous games is a central problem in machine learning, e.g. for training both GANs and robust models. The existence of pure Nash equilibria requires strong conditions which are not typically met in practice. Mixed Nash equilibria exist in greater generality and may be found using mirror descent. Yet this approach does not scale to high dimensions. To address this limitation, we parametrize mixed strategies as mixtures of particles, whose positions and weights are updated using gradient descent-ascent. We study this dynamics as an interacting gradient flow over measure spaces endowed with the Wasserstein-Fisher-Rao metric. We establish global convergence to an approximate equilibrium for the related Langevin gradient-ascent dynamic. We prove a law of large numbers that relates particle dynamics to mean-field dynamics. Our method identifies mixed equilibria in high dimensions and is demonstrably effective for training mixtures of GANs.
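A hedged, simplified sketch of the particle parametrization described above: each player's mixed strategy is a weighted set of particles; positions follow gradient descent-ascent on the expected payoff (the Wasserstein part of the geometry) and weights are updated multiplicatively (the Fisher-Rao part). The toy payoff, step sizes, and particle counts are mine, and this is not the paper's Langevin variant.

```python
# Particle-based gradient descent-ascent over mixed strategies (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
# One-dimensional payoff f(x, y); the x-player minimizes, the y-player maximizes.
f  = lambda x, y:  np.sin(3.0 * (x[:, None] - y[None, :]))
fx = lambda x, y:  3.0 * np.cos(3.0 * (x[:, None] - y[None, :]))   # d f / d x
fy = lambda x, y: -3.0 * np.cos(3.0 * (x[:, None] - y[None, :]))   # d f / d y

n = 50                                                  # particles per player
x, y = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)     # particle positions
w, v = np.full(n, 1.0 / n), np.full(n, 1.0 / n)         # particle weights
eta_pos, eta_w = 0.05, 0.5

for _ in range(500):
    F = f(x, y)                                         # payoff matrix f(x_i, y_j)
    dx, dy = fx(x, y) @ v, fy(x, y).T @ w               # position gradients
    gx, gy = F @ v, F.T @ w                             # per-particle expected payoffs
    x -= eta_pos * dx                                   # Wasserstein (position) descent
    y += eta_pos * dy                                   # ... and ascent
    w *= np.exp(-eta_w * (gx - w @ gx)); w /= w.sum()   # Fisher-Rao (weight) updates
    v *= np.exp(+eta_w * (gy - v @ gy)); v /= v.sum()

print("value of the resulting mixed-strategy profile:", w @ f(x, y) @ v)
```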
Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge
https://papers.nips.cc/paper_files/paper/2020/hash/e992111e4ab9985366e806733383bd8c-Abstract.html
Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant
https://papers.nips.cc/paper_files/paper/2020/hash/e992111e4ab9985366e806733383bd8c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e992111e4ab9985366e806733383bd8c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11422-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e992111e4ab9985366e806733383bd8c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e992111e4ab9985366e806733383bd8c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e992111e4ab9985366e806733383bd8c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e992111e4ab9985366e806733383bd8c-Supplemental.zip
To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent reasoning over explicit symbolic facts, under a "closed-world" assumption. However, in an open-domain setup, it is desirable to tap into the vast reservoir of implicit knowledge already encoded in the parameters of pre-trained LMs. In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements. To do this, we describe a procedure for automatically generating datasets that teach a model new reasoning skills, and demonstrate that models learn to effectively perform inference which involves implicit taxonomic and world knowledge, chaining and counting. Finally, we show that "teaching" models to reason generalizes beyond the training distribution: they successfully compose the usage of multiple reasoning skills in single examples. Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
Pipeline PSRO: A Scalable Approach for Finding Approximate Nash Equilibria in Large Games
https://papers.nips.cc/paper_files/paper/2020/hash/e9bcd1b063077573285ae1a41025f5dc-Abstract.html
Stephen Mcaleer, JB Lanier, Roy Fox, Pierre Baldi
https://papers.nips.cc/paper_files/paper/2020/hash/e9bcd1b063077573285ae1a41025f5dc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e9bcd1b063077573285ae1a41025f5dc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11423-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e9bcd1b063077573285ae1a41025f5dc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e9bcd1b063077573285ae1a41025f5dc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e9bcd1b063077573285ae1a41025f5dc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e9bcd1b063077573285ae1a41025f5dc-Supplemental.pdf
Finding approximate Nash equilibria in zero-sum imperfect-information games is challenging when the number of information states is large. Policy Space Response Oracles (PSRO) is a deep reinforcement learning algorithm grounded in game theory that is guaranteed to converge to an approximate Nash equilibrium. However, PSRO requires training a reinforcement learning policy at each iteration, making it too slow for large games. We show through counterexamples and experiments that DCH and Rectified PSRO, two existing approaches to scaling up PSRO, fail to converge even in small games. We introduce Pipeline PSRO (P2SRO), the first scalable PSRO-based method for finding approximate Nash equilibria in large zero-sum imperfect-information games. P2SRO is able to parallelize PSRO with convergence guarantees by maintaining a hierarchical pipeline of reinforcement learning workers, each training against the policies generated by lower levels in the hierarchy. We show that unlike existing methods, P2SRO converges to an approximate Nash equilibrium, and does so faster as the number of parallel workers increases, across a variety of imperfect information games. We also introduce an open-source environment for Barrage Stratego, a variant of Stratego with an approximate game tree complexity of 10^50. P2SRO is able to achieve state-of-the-art performance on Barrage Stratego and beats all existing bots. Experiment code is available at https://github.com/JBLanier/pipeline-psro.
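To make the population / meta-game / best-response structure concrete, here is a hedged sketch of the plain (non-pipelined) PSRO loop on a small zero-sum matrix game. The paper's contribution, P2SRO, parallelizes this loop with a hierarchy of reinforcement-learning workers; none of that (nor any RL) is shown here, and the fictitious-play meta-solver and toy game are my own choices.

```python
# Plain PSRO loop on a random zero-sum matrix game (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(-1.0, 1.0, (30, 30))      # row player's payoff; column player gets -A

def meta_nash(M, iters=2000):
    """Approximate Nash of the restricted zero-sum game M via fictitious play."""
    r, c = np.ones(M.shape[0]), np.ones(M.shape[1])
    for _ in range(iters):
        r[np.argmax(M @ (c / c.sum()))] += 1      # row best-responds to column average
        c[np.argmin((r / r.sum()) @ M)] += 1      # column best-responds to row average
    return r / r.sum(), c / c.sum()

row_pop, col_pop = [0], [0]                       # populations of pure strategies
for _ in range(10):
    p, q = meta_nash(A[np.ix_(row_pop, col_pop)])
    # Exact best responses to the opponent's meta-strategy, over the full game.
    br_row = int(np.argmax(A[:, col_pop] @ q))
    br_col = int(np.argmin(p @ A[row_pop, :]))
    row_pop.append(br_row)
    col_pop.append(br_col)

M = A[np.ix_(row_pop, col_pop)]
p, q = meta_nash(M)
print("restricted-game value after 10 PSRO iterations:", p @ M @ q)
```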
Improving Sparse Vector Technique with Renyi Differential Privacy
https://papers.nips.cc/paper_files/paper/2020/hash/e9bf14a419d77534105016f5ec122d62-Abstract.html
Yuqing Zhu, Yu-Xiang Wang
https://papers.nips.cc/paper_files/paper/2020/hash/e9bf14a419d77534105016f5ec122d62-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/e9bf14a419d77534105016f5ec122d62-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/11424-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/e9bf14a419d77534105016f5ec122d62-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/e9bf14a419d77534105016f5ec122d62-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/e9bf14a419d77534105016f5ec122d62-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/e9bf14a419d77534105016f5ec122d62-Supplemental.pdf
The Sparse Vector Technique (SVT) is one of the most fundamental algorithmic tools in differential privacy (DP). It also plays a central role in the state-of-the-art algorithms for adaptive data analysis and model-agnostic private learning. In this paper, we revisit SVT from the lens of Renyi differential privacy, which results in new privacy bounds, new theoretical insight and new variants of SVT algorithms. A notable example is a Gaussian mechanism version of SVT, which provides better utility over the standard (Laplace-mechanism-based) version thanks to its more concentrated noise and tighter composition. Extensive empirical evaluation demonstrates the merits of Gaussian SVT over the Laplace SVT and other alternatives, which encouragingly suggests that using Gaussian SVT as a drop-in replacement could make SVT-based algorithms practical in downstream tasks.
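A hedged sketch of the Gaussian-noise variant of SVT discussed above: a Gaussian-perturbed threshold plus Gaussian-perturbed query answers, halting after c "above threshold" reports. The noise scales are left as free parameters; calibrating them to a concrete Renyi-DP guarantee is exactly what the paper analyzes and is not reproduced here, and the function and parameter names are my own.

```python
# Structure of a Gaussian-mechanism Sparse Vector Technique (illustrative only;
# sigma_t and sigma_q are NOT calibrated to any privacy guarantee here).
import numpy as np

def gaussian_svt(query_answers, threshold, c, sigma_t, sigma_q, seed=0):
    """query_answers: exact answers q_i(D) of sensitivity-1 queries, in order."""
    rng = np.random.default_rng(seed)
    noisy_threshold = threshold + rng.normal(scale=sigma_t)
    outputs, n_above = [], 0
    for q in query_answers:
        if q + rng.normal(scale=sigma_q) >= noisy_threshold:
            outputs.append("above")
            n_above += 1
            if n_above == c:          # budget of c positive reports, then halt
                break
        else:
            outputs.append("below")
    return outputs

print(gaussian_svt([3.0, 1.0, 7.0, 2.0, 9.0], threshold=5.0, c=1,
                   sigma_t=1.0, sigma_q=2.0))
```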