title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract |
---|---|---|---|---|---|---|---|---|---|---|---|
Improved Analysis of Clipping Algorithms for Non-convex Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/b282d1735283e8eea45bce393cefe265-Abstract.html | Bohang Zhang, Jikai Jin, Cong Fang, Liwei Wang | https://papers.nips.cc/paper_files/paper/2020/hash/b282d1735283e8eea45bce393cefe265-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b282d1735283e8eea45bce393cefe265-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11025-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b282d1735283e8eea45bce393cefe265-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b282d1735283e8eea45bce393cefe265-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b282d1735283e8eea45bce393cefe265-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b282d1735283e8eea45bce393cefe265-Supplemental.zip | Gradient clipping is commonly used in training deep neural networks partly due to its practicability in relieving the exploding gradient problem. Recently, \citet{zhang2019gradient} show that clipped (stochastic) Gradient Descent (GD) converges faster than vanilla GD by introducing a new assumption called $(L_0, L_1)$-smoothness, which characterizes the violent fluctuation of gradients typically encountered in deep neural networks. However, their iteration complexities on the problem-dependent parameters are rather pessimistic, and theoretical justification of clipping combined with other crucial techniques, e.g. momentum acceleration, is still lacking. In this paper, we bridge the gap by presenting a general framework to study the clipping algorithms, which also takes momentum methods into consideration. We provide convergence analysis of the framework in both deterministic and stochastic settings, and demonstrate the tightness of our results by comparing them with existing lower bounds. Our results imply that the efficiency of clipping methods will not degenerate even in highly non-smooth regions of the landscape. Experiments confirm the superiority of clipping-based methods in deep learning tasks. |
Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs | https://papers.nips.cc/paper_files/paper/2020/hash/b2ea5e977c5fc1ccfa74171a9723dd61-Abstract.html | Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/b2ea5e977c5fc1ccfa74171a9723dd61-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b2ea5e977c5fc1ccfa74171a9723dd61-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11026-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b2ea5e977c5fc1ccfa74171a9723dd61-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b2ea5e977c5fc1ccfa74171a9723dd61-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b2ea5e977c5fc1ccfa74171a9723dd61-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b2ea5e977c5fc1ccfa74171a9723dd61-Supplemental.pdf | Besides its simplicity, our approach enjoys several advantages. First, the obtained high-probability regret bounds are data-dependent and could be much smaller than the worst-case bounds, which resolves an open problem asked by Neu (2015). Second, resolving another open problem of Bartlett et al. (2008) and Abernethy and Rakhlin (2009), our approach leads to the first general and efficient algorithm with a high-probability regret bound for adversarial linear bandits, while previous methods are either inefficient or only applicable to specific action sets. Finally, our approach can also be applied to learning adversarial Markov Decision Processes and provides the first algorithm with a high-probability small-loss bound for this problem. |
A Ranking-based, Balanced Loss Function Unifying Classification and Localisation in Object Detection | https://papers.nips.cc/paper_files/paper/2020/hash/b2eeb7362ef83deff5c7813a67e14f0a-Abstract.html | Kemal Oksuz, Baris Can Cam, Emre Akbas, Sinan Kalkan | https://papers.nips.cc/paper_files/paper/2020/hash/b2eeb7362ef83deff5c7813a67e14f0a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b2eeb7362ef83deff5c7813a67e14f0a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11027-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b2eeb7362ef83deff5c7813a67e14f0a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b2eeb7362ef83deff5c7813a67e14f0a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b2eeb7362ef83deff5c7813a67e14f0a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b2eeb7362ef83deff5c7813a67e14f0a-Supplemental.pdf | We propose average Localisation-Recall-Precision (aLRP), a unified, bounded, balanced and ranking-based loss function for both classification and localisation tasks in object detection. aLRP extends the Localisation-Recall-Precision (LRP) performance metric (Oksuz et al., 2018) inspired from how Average Precision (AP) Loss extends precision to a ranking-based loss function for classification (Chen et al., 2020). aLRP has the following distinct advantages: (i) aLRP is the first ranking-based loss function for both classification and localisation tasks. (ii) Thanks to using ranking for both tasks, aLRP naturally enforces high-quality localisation for high-precision classification. (iii) aLRP provides provable balance between positives and negatives. (iv) Compared to on average ~6 hyperparameters in the loss functions of state-of-the-art detectors, aLRP Loss has only one hyperparameter, which we did not tune in practice. On the COCO dataset, aLRP Loss improves its ranking-based predecessor, AP Loss, up to around 5 AP points, achieves 48.9 AP without test time augmentation and outperforms all one-stage detectors. Code available at: https://github.com/kemaloksuz/aLRPLoss . |
StratLearner: Learning a Strategy for Misinformation Prevention in Social Networks | https://papers.nips.cc/paper_files/paper/2020/hash/b2f627fff19fda463cb386442eac2b3d-Abstract.html | Guangmo Tong | https://papers.nips.cc/paper_files/paper/2020/hash/b2f627fff19fda463cb386442eac2b3d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b2f627fff19fda463cb386442eac2b3d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11028-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b2f627fff19fda463cb386442eac2b3d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b2f627fff19fda463cb386442eac2b3d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b2f627fff19fda463cb386442eac2b3d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b2f627fff19fda463cb386442eac2b3d-Supplemental.zip | Given a combinatorial optimization problem taking an input, can we learn a strategy to solve it from the examples of input-solution pairs without knowing its objective function? In this paper, we consider such a setting and study the misinformation prevention problem. Given the examples of attacker-protector pairs, our goal is to learn a strategy to compute protectors against future attackers, without the need of knowing the underlying diffusion model. To this end, we design a structured prediction framework, where the main idea is to parameterize the scoring function using random features constructed through distance functions on randomly sampled subgraphs, which leads to a kernelized scoring function with weights learnable via the large margin method. Evidenced by experiments, our method can produce near-optimal protectors without using any information of the diffusion model, and it outperforms other possible graph-based and learning-based methods by an evident margin. |
A Unified Switching System Perspective and Convergence Analysis of Q-Learning Algorithms | https://papers.nips.cc/paper_files/paper/2020/hash/b30958093daeed059670b35173654dc9-Abstract.html | Donghwan Lee, Niao He | https://papers.nips.cc/paper_files/paper/2020/hash/b30958093daeed059670b35173654dc9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b30958093daeed059670b35173654dc9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11029-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b30958093daeed059670b35173654dc9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b30958093daeed059670b35173654dc9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b30958093daeed059670b35173654dc9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b30958093daeed059670b35173654dc9-Supplemental.pdf | This paper develops a novel and unified framework to analyze the convergence of a large family of Q-learning algorithms from the switching system perspective. We show that the nonlinear ODE models associated with Q-learning and many of its variants can be naturally formulated as affine switching systems. Building on their asymptotic stability, we obtain a number of interesting results: (i) we provide a simple ODE analysis for the convergence of asynchronous Q-learning under relatively weak assumptions; (ii) we establish the first convergence analysis of the averaging Q-learning algorithm; and (iii) we derive a new sufficient condition for the convergence of Q-learning with linear function approximation. |
Kernel Alignment Risk Estimator: Risk Prediction from Training Data | https://papers.nips.cc/paper_files/paper/2020/hash/b367e525a7e574817c19ad24b7b35607-Abstract.html | Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clement Hongler, Franck Gabriel | https://papers.nips.cc/paper_files/paper/2020/hash/b367e525a7e574817c19ad24b7b35607-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b367e525a7e574817c19ad24b7b35607-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11030-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b367e525a7e574817c19ad24b7b35607-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b367e525a7e574817c19ad24b7b35607-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b367e525a7e574817c19ad24b7b35607-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b367e525a7e574817c19ad24b7b35607-Supplemental.pdf | We study the risk (i.e. generalization error) of Kernel Ridge Regression (KRR) for a kernel $K$ with ridge $\lambda>0$ and i.i.d. observations. For this, we introduce two objects: the Signal Capture Threshold (SCT) and the Kernel Alignment Risk Estimator (KARE). The SCT $\vartheta_{K,\lambda}$ is a function of the data distribution: it can be used to identify the components of the data that the KRR predictor captures, and to approximate the (expected) KRR risk. This then leads to a KRR risk approximation by the KARE $\rho_{K, \lambda}$, an explicit function of the training data, agnostic of the true data distribution. We phrase the regression problem in a functional setting. The key results then follow from a finite-size adaptation of the resolvent method for general Wishart random matrices. Under a natural universality assumption (that the KRR moments depend asymptotically on the first two moments of the observations) we capture the mean and variance of the KRR predictor. We numerically investigate our findings on the Higgs and MNIST datasets for various classical kernels: the KARE gives an excellent approximation of the risk. This supports our universality hypothesis. Using the KARE, one can compare choices of Kernels and hyperparameters directly from the training set. The KARE thus provides a promising data-dependent procedure to select Kernels that generalize well. |
Calibrating CNNs for Lifelong Learning | https://papers.nips.cc/paper_files/paper/2020/hash/b3b43aeeacb258365cc69cdaf42a68af-Abstract.html | Pravendra Singh, Vinay Kumar Verma, Pratik Mazumder, Lawrence Carin, Piyush Rai | https://papers.nips.cc/paper_files/paper/2020/hash/b3b43aeeacb258365cc69cdaf42a68af-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b3b43aeeacb258365cc69cdaf42a68af-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11031-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b3b43aeeacb258365cc69cdaf42a68af-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b3b43aeeacb258365cc69cdaf42a68af-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b3b43aeeacb258365cc69cdaf42a68af-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b3b43aeeacb258365cc69cdaf42a68af-Supplemental.pdf | We present an approach for lifelong/continual learning of convolutional neural networks (CNN) that does not suffer from the problem of catastrophic forgetting when moving from one task to the other. We show that the activation maps generated by the CNN trained on the old task can be calibrated using very few calibration parameters, to become relevant to the new task. Based on this, we calibrate the activation maps produced by each network layer using spatial and channel-wise calibration modules and train only these calibration parameters for each new task in order to perform lifelong learning. Our calibration modules introduce significantly less computation and fewer parameters compared to the approaches that dynamically expand the network. Our approach is immune to catastrophic forgetting since we store the task-adaptive calibration parameters, which contain all the task-specific knowledge and are exclusive to each task. Further, our approach does not require storing data samples from the old tasks, unlike many replay-based methods. We perform extensive experiments on multiple benchmark datasets (SVHN, CIFAR, ImageNet, and MS-Celeb), all of which show substantial improvements over state-of-the-art methods (e.g., a 29% absolute increase in accuracy on CIFAR-100 with 10 classes at a time). On large-scale datasets, our approach yields 23.8% and 9.7% absolute increases in accuracy on the ImageNet-100 and MS-Celeb-10K datasets, respectively, by employing very few (0.51% and 0.35% of model parameters) task-adaptive calibration parameters. |
Online Convex Optimization Over Erdos-Renyi Random Networks | https://papers.nips.cc/paper_files/paper/2020/hash/b3d6e130a30b176f2ca5af7d1e73953f-Abstract.html | Jinlong Lei, Peng Yi, Yiguang Hong, Jie Chen, Guodong Shi | https://papers.nips.cc/paper_files/paper/2020/hash/b3d6e130a30b176f2ca5af7d1e73953f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b3d6e130a30b176f2ca5af7d1e73953f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11032-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b3d6e130a30b176f2ca5af7d1e73953f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b3d6e130a30b176f2ca5af7d1e73953f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b3d6e130a30b176f2ca5af7d1e73953f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b3d6e130a30b176f2ca5af7d1e73953f-Supplemental.pdf | The work studies how node-to-node communications over an Erd\H{o}s-R\'enyi random network influence distributed online convex optimization, which is vital in solving large-scale machine learning in antagonistic or changing environments. At each step, each node (computing unit) makes a local decision, experiences a loss evaluated with a convex function, and communicates the decision with other nodes over a network. The node-to-node communications are described by the Erd\H{o}s-R\'enyi rule, where independently each link takes place with a probability $p$ over a prescribed connected graph. The objective is to minimize the system-wide loss accumulated over a finite time horizon. We consider standard distributed gradient descents with full gradients, one-point bandits and two-points bandits for convex and strongly convex losses, respectively. We establish how the regret bounds scale with respect to time horizon $T$, network size $N$, decision dimension $d$, and an algebraic network connectivity. The regret bounds scaling with respect to $T$ match those obtained by state-of-the-art algorithms and fundamental limits in the corresponding centralized online optimization problems, e.g., $\mathcal{O}(\sqrt{T})$ and $\mathcal{O}(\ln(T))$ regrets are established for convex and strongly convex losses with full gradient feedback and two-points information, respectively. For classical Erd\H{o}s-R\'enyi networks over all-to-all possible node communications, the regret scalings with respect to the probability $p$ are analytically established, based on which the tradeoff between the communication overhead and computation accuracy is clearly demonstrated. Numerical studies have validated the theoretical findings. |
Robustness of Bayesian Neural Networks to Gradient-Based Attacks | https://papers.nips.cc/paper_files/paper/2020/hash/b3f61131b6eceeb2b14835fa648a48ff-Abstract.html | Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane', Luca Bortolussi, Guido Sanguinetti | https://papers.nips.cc/paper_files/paper/2020/hash/b3f61131b6eceeb2b14835fa648a48ff-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b3f61131b6eceeb2b14835fa648a48ff-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11033-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b3f61131b6eceeb2b14835fa648a48ff-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b3f61131b6eceeb2b14835fa648a48ff-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b3f61131b6eceeb2b14835fa648a48ff-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b3f61131b6eceeb2b14835fa648a48ff-Supplemental.pdf | Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets with BNNs trained with Hamiltonian Monte Carlo and Variational Inference support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient based adversarial attacks. |
Parametric Instance Classification for Unsupervised Visual Feature learning | https://papers.nips.cc/paper_files/paper/2020/hash/b427426b8acd2c2e53827970f2c2f526-Abstract.html | Yue Cao, Zhenda Xie, Bin Liu, Yutong Lin, Zheng Zhang, Han Hu | https://papers.nips.cc/paper_files/paper/2020/hash/b427426b8acd2c2e53827970f2c2f526-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b427426b8acd2c2e53827970f2c2f526-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11034-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b427426b8acd2c2e53827970f2c2f526-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b427426b8acd2c2e53827970f2c2f526-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b427426b8acd2c2e53827970f2c2f526-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b427426b8acd2c2e53827970f2c2f526-Supplemental.pdf | This paper presents parametric instance classification (PIC) for unsupervised visual feature learning. Unlike the state-of-the-art approaches which do instance discrimination in a dual-branch non-parametric fashion, PIC directly performs a one-branch parametric instance classification, revealing a simple framework similar to supervised classification and without the need to address the information leakage issue. We show that the simple PIC framework can be as effective as the state-of-the-art approaches, i.e. SimCLR and MoCo v2, by adapting several common component settings used in the state-of-the-art approaches. We also propose two novel techniques to further improve effectiveness and practicality of PIC: 1) a sliding-window data scheduler, instead of the previous epoch-based data scheduler, which addresses the extremely infrequent instance visiting issue in PIC and improves the effectiveness; 2) a negative sampling and weight update correction approach to reduce the training time and GPU memory consumption, which also enables application of PIC to almost unlimited training images. We hope that the PIC framework can serve as a simple baseline to facilitate future study. The code and network configurations are available at \url{https://github.com/bl0/PIC}. |
Sparse Weight Activation Training | https://papers.nips.cc/paper_files/paper/2020/hash/b44182379bf9fae976e6ae5996e13cd8-Abstract.html | Md Aamir Raihan, Tor Aamodt | https://papers.nips.cc/paper_files/paper/2020/hash/b44182379bf9fae976e6ae5996e13cd8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b44182379bf9fae976e6ae5996e13cd8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11035-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b44182379bf9fae976e6ae5996e13cd8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b44182379bf9fae976e6ae5996e13cd8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b44182379bf9fae976e6ae5996e13cd8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b44182379bf9fae976e6ae5996e13cd8-Supplemental.zip | Neural network training is computationally and memory intensive. Sparse training can reduce the burden on emerging hardware platforms designed to accelerate sparse computations, but it can also affect network convergence. In this work, we propose a novel CNN training algorithm called Sparse Weight Activation Training (SWAT). SWAT is more computation and memory-efficient than conventional training. SWAT modifies back-propagation based on the empirical insight that convergence during training tends to be robust to the elimination of (i) small magnitude weights during the forward pass and (ii) both small magnitude weights and activations during the backward pass. We evaluate SWAT on recent CNN architectures such as ResNet, VGG, DenseNet and WideResNet using CIFAR-10, CIFAR-100 and ImageNet datasets. For ResNet-50 on ImageNet SWAT reduces total floating-point operations (FLOPs) during training by 80% resulting in a 3.3x training speedup when run on a simulated sparse learning accelerator representative of emerging platforms while incurring only 1.63% reduction in validation accuracy. Moreover, SWAT reduces memory footprint during the backward pass by 23% to 50% for activations and 50% to 90% for weights. Code is available at https://github.com/AamirRaihan/SWAT. |
Collapsing Bandits and Their Application to Public Health Intervention | https://papers.nips.cc/paper_files/paper/2020/hash/b460cf6b09878b00a3e1ad4c72344ccd-Abstract.html | Aditya Mate, Jackson Killian, Haifeng Xu, Andrew Perrault, Milind Tambe | https://papers.nips.cc/paper_files/paper/2020/hash/b460cf6b09878b00a3e1ad4c72344ccd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b460cf6b09878b00a3e1ad4c72344ccd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11036-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b460cf6b09878b00a3e1ad4c72344ccd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b460cf6b09878b00a3e1ad4c72344ccd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b460cf6b09878b00a3e1ad4c72344ccd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b460cf6b09878b00a3e1ad4c72344ccd-Supplemental.pdf | We propose and study Collapsing Bandits, a new restless multi-armed bandit (RMAB) setting in which each arm follows a binary-state Markovian process with a special structure: when an arm is played, the state is fully observed, thus “collapsing” any uncertainty, but when an arm is passive, no observation is made, thus allowing uncertainty to evolve. The goal is to keep as many arms in the “good” state as possible by planning a limited budget of actions per round. Such Collapsing Bandits are natural models for many healthcare domains in which health workers must simultaneously monitor patients and deliver interventions in a way that maximizes the health of their patient cohort. Our main contributions are as follows: (i) Building on the Whittle index technique for RMABs, we derive conditions under which the Collapsing Bandits problem is indexable. Our derivation hinges on novel conditions that characterize when the optimal policies may take the form of either “forward” or “reverse” threshold policies. (ii) We exploit the optimality of threshold policies to build fast algorithms for computing the Whittle index, including a closed-form. (iii) We evaluate our algorithm on several data distributions including data from a real-world healthcare task in which a worker must monitor and deliver interventions to maximize their patients’ adherence to tuberculosis medication. Our algorithm achieves a 3-order-of-magnitude speedup compared to state-of-the-art RMAB techniques, while achieving similar performance. The code is available at: https://github.com/AdityaMate/collapsing_bandits |
Neural Sparse Voxel Fields | https://papers.nips.cc/paper_files/paper/2020/hash/b4b758962f17808746e9bb832a6fa4b8-Abstract.html | Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt | https://papers.nips.cc/paper_files/paper/2020/hash/b4b758962f17808746e9bb832a6fa4b8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b4b758962f17808746e9bb832a6fa4b8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11037-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b4b758962f17808746e9bb832a6fa4b8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b4b758962f17808746e9bb832a6fa4b8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b4b758962f17808746e9bb832a6fa4b8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b4b758962f17808746e9bb832a6fa4b8-Supplemental.zip | Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. The NSVF defines a series of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views at inference time can be accelerated by skipping the voxels without relevant scene content. Our method is over 10 times faster than the state-of-the-art while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can be easily applied to scene editing and scene composition. We also demonstrate various kinds of challenging tasks, including multi-object learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. |
A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding | https://papers.nips.cc/paper_files/paper/2020/hash/b4edda67f0f57e218a8e766927e3e5c5-Abstract.html | Bruno Lecouat, Jean Ponce, Julien Mairal | https://papers.nips.cc/paper_files/paper/2020/hash/b4edda67f0f57e218a8e766927e3e5c5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b4edda67f0f57e218a8e766927e3e5c5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11038-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b4edda67f0f57e218a8e766927e3e5c5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b4edda67f0f57e218a8e766927e3e5c5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b4edda67f0f57e218a8e766927e3e5c5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b4edda67f0f57e218a8e766927e3e5c5-Supplemental.pdf | We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems, and whose architectures are derived from an optimization algorithm. We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions. This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end. The priors used in this presentation include variants of total variation, Laplacian regularization, bilateral filtering, sparse coding on learned dictionaries, and non-local self similarities. Our models are fully interpretable as well as parameter and data efficient. Our experiments demonstrate their effectiveness on a large diversity of tasks ranging from image denoising and compressed sensing for fMRI to dense stereo matching. |
The Discrete Gaussian for Differential Privacy | https://papers.nips.cc/paper_files/paper/2020/hash/b53b3a3d6ab90ce0268229151c9bde11-Abstract.html | Clément L. Canonne, Gautam Kamath, Thomas Steinke | https://papers.nips.cc/paper_files/paper/2020/hash/b53b3a3d6ab90ce0268229151c9bde11-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b53b3a3d6ab90ce0268229151c9bde11-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11039-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b53b3a3d6ab90ce0268229151c9bde11-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b53b3a3d6ab90ce0268229151c9bde11-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b53b3a3d6ab90ce0268229151c9bde11-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b53b3a3d6ab90ce0268229151c9bde11-Supplemental.pdf | With these shortcomings in mind, we introduce and analyze the discrete Gaussian in the context of differential privacy. Specifically, we theoretically and experimentally show that adding discrete Gaussian noise provides essentially the same privacy and accuracy guarantees as the addition of continuous Gaussian noise. We also present a simple and efficient algorithm for exact sampling from this distribution. This demonstrates its applicability for privately answering counting queries, or more generally, low-sensitivity integer-valued queries. |
Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing | https://papers.nips.cc/paper_files/paper/2020/hash/b58144d7e90b5a43edcce1ca9e642882-Abstract.html | Arun Jambulapati, Jerry Li, Kevin Tian | https://papers.nips.cc/paper_files/paper/2020/hash/b58144d7e90b5a43edcce1ca9e642882-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b58144d7e90b5a43edcce1ca9e642882-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11040-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b58144d7e90b5a43edcce1ca9e642882-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b58144d7e90b5a43edcce1ca9e642882-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b58144d7e90b5a43edcce1ca9e642882-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b58144d7e90b5a43edcce1ca9e642882-Supplemental.pdf | We develop two methods for the following fundamental statistical task: given an $\eps$-corrupted set of $n$ samples from a $d$-dimensional sub-Gaussian distribution, return an approximate top eigenvector of the covariance matrix. Our first robust PCA algorithm runs in polynomial time, returns a $1 - O(\eps\log\eps^{-1})$-approximate top eigenvector, and is based on a simple iterative filtering approach. Our second, which attains a slightly worse approximation factor, runs in nearly-linear time and sample complexity under a mild spectral gap assumption. These are the first polynomial-time algorithms yielding non-trivial information about the covariance of a corrupted sub-Gaussian distribution without requiring additional algebraic structure of moments. As a key technical tool, we develop the first width-independent solvers for Schatten-$p$ norm packing semidefinite programs, giving a $(1 + \eps)$-approximate solution in $O(p\log(\tfrac{nd}{\eps})\eps^{-1})$ input-sparsity time iterations (where $n$, $d$ are problem dimensions). |
Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes | https://papers.nips.cc/paper_files/paper/2020/hash/b58f7d184743106a8a66028b7a28937c-Abstract.html | Ayoub El Hanchi, David Stephens | https://papers.nips.cc/paper_files/paper/2020/hash/b58f7d184743106a8a66028b7a28937c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b58f7d184743106a8a66028b7a28937c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11041-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b58f7d184743106a8a66028b7a28937c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b58f7d184743106a8a66028b7a28937c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b58f7d184743106a8a66028b7a28937c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b58f7d184743106a8a66028b7a28937c-Supplemental.pdf | Reducing the variance of the gradient estimator is known to improve the convergence rate of stochastic gradient-based optimization and sampling algorithms. One way of achieving variance reduction is to design importance sampling strategies. Recently, the problem of designing such schemes was formulated as an online learning problem with bandit feedback, and algorithms with sub-linear static regret were designed. In this work, we build on this framework and propose a simple and efficient algorithm for adaptive importance sampling for finite-sum optimization and sampling with decreasing step-sizes. Under standard technical conditions, we show that our proposed algorithm achieves O(T^{2/3}) and O(T^{5/6}) dynamic regret for SGD and SGLD respectively when run with O(1/t) step sizes. We achieve this dynamic regret bound by leveraging our knowledge of the dynamics defined by the algorithm, and combining ideas from online learning and variance-reduced stochastic optimization. We validate empirically the performance of our algorithm and identify settings in which it leads to significant improvements. |
Learning efficient task-dependent representations with synaptic plasticity | https://papers.nips.cc/paper_files/paper/2020/hash/b599e8250e4481aaa405a715419c8179-Abstract.html | Colin Bredenberg, Eero Simoncelli, Cristina Savin | https://papers.nips.cc/paper_files/paper/2020/hash/b599e8250e4481aaa405a715419c8179-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b599e8250e4481aaa405a715419c8179-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11042-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b599e8250e4481aaa405a715419c8179-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b599e8250e4481aaa405a715419c8179-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b599e8250e4481aaa405a715419c8179-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b599e8250e4481aaa405a715419c8179-Supplemental.pdf | Neural populations encode the sensory world imperfectly: their capacity is limited by the number of neurons, availability of metabolic and other biophysical resources, and intrinsic noise. The brain is presumably shaped by these limitations, improving efficiency by discarding some aspects of incoming sensory streams, while preferentially preserving commonly occurring, behaviorally-relevant information. Here we construct a stochastic recurrent neural circuit model that can learn efficient, task-specific sensory codes using a novel form of reward-modulated Hebbian synaptic plasticity. We illustrate the flexibility of the model by training an initially unstructured neural network to solve two different tasks: stimulus estimation, and stimulus discrimination. The network achieves high performance in both tasks by appropriately allocating resources and using its recurrent circuitry to best compensate for different levels of noise. We also show how the interaction between stimulus priors and task structure dictates the emergent network representations. |
A Contour Stochastic Gradient Langevin Dynamics Algorithm for Simulations of Multi-modal Distributions | https://papers.nips.cc/paper_files/paper/2020/hash/b5b8c484824d8a06f4f3d570bc420313-Abstract.html | Wei Deng, Guang Lin, Faming Liang | https://papers.nips.cc/paper_files/paper/2020/hash/b5b8c484824d8a06f4f3d570bc420313-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b5b8c484824d8a06f4f3d570bc420313-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11043-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b5b8c484824d8a06f4f3d570bc420313-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b5b8c484824d8a06f4f3d570bc420313-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b5b8c484824d8a06f4f3d570bc420313-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b5b8c484824d8a06f4f3d570bc420313-Supplemental.pdf | We propose an adaptively weighted stochastic gradient Langevin dynamics algorithm (SGLD), so-called contour stochastic gradient Langevin dynamics (CSGLD), for Bayesian learning in big data statistics. The proposed algorithm is essentially a scalable dynamic importance sampler, which automatically flattens the target distribution such that the simulation for a multi-modal distribution can be greatly facilitated. Theoretically, we prove a stability condition and establish the asymptotic convergence of the self-adapting parameter to a unique fixed-point, regardless of the non-convexity of the original energy function; we also present an error analysis for the weighted averaging estimators. Empirically, the CSGLD algorithm is tested on multiple benchmark datasets including CIFAR10 and CIFAR100. The numerical results indicate its superiority over the existing state-of-the-art algorithms in training deep neural networks. |
Error Bounds of Imitating Policies and Environments | https://papers.nips.cc/paper_files/paper/2020/hash/b5c01503041b70d41d80e3dbe31bbd8c-Abstract.html | Tian Xu, Ziniu Li, Yang Yu | https://papers.nips.cc/paper_files/paper/2020/hash/b5c01503041b70d41d80e3dbe31bbd8c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b5c01503041b70d41d80e3dbe31bbd8c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11044-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b5c01503041b70d41d80e3dbe31bbd8c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b5c01503041b70d41d80e3dbe31bbd8c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b5c01503041b70d41d80e3dbe31bbd8c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b5c01503041b70d41d80e3dbe31bbd8c-Supplemental.zip | Imitation learning trains a policy by mimicking expert demonstrations. Various imitation methods have been proposed and empirically evaluated; meanwhile, their theoretical understanding needs further study. In this paper, we first analyze the value gap between the expert policy and imitated policies by two imitation methods, behavioral cloning and generative adversarial imitation. The results support that generative adversarial imitation can reduce the compounding errors compared to behavioral cloning, and thus has a better sample complexity. Note that by considering the environment transition model as a dual agent, imitation learning can also be used to learn the environment model. Therefore, based on the bounds of imitating policies, we further analyze the performance of imitating environments. The results show that environment models can be more effectively imitated by generative adversarial imitation than by behavioral cloning, suggesting a novel application of adversarial imitation for model-based reinforcement learning. We hope these results could inspire future advances in imitation learning and model-based reinforcement learning. |
Disentangling Human Error from Ground Truth in Segmentation of Medical Images | https://papers.nips.cc/paper_files/paper/2020/hash/b5d17ed2b502da15aa727af0d51508d6-Abstract.html | Le Zhang, Ryutaro Tanno, Mou-Cheng Xu, Chen Jin, Joseph Jacob, Olga Cicarrelli, Frederik Barkhof, Daniel Alexander | https://papers.nips.cc/paper_files/paper/2020/hash/b5d17ed2b502da15aa727af0d51508d6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b5d17ed2b502da15aa727af0d51508d6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11045-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b5d17ed2b502da15aa727af0d51508d6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b5d17ed2b502da15aa727af0d51508d6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b5d17ed2b502da15aa727af0d51508d6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b5d17ed2b502da15aa727af0d51508d6-Supplemental.pdf | Recent years have seen increasing use of supervised learning methods for segmentation tasks. However, the predictive performance of these algorithms depends on the quality of labels. This problem is particularly pertinent in the medical image domain, where both the annotation cost and inter-observer variability are high. In a typical label acquisition process, different human experts provide their estimates of the ``true'' segmentation labels under the influence of their own biases and competence levels. Treating these noisy labels blindly as the ground truth limits the performance that automatic segmentation algorithms can achieve. In this work, we present a method for jointly learning, from purely noisy observations alone, the reliability of individual annotators and the true segmentation label distributions, using two coupled CNNs. The separation of the two is achieved by encouraging the estimated annotators to be maximally unreliable while achieving high fidelity with the noisy training data. We first define a toy segmentation dataset based on MNIST and study the properties of the proposed algorithm. We then demonstrate the utility of the method on three public medical imaging segmentation datasets with simulated (when necessary) and real diverse annotations: 1) MSLSC (multiple-sclerosis lesions); 2) BraTS (brain tumours); 3) LIDC-IDRI (lung abnormalities). In all cases, our method outperforms competing methods and relevant baselines particularly in cases where the number of annotations is small and the amount of disagreement is large. The experiments also show strong ability to capture the complex spatial characteristics of annotators' mistakes. Our code is available at \url{https://github.com/moucheng2017/LearnNoisyLabelsMedicalImages}. |
Consequences of Misaligned AI | https://papers.nips.cc/paper_files/paper/2020/hash/b607ba543ad05417b8507ee86c54fcb7-Abstract.html | Simon Zhuang, Dylan Hadfield-Menell | https://papers.nips.cc/paper_files/paper/2020/hash/b607ba543ad05417b8507ee86c54fcb7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11046-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b607ba543ad05417b8507ee86c54fcb7-Supplemental.pdf | AI systems often rely on two key components: a specified goal or reward function and an optimization algorithm to compute the optimal behavior for that goal. This approach is intended to provide value for a principal: the user on whose behalf the agent acts. The objectives given to these agents often refer to a partial specification of the principal's goals. We consider the cost of this incompleteness by analyzing a model of a principal and an agent in a resource constrained world where the L features of the state correspond to different sources of utility for the principal. We assume that the reward function given to the agent only has support on J < L features. The contributions of our paper are as follows: 1) we propose a novel model of an incomplete principal—agent problem from artificial intelligence; 2) we provide necessary and sufficient conditions under which indefinitely optimizing for any incomplete proxy objective leads to arbitrarily low overall utility; and 3) we show how modifying the setup to allow reward functions that reference the full state or allowing the principal to update the proxy objective over time can lead to higher utility solutions. The results in this paper argue that we should view the design of reward functions as an interactive and dynamic process and identifies a theoretical scenario where some degree of interactivity is desirable. |
Promoting Coordination through Policy Regularization in Multi-Agent Deep Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/b628386c9b92481fab68fbf284bd6a64-Abstract.html | Julien Roy, Paul Barde, Félix Harvey, Derek Nowrouzezahrai, Chris Pal | https://papers.nips.cc/paper_files/paper/2020/hash/b628386c9b92481fab68fbf284bd6a64-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b628386c9b92481fab68fbf284bd6a64-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11047-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b628386c9b92481fab68fbf284bd6a64-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b628386c9b92481fab68fbf284bd6a64-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b628386c9b92481fab68fbf284bd6a64-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b628386c9b92481fab68fbf284bd6a64-Supplemental.pdf | In multi-agent reinforcement learning, discovering successful collective behaviors is challenging as it requires exploring a joint action space that grows exponentially with the number of agents. While the tractability of independent agent-wise exploration is appealing, this approach fails on tasks that require elaborate group strategies. We argue that coordinating the agents' policies can guide their exploration and we investigate techniques to promote such an inductive bias. We propose two policy regularization methods: TeamReg, which is based on inter-agent action predictability and CoachReg that relies on synchronized behavior selection. We evaluate each approach on four challenging continuous control tasks with sparse rewards that require varying levels of coordination as well as on the discrete action Google Research Football environment. Our experiments show improved performance across many cooperative multi-agent problems. Finally, we analyze the effects of our proposed methods on the policies that our agents learn and show that our methods successfully enforce the qualities that we propose as proxies for coordinated behaviors. |
Emergent Reciprocity and Team Formation from Randomized Uncertain Social Preferences | https://papers.nips.cc/paper_files/paper/2020/hash/b63c87b0a41016ad29313f0d7393cee8-Abstract.html | Bowen Baker | https://papers.nips.cc/paper_files/paper/2020/hash/b63c87b0a41016ad29313f0d7393cee8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b63c87b0a41016ad29313f0d7393cee8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11048-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b63c87b0a41016ad29313f0d7393cee8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b63c87b0a41016ad29313f0d7393cee8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b63c87b0a41016ad29313f0d7393cee8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b63c87b0a41016ad29313f0d7393cee8-Supplemental.pdf | Multi-agent reinforcement learning (MARL) has shown recent success in increasingly complex fixed-team zero-sum environments. However, the real world is not zero-sum nor does it have fixed teams; humans face numerous social dilemmas and must learn when to cooperate and when to compete. To successfully deploy agents into the human world, it may be important that they be able to understand and help in our conflicts. Unfortunately, selfish MARL agents typically fail when faced with social dilemmas. In this work, we show evidence of emergent direct reciprocity, indirect reciprocity and reputation, and team formation when training agents with randomized uncertain social preferences (RUSP), a novel environment augmentation that expands the distribution of environments agents play in. RUSP is generic and scalable; it can be applied to any multi-agent environment without changing the original underlying game dynamics or objectives. In particular, we show that with RUSP these behaviors can emerge and lead to higher social welfare equilibria both in classic abstract social dilemmas like the Iterated Prisoner's Dilemma and in more complex intertemporal environments. |
Hitting the High Notes: Subset Selection for Maximizing Expected Order Statistics | https://papers.nips.cc/paper_files/paper/2020/hash/b6417f112bd27848533e54885b66c288-Abstract.html | Aranyak Mehta, Uri Nadav, Alexandros Psomas, Aviad Rubinstein | https://papers.nips.cc/paper_files/paper/2020/hash/b6417f112bd27848533e54885b66c288-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6417f112bd27848533e54885b66c288-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11049-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6417f112bd27848533e54885b66c288-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6417f112bd27848533e54885b66c288-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6417f112bd27848533e54885b66c288-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6417f112bd27848533e54885b66c288-Supplemental.pdf | We consider the fundamental problem of selecting $k$ out of $n$ random variables in a way that the expected highest or second-highest value is maximized. This question captures several applications where we have uncertainty about the quality of candidates (e.g. auction bids, search results) and have the capacity to explore only a small subset due to an exogenous constraint. For example, consider a second price auction where system constraints (e.g., costly retrieval or model computation) allow the participation of only $k$ out of $n$ bidders, and the goal is to optimize the expected efficiency (highest bid) or expected revenue (second highest bid). We study the case where we are given an explicit description of each random variable. We give a PTAS for the problem of maximizing the expected highest value. For the second-highest value, we prove a hardness result: assuming the Planted Clique Hypothesis, there is no constant factor approximation algorithm that runs in polynomial time. Surprisingly, under the assumption that each random variable has monotone hazard rate (MHR), a simple score-based algorithm, namely picking the $k$ random variables with the largest $1/\sqrt{k}$ top quantile value, is a constant approximation to the expected highest and second highest value, \emph{simultaneously}. |
Towards Scale-Invariant Graph-related Problem Solving by Iterative Homogeneous GNNs | https://papers.nips.cc/paper_files/paper/2020/hash/b64a70760bb75e3ecfd1ad86d8f10c88-Abstract.html | Hao Tang, Zhiao Huang, Jiayuan Gu, Bao-Liang Lu, Hao Su | https://papers.nips.cc/paper_files/paper/2020/hash/b64a70760bb75e3ecfd1ad86d8f10c88-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b64a70760bb75e3ecfd1ad86d8f10c88-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11050-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b64a70760bb75e3ecfd1ad86d8f10c88-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b64a70760bb75e3ecfd1ad86d8f10c88-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b64a70760bb75e3ecfd1ad86d8f10c88-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b64a70760bb75e3ecfd1ad86d8f10c88-Supplemental.pdf | Current graph neural networks (GNNs) lack generalizability with respect to scales (graph sizes, graph diameters, edge weights, etc.) when solving many graph analysis problems. Taking the perspective of synthesizing graph theory programs, we propose several extensions to address the issue. First, inspired by the dependency of the iteration number of common graph theory algorithms on graph size, we learn to terminate the message passing process in GNNs adaptively according to the computation progress. Second, inspired by the fact that many graph theory algorithms are homogeneous with respect to graph weights, we introduce homogeneous transformation layers that are universal homogeneous function approximators, to convert ordinary GNNs to be homogeneous. Experimentally, we show that our GNN can be trained from small-scale graphs but generalize well to large-scale graphs for a number of basic graph theory problems. It also shows generalizability for applications of multi-body physical simulation and image-based navigation problems. |
Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses | https://papers.nips.cc/paper_files/paper/2020/hash/b67fb3360ae5597d85a005153451dd4e-Abstract.html | Yihan Zhou, Victor Sanches Portella, Mark Schmidt, Nicholas Harvey | https://papers.nips.cc/paper_files/paper/2020/hash/b67fb3360ae5597d85a005153451dd4e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b67fb3360ae5597d85a005153451dd4e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11051-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b67fb3360ae5597d85a005153451dd4e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b67fb3360ae5597d85a005153451dd4e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b67fb3360ae5597d85a005153451dd4e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b67fb3360ae5597d85a005153451dd4e-Supplemental.pdf | In this work, we consider online convex optimization (OCO) for relative Lipschitz and relative strongly convex functions. We extend the known regret bounds for classical OCO algorithms to the relative setting. Specifically, we show regret bounds for follow-the-regularized-leader algorithms and a variant of online mirror descent. Due to the generality of these methods, these results yield regret bounds for a wide variety of OCO algorithms. Furthermore, we extend the results to algorithms with extra regularization such as regularized dual averaging. |
The Lottery Ticket Hypothesis for Pre-trained BERT Networks | https://papers.nips.cc/paper_files/paper/2020/hash/b6af2c9703f203a2794be03d443af2e3-Abstract.html | Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin | https://papers.nips.cc/paper_files/paper/2020/hash/b6af2c9703f203a2794be03d443af2e3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6af2c9703f203a2794be03d443af2e3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11052-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6af2c9703f203a2794be03d443af2e3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6af2c9703f203a2794be03d443af2e3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6af2c9703f203a2794be03d443af2e3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6af2c9703f203a2794be03d443af2e3-Supplemental.pdf | In natural language processing (NLP), enormous pre-trained models like BERT have become the standard starting point for training on a range of downstream tasks, and similar trends are emerging in other areas of deep learning. In parallel, work on the lottery ticket hypothesis has shown that models for NLP and computer vision contain smaller matching subnetworks capable of training in isolation to full accuracy and transferring to other tasks. In this work, we combine these observations to assess whether such trainable, transferrable subnetworks exist in pre-trained BERT models. For a range of downstream tasks, we indeed find matching subnetworks at 40% to 90% sparsity. We find these subnetworks at (pre-trained) initialization, a deviation from prior NLP research where they emerge only after some amount of training. Subnetworks found on the masked language modeling task (the same task used to pre-train the model) transfer universally; those found on other tasks transfer in a limited fashion if at all. As large-scale pre-training becomes an increasingly central paradigm in deep learning, our results demonstrate that the main lottery ticket observations remain relevant in this context. Codes available at https://github.com/VITA-Group/BERT-Tickets. |
Label-Aware Neural Tangent Kernel: Toward Better Generalization and Local Elasticity | https://papers.nips.cc/paper_files/paper/2020/hash/b6b90237b3ebd1e462a5d11dbc5c4dae-Abstract.html | Shuxiao Chen, Hangfeng He, Weijie Su | https://papers.nips.cc/paper_files/paper/2020/hash/b6b90237b3ebd1e462a5d11dbc5c4dae-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6b90237b3ebd1e462a5d11dbc5c4dae-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11053-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6b90237b3ebd1e462a5d11dbc5c4dae-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6b90237b3ebd1e462a5d11dbc5c4dae-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6b90237b3ebd1e462a5d11dbc5c4dae-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6b90237b3ebd1e462a5d11dbc5c4dae-Supplemental.pdf | As a popular approach to modeling the dynamics of training overparametrized neural networks (NNs), the neural tangent kernels (NTK) are known to fall behind real-world NNs in generalization ability. This performance gap is in part due to the \textit{label agnostic} nature of the NTK, which renders the resulting kernel not as \textit{locally elastic} as NNs~\citep{he2019local}. In this paper, we introduce a novel approach from the perspective of \emph{label-awareness} to reduce this gap for the NTK. Specifically, we propose two label-aware kernels that are each a superimposition of a label-agnostic part and a hierarchy of label-aware parts with increasing complexity of label dependence, using the Hoeffding decomposition. Through both theoretical and empirical evidence, we show that the models trained with the proposed kernels better simulate NNs in terms of generalization ability and local elasticity. |
Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples | https://papers.nips.cc/paper_files/paper/2020/hash/b6c8cf4c587f2ead0c08955ee6e2502b-Abstract.html | Shafi Goldwasser, Adam Tauman Kalai, Yael Kalai, Omar Montasser | https://papers.nips.cc/paper_files/paper/2020/hash/b6c8cf4c587f2ead0c08955ee6e2502b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6c8cf4c587f2ead0c08955ee6e2502b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11054-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6c8cf4c587f2ead0c08955ee6e2502b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6c8cf4c587f2ead0c08955ee6e2502b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6c8cf4c587f2ead0c08955ee6e2502b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6c8cf4c587f2ead0c08955ee6e2502b-Supplemental.zip | We present a transductive learning algorithm that takes as input training examples from a distribution P and arbitrary (unlabeled) test examples, possibly chosen by an adversary. This is unlike prior work that assumes that test examples are small perturbations of P. Our algorithm outputs a selective classifier, which abstains from predicting on some examples. By considering selective transductive learning, we give the first nontrivial guarantees for learning classes of bounded VC dimension with arbitrary train and test distributions—no prior guarantees were known even for simple classes of functions such as intervals on the line. In particular, for any function in a class C of bounded VC dimension, we guarantee a low test error rate and a low rejection rate with respect to P. Our algorithm is efficient given an Empirical Risk Minimizer (ERM) for C. Our guarantees hold even for test examples chosen by an unbounded white-box adversary. We also give guarantees for generalization, agnostic, and unsupervised settings. |
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows | https://papers.nips.cc/paper_files/paper/2020/hash/b6cf334c22c8f4ce8eb920bb7b512ed0-Abstract.html | Hadi Mohaghegh Dolatabadi, Sarah Erfani, Christopher Leckie | https://papers.nips.cc/paper_files/paper/2020/hash/b6cf334c22c8f4ce8eb920bb7b512ed0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6cf334c22c8f4ce8eb920bb7b512ed0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11055-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6cf334c22c8f4ce8eb920bb7b512ed0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6cf334c22c8f4ce8eb920bb7b512ed0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6cf334c22c8f4ce8eb920bb7b512ed0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6cf334c22c8f4ce8eb920bb7b512ed0-Supplemental.pdf | Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We see that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely. Also, our experimental results show competitive performance of the proposed approach with some of the existing attack methods on defended classifiers. |
Few-shot Image Generation with Elastic Weight Consolidation | https://papers.nips.cc/paper_files/paper/2020/hash/b6d767d2f8ed5d21a44b0e5886680cb9-Abstract.html | Yijun Li, Richard Zhang, Jingwan (Cynthia) Lu, Eli Shechtman | https://papers.nips.cc/paper_files/paper/2020/hash/b6d767d2f8ed5d21a44b0e5886680cb9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6d767d2f8ed5d21a44b0e5886680cb9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11056-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6d767d2f8ed5d21a44b0e5886680cb9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6d767d2f8ed5d21a44b0e5886680cb9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6d767d2f8ed5d21a44b0e5886680cb9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6d767d2f8ed5d21a44b0e5886680cb9-Supplemental.pdf | Few-shot image generation seeks to generate more data of a given domain, with only few available training examples. As it is unreasonable to expect to fully infer the distribution from just a few observations (e.g., emojis), we seek to leverage a large, related source domain as pretraining (e.g., human faces). Thus, we wish to preserve the diversity of the source domain, while adapting to the appearance of the target. We adapt a pretrained model, without introducing any additional parameters, to the few examples of the target domain. Crucially, we regularize the changes of the weights during this adaptation, in order to best preserve the information of the source dataset, while fitting the target. We demonstrate the effectiveness of our algorithm by generating high-quality results of different target domains, including those with extremely few examples (e.g., 10). We also analyze the performance of our method with respect to some important factors, such as the number of examples and the similarity between the source and target domain. |
On the Expressiveness of Approximate Inference in Bayesian Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/b6dfd41875bc090bd31d0b1740eb5b1b-Abstract.html | Andrew Foong, David Burt, Yingzhen Li, Richard Turner | https://papers.nips.cc/paper_files/paper/2020/hash/b6dfd41875bc090bd31d0b1740eb5b1b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6dfd41875bc090bd31d0b1740eb5b1b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11057-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6dfd41875bc090bd31d0b1740eb5b1b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6dfd41875bc090bd31d0b1740eb5b1b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6dfd41875bc090bd31d0b1740eb5b1b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6dfd41875bc090bd31d0b1740eb5b1b-Supplemental.pdf | While Bayesian neural networks (BNNs) hold the promise of being flexible, well-calibrated statistical models, inference often requires approximations whose consequences are poorly understood. We study the quality of common variational methods in approximating the Bayesian predictive distribution. For single-hidden layer ReLU BNNs, we prove a fundamental limitation in function-space of two of the most commonly used distributions defined in weight-space: mean-field Gaussian and Monte Carlo dropout. We find there are simple cases where neither method can have substantially increased uncertainty in between well-separated regions of low uncertainty. We provide strong empirical evidence that exact inference does not have this pathology, hence it is due to the approximation and not the model. In contrast, for deep networks, we prove a universality result showing that there exist approximate posteriors in the above classes which provide flexible uncertainty estimates. However, we find empirically that pathologies of a similar form as in the single-hidden layer case can persist when performing variational inference in deeper networks. Our results motivate careful consideration of the implications of approximate inference methods in BNNs. |
Non-Crossing Quantile Regression for Distributional Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/b6f8dc086b2d60c5856e4ff517060392-Abstract.html | Fan Zhou, Jianing Wang, Xingdong Feng | https://papers.nips.cc/paper_files/paper/2020/hash/b6f8dc086b2d60c5856e4ff517060392-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b6f8dc086b2d60c5856e4ff517060392-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11058-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b6f8dc086b2d60c5856e4ff517060392-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b6f8dc086b2d60c5856e4ff517060392-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b6f8dc086b2d60c5856e4ff517060392-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b6f8dc086b2d60c5856e4ff517060392-Supplemental.pdf | Distributional reinforcement learning (DRL) estimates the distribution over future returns instead of the mean to more efficiently capture the intrinsic uncertainty of MDPs. However, batch-based DRL algorithms cannot guarantee the non-decreasing property of learned quantile curves, especially at the early training stage, leading to abnormal distribution estimates and reduced model interpretability. To address these issues, we introduce a general DRL framework by using non-crossing quantile regression to ensure the monotonicity constraint within each sampled batch, which can be incorporated with any well-known DRL algorithm. We demonstrate the validity of our method from both the theory and model implementation perspectives. Experiments on Atari 2600 Games show that some state-of-the-art DRL algorithms with the non-crossing modification can significantly outperform their baselines in terms of faster convergence speeds and better testing performance. In particular, our method can effectively recover the distribution information and thus dramatically increase the exploration efficiency when the reward space is extremely sparse. |
Dark Experience for General Continual Learning: a Strong, Simple Baseline | https://papers.nips.cc/paper_files/paper/2020/hash/b704ea2c39778f07c617f6b7ce480e9e-Abstract.html | Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, SIMONE CALDERARA | https://papers.nips.cc/paper_files/paper/2020/hash/b704ea2c39778f07c617f6b7ce480e9e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11059-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Supplemental.zip | Continual Learning has inspired a plethora of approaches and evaluation settings; however, the majority of them overlooks the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable. We work towards General Continual Learning (GCL), where task boundaries blur and the domain and class distributions shift either gradually or suddenly. We address it through mixing rehearsal with knowledge distillation and regularization; our simple baseline, Dark Experience Replay, matches the network's logits sampled throughout the optimization trajectory, thus promoting consistency with its past. By conducting an extensive analysis on both standard benchmarks and a novel GCL evaluation setting (MNIST-360), we show that such a seemingly simple baseline outperforms consolidated approaches and leverages limited resources. We further explore the generalization capabilities of our objective, showing its regularization being beneficial beyond mere performance. |
Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping | https://papers.nips.cc/paper_files/paper/2020/hash/b710915795b9e9c02cf10d6d2bdb688c-Abstract.html | Yujing Hu, Weixun Wang, Hangtian Jia, Yixiang Wang, Yingfeng Chen, Jianye Hao, Feng Wu, Changjie Fan | https://papers.nips.cc/paper_files/paper/2020/hash/b710915795b9e9c02cf10d6d2bdb688c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b710915795b9e9c02cf10d6d2bdb688c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11060-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b710915795b9e9c02cf10d6d2bdb688c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b710915795b9e9c02cf10d6d2bdb688c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b710915795b9e9c02cf10d6d2bdb688c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b710915795b9e9c02cf10d6d2bdb688c-Supplemental.zip | Reward shaping is an effective technique for incorporating domain knowledge into reinforcement learning (RL). Existing approaches such as potential-based reward shaping normally make full use of a given shaping reward function. However, since the transformation of human knowledge into numeric reward values is often imperfect due to reasons such as human cognitive bias, completely utilizing the shaping reward function may fail to improve the performance of RL algorithms. In this paper, we consider the problem of adaptively utilizing a given shaping reward function. We formulate the utilization of shaping rewards as a bi-level optimization problem, where the lower level is to optimize policy using the shaping rewards and the upper level is to optimize a parameterized shaping weight function for true reward maximization. We formally derive the gradient of the expected true reward with respect to the shaping weight function parameters and accordingly propose three learning algorithms based on different assumptions. Experiments in sparse-reward cartpole and MuJoCo environments show that our algorithms can fully exploit beneficial shaping rewards, and meanwhile ignore unbeneficial shaping rewards or even transform them into beneficial ones. |
Neural encoding with visual attention | https://papers.nips.cc/paper_files/paper/2020/hash/b71f5aaf3371c2cdfb7a7c0497f569d4-Abstract.html | Meenakshi Khosla, Gia Ngo, Keith Jamison, Amy Kuceyeski, Mert Sabuncu | https://papers.nips.cc/paper_files/paper/2020/hash/b71f5aaf3371c2cdfb7a7c0497f569d4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b71f5aaf3371c2cdfb7a7c0497f569d4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11061-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b71f5aaf3371c2cdfb7a7c0497f569d4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b71f5aaf3371c2cdfb7a7c0497f569d4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b71f5aaf3371c2cdfb7a7c0497f569d4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b71f5aaf3371c2cdfb7a7c0497f569d4-Supplemental.pdf | Visual perception is critically influenced by the focus of attention. Due to limited resources, it is well known that neural representations are biased in favor of attended locations. Using concurrent eye-tracking and functional Magnetic Resonance Imaging (fMRI) recordings from a large cohort of human subjects watching movies, we first demonstrate that leveraging gaze information, in the form of attentional masking, can significantly improve brain response prediction accuracy in a neural encoding model. Next, we propose a novel approach to neural encoding by including a trainable soft-attention module. Using our new approach, we demonstrate that it is possible to learn visual attention policies by end-to-end learning merely on fMRI response data, and without relying on any eye-tracking.
Interestingly, we find that attention locations estimated by the model on independent data agree well with the corresponding eye fixation patterns, despite no explicit supervision to do so. Together, these findings suggest that attention modules can be instrumental in neural encoding models of visual stimuli. |
On the linearity of large non-linear models: when and why the tangent kernel is constant | https://papers.nips.cc/paper_files/paper/2020/hash/b7ae8fecf15b8b6c3c69eceae636d203-Abstract.html | Chaoyue Liu, Libin Zhu, Misha Belkin | https://papers.nips.cc/paper_files/paper/2020/hash/b7ae8fecf15b8b6c3c69eceae636d203-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11062-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Supplemental.pdf | The goal of this work is to shed light on the remarkable phenomenon of "transition to linearity" of certain neural networks as their width approaches infinity. We show that the "transition to linearity'' of the model and, equivalently, constancy of the (neural) tangent kernel (NTK) result from the scaling properties of the norm of the Hessian matrix of the network as a function of the network width.
We present a general framework for understanding the constancy of the tangent kernel via Hessian scaling applicable to the standard classes of neural networks. Our analysis provides a new perspective on the phenomenon of constant tangent kernel, which is different from the widely accepted "lazy training''.
Furthermore, we show that the "transition to linearity" is not a general property of wide neural networks and does not hold when the last layer of the network is non-linear.
It is also not necessary for successful optimization by gradient descent. |
PLLay: Efficient Topological Layer based on Persistent Landscapes | https://papers.nips.cc/paper_files/paper/2020/hash/b803a9254688e259cde2ec0361c8abe4-Abstract.html | Kwangho Kim, Jisu Kim, Manzil Zaheer, Joon Kim, Frederic Chazal, Larry Wasserman | https://papers.nips.cc/paper_files/paper/2020/hash/b803a9254688e259cde2ec0361c8abe4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b803a9254688e259cde2ec0361c8abe4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11063-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b803a9254688e259cde2ec0361c8abe4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b803a9254688e259cde2ec0361c8abe4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b803a9254688e259cde2ec0361c8abe4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b803a9254688e259cde2ec0361c8abe4-Supplemental.pdf | We propose PLLay, a novel topological layer for general deep learning models based on persistence landscapes, in which we can efficiently exploit the underlying topological features of the input data structure. In this work, we show differentiability with respect to layer inputs, for a general persistent homology with arbitrary filtration. Thus, our proposed layer can be placed anywhere in the network and feed critical information on the topological features of input data into subsequent layers to improve the learnability of the networks toward a given task. A task-optimal structure of PLLay is learned during training via backpropagation, without requiring any input featurization or data preprocessing. We provide a novel adaptation for the DTM function-based filtration, and show that the proposed layer is robust against noise and outliers through a stability analysis. We demonstrate the effectiveness of our approach by classification experiments on various datasets. |
Decentralized Langevin Dynamics for Bayesian Learning | https://papers.nips.cc/paper_files/paper/2020/hash/b8043b9b976639acb17b035ab8963f18-Abstract.html | Anjaly Parayil, He Bai, Jemin George, Prudhvi Gurram | https://papers.nips.cc/paper_files/paper/2020/hash/b8043b9b976639acb17b035ab8963f18-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b8043b9b976639acb17b035ab8963f18-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11064-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b8043b9b976639acb17b035ab8963f18-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b8043b9b976639acb17b035ab8963f18-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b8043b9b976639acb17b035ab8963f18-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b8043b9b976639acb17b035ab8963f18-Supplemental.zip | Motivated by decentralized approaches to machine learning, we propose a collaborative Bayesian learning algorithm taking the form of decentralized Langevin dynamics in a non-convex setting. Our analysis shows that the initial KL-divergence between the Markov Chain and the target posterior distribution is exponentially decreasing, while the error contributions to the overall KL-divergence from the additive noise are decreasing in polynomial time. We further show that the polynomial term experiences a speed-up with the number of agents and provide sufficient conditions on the time-varying step-sizes to guarantee convergence to the desired distribution. The performance of the proposed algorithm is evaluated on a wide variety of machine learning tasks. The empirical results show that the performance of individual agents with locally available data is on par with the centralized setting with considerable improvement in the convergence rate. |
Shared Space Transfer Learning for analyzing multi-site fMRI data | https://papers.nips.cc/paper_files/paper/2020/hash/b837305e43f7e535a1506fc263eee3ed-Abstract.html | Tony Muhammad Yousefnezhad, Alessandro Selvitella, Daoqiang Zhang, Andrew Greenshaw, Russell Greiner | https://papers.nips.cc/paper_files/paper/2020/hash/b837305e43f7e535a1506fc263eee3ed-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b837305e43f7e535a1506fc263eee3ed-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11065-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b837305e43f7e535a1506fc263eee3ed-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b837305e43f7e535a1506fc263eee3ed-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b837305e43f7e535a1506fc263eee3ed-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b837305e43f7e535a1506fc263eee3ed-Supplemental.zip | Multi-voxel pattern analysis (MVPA) learns predictive models from task-based functional magnetic resonance imaging (fMRI) data, for distinguishing when subjects are performing different cognitive tasks — e.g., watching movies or making decisions. MVPA works best with a well-designed feature set and an adequate sample size. However, most fMRI datasets are noisy, high-dimensional, expensive to collect, and with small sample sizes. Further, training a robust, generalized predictive model that can analyze homogeneous cognitive tasks provided by multi-site fMRI datasets has additional challenges. This paper proposes the Shared Space Transfer Learning (SSTL) as a novel transfer learning (TL) approach that can functionally align homogeneous multi-site fMRI datasets, and so improve the prediction performance in every site. SSTL first extracts a set of common features for all subjects in each site. It then uses TL to map these site-specific features to a site-independent shared space in order to improve the performance of the MVPA. SSTL uses a scalable optimization procedure that works effectively for high-dimensional fMRI datasets. The optimization procedure extracts the common features for each site by using a single-iteration algorithm and maps these site-specific common features to the site-independent shared space. We evaluate the effectiveness of the proposed method for transferring between various cognitive tasks. Our comprehensive experiments validate that SSTL achieves superior performance to other state-of-the-art analysis techniques. |
The Diversified Ensemble Neural Network | https://papers.nips.cc/paper_files/paper/2020/hash/b86e8d03fe992d1b0e19656875ee557c-Abstract.html | Shaofeng Zhang, Meng Liu, Junchi Yan | https://papers.nips.cc/paper_files/paper/2020/hash/b86e8d03fe992d1b0e19656875ee557c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b86e8d03fe992d1b0e19656875ee557c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11066-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b86e8d03fe992d1b0e19656875ee557c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b86e8d03fe992d1b0e19656875ee557c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b86e8d03fe992d1b0e19656875ee557c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b86e8d03fe992d1b0e19656875ee557c-Supplemental.pdf | Ensembling is a general way of improving the accuracy and stability of learning models, especially for the generalization ability on small datasets. Compared with tree-based methods, relatively few works have been devoted to an in-depth study of effective ensemble design for neural networks. In this paper, we propose a principled ensemble technique by constructing the so-called diversified ensemble layer to combine multiple networks as individual modules. We theoretically show that each individual model in our ensemble layer corresponds to weights in the ensemble layer optimized in different directions. Meanwhile, the devised ensemble layer can be readily integrated into popular neural architectures, including CNNs, RNNs, and GCNs. Extensive experiments are conducted on public tabular datasets, images, and texts. The results show that, by adopting a weight-sharing approach, our method can notably improve the accuracy and stability of the original neural networks with negligible extra time and space overhead. |
Inductive Quantum Embedding | https://papers.nips.cc/paper_files/paper/2020/hash/b87039703fe79778e9f140b78621d7fb-Abstract.html | Santosh Kumar Srivastava, Dinesh Khandelwal, Dhiraj Madan, Dinesh Garg, Hima Karanam, L Venkata Subramaniam | https://papers.nips.cc/paper_files/paper/2020/hash/b87039703fe79778e9f140b78621d7fb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b87039703fe79778e9f140b78621d7fb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11067-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b87039703fe79778e9f140b78621d7fb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b87039703fe79778e9f140b78621d7fb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b87039703fe79778e9f140b78621d7fb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b87039703fe79778e9f140b78621d7fb-Supplemental.pdf | Quantum logic inspired embedding (aka Quantum Embedding (QE)) of a Knowledge-Base (KB) was proposed recently by Garg et al. (2019). It is claimed that the QE preserves the logical structure of the input KB given in the form of a hierarchy of unary and binary predicates. Such structure preservation allows one to perform Boolean logic style deductive reasoning directly over these embedding vectors. The original QE idea, however, is limited to the transductive (not inductive) setting. Moreover, the original QE scheme runs quite slowly on real applications involving millions of entities. This paper alleviates both of these key limitations. We start by reformulating the original QE problem to allow for induction. On the way, we also underscore some interesting analytic and geometric properties of the solution and leverage them to design a faster training scheme. As an application, we show that one can achieve state-of-the-art performance on the well-known NLP task of fine-grained entity type classification by using the inductive QE approach. Our training runs 9 times faster than the original QE scheme on this task. |
Variational Bayesian Unlearning | https://papers.nips.cc/paper_files/paper/2020/hash/b8a6550662b363eb34145965d64d0cfb-Abstract.html | Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet | https://papers.nips.cc/paper_files/paper/2020/hash/b8a6550662b363eb34145965d64d0cfb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b8a6550662b363eb34145965d64d0cfb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11068-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b8a6550662b363eb34145965d64d0cfb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b8a6550662b363eb34145965d64d0cfb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b8a6550662b363eb34145965d64d0cfb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b8a6550662b363eb34145965d64d0cfb-Supplemental.pdf | This paper studies the problem of approximately unlearning a Bayesian model from a small subset of the training data to be erased. We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from erased data vs. the exact posterior belief from retraining with remaining data. Using the variational inference (VI) framework, we show that it is equivalent to minimizing an evidence upper bound which trades off between fully unlearning from erased data vs. not entirely forgetting the posterior belief given the full data (i.e., including the remaining data); the latter prevents catastrophic unlearning that can render the model useless. In model training with VI, only an approximate (instead of exact) posterior belief given the full data can be obtained, which makes unlearning even more challenging. We propose two novel tricks to tackle this challenge. We empirically demonstrate our unlearning methods on Bayesian models such as sparse Gaussian process and logistic regression using synthetic and real-world datasets. |
Batched Coarse Ranking in Multi-Armed Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/b8b9c74ac526fffbeb2d39ab038d1cd7-Abstract.html | Nikolai Karpov, Qin Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/b8b9c74ac526fffbeb2d39ab038d1cd7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b8b9c74ac526fffbeb2d39ab038d1cd7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11069-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b8b9c74ac526fffbeb2d39ab038d1cd7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Supplemental.pdf | We study the problem of coarse ranking in the multi-armed bandits (MAB) setting, where we have a set of arms each of which is associated with an unknown distribution. The task is to partition the arms into clusters of predefined sizes, such that the mean of any arm in the $i$-th cluster is larger than that of any arm in the $j$-th cluster for any $j > i$. Coarse ranking generalizes a number of basic problems in MAB (e.g., best arm identification) and has many real-world applications. We initiate the study of the problem in the batched model where we can only have a small number of policy changes. We study both the fixed budget and fixed confidence variants in MAB, and propose algorithms and prove impossibility results which together give almost tight tradeoffs between the total number of arm pulls and the number of policy changes. We have tested our algorithms on both real and synthetic data; our experimental results demonstrate the efficiency of the proposed methods. |
Understanding and Improving Fast Adversarial Training | https://papers.nips.cc/paper_files/paper/2020/hash/b8ce47761ed7b3b6f48b583350b7f9e4-Abstract.html | Maksym Andriushchenko, Nicolas Flammarion | https://papers.nips.cc/paper_files/paper/2020/hash/b8ce47761ed7b3b6f48b583350b7f9e4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b8ce47761ed7b3b6f48b583350b7f9e4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11070-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b8ce47761ed7b3b6f48b583350b7f9e4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b8ce47761ed7b3b6f48b583350b7f9e4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b8ce47761ed7b3b6f48b583350b7f9e4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b8ce47761ed7b3b6f48b583350b7f9e4-Supplemental.pdf | A recent line of work focused on making adversarial training computationally efficient for deep learning models. In particular, Wong et al. (2020) showed that $\ell_\infty$-adversarial training with fast gradient sign method (FGSM) can fail due to a phenomenon called catastrophic overfitting, when the model quickly loses its robustness over a single epoch of training. We show that adding a random step to FGSM, as proposed in Wong et al. (2020), does not prevent catastrophic overfitting, and that randomness is not important per se --- its main role being simply to reduce the magnitude of the perturbation. Moreover, we show that catastrophic overfitting is not inherent to deep and overparametrized networks, but can occur in a single-layer convolutional network with a few filters. In an extreme case, even a single filter can make the network highly non-linear locally, which is the main reason why FGSM training fails. Based on this observation, we propose a new regularization method, GradAlign, that prevents catastrophic overfitting by explicitly maximizing the gradient alignment inside the perturbation set and improves the quality of the FGSM solution. As a result, GradAlign allows one to successfully apply FGSM training even for larger $\ell_\infty$-perturbations and reduces the gap to multi-step adversarial training. The code of our experiments is available at https://github.com/tml-epfl/understanding-fast-adv-training. |
Coded Sequential Matrix Multiplication For Straggler Mitigation | https://papers.nips.cc/paper_files/paper/2020/hash/b8fd7211e5247891e4d4f0562418868a-Abstract.html | Nikhil Krishnan Muralee Krishnan, Seyederfan Hosseini, Ashish Khisti | https://papers.nips.cc/paper_files/paper/2020/hash/b8fd7211e5247891e4d4f0562418868a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b8fd7211e5247891e4d4f0562418868a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11071-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b8fd7211e5247891e4d4f0562418868a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b8fd7211e5247891e4d4f0562418868a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b8fd7211e5247891e4d4f0562418868a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b8fd7211e5247891e4d4f0562418868a-Supplemental.zip | In this work, we consider a sequence of $J$ matrix multiplication jobs which needs to be distributed by a master across multiple worker nodes. For $i\in \{1,2,\ldots,J\}$, job-$i$ begins in round-$i$ and has to be completed by round-$(i+T)$. Previous works consider only the special case of $T=0$ and focus on coding across workers. We propose here two schemes with $T>0$, which feature coding across workers as well as the dimension of time. Our first scheme is a modification of the polynomial coding scheme introduced by Yu et al. and places no assumptions on the straggler model. Exploitation of the temporal dimension helps the scheme handle a larger set of straggler patterns than the polynomial coding scheme, for a given computational load per worker per round. The second scheme assumes a particular straggler model to further improve performance (in terms of encoding/decoding complexity). We develop theoretical results establishing (i) optimality of our proposed schemes for a certain class of straggler patterns and (ii) improved performance for the case of i.i.d. stragglers. These are further validated by experiments, where we implement our schemes to train neural networks. |
Attack of the Tails: Yes, You Really Can Backdoor Federated Learning | https://papers.nips.cc/paper_files/paper/2020/hash/b8ffa41d4e492f0fad2f13e29e1762eb-Abstract.html | Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | https://papers.nips.cc/paper_files/paper/2020/hash/b8ffa41d4e492f0fad2f13e29e1762eb-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11072-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b8ffa41d4e492f0fad2f13e29e1762eb-Supplemental.zip | Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs). A range of FL backdoor attacks have been introduced in the literature, but also methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this work, we provide evidence to the contrary. We first establish that, in the general case, robustness to backdoors implies model robustness to adversarial examples, a major open problem in itself. Furthermore, detecting the presence of a backdoor in a FL model is unlikely assuming first-order oracles or polynomial time. We couple our theoretical results with a new family of backdoor attacks, which we refer to as edge-case backdoors. An edge-case backdoor forces a model to misclassify on seemingly easy inputs that are however unlikely to be part of the training, or test data, i.e., they live on the tail of the input distribution. We explain how these edge-case backdoors can lead to unsavory failures and may have serious repercussions on fairness. We further exhibit that, with careful tuning at the side of the adversary, one can insert them across a range of machine learning tasks (e.g., image classification, OCR, text prediction, sentiment analysis), and bypass state-of-the-art defense mechanisms. |
Certifiably Adversarially Robust Detection of Out-of-Distribution Data | https://papers.nips.cc/paper_files/paper/2020/hash/b90c46963248e6d7aab1e0f429743ca0-Abstract.html | Julian Bitterwolf, Alexander Meinke, Matthias Hein | https://papers.nips.cc/paper_files/paper/2020/hash/b90c46963248e6d7aab1e0f429743ca0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b90c46963248e6d7aab1e0f429743ca0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11073-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b90c46963248e6d7aab1e0f429743ca0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b90c46963248e6d7aab1e0f429743ca0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b90c46963248e6d7aab1e0f429743ca0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b90c46963248e6d7aab1e0f429743ca0-Supplemental.pdf | Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class. This is a problem in safety-critical applications since a reliable assessment of the uncertainty of a classifier is a key property, allowing to trigger human intervention or to transfer into a safe state. In this paper, we are aiming for certifiable worst case guarantees for OOD detection by enforcing not only low confidence at the OOD point but also in an $l_\infty$-ball around it. For this purpose, we use interval bound propagation (IBP) to upper bound the maximal confidence in the $l_\infty$-ball and minimize this upper bound during training time. We show that non-trivial bounds on the confidence for OOD data generalizing beyond the OOD dataset seen at training time are possible. Moreover, in contrast to certified adversarial robustness which typically comes with significant loss in prediction performance, certified guarantees for worst case OOD detection are possible without much loss in accuracy. |
Domain Generalization via Entropy Regularization | https://papers.nips.cc/paper_files/paper/2020/hash/b98249b38337c5088bbc660d8f872d6a-Abstract.html | Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, Dacheng Tao | https://papers.nips.cc/paper_files/paper/2020/hash/b98249b38337c5088bbc660d8f872d6a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b98249b38337c5088bbc660d8f872d6a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11074-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b98249b38337c5088bbc660d8f872d6a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b98249b38337c5088bbc660d8f872d6a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b98249b38337c5088bbc660d8f872d6a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b98249b38337c5088bbc660d8f872d6a-Supplemental.pdf | Domain generalization aims to learn from multiple source domains a predictive model that can generalize to unseen target domains. One essential problem in domain generalization is to learn discriminative domain-invariant features. To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains. However, adversarial training can only guarantee that the learned features have invariant marginal distributions, while the invariance of conditional distributions is more important for prediction in new domains. To ensure the conditional invariance of learned features, we propose an entropy regularization term that measures the dependency between the learned features and the class labels. Combined with the typical task-related loss, e.g., cross-entropy loss for classification, and adversarial loss for domain discrimination, our overall objective is guaranteed to learn conditional-invariant features across all source domains and thus can learn classifiers with better generalization capabilities.
We demonstrate the effectiveness of our method through comparison with state-of-the-art methods on both simulated and real-world datasets. Code is available at: https://github.com/sshan-zhao/DGviaER. |
Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels | https://papers.nips.cc/paper_files/paper/2020/hash/b9cfe8b6042cf759dc4c0cccb27a6737-Abstract.html | Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos J. Storkey | https://papers.nips.cc/paper_files/paper/2020/hash/b9cfe8b6042cf759dc4c0cccb27a6737-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/b9cfe8b6042cf759dc4c0cccb27a6737-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11075-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/b9cfe8b6042cf759dc4c0cccb27a6737-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/b9cfe8b6042cf759dc4c0cccb27a6737-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/b9cfe8b6042cf759dc4c0cccb27a6737-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/b9cfe8b6042cf759dc4c0cccb27a6737-Supplemental.pdf | Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task. Common approaches have taken the form of meta-learning: learning to learn on the new problem given the old. Following the recognition that meta-learning is implementing learning in a multi-level model, we present a Bayesian treatment for the meta-learning inner loop through the use of deep kernels. As a result, we can learn a kernel that transfers to new tasks; we call this Deep Kernel Transfer (DKT). This approach has many advantages: it is straightforward to implement as a single optimizer, provides uncertainty quantification, and does not require estimation of task-specific parameters. We empirically demonstrate that DKT outperforms several state-of-the-art algorithms in few-shot classification, and is the state of the art for cross-domain adaptation and regression. We conclude that complex meta-learning routines can be replaced by a simpler Bayesian model without loss of accuracy. |
Skeleton-bridged Point Completion: From Global Inference to Local Adjustment | https://papers.nips.cc/paper_files/paper/2020/hash/ba036d228858d76fb89189853a5503bd-Abstract.html | Yinyu Nie, Yiqun Lin, Xiaoguang Han, Shihui Guo, Jian Chang, Shuguang Cui, Jian.J Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/ba036d228858d76fb89189853a5503bd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba036d228858d76fb89189853a5503bd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11076-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba036d228858d76fb89189853a5503bd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba036d228858d76fb89189853a5503bd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba036d228858d76fb89189853a5503bd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba036d228858d76fb89189853a5503bd-Supplemental.pdf | Point completion refers to completing the missing geometries of objects from partial point clouds. Existing works usually estimate the missing shape by decoding a latent feature encoded from the input points. However, real-world objects usually have diverse topologies and surface details, which a latent feature may fail to represent well enough to recover a clean and complete surface. To this end, we propose a skeleton-bridged point completion network (SK-PCN) for shape completion. Given a partial scan, our method first predicts its 3D skeleton to obtain the global structure, and completes the surface by learning displacements from skeletal points. We decouple the shape completion into structure estimation and surface reconstruction, which eases the learning difficulty and helps our method obtain on-surface details. Besides, considering the features missed when encoding the input points, SK-PCN adopts a local adjustment strategy that merges the input point cloud with our predictions for surface refinement. Compared with previous methods, our skeleton-bridged manner better supports point normal estimation to obtain the full surface mesh beyond point clouds. The qualitative and quantitative experiments on both point cloud and mesh completion show that our approach outperforms the existing methods on various object categories. |
Compressing Images by Encoding Their Latent Representations with Relative Entropy Coding | https://papers.nips.cc/paper_files/paper/2020/hash/ba053350fe56ed93e64b3e769062b680-Abstract.html | Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato | https://papers.nips.cc/paper_files/paper/2020/hash/ba053350fe56ed93e64b3e769062b680-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba053350fe56ed93e64b3e769062b680-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11077-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba053350fe56ed93e64b3e769062b680-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba053350fe56ed93e64b3e769062b680-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba053350fe56ed93e64b3e769062b680-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba053350fe56ed93e64b3e769062b680-Supplemental.zip | Variational Autoencoders (VAEs) have seen widespread use in learned image compression. They are used to learn expressive latent representations on which downstream compression methods can operate with high efficiency. Recently proposed 'bits-back' methods can indirectly encode the latent representation of images with codelength close to the relative entropy between the latent posterior and the prior. However, due to the underlying algorithm, these methods can only be used for lossless compression, and they only achieve their nominal efficiency when compressing multiple images simultaneously; they are inefficient for compressing single images. As an alternative, we propose a novel method, Relative Entropy Coding (REC), that can directly encode the latent representation with codelength close to the relative entropy for single images, supported by our empirical results obtained on the Cifar10, ImageNet32 and Kodak datasets. Moreover, unlike previous bits-back methods, REC is immediately applicable to lossy compression, where it is competitive with the state-of-the-art on the Kodak dataset. |
Improved Guarantees for k-means++ and k-means++ Parallel | https://papers.nips.cc/paper_files/paper/2020/hash/ba304f3809ed31d0ad97b5a2b5df2a39-Abstract.html | Konstantin Makarychev, Aravind Reddy, Liren Shan | https://papers.nips.cc/paper_files/paper/2020/hash/ba304f3809ed31d0ad97b5a2b5df2a39-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba304f3809ed31d0ad97b5a2b5df2a39-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11078-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba304f3809ed31d0ad97b5a2b5df2a39-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba304f3809ed31d0ad97b5a2b5df2a39-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba304f3809ed31d0ad97b5a2b5df2a39-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba304f3809ed31d0ad97b5a2b5df2a39-Supplemental.pdf | In this paper, we study k-means++ and k-means||, the two most popular algorithms for the classic k-means clustering problem. We provide novel analyses and show improved approximation and bi-criteria approximation guarantees for k-means++ and k-means||. Our results give a better theoretical justification for why these algorithms perform extremely well in practice. |
Sparse Spectrum Warped Input Measures for Nonstationary Kernel Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ba3c95c2962d3aab2f6e667932daa3c5-Abstract.html | Anthony Tompkins, Rafael Oliveira, Fabio T. Ramos | https://papers.nips.cc/paper_files/paper/2020/hash/ba3c95c2962d3aab2f6e667932daa3c5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba3c95c2962d3aab2f6e667932daa3c5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11079-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba3c95c2962d3aab2f6e667932daa3c5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba3c95c2962d3aab2f6e667932daa3c5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba3c95c2962d3aab2f6e667932daa3c5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba3c95c2962d3aab2f6e667932daa3c5-Supplemental.zip | We establish a general form of explicit, input-dependent, measure-valued warpings for learning nonstationary kernels. While stationary kernels are ubiquitous and simple to use, they struggle to adapt to functions that vary in smoothness with respect to the input. The proposed learning algorithm warps inputs as conditional Gaussian measures that control the smoothness of a standard stationary kernel. This construction allows us to capture non-stationary patterns in the data and provides intuitive inductive bias. The resulting method is based on sparse spectrum Gaussian processes, enabling closed-form solutions, and is extensible to a stacked construction to capture more complex patterns. The method is extensively validated alongside related algorithms on synthetic and real world datasets. We demonstrate a remarkable efficiency in the number of parameters of the warping functions in learning problems with both small and large data regimes. |
An Efficient Adversarial Attack for Tree Ensembles | https://papers.nips.cc/paper_files/paper/2020/hash/ba3e9b6a519cfddc560b5d53210df1bd-Abstract.html | Chong Zhang, Huan Zhang, Cho-Jui Hsieh | https://papers.nips.cc/paper_files/paper/2020/hash/ba3e9b6a519cfddc560b5d53210df1bd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11080-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba3e9b6a519cfddc560b5d53210df1bd-Supplemental.pdf | We study the problem of efficient adversarial attacks on tree based ensembles such as gradient boosting decision trees (GBDTs) and random forests (RFs). Since these models are non-continuous step functions and gradient does not exist, most existing efficient adversarial attacks are not applicable. Although decision-based black-box attacks can be applied, they cannot utilize the special structure of trees. In our work, we transform the attack problem into a discrete search problem specially designed for tree ensembles, where the goal is to find a valid ``leaf tuple'' that leads to mis-classification while having the shortest distance to the original input. With this formulation, we show that a simple yet effective greedy algorithm can be applied to iteratively optimize the adversarial example by moving the leaf tuple to its neighborhood within hamming distance 1. Experimental results on several large GBDT and RF models with up to hundreds of trees demonstrate that our method can be thousands of times faster than the previous mixed-integer linear programming (MILP) based approach, while also providing smaller (better) adversarial examples than decision-based black-box attacks on general $\ell_p$ ($p=1, 2, \infty$) norm perturbations. |
Learning Continuous System Dynamics from Irregularly-Sampled Partial Observations | https://papers.nips.cc/paper_files/paper/2020/hash/ba4849411c8bbdd386150e5e32204198-Abstract.html | Zijie Huang, Yizhou Sun, Wei Wang | https://papers.nips.cc/paper_files/paper/2020/hash/ba4849411c8bbdd386150e5e32204198-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba4849411c8bbdd386150e5e32204198-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11081-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba4849411c8bbdd386150e5e32204198-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba4849411c8bbdd386150e5e32204198-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba4849411c8bbdd386150e5e32204198-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba4849411c8bbdd386150e5e32204198-Supplemental.pdf | Many real-world systems, such as moving planets, can be considered as multi-agent dynamic systems, where objects interact with each other and co-evolve over time. Such dynamics is usually difficult to capture, and understanding and predicting the dynamics based on observed trajectories of objects become a critical research problem in many domains. Most existing algorithms, however, assume the observations are regularly sampled and all the objects can be fully observed at each sampling time, which is impractical for many applications. In this paper, we propose to learn system dynamics from irregularly-sampled and partial observations with underlying graph structure for the first time. To tackle the above challenge, we present LG-ODE, a latent ordinary differential equation generative model for modeling multi-agent dynamic systems with known graph structure. It can simultaneously learn the embedding of high dimensional trajectories and infer continuous latent system dynamics. Our model employs a novel encoder parameterized by a graph neural network that can infer initial states in an unsupervised way from irregularly-sampled partial observations of structural objects and utilizes a neural ODE to infer arbitrarily complex continuous-time latent dynamics. Experiments on motion capture, spring system, and charged particle datasets demonstrate the effectiveness of our approach. |
Online Bayesian Persuasion | https://papers.nips.cc/paper_files/paper/2020/hash/ba5451d3c91a0f982f103cdbe249bc78-Abstract.html | Matteo Castiglioni, Andrea Celli, Alberto Marchesi, Nicola Gatti | https://papers.nips.cc/paper_files/paper/2020/hash/ba5451d3c91a0f982f103cdbe249bc78-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba5451d3c91a0f982f103cdbe249bc78-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11082-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba5451d3c91a0f982f103cdbe249bc78-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba5451d3c91a0f982f103cdbe249bc78-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba5451d3c91a0f982f103cdbe249bc78-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba5451d3c91a0f982f103cdbe249bc78-Supplemental.pdf | In Bayesian persuasion, an informed sender has to design a signaling scheme that discloses the right amount of information so as to influence the behavior of a self-interested receiver. This kind of strategic interaction is ubiquitous in real economic scenarios. However, the original model by Kamenica and Gentzkow makes some stringent assumptions which limit its applicability in practice. One of the most limiting assumptions is arguably that, in order to compute an optimal signaling scheme, the sender is usually required to know the receiver's utility function. In this paper, we relax this assumption through an online learning framework in which the sender faces a receiver with unknown type. At each round, the receiver's type is chosen adversarially from a finite set of possible types. We are interested in no-regret algorithms prescribing a signaling scheme at each round of the repeated interaction with performances close to that of the best-in-hindsight signaling scheme. First, we prove a hardness result on the per-iteration running time required to achieve the no-regret property. Then, we provide algorithms for the full and partial information model which exhibit regret sublinear in the number of rounds and polynomial in the parameters of the game. |
Robust Pre-Training by Adversarial Contrastive Learning | https://papers.nips.cc/paper_files/paper/2020/hash/ba7e36c43aff315c00ec2b8625e3b719-Abstract.html | Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang | https://papers.nips.cc/paper_files/paper/2020/hash/ba7e36c43aff315c00ec2b8625e3b719-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba7e36c43aff315c00ec2b8625e3b719-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11083-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba7e36c43aff315c00ec2b8625e3b719-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba7e36c43aff315c00ec2b8625e3b719-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba7e36c43aff315c00ec2b8625e3b719-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba7e36c43aff315c00ec2b8625e3b719-Supplemental.pdf | Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations. Our approach leverages a recent contrastive learning framework, which learns representations by maximizing feature consistency under differently augmented views. This fits particularly well with the goal of adversarial robustness, as one cause of adversarial fragility is the lack of feature invariance, i.e., small input perturbations can result in undesirable large changes in features or even predicted labels. We explore various options to formulate the contrastive task, and demonstrate that by injecting adversarial perturbations, contrastive pre-training can lead to models that are both label-efficient and robust. We empirically evaluate the proposed Adversarial Contrastive Learning (ACL) and show it can consistently outperform existing methods. For example, on the CIFAR-10 dataset, ACL outperforms the previous state-of-the-art unsupervised robust pre-training approach by 2.99% on robust accuracy and 2.14% on standard accuracy. We further demonstrate that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available. Our codes and pre-trained models have been released at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning. |
Random Walk Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/ba95d78a7c942571185308775a97a3a0-Abstract.html | Giannis Nikolentzos, Michalis Vazirgiannis | https://papers.nips.cc/paper_files/paper/2020/hash/ba95d78a7c942571185308775a97a3a0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba95d78a7c942571185308775a97a3a0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11084-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba95d78a7c942571185308775a97a3a0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba95d78a7c942571185308775a97a3a0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba95d78a7c942571185308775a97a3a0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba95d78a7c942571185308775a97a3a0-Supplemental.pdf | In recent years, graph neural networks (GNNs) have become the de facto tool for performing machine learning tasks on graphs. Most GNNs belong to the family of message passing neural networks (MPNNs). These models employ an iterative neighborhood aggregation scheme to update vertex representations. Then, to compute vector representations of graphs, they aggregate the representations of the vertices using some permutation invariant function. One would expect the hidden layers of a GNN to be composed of parameters that take the form of graphs. However, this is not the case for MPNNs since their update procedure is parameterized by fully-connected layers. In this paper, we propose a more intuitive and transparent architecture for graph-structured data, so-called Random Walk Graph Neural Network (RWNN). The first layer of the model consists of a number of trainable ``hidden graphs'' which are compared against the input graphs using a random walk kernel to produce graph representations. These representations are then passed on to a fully-connected neural network which produces the output. The employed random walk kernel is differentiable, and therefore, the proposed model is end-to-end trainable. We demonstrate the model's transparency on synthetic datasets. Furthermore, we empirically evaluate the model on graph classification datasets and show that it achieves competitive performance. |
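The core operation sketched in the abstract above is comparing an input graph against a small trainable "hidden graph" with a random walk kernel. Below is a hedged illustration of the standard truncated random walk kernel computed on the direct product (Kronecker) graph; the truncation length and the per-length weights are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

# Sketch of a truncated random walk kernel between an input graph and a
# small "hidden" graph, evaluated on the direct product (Kronecker) graph.
# Summing the entries of (A1 (x) A2)^p counts pairs of length-p walks, one
# in each graph; the weights over walk lengths are an illustrative choice.

def random_walk_kernel(A_input, A_hidden, max_len=3, weights=None):
    weights = np.ones(max_len + 1) if weights is None else np.asarray(weights)
    A_prod = np.kron(A_input, A_hidden)      # adjacency of the product graph
    walks = np.eye(A_prod.shape[0])
    value = weights[0] * walks.sum()
    for p in range(1, max_len + 1):
        walks = walks @ A_prod               # walks of length p
        value += weights[p] * walks.sum()
    return value

A_input = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle
A_hidden = np.array([[0, 1], [1, 0]], dtype=float)                  # single edge
print(random_walk_kernel(A_input, A_hidden))
```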
Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling | https://papers.nips.cc/paper_files/paper/2020/hash/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-Abstract.html | Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, Panayotis Mertikopoulos | https://papers.nips.cc/paper_files/paper/2020/hash/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11085-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/ba9a56ce0a9bfa26e8ed9e10b2cc8f46-Supplemental.pdf | Owing to their stability and convergence speed, extragradient methods have become a staple for solving large-scale saddle-point problems in machine learning. The basic premise of these algorithms is the use of an extrapolation step before performing an update; thanks to this exploration step, extra-gradient methods overcome many of the non-convergence issues that plague gradient descent/ascent schemes. On the other hand, as we show in this paper, running vanilla extragradient with stochastic gradients may jeopardize its convergence, even in simple bilinear models. To overcome this failure, we investigate a double stepsize extragradient algorithm where the exploration step evolves at a more aggressive time-scale compared to the update step. We show that this modification allows the method to converge even with stochastic gradients, and we derive sharp convergence rates under an error bound condition. |
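A minimal sketch of the double-stepsize idea described in the abstract above, on a toy bilinear saddle point: the extrapolation (exploration) step uses a stepsize that decays more slowly than the update step. The specific schedules (1/sqrt(t) and 1/t) and the noise model are illustrative assumptions, not the paper's prescribed choices.

```python
import numpy as np

# Toy double-stepsize stochastic extragradient on min_x max_y x^T A y.
# The exploration stepsize gamma_t decays more slowly (more aggressively)
# than the update stepsize eta_t; both schedules below are illustrative.

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
x, y = rng.standard_normal(d), rng.standard_normal(d)

def noisy_field(x, y, noise=0.1):
    """Stochastic estimate of (grad_x f, -grad_y f) for f(x, y) = x^T A y."""
    gx = A @ y + noise * rng.standard_normal(d)
    gy = -(A.T @ x) + noise * rng.standard_normal(d)
    return gx, gy

for t in range(1, 10001):
    gamma_t, eta_t = 1.0 / np.sqrt(t), 1.0 / t
    gx, gy = noisy_field(x, y)
    x_lead, y_lead = x - gamma_t * gx, y - gamma_t * gy   # exploration step
    gx, gy = noisy_field(x_lead, y_lead)
    x, y = x - eta_t * gx, y - eta_t * gy                 # conservative update

print("distance to the saddle point (0, 0):",
      np.linalg.norm(np.concatenate([x, y])))
```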
Fast and Accurate $k$-means++ via Rejection Sampling | https://papers.nips.cc/paper_files/paper/2020/hash/babcff88f8be8c4795bd6f0f8cccca61-Abstract.html | Vincent Cohen-Addad, Silvio Lattanzi, Ashkan Norouzi-Fard, Christian Sohler, Ola Svensson | https://papers.nips.cc/paper_files/paper/2020/hash/babcff88f8be8c4795bd6f0f8cccca61-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/babcff88f8be8c4795bd6f0f8cccca61-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11086-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/babcff88f8be8c4795bd6f0f8cccca61-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/babcff88f8be8c4795bd6f0f8cccca61-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/babcff88f8be8c4795bd6f0f8cccca61-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/babcff88f8be8c4795bd6f0f8cccca61-Supplemental.pdf | $k$-means++ \cite{arthur2007k} is a widely used clustering algorithm that is easy to implement, has nice theoretical guarantees and strong empirical performance. Despite its wide adoption, $k$-means++ sometimes suffers from being slow on large data-sets so a natural question has been to obtain more efficient algorithms with similar guarantees. In this paper, we present such a near linear time algorithm for $k$-means++ seeding. Interestingly our algorithm obtains the same theoretical guarantees as $k$-means++ and significantly improves earlier results on fast $k$-means++ seeding. Moreover, we show empirically that our algorithm is significantly faster than $k$-means++ and obtains solutions of equivalent quality. |
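For reference, here is a short sketch of the standard quadratic-time k-means++ (D^2) seeding that the paper above accelerates; the rejection-sampling speedup itself is not reproduced here.

```python
import numpy as np

# Standard k-means++ seeding: the first center is uniform, each subsequent
# center is sampled with probability proportional to its squared distance
# to the nearest center chosen so far. This is the baseline the paper
# speeds up to near-linear time via rejection sampling.

def kmeans_pp_seeding(X, k, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    d2 = np.sum((X - centers[0]) ** 2, axis=1)
    for _ in range(k - 1):
        idx = rng.choice(n, p=d2 / d2.sum())      # D^2 sampling
        centers.append(X[idx])
        d2 = np.minimum(d2, np.sum((X - X[idx]) ** 2, axis=1))
    return np.stack(centers)

X = np.random.default_rng(1).standard_normal((1000, 2))
print(kmeans_pp_seeding(X, k=5, seed=0).shape)    # (5, 2)
```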
Variational Amodal Object Completion | https://papers.nips.cc/paper_files/paper/2020/hash/bacadc62d6e67d7897cef027fa2d416c-Abstract.html | Huan Ling, David Acuna, Karsten Kreis, Seung Wook Kim, Sanja Fidler | https://papers.nips.cc/paper_files/paper/2020/hash/bacadc62d6e67d7897cef027fa2d416c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bacadc62d6e67d7897cef027fa2d416c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11087-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bacadc62d6e67d7897cef027fa2d416c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bacadc62d6e67d7897cef027fa2d416c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bacadc62d6e67d7897cef027fa2d416c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bacadc62d6e67d7897cef027fa2d416c-Supplemental.pdf | In images of complex scenes, objects are often occluding each other which makes perception tasks such as object detection and tracking, or robotic control tasks such as planning, challenging. To facilitate downstream tasks, it is thus important to reason about the full extent of objects, i.e., seeing behind occlusion, typically referred to as amodal instance completion. In this paper, we propose a variational generative framework for amodal completion, referred to as AMODAL-VAE, which does not require any amodal labels at training time, as it is able to utilize widely available object instance masks. We showcase our approach on the downstream task of scene editing where the user is presented with interactive tools to complete and erase objects in photographs. Experiments on complex street scenes demonstrate state-of-the-art performance in amodal mask completion and showcase high-quality scene editing results. Interestingly, a user study shows that humans prefer object completions inferred by our model to the human-labeled ones. |
When Counterpoint Meets Chinese Folk Melodies | https://papers.nips.cc/paper_files/paper/2020/hash/bae876e53dab654a3d9d9768b1b7b91a-Abstract.html | Nan Jiang, Sheng Jin, Zhiyao Duan, Changshui Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/bae876e53dab654a3d9d9768b1b7b91a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bae876e53dab654a3d9d9768b1b7b91a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11088-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bae876e53dab654a3d9d9768b1b7b91a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bae876e53dab654a3d9d9768b1b7b91a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bae876e53dab654a3d9d9768b1b7b91a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bae876e53dab654a3d9d9768b1b7b91a-Supplemental.zip | Counterpoint is an important concept in Western music theory. In the past century, there has been significant interest in incorporating counterpoint into Chinese folk music composition. In this paper, we propose a reinforcement learning-based system, named FolkDuet, for online countermelody generation for Chinese folk melodies. With no existing data of Chinese folk duets, FolkDuet employs two reward models based on out-of-domain data, i.e., Bach chorales, and monophonic Chinese folk melodies. An interaction reward model is trained on the duets formed from outer parts of Bach chorales to model counterpoint interaction, while a style reward model is trained on monophonic melodies of Chinese folk songs to model melodic patterns. With both rewards, the generator of FolkDuet is trained to generate countermelodies while maintaining the Chinese folk style. The entire generation process is performed in an online fashion, allowing real-time interactive human-machine duet improvisation. Experiments show that the proposed algorithm achieves better subjective and objective results than the baselines. |
Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces | https://papers.nips.cc/paper_files/paper/2020/hash/bb073f2855d769be5bf191f6378f7150-Abstract.html | Hung Tran-The, Sunil Gupta, Santu Rana, Huong Ha, Svetha Venkatesh | https://papers.nips.cc/paper_files/paper/2020/hash/bb073f2855d769be5bf191f6378f7150-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bb073f2855d769be5bf191f6378f7150-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11089-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bb073f2855d769be5bf191f6378f7150-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bb073f2855d769be5bf191f6378f7150-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bb073f2855d769be5bf191f6378f7150-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bb073f2855d769be5bf191f6378f7150-Supplemental.zip | Bayesian optimisation (BO) is a popular method for efficient optimisation of expensive black-box functions. Traditionally, BO assumes that the search space is known. However, in many problems, this assumption does not hold. To address this, we propose a novel BO algorithm which expands (and shifts) the search space over iterations by controlling the expansion rate through a \emph{hyperharmonic series}. Further, we propose another variant of our algorithm that scales to high dimensions. We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates. Our experiments with synthetic and real-world optimisation tasks demonstrate the superiority of our algorithms over the current state-of-the-art methods for Bayesian optimisation in unknown search spaces. |
Universal Domain Adaptation through Self Supervision | https://papers.nips.cc/paper_files/paper/2020/hash/bb7946e7d85c81a9e69fee1cea4a087c-Abstract.html | Kuniaki Saito, Donghyun Kim, Stan Sclaroff, Kate Saenko | https://papers.nips.cc/paper_files/paper/2020/hash/bb7946e7d85c81a9e69fee1cea4a087c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bb7946e7d85c81a9e69fee1cea4a087c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11090-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bb7946e7d85c81a9e69fee1cea4a087c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bb7946e7d85c81a9e69fee1cea4a087c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bb7946e7d85c81a9e69fee1cea4a087c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bb7946e7d85c81a9e69fee1cea4a087c-Supplemental.pdf | Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation approach that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). Our approach combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings. |
Patch2Self: Denoising Diffusion MRI with Self-Supervised Learning | https://papers.nips.cc/paper_files/paper/2020/hash/bc047286b224b7bfa73d4cb02de1238d-Abstract.html | Shreyas Fadnavis, Joshua Batson, Eleftherios Garyfallidis | https://papers.nips.cc/paper_files/paper/2020/hash/bc047286b224b7bfa73d4cb02de1238d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11091-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bc047286b224b7bfa73d4cb02de1238d-Supplemental.pdf | Diffusion-weighted magnetic resonance imaging (DWI) is the only non-invasive method for quantifying microstructure and reconstructing white-matter pathways in the living human brain. Fluctuations from multiple sources create significant noise in DWI data which must be suppressed before subsequent microstructure analysis. We introduce a self-supervised learning method for denoising DWI data, Patch2Self, which uses the entire volume to learn a full-rank locally linear denoiser for that volume. By taking advantage of the oversampled q-space of DWI data, Patch2Self can separate structure from noise without requiring an explicit model for either. We demonstrate the effectiveness of Patch2Self via quantitative and qualitative improvements in microstructure modeling, tracking (via fiber bundle coherency) and model estimation relative to other unsupervised methods on real and simulated data. |
Stochastic Normalization | https://papers.nips.cc/paper_files/paper/2020/hash/bc573864331a9e42e4511de6f678aa83-Abstract.html | Zhi Kou, Kaichao You, Mingsheng Long, Jianmin Wang | https://papers.nips.cc/paper_files/paper/2020/hash/bc573864331a9e42e4511de6f678aa83-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bc573864331a9e42e4511de6f678aa83-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11092-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bc573864331a9e42e4511de6f678aa83-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bc573864331a9e42e4511de6f678aa83-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bc573864331a9e42e4511de6f678aa83-Review.html | null | Fine-tuning pre-trained deep networks on a small dataset is an important component in the deep learning pipeline. A critical problem in fine-tuning is how to avoid over-fitting when data are limited. Existing efforts work from two aspects: (1) impose regularization on parameters or features; (2) transfer prior knowledge to fine-tuning by reusing pre-trained parameters. In this paper, we take an alternative approach by refactoring the widely used Batch Normalization (BN) module to mitigate over-fitting. We propose a two-branch design with one branch normalized by mini-batch statistics and the other branch normalized by moving statistics. During training, two branches are stochastically selected to avoid over-depending on some sample statistics, resulting in a strong regularization effect, which we interpret as ``architecture regularization.'' The resulting method is dubbed stochastic normalization (\textbf{StochNorm}). With the two-branch architecture, it naturally incorporates pre-trained moving statistics in BN layers during fine-tuning, exploiting more prior knowledge of pre-trained networks. Extensive empirical experiments show that StochNorm is a powerful tool to avoid over-fitting in fine-tuning with small datasets. Besides, StochNorm is readily pluggable in modern CNN backbones. It is complementary to other fine-tuning methods and can work together to achieve stronger regularization effect. |
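A hedged sketch of the two-branch normalization idea in the abstract above: during training, activations are normalized either by mini-batch statistics or by moving statistics, chosen at random. The per-channel selection, the 0.5 probability, and the moving-average update shown below are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

# Toy forward pass of a two-branch normalization layer: branch 0 uses
# mini-batch statistics, branch 1 uses moving statistics, and the branch is
# chosen at random per channel during training (details are assumptions).

def stoch_norm_forward(x, running_mean, running_var, p=0.5,
                       momentum=0.1, eps=1e-5, training=True, seed=None):
    """x: activations of shape (N, C); returns normalized activations."""
    if not training:
        return (x - running_mean) / np.sqrt(running_var + eps)
    rng = np.random.default_rng(seed)
    batch_mean, batch_var = x.mean(axis=0), x.var(axis=0)
    use_moving = rng.random(x.shape[1]) < p            # per-channel choice
    mean = np.where(use_moving, running_mean, batch_mean)
    var = np.where(use_moving, running_var, batch_var)
    # Moving statistics are updated as in ordinary batch normalization.
    running_mean[:] = (1 - momentum) * running_mean + momentum * batch_mean
    running_var[:] = (1 - momentum) * running_var + momentum * batch_var
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).standard_normal((32, 8))
rm, rv = np.zeros(8), np.ones(8)
print(stoch_norm_forward(x, rm, rv, seed=0).shape)     # (32, 8)
```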
Constrained episodic reinforcement learning in concave-convex and knapsack settings | https://papers.nips.cc/paper_files/paper/2020/hash/bc6d753857fe3dd4275dff707dedf329-Abstract.html | Kianté Brantley, Miro Dudik, Thodoris Lykouris, Sobhan Miryoosefi, Max Simchowitz, Aleksandrs Slivkins, Wen Sun | https://papers.nips.cc/paper_files/paper/2020/hash/bc6d753857fe3dd4275dff707dedf329-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bc6d753857fe3dd4275dff707dedf329-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11093-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bc6d753857fe3dd4275dff707dedf329-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bc6d753857fe3dd4275dff707dedf329-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bc6d753857fe3dd4275dff707dedf329-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bc6d753857fe3dd4275dff707dedf329-Supplemental.pdf | We propose an algorithm for tabular episodic reinforcement learning with constraints. We provide a modular analysis with strong theoretical guarantees for settings with concave rewards and convex constraints, and for settings with hard constraints (knapsacks). Most of the previous work in constrained reinforcement learning is limited to linear constraints, and the remaining work focuses on either the feasibility question or settings with a single episode. Our experiments demonstrate that the proposed algorithm significantly outperforms these approaches in existing constrained episodic environments. |
On Learning Ising Models under Huber's Contamination Model | https://papers.nips.cc/paper_files/paper/2020/hash/bca382c81484983f2d437f97d1e141f3-Abstract.html | Adarsh Prasad, Vishwak Srinivasan, Sivaraman Balakrishnan, Pradeep Ravikumar | https://papers.nips.cc/paper_files/paper/2020/hash/bca382c81484983f2d437f97d1e141f3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bca382c81484983f2d437f97d1e141f3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11094-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bca382c81484983f2d437f97d1e141f3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bca382c81484983f2d437f97d1e141f3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bca382c81484983f2d437f97d1e141f3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bca382c81484983f2d437f97d1e141f3-Supplemental.pdf | We study the problem of learning Ising models in a setting where some of the samples from the underlying distribution can be arbitrarily corrupted. In such a setup, we aim to design statistically optimal estimators in a high-dimensional scaling in which the number of nodes p, the number of edges k and the maximal node degree d are allowed to increase to infinity as a function of the sample size n. Our analysis is based on exploiting moments of the underlying distribution, coupled with novel reductions to univariate estimation. Our proposed estimators achieve an optimal dimension independent dependence on the fraction of corrupted data in the contaminated setting, while also simultaneously achieving high-probability error guarantees with optimal sample-complexity. We corroborate our theoretical results by simulations. |
Cross-validation Confidence Intervals for Test Error | https://papers.nips.cc/paper_files/paper/2020/hash/bce9abf229ffd7e570818476ee5d7dde-Abstract.html | Pierre Bayle, Alexandre Bayle, Lucas Janson, Lester Mackey | https://papers.nips.cc/paper_files/paper/2020/hash/bce9abf229ffd7e570818476ee5d7dde-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bce9abf229ffd7e570818476ee5d7dde-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11095-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bce9abf229ffd7e570818476ee5d7dde-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bce9abf229ffd7e570818476ee5d7dde-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bce9abf229ffd7e570818476ee5d7dde-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bce9abf229ffd7e570818476ee5d7dde-Supplemental.pdf | This work develops central limit theorems for cross-validation and consistent estimators of the asymptotic variance under weak stability conditions on the learning algorithm. Together, these results provide practical, asymptotically-exact confidence intervals for k-fold test error and valid, powerful hypothesis tests of whether one learning algorithm has smaller k-fold test error than another. These results are also the first of their kind for the popular choice of leave-one-out cross-validation. In our experiments with diverse learning algorithms, the resulting intervals and tests outperform the most popular alternative methods from the literature. |
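For intuition, here is a hedged plug-in version of a CLT-style interval for k-fold test error (pooled held-out losses, mean plus or minus z times the standard error); the paper's variance estimator and the stability conditions that make such intervals valid are more refined than this sketch.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Naive CLT-style 95% confidence interval for 5-fold test error: pool the
# per-example held-out 0-1 losses and use mean +/- z * sd / sqrt(n). The
# paper develops the theory that justifies (and refines) intervals of this
# shape under weak algorithmic stability conditions.

X, y = make_classification(n_samples=2000, random_state=0)
losses = np.empty(len(y), dtype=float)
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    losses[test_idx] = (model.predict(X[test_idx]) != y[test_idx]).astype(float)

err = losses.mean()
half_width = stats.norm.ppf(0.975) * losses.std(ddof=1) / np.sqrt(len(losses))
print(f"5-fold test error: {err:.3f} +/- {half_width:.3f}")
```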
DeepSVG: A Hierarchical Generative Network for Vector Graphics Animation | https://papers.nips.cc/paper_files/paper/2020/hash/bcf9d6bd14a2095866ce8c950b702341-Abstract.html | Alexandre Carlier, Martin Danelljan, Alexandre Alahi, Radu Timofte | https://papers.nips.cc/paper_files/paper/2020/hash/bcf9d6bd14a2095866ce8c950b702341-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bcf9d6bd14a2095866ce8c950b702341-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11096-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bcf9d6bd14a2095866ce8c950b702341-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bcf9d6bd14a2095866ce8c950b702341-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bcf9d6bd14a2095866ce8c950b702341-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bcf9d6bd14a2095866ce8c950b702341-Supplemental.zip | Scalable Vector Graphics (SVG) are ubiquitous in modern 2D interfaces due to their ability to scale to different resolutions. However, despite the success of deep learning-based models applied to rasterized images, the problem of vector graphics representation learning and generation remains largely unexplored. In this work, we propose a novel hierarchical generative network, called DeepSVG, for complex SVG icons generation and interpolation. Our architecture effectively disentangles high-level shapes from the low-level commands that encode the shape itself. The network directly predicts a set of shapes in a non-autoregressive fashion. We introduce the task of complex SVG icons generation by releasing a new large-scale dataset along with an open-source library for SVG manipulation. We demonstrate that our network learns to accurately reconstruct diverse vector graphics, and can serve as a powerful animation tool by performing interpolations and other latent space operations. Our code is available at https://github.com/alexandre01/deepsvg. |
Bayesian Attention Modules | https://papers.nips.cc/paper_files/paper/2020/hash/bcff3f632fd16ff099a49c2f0932b47a-Abstract.html | Xinjie Fan, Shujian Zhang, Bo Chen, Mingyuan Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/bcff3f632fd16ff099a49c2f0932b47a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bcff3f632fd16ff099a49c2f0932b47a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11097-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bcff3f632fd16ff099a49c2f0932b47a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bcff3f632fd16ff099a49c2f0932b47a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bcff3f632fd16ff099a49c2f0932b47a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bcff3f632fd16ff099a49c2f0932b47a-Supplemental.pdf | Attention modules, as simple and effective tools, have not only enabled deep neural networks to achieve state-of-the-art results in many domains, but also enhanced their interpretability. Most current models use deterministic attention modules due to their simplicity and ease of optimization. Stochastic counterparts, on the other hand, are less popular despite their potential benefits. The main reason is that stochastic attention often introduces optimization issues or requires significant model changes. In this paper, we propose a scalable stochastic version of attention that is easy to implement and optimize. We construct simplex-constrained attention distributions by normalizing reparameterizable distributions, making the training process differentiable. We learn their parameters in a Bayesian framework where a data-dependent prior is introduced for regularization. We apply the proposed stochastic attention modules to various attention-based models, with applications to graph node classification, visual question answering, image captioning, machine translation, and language understanding. Our experiments show the proposed method brings consistent improvements over the corresponding baselines. |
Robustness Analysis of Non-Convex Stochastic Gradient Descent using Biased Expectations | https://papers.nips.cc/paper_files/paper/2020/hash/bd4d08cd70f4be1982372107b3b448ef-Abstract.html | Kevin Scaman, Cedric Malherbe | https://papers.nips.cc/paper_files/paper/2020/hash/bd4d08cd70f4be1982372107b3b448ef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bd4d08cd70f4be1982372107b3b448ef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11098-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bd4d08cd70f4be1982372107b3b448ef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bd4d08cd70f4be1982372107b3b448ef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bd4d08cd70f4be1982372107b3b448ef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bd4d08cd70f4be1982372107b3b448ef-Supplemental.pdf | This work proposes a novel analysis of stochastic gradient descent (SGD) for non-convex and smooth optimization. Our analysis sheds light on the impact of the probability distribution of the gradient noise on the convergence rate of the norm of the gradient. In the case of sub-Gaussian and centered noise, we prove that, with probability $1-\delta$, the number of iterations to reach a precision $\varepsilon$ for the squared gradient norm is $O(\varepsilon^{-2}\ln(1/\delta))$. In the case of centered and integrable heavy-tailed noise, we show that, while the expectation of the iterates may be infinite, the squared gradient norm still converges with probability $1-\delta$ in $O(\varepsilon^{-p}\delta^{-q})$ iterations, where $p,q > 2$. This result shows that heavy-tailed noise on the gradient slows down the convergence of SGD without preventing it, proving that SGD is robust to gradient noise with unbounded variance, a setting of interest for Deep Learning. In addition, it indicates that choosing a step size proportional to $T^{-1/b}$ where $b$ is the tail-parameter of the noise and $T$ is the number of iterations leads to the best convergence rates. Both results are simple corollaries of a unified analysis using the novel concept of biased expectations, a simple and intuitive mathematical tool to obtain concentration inequalities. Using this concept, we propose a new quantity to measure the amount of noise added to the gradient, and discuss its value in multiple scenarios. |
SoftFlow: Probabilistic Framework for Normalizing Flow on Manifolds | https://papers.nips.cc/paper_files/paper/2020/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html | Hyeongju Kim, Hyeonseung Lee, Woo Hyun Kang, Joun Yeop Lee, Nam Soo Kim | https://papers.nips.cc/paper_files/paper/2020/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bdbca288fee7f92f2bfa9f7012727740-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11099-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bdbca288fee7f92f2bfa9f7012727740-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bdbca288fee7f92f2bfa9f7012727740-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bdbca288fee7f92f2bfa9f7012727740-Supplemental.zip | Flow-based generative models are composed of invertible transformations between two random variables of the same dimension. Therefore, flow-based models cannot be adequately trained if the dimension of the data distribution does not match that of the underlying target distribution. In this paper, we propose SoftFlow, a probabilistic framework for training normalizing flows on manifolds. To sidestep the dimension mismatch problem, SoftFlow estimates a conditional distribution of the perturbed input data instead of learning the data distribution directly. We experimentally show that SoftFlow can capture the innate structure of the manifold data and generate high-quality samples unlike the conventional flow-based models. Furthermore, we apply the proposed framework to 3D point clouds to alleviate the difficulty of forming thin structures for flow-based models. The proposed model for 3D point clouds, namely SoftPointFlow, can estimate the distribution of various shapes more accurately and achieves state-of-the-art performance in point cloud generation. |
A meta-learning approach to (re)discover plasticity rules that carve a desired function into a neural network | https://papers.nips.cc/paper_files/paper/2020/hash/bdbd5ebfde4934142c8a88e7a3796cd5-Abstract.html | Basile Confavreux, Friedemann Zenke, Everton Agnes, Timothy Lillicrap, Tim Vogels | https://papers.nips.cc/paper_files/paper/2020/hash/bdbd5ebfde4934142c8a88e7a3796cd5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bdbd5ebfde4934142c8a88e7a3796cd5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11100-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bdbd5ebfde4934142c8a88e7a3796cd5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bdbd5ebfde4934142c8a88e7a3796cd5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bdbd5ebfde4934142c8a88e7a3796cd5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bdbd5ebfde4934142c8a88e7a3796cd5-Supplemental.pdf | The search for biologically faithful synaptic plasticity rules has resulted in a large body of models. They are usually inspired by -- and fitted to -- experimental data, but they rarely produce neural dynamics that serve complex functions. These failures suggest that current plasticity models are still under-constrained by existing data. Here, we present an alternative approach that uses meta-learning to discover plausible synaptic plasticity rules. Instead of experimental data, the rules are constrained by the functions they implement and the structure they are meant to produce. Briefly, we parameterize synaptic plasticity rules by a Volterra expansion and then use supervised learning methods (gradient descent or evolutionary strategies) to minimize a problem-dependent loss function that quantifies how effectively a candidate plasticity rule transforms an initially random network into one with the desired function. We first validate our approach by re-discovering previously described plasticity rules, starting at the single-neuron level and ``Oja’s rule'', a simple Hebbian plasticity rule that captures the direction of most variability of inputs to a neuron (i.e., the first principal component). We expand the problem to the network level and ask the framework to find Oja’s rule together with an anti-Hebbian rule such that an initially random two-layer firing-rate network will recover several principal components of the input space after learning. Next, we move to networks of integrate-and-fire neurons with plastic inhibitory afferents. We train for rules that achieve a target firing rate by countering tuned excitation. Our algorithm discovers a specific subset of the manifold of rules that can solve this task. Our work is a proof of principle of an automated and unbiased approach to unveil synaptic plasticity rules that obey biological constraints and can solve complex functions. |
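Since the abstract above anchors the framework on re-discovering Oja's rule, a minimal sketch of that rule follows: the update dw = eta * y * (x - y * w) drives a single linear neuron toward the first principal component of its inputs. The input distribution and learning rate below are illustrative choices.

```python
import numpy as np

# Oja's rule on a single linear neuron y = w^T x: the weight update
# dw = eta * y * (x - y * w) converges to the unit-norm leading
# eigenvector (first principal component) of the input covariance.

rng = np.random.default_rng(0)
cov = np.array([[3.0, 2.5], [2.5, 3.0]])        # top eigenvector ~ (1, 1)/sqrt(2)
X = rng.multivariate_normal(np.zeros(2), cov, size=5000)

w, eta = rng.standard_normal(2), 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)                  # Oja's rule

print("learned direction:", w / np.linalg.norm(w))  # ~ +/- (0.707, 0.707)
```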
Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough | https://papers.nips.cc/paper_files/paper/2020/hash/be23c41621390a448779ee72409e5f49-Abstract.html | Mao Ye, Lemeng Wu, Qiang Liu | https://papers.nips.cc/paper_files/paper/2020/hash/be23c41621390a448779ee72409e5f49-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/be23c41621390a448779ee72409e5f49-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11101-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/be23c41621390a448779ee72409e5f49-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/be23c41621390a448779ee72409e5f49-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/be23c41621390a448779ee72409e5f49-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/be23c41621390a448779ee72409e5f49-Supplemental.pdf | Despite the great success of deep learning, recent works show that large deep neural networks are often highly redundant and can be significantly reduced in size. However, the theoretical question of how much we can prune a neural network given a specified tolerance of accuracy drop is still open. This paper provides one answer to this question by proposing a greedy optimization based pruning method. The proposed method has the guarantee that the discrepancy between the pruned network and the original network decays with exponentially fast rate w.r.t. the size of the pruned network, under weak assumptions that apply for most practical settings. Empirically, our method improves prior arts on pruning various network architectures including ResNet, MobilenetV2/V3 on ImageNet. |
Path Integral Based Convolution and Pooling for Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/be53d253d6bc3258a8160556dda3e9b2-Abstract.html | Zheng Ma, Junyu Xuan, Yu Guang Wang, Ming Li, Pietro Liò | https://papers.nips.cc/paper_files/paper/2020/hash/be53d253d6bc3258a8160556dda3e9b2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/be53d253d6bc3258a8160556dda3e9b2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11102-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/be53d253d6bc3258a8160556dda3e9b2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/be53d253d6bc3258a8160556dda3e9b2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/be53d253d6bc3258a8160556dda3e9b2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/be53d253d6bc3258a8160556dda3e9b2-Supplemental.zip | Graph neural networks (GNNs) extend the functionality of traditional neural networks to graph-structured data. Similar to CNNs, an optimized design of graph convolution and pooling is key to success. Borrowing ideas from physics, we propose a path integral based graph neural network (PAN) for classification and regression tasks on graphs. Specifically, we consider a convolution operation that involves every path linking the message sender and receiver, with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. It generalizes the graph Laplacian to a new transition matrix that we call the \emph{maximal entropy transition} (MET) matrix, derived from a path integral formalism. Importantly, the diagonal entries of the MET matrix are directly related to the subgraph centrality, thus leading to a natural and adaptive pooling mechanism. PAN provides a versatile framework that can be tailored for different graph data with varying sizes and structures. We can view most existing GNN architectures as special cases of PAN. Experimental results show that PAN achieves state-of-the-art performance on various graph classification/regression tasks, including a new benchmark dataset from statistical mechanics that we propose to boost applications of GNNs in the physical sciences. |
Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks | https://papers.nips.cc/paper_files/paper/2020/hash/bea5955b308361a1b07bc55042e25e54-Abstract.html | Ioana Bica, James Jordon, Mihaela van der Schaar | https://papers.nips.cc/paper_files/paper/2020/hash/bea5955b308361a1b07bc55042e25e54-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bea5955b308361a1b07bc55042e25e54-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11103-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bea5955b308361a1b07bc55042e25e54-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bea5955b308361a1b07bc55042e25e54-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bea5955b308361a1b07bc55042e25e54-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bea5955b308361a1b07bc55042e25e54-Supplemental.pdf | While much attention has been given to the problem of estimating the effect of discrete interventions from observational data, relatively little work has been done in the setting of continuous-valued interventions, such as treatments associated with a dosage parameter. In this paper, we tackle this problem by building on a modification of the generative adversarial networks (GANs) framework. Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions. The key idea is to use a significantly modified GAN model to learn to generate counterfactual outcomes, which can then be used to learn an inference model, using standard supervised methods, capable of estimating these counterfactuals for a new sample. To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator - we build a hierarchical discriminator that leverages the structure of the continuous intervention setting. Moreover, we provide theoretical results to support our use of the GAN framework and of the hierarchical discriminator. In the experiments section, we introduce a new semi-synthetic data simulation for use in the continuous intervention setting and demonstrate improvements over the existing benchmark models. |
Latent Dynamic Factor Analysis of High-Dimensional Neural Recordings | https://papers.nips.cc/paper_files/paper/2020/hash/beb04c41b45927cf7e9f8fd4bb519e86-Abstract.html | Heejong Bong, Zongge Liu, Zhao Ren, Matthew Smith, Valerie Ventura, Robert E. Kass | https://papers.nips.cc/paper_files/paper/2020/hash/beb04c41b45927cf7e9f8fd4bb519e86-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/beb04c41b45927cf7e9f8fd4bb519e86-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11104-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/beb04c41b45927cf7e9f8fd4bb519e86-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/beb04c41b45927cf7e9f8fd4bb519e86-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/beb04c41b45927cf7e9f8fd4bb519e86-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/beb04c41b45927cf7e9f8fd4bb519e86-Supplemental.pdf | High-dimensional neural recordings across multiple brain regions can be used to establish functional connectivity with good spatial and temporal resolution. We designed and implemented a novel method, Latent Dynamic Factor Analysis of High-dimensional time series (LDFA-H), which combines (a) a new approach to estimating the covariance structure among high-dimensional time series (for the observed variables) and (b) a new extension of probabilistic CCA to dynamic time series (for the latent variables). Our interest is in the cross-correlations among the latent variables which, in neural recordings, may capture the flow of information from one brain region to another. Simulations show that LDFA-H outperforms existing methods in the sense that it captures target factors even when within-region correlation due to noise dominates cross-region correlation. We applied our method to local field potential (LFP) recordings from 192 electrodes in Prefrontal Cortex (PFC) and visual area V4 during a memory-guided saccade task. The results capture time-varying lead-lag dependencies between PFC and V4, and display the associated spatial distribution of the signals. |
Conditioning and Processing: Techniques to Improve Information-Theoretic Generalization Bounds | https://papers.nips.cc/paper_files/paper/2020/hash/befe5b0172188ad14d48c3ebe9cf76bf-Abstract.html | Hassan Hafez-Kolahi, Zeinab Golgooni, Shohreh Kasaei, Mahdieh Soleymani | https://papers.nips.cc/paper_files/paper/2020/hash/befe5b0172188ad14d48c3ebe9cf76bf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/befe5b0172188ad14d48c3ebe9cf76bf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11105-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/befe5b0172188ad14d48c3ebe9cf76bf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/befe5b0172188ad14d48c3ebe9cf76bf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/befe5b0172188ad14d48c3ebe9cf76bf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/befe5b0172188ad14d48c3ebe9cf76bf-Supplemental.zip | Obtaining generalization bounds for learning algorithms is one of the main subjects studied in theoretical machine learning. In recent years, information-theoretic bounds on generalization have gained the attention of researchers. This approach provides an insight into learning algorithms by considering the mutual information between the model and the training set. In this paper, a probabilistic graphical representation of this approach is adopted and two general techniques to improve the bounds are introduced, namely conditioning and processing. In conditioning, a random variable in the graph is considered as given, while in processing a random variable is substituted with one of its children. These techniques can be used to improve the bounds by either sharpening them or increasing their applicability. It is demonstrated that the proposed framework provides a simple and unified way to explain a variety of recent tightening results. New improved bounds derived by utilizing these techniques are also proposed. |
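For context, the standard input-output mutual information generalization bound that techniques like conditioning and processing are designed to tighten can be restated as follows (a hedged restatement in our own notation, assuming the loss is $\sigma$-sub-Gaussian under the data distribution):

$$\bigl|\mathbb{E}\,[\mathrm{gen}(W, S)]\bigr| \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)},$$

where $S$ is the training set of $n$ i.i.d. samples, $W$ is the output of the learning algorithm, and $\mathrm{gen}(W, S)$ denotes the gap between population and empirical risk.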
Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning | https://papers.nips.cc/paper_files/paper/2020/hash/bf15e9bbff22c7719020f9df4badc20a-Abstract.html | Weili Nie, Zhiding Yu, Lei Mao, Ankit B. Patel, Yuke Zhu, Anima Anandkumar | https://papers.nips.cc/paper_files/paper/2020/hash/bf15e9bbff22c7719020f9df4badc20a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11106-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bf15e9bbff22c7719020f9df4badc20a-Supplemental.pdf | Humans have an inherent ability to learn novel concepts from only a few samples and generalize these concepts to different situations. Even though today's machine learning models excel with a plethora of training data on standard recognition tasks, a considerable gap exists between machine-level pattern recognition and human-level concept learning. To narrow this gap, the Bongard Problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems. Despite new advances in representation learning and learning to learn, BPs remain a daunting challenge for modern AI. Inspired by the original one hundred BPs, we propose a new benchmark, Bongard-LOGO, for human-level concept learning and reasoning. We develop a program-guided generation technique to produce a large set of human-interpretable visual cognition problems in the action-oriented LOGO language. Our benchmark captures three core properties of human cognition: 1) context-dependent perception, in which the same object may have disparate interpretations given different contexts; 2) analogy-making perception, in which some meaningful concepts are traded off for other meaningful concepts; and 3) perception with a few samples but infinite vocabulary. In experiments, we show that the state-of-the-art deep learning methods perform substantially worse than human subjects, implying that they fail to capture core human cognition properties. Finally, we discuss research directions towards a general architecture for visual reasoning to tackle this benchmark. |
GAN Memory with No Forgetting | https://papers.nips.cc/paper_files/paper/2020/hash/bf201d5407a6509fa536afc4b380577e-Abstract.html | Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin | https://papers.nips.cc/paper_files/paper/2020/hash/bf201d5407a6509fa536afc4b380577e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bf201d5407a6509fa536afc4b380577e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11107-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bf201d5407a6509fa536afc4b380577e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bf201d5407a6509fa536afc4b380577e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bf201d5407a6509fa536afc4b380577e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bf201d5407a6509fa536afc4b380577e-Supplemental.pdf | As a fundamental issue in lifelong learning, catastrophic forgetting is directly caused by inaccessible historical data; accordingly, if the data (information) were memorized perfectly, no forgetting should be expected. Motivated by that, we propose a GAN memory for lifelong learning, which is capable of remembering a stream of datasets via generative processes, with \emph{no} forgetting. Our GAN memory is based on recognizing that one can modulate the ``style'' of a GAN model to form perceptually-distant targeted generation. Accordingly, we propose to do sequential style modulations atop a well-behaved base GAN model, to form sequential targeted generative models, while simultaneously benefiting from the transferred base knowledge. The GAN memory -- that is motivated by lifelong learning -- is therefore itself manifested by a form of lifelong learning, via forward transfer and modulation of information from prior tasks. Experiments demonstrate the superiority of our method over existing approaches and its effectiveness in alleviating catastrophic forgetting for lifelong classification problems. Code is available at \url{https://github.com/MiaoyunZhao/GANmemory_LifelongLearning}. |
Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games | https://papers.nips.cc/paper_files/paper/2020/hash/bf65417dcecc7f2b0006e1f5793b7143-Abstract.html | Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/bf65417dcecc7f2b0006e1f5793b7143-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/bf65417dcecc7f2b0006e1f5793b7143-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11108-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/bf65417dcecc7f2b0006e1f5793b7143-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/bf65417dcecc7f2b0006e1f5793b7143-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/bf65417dcecc7f2b0006e1f5793b7143-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/bf65417dcecc7f2b0006e1f5793b7143-Supplemental.pdf | We study reinforcement learning (RL) for text-based games, which are interactive simulations in the context of natural language. While different methods have been developed to represent the environment information and language actions, existing RL agents are not empowered with any reasoning capabilities to deal with textual games. In this work, we aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure. We propose a stacked hierarchical attention mechanism to construct an explicit representation of the reasoning process by exploiting the structure of the knowledge graph. We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents. |
Gaussian Gated Linear Networks | https://papers.nips.cc/paper_files/paper/2020/hash/c0356641f421b381e475776b602a5da8-Abstract.html | David Budden, Adam Marblestone, Eren Sezener, Tor Lattimore, Gregory Wayne, Joel Veness | https://papers.nips.cc/paper_files/paper/2020/hash/c0356641f421b381e475776b602a5da8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c0356641f421b381e475776b602a5da8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11109-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c0356641f421b381e475776b602a5da8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c0356641f421b381e475776b602a5da8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c0356641f421b381e475776b602a5da8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c0356641f421b381e475776b602a5da8-Supplemental.pdf | We propose the Gaussian Gated Linear Network (G-GLN), an extension to the recently proposed GLN family of deep neural networks. Instead of using backpropagation to learn features, GLNs have a distributed and local credit assignment mechanism based on optimizing a convex objective. This gives rise to many desirable properties including universality, data-efficient online learning, trivial interpretability and robustness to catastrophic forgetting. We extend the GLN framework from classification to multiple regression and density modelling by generalizing geometric mixing to a product of Gaussian densities. The G-GLN achieves competitive or state-of-the-art performance on several univariate and multivariate regression benchmarks, and we demonstrate its applicability to practical tasks including online contextual bandits and density estimation via denoising. |
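The generalization of geometric mixing to Gaussian densities mentioned in the abstract above reduces to a precision-weighted product of experts; below is a small sketch of that closed form. Fixed weights are used here for illustration, whereas in a gated linear network the weights would be produced by the gating mechanism.

```python
import numpy as np

# Geometric mixing of Gaussian experts: the renormalized weighted product
# of N(mu_i, sigma_i^2) densities is again Gaussian, with precision
# sum_i w_i / sigma_i^2 and a precision-weighted mean.

def geometric_mix_gaussians(mus, sigmas2, weights):
    precisions = np.asarray(weights, dtype=float) / np.asarray(sigmas2, dtype=float)
    var = 1.0 / precisions.sum()
    mean = var * np.sum(precisions * np.asarray(mus, dtype=float))
    return mean, var

# Two experts; the second is weighted more heavily and is more confident.
print(geometric_mix_gaussians(mus=[0.0, 2.0], sigmas2=[1.0, 0.5], weights=[0.3, 0.7]))
```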
Node Classification on Graphs with Few-Shot Novel Labels via Meta Transformed Network Embedding | https://papers.nips.cc/paper_files/paper/2020/hash/c055dcc749c2632fd4dd806301f05ba6-Abstract.html | Lin Lan, Pinghui Wang, Xuefeng Du, Kaikai Song, Jing Tao, Xiaohong Guan | https://papers.nips.cc/paper_files/paper/2020/hash/c055dcc749c2632fd4dd806301f05ba6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c055dcc749c2632fd4dd806301f05ba6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11110-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c055dcc749c2632fd4dd806301f05ba6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c055dcc749c2632fd4dd806301f05ba6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c055dcc749c2632fd4dd806301f05ba6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c055dcc749c2632fd4dd806301f05ba6-Supplemental.pdf | We study the problem of node classification on graphs with few-shot novel labels, which has two distinctive properties: (1) There are novel labels to emerge in the graph; (2) The novel labels have only a few representative nodes for training a classifier. The study of this problem is instructive and corresponds to many applications such as recommendations for newly formed groups with only a few users in online social networks. To cope with this problem, we propose a novel Meta Transformed Network Embedding framework (MetaTNE), which consists of three modules: (1) A \emph{structural module} provides each node a latent representation according to the graph structure. (2) A \emph{meta-learning module} captures the relationships between the graph structure and the node labels as prior knowledge in a meta-learning manner. Additionally, we introduce an \emph{embedding transformation function} that remedies the deficiency of the straightforward use of meta-learning. Inherently, the meta-learned prior knowledge can be used to facilitate the learning of few-shot novel labels. (3) An \emph{optimization module} employs a simple yet effective scheduling strategy to train the above two modules with a balance between graph structure learning and meta-learning. Experiments on four real-world datasets show that MetaTNE brings a huge improvement over the state-of-the-art methods. |
Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning | https://papers.nips.cc/paper_files/paper/2020/hash/c0a271bc0ecb776a094786474322cb82-Abstract.html | Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Page-Caccia, Issam Hadj Laradji, Irina Rish, Alexandre Lacoste, David Vázquez, Laurent Charlin | https://papers.nips.cc/paper_files/paper/2020/hash/c0a271bc0ecb776a094786474322cb82-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c0a271bc0ecb776a094786474322cb82-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11111-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c0a271bc0ecb776a094786474322cb82-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Supplemental.pdf | Continual learning agents experience a stream of (related) tasks. The main challenge is that the agent must not forget previous tasks and also adapt to novel tasks in the stream. We are interested in the intersection of two recent continual-learning scenarios. In meta-continual learning, the model is pre-trained using meta-learning to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario.
We propose Continual-MAML, an online extension of the popular MAML algorithm, as a strong baseline for this scenario. An empirical study shows that Continual-MAML is better suited to the new scenario than the aforementioned methodologies, including standard continual learning and meta-learning approaches. |
Convex optimization based on global lower second-order models | https://papers.nips.cc/paper_files/paper/2020/hash/c0c3a9fb8385d8e03a46adadde9af3bf-Abstract.html | Nikita Doikov, Yurii Nesterov | https://papers.nips.cc/paper_files/paper/2020/hash/c0c3a9fb8385d8e03a46adadde9af3bf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c0c3a9fb8385d8e03a46adadde9af3bf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11112-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c0c3a9fb8385d8e03a46adadde9af3bf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c0c3a9fb8385d8e03a46adadde9af3bf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c0c3a9fb8385d8e03a46adadde9af3bf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c0c3a9fb8385d8e03a46adadde9af3bf-Supplemental.pdf | In this work, we present new second-order algorithms for composite convex optimization, called Contracting-domain Newton methods. These algorithms are affine-invariant and based on a global second-order lower approximation of the smooth component of the objective. Our approach can be interpreted both as a second-order generalization of the conditional gradient method and as a variant of a trust-region scheme. Under the assumption that the problem domain is bounded, we prove an $O(1/k^2)$ global rate of convergence in functional residual, where $k$ is the iteration counter, for minimizing convex functions with Lipschitz continuous Hessian. This significantly improves the previously known $O(1/k)$ bound for this type of algorithm. Additionally, we propose a stochastic extension of our method and present computational results for solving the empirical risk minimization problem. |
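The abstract above relies on a global second-order lower model for functions with Lipschitz continuous Hessian. For context, one standard lower bound of this kind is shown below (a hedged illustration implied by an $L$-Lipschitz Hessian; whether this is the exact contracting-domain model used in the paper is not claimed here):

```latex
% Global lower model implied by an L-Lipschitz Hessian (standard inequality):
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle
  + \tfrac{1}{2} \langle \nabla^2 f(x)(y - x),\, y - x \rangle
  - \tfrac{L}{6}\, \|y - x\|^3 \qquad \forall\, x, y.
```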
Simultaneously Learning Stochastic and Adversarial Episodic MDPs with Known Transition | https://papers.nips.cc/paper_files/paper/2020/hash/c0f971d8cd24364f2029fcb9ac7b71f5-Abstract.html | Tiancheng Jin, Haipeng Luo | https://papers.nips.cc/paper_files/paper/2020/hash/c0f971d8cd24364f2029fcb9ac7b71f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c0f971d8cd24364f2029fcb9ac7b71f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11113-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c0f971d8cd24364f2029fcb9ac7b71f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c0f971d8cd24364f2029fcb9ac7b71f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c0f971d8cd24364f2029fcb9ac7b71f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c0f971d8cd24364f2029fcb9ac7b71f5-Supplemental.pdf | This work studies the problem of learning episodic Markov Decision Processes with known transition and bandit feedback. We develop the first algorithm with a ``best-of-both-worlds'' guarantee: it achieves $O(\log T)$ regret when the losses are stochastic, and simultaneously enjoys worst-case robustness with $\tilde{O}(\sqrt{T})$ regret even when the losses are adversarial, where $T$ is the number of episodes. More generally, it achieves $\tilde{O}(\sqrt{C})$ regret in an intermediate setting where the losses are corrupted by a total amount of $C$.
Our algorithm is based on the Follow-the-Regularized-Leader method from Zimin and Neu (2013), with a novel hybrid regularizer inspired by recent works of Zimmert et al. (2019a, 2019b) for the special case of multi-armed bandits. Crucially, our regularizer admits a non-diagonal Hessian with a highly complicated inverse. Analyzing such a regularizer and deriving a particular self-bounding regret guarantee is our key technical contribution and might be of independent interest. |
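Since the algorithm above is described as an instance of Follow-the-Regularized-Leader over occupancy measures, a hedged sketch of the generic FTRL template is given for reference; the hybrid regularizer $\psi$ that constitutes the paper's key contribution is deliberately not reproduced here.

```latex
% Generic FTRL update with loss estimators \hat{\ell}_s and regularizer \psi
% over the decision set \Omega (here, the space of occupancy measures):
q_{t+1} \;=\; \operatorname*{argmin}_{q \in \Omega}\;
  \Big\langle q,\; \sum_{s=1}^{t} \hat{\ell}_s \Big\rangle \;+\; \psi(q).
```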
Relative gradient optimization of the Jacobian term in unsupervised deep learning | https://papers.nips.cc/paper_files/paper/2020/hash/c10f48884c9c7fdbd9a7959c59eebea8-Abstract.html | Luigi Gresele, Giancarlo Fissore, Adrián Javaloy, Bernhard Schölkopf, Aapo Hyvarinen | https://papers.nips.cc/paper_files/paper/2020/hash/c10f48884c9c7fdbd9a7959c59eebea8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c10f48884c9c7fdbd9a7959c59eebea8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11114-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c10f48884c9c7fdbd9a7959c59eebea8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c10f48884c9c7fdbd9a7959c59eebea8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c10f48884c9c7fdbd9a7959c59eebea8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c10f48884c9c7fdbd9a7959c59eebea8-Supplemental.pdf | Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning. A popular approach for solving it is mapping the observations into a representation space with a simple joint distribution, which can typically be written as a product of its marginals — thus drawing a connection with the field of nonlinear independent component analysis. Deep density models have been widely used for this task, but their maximum likelihood based training requires estimating the log-determinant of the Jacobian and is computationally expensive, thus imposing a trade-off between computation and expressive power. In this work, we propose a new approach for exact training of such neural networks. Based on relative gradients, we exploit the matrix structure of neural network parameters to compute updates efficiently even in high-dimensional spaces; the computational cost of the training is quadratic in the input size, in contrast with the cubic scaling of naive approaches. This allows fast training with objective functions involving the log-determinant of the Jacobian, without imposing constraints on its structure, in stark contrast to autoregressive normalizing flows. |
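To make the cost claim above concrete, below is a minimal numerical sketch, assuming a single fully connected layer $z = Wx$, a standard Gaussian base density, and the usual relative-gradient convention of right-multiplying the Euclidean gradient by $W^\top W$; it illustrates the general trick rather than the authors' implementation.

```python
# Hedged sketch (not the authors' code): for z = W x with per-sample
# log-likelihood  L(W) = log p(z) + log|det W|,  the Euclidean gradient of the
# Jacobian term needs inv(W), while the relative-gradient update  G W^T W
# never forms an inverse.  The layer, base density and convention dW ∝ G W^T W
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 5
W = rng.standard_normal((d, d))
x = rng.standard_normal(d)
z = W @ x

# Standard Gaussian base density p(z): grad_z log p(z) = -z.
score = -z

# Euclidean gradient of L(W): outer(score, x) + inv(W)^T  (requires the inverse).
euclidean_grad = np.outer(score, x) + np.linalg.inv(W).T

# Relative gradient: right-multiply by W^T W; algebraically this equals
# (outer(score, z) + I) @ W, built from forward activations only.
relative_update = (np.outer(score, z) + np.eye(d)) @ W

print(np.allclose(euclidean_grad @ W.T @ W, relative_update))  # True
```

The left expression requires an explicit matrix inverse, while the right one is assembled from forward activations alone, removing the inverse (and the log-determinant derivative) from the update.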
Self-Supervised Visual Representation Learning from Hierarchical Grouping | https://papers.nips.cc/paper_files/paper/2020/hash/c1502ae5a4d514baec129f72948c266e-Abstract.html | Xiao Zhang, Michael Maire | https://papers.nips.cc/paper_files/paper/2020/hash/c1502ae5a4d514baec129f72948c266e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c1502ae5a4d514baec129f72948c266e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11115-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c1502ae5a4d514baec129f72948c266e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c1502ae5a4d514baec129f72948c266e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c1502ae5a4d514baec129f72948c266e-Review.html | null | We create a framework for bootstrapping visual representation learning from a primitive visual grouping capability. We operationalize grouping via a contour detector that partitions an image into regions, followed by merging of those regions into a tree hierarchy. A small supervised dataset suffices for training this grouping primitive. Across a large unlabeled dataset, we apply this learned primitive to automatically predict hierarchical region structure. These predictions serve as guidance for self-supervised contrastive feature learning: we task a deep network with producing per-pixel embeddings whose pairwise distances respect the region hierarchy. Experiments demonstrate that our approach can serve as state-of-the-art generic pre-training, benefiting downstream tasks. We additionally explore applications to semantic region search and video-based object instance tracking. |
Optimal Variance Control of the Score-Function Gradient Estimator for Importance-Weighted Bounds | https://papers.nips.cc/paper_files/paper/2020/hash/c15203a83f778ce8934d0efaf2d5c6f3-Abstract.html | Valentin Liévin, Andrea Dittadi, Anders Christensen, Ole Winther | https://papers.nips.cc/paper_files/paper/2020/hash/c15203a83f778ce8934d0efaf2d5c6f3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c15203a83f778ce8934d0efaf2d5c6f3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11116-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c15203a83f778ce8934d0efaf2d5c6f3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c15203a83f778ce8934d0efaf2d5c6f3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c15203a83f778ce8934d0efaf2d5c6f3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c15203a83f778ce8934d0efaf2d5c6f3-Supplemental.pdf | This paper introduces novel results for the score-function gradient estimator of the importance-weighted variational bound (IWAE). We prove that in the limit of large $K$ (number of importance samples) one can choose the control variate such that the Signal-to-Noise ratio (SNR) of the estimator grows as $\sqrt{K}$. This is in contrast to the standard pathwise gradient estimator where the SNR decreases as $1/\sqrt{K}$. Based on our theoretical findings we develop a novel control variate that extends on VIMCO. Empirically, for the training of both continuous and discrete generative models, the proposed method yields superior variance reduction, resulting in an SNR for IWAE that increases with $K$ without relying on the reparameterization trick. The novel estimator is competitive with state-of-the-art reparameterization-free gradient estimators such as Reweighted Wake-Sleep (RWS) and the thermodynamic variational objective (TVO) when training generative models. |
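For readers who want the objects named above written out, here is a hedged background sketch: the importance-weighted bound and the generic score-function gradient with a baseline that leaves it unbiased. The specific VIMCO-style control variate developed in the paper is not reproduced.

```latex
% Importance-weighted bound with K samples z_1,...,z_K ~ q_\phi(z|x) and
% weights w_k = p_\theta(x, z_k) / q_\phi(z_k | x):
\mathcal{L}_K(x) \;=\; \mathbb{E}_{z_{1:K} \sim q_\phi}\!\left[
  \log \frac{1}{K} \sum_{k=1}^{K} w_k \right] \;\le\; \log p_\theta(x).

% Generic score-function (REINFORCE) gradient for a function f that does not
% itself depend on \phi; subtracting any baseline c independent of z keeps the
% estimator unbiased because E_{q_\phi}[\nabla_\phi \log q_\phi(z)] = 0:
\nabla_\phi\, \mathbb{E}_{z \sim q_\phi}\!\left[f(z)\right]
  \;=\; \mathbb{E}_{z \sim q_\phi}\!\left[\big(f(z) - c\big)\,
        \nabla_\phi \log q_\phi(z)\right].
```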
Explicit Regularisation in Gaussian Noise Injections | https://papers.nips.cc/paper_files/paper/2020/hash/c16a5320fa475530d9583c34fd356ef5-Abstract.html | Alexander Camuto, Matthew Willetts, Umut Simsekli, Stephen J. Roberts, Chris C. Holmes | https://papers.nips.cc/paper_files/paper/2020/hash/c16a5320fa475530d9583c34fd356ef5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c16a5320fa475530d9583c34fd356ef5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11117-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c16a5320fa475530d9583c34fd356ef5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c16a5320fa475530d9583c34fd356ef5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c16a5320fa475530d9583c34fd356ef5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c16a5320fa475530d9583c34fd356ef5-Supplemental.pdf | We study the regularisation induced in neural networks by Gaussian noise injections (GNIs). Though such injections have been extensively studied when applied to data, there have been few studies on understanding the regularising effect they induce when applied to network activations. Here we derive the explicit regulariser of GNIs, obtained by marginalising out the injected noise, and show that it penalises functions with high-frequency components in the Fourier domain; particularly in layers closer to a neural network's output. We show analytically and empirically that such regularisation produces calibrated classifiers with large classification margins. |
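As a concrete picture of the setup studied above, here is a minimal sketch, assuming a two-layer MLP and a fixed noise scale, of what a Gaussian noise injection on hidden activations looks like at training time; the explicit regulariser discussed in the abstract is what remains after marginalising this noise out. This is an illustrative setup, not the authors' code.

```python
# Hedged sketch: add zero-mean Gaussian noise to hidden activations during
# training only.  `sigma` and the two-layer MLP are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2, sigma=0.1, train=True):
    """Two-layer MLP whose hidden activations receive Gaussian noise at train time."""
    h = np.tanh(x @ W1)
    if train:
        h = h + sigma * rng.standard_normal(h.shape)  # Gaussian noise injection
    return h @ W2

x = rng.standard_normal((8, 4))
W1 = rng.standard_normal((4, 16)) / 2.0
W2 = rng.standard_normal((16, 1)) / 4.0
noisy_out = mlp_forward(x, W1, W2, train=True)   # used for the training loss
clean_out = mlp_forward(x, W1, W2, train=False)  # used at test time
```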
Numerically Solving Parametric Families of High-Dimensional Kolmogorov Partial Differential Equations via Deep Learning | https://papers.nips.cc/paper_files/paper/2020/hash/c1714160652ca6408774473810765950-Abstract.html | Julius Berner, Markus Dablander, Philipp Grohs | https://papers.nips.cc/paper_files/paper/2020/hash/c1714160652ca6408774473810765950-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c1714160652ca6408774473810765950-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11118-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c1714160652ca6408774473810765950-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c1714160652ca6408774473810765950-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c1714160652ca6408774473810765950-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c1714160652ca6408774473810765950-Supplemental.zip | We present a deep learning algorithm for the numerical solution of parametric families of high-dimensional linear Kolmogorov partial differential equations (PDEs). Our method is based on reformulating the numerical approximation of a whole family of Kolmogorov PDEs as a single statistical learning problem using the Feynman-Kac formula. Successful numerical experiments are presented, which empirically confirm the functionality and efficiency of our proposed algorithm in the case of heat equations and Black-Scholes option pricing models parametrized by affine-linear coefficient functions. We show that a single deep neural network trained on simulated data is capable of learning the solution functions of an entire family of PDEs on a full space-time region. Most notably, our numerical observations and theoretical results also demonstrate that the proposed method does not suffer from the curse of dimensionality, distinguishing it from almost all standard numerical methods for PDEs. |
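The reformulation described above can be illustrated on the plain heat equation. Below is a hedged sketch (not the authors' pipeline) of generating supervised training pairs from the Feynman-Kac representation $u(t,x) = \mathbb{E}[\varphi(x + \sqrt{2t}\,Z)]$ for $\partial_t u = \Delta u$, $u(0,\cdot) = \varphi$; the initial condition, sampling ranges and sample counts are illustrative choices, and a network would then be fit to these pairs by plain regression.

```python
# Hedged sketch: Monte Carlo targets for the heat equation via Feynman-Kac.
import numpy as np

rng = np.random.default_rng(0)
dim = 10        # spatial dimension
n_points = 200  # number of training inputs (t, x)
n_mc = 2048     # Monte Carlo samples per input

def phi(x):
    """Initial condition, here a simple quadratic ||x||^2 (illustrative)."""
    return np.sum(x ** 2, axis=-1)

t = rng.uniform(0.01, 1.0, size=(n_points, 1))
x = rng.uniform(-1.0, 1.0, size=(n_points, dim))

z = rng.standard_normal((n_points, n_mc, dim))
samples = phi(x[:, None, :] + np.sqrt(2.0 * t)[:, :, None] * z)  # (n_points, n_mc)
u_estimate = samples.mean(axis=1)  # regression targets for a network taking (t, x)

# For this quadratic phi the exact solution is ||x||^2 + 2*t*dim, so we can
# check the worst-case Monte Carlo error (it shrinks as n_mc grows).
exact = np.sum(x ** 2, axis=1) + 2.0 * t[:, 0] * dim
print(np.max(np.abs(u_estimate - exact)))
```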
Finite-Time Analysis for Double Q-learning | https://papers.nips.cc/paper_files/paper/2020/hash/c20bb2d9a50d5ac1f713f8b34d9aac5a-Abstract.html | Huaqing Xiong, Lin Zhao, Yingbin Liang, Wei Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/c20bb2d9a50d5ac1f713f8b34d9aac5a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c20bb2d9a50d5ac1f713f8b34d9aac5a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11119-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c20bb2d9a50d5ac1f713f8b34d9aac5a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c20bb2d9a50d5ac1f713f8b34d9aac5a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c20bb2d9a50d5ac1f713f8b34d9aac5a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c20bb2d9a50d5ac1f713f8b34d9aac5a-Supplemental.pdf | Although Q-learning is one of the most successful algorithms for finding the best action-value function (and thus the optimal policy) in reinforcement learning, its implementation often suffers from large overestimation of Q-function values incurred by random sampling. The double Q-learning algorithm proposed in~\citet{hasselt2010double} overcomes such an overestimation issue by randomly switching the update between two Q-estimators, and has thus gained significant popularity in practice. However, the theoretical understanding of double Q-learning is rather limited. So far only the asymptotic convergence has been established, which does not characterize how fast the algorithm converges. In this paper, we provide the first non-asymptotic (i.e., finite-time) analysis for double Q-learning. We show that both synchronous and asynchronous double Q-learning are guaranteed to converge to an $\epsilon$-accurate neighborhood of the global optimum by taking
$\tilde{\Omega}\left(\left( \frac{1}{(1-\gamma)^6\epsilon^2}\right)^{\frac{1}{\omega}} +\left(\frac{1}{1-\gamma}\right)^{\frac{1}{1-\omega}}\right)$ iterations, where $\omega\in(0,1)$ is the decay parameter of the learning rate, and $\gamma$ is the discount factor. Our analysis develops novel techniques to derive finite-time bounds on the difference between two inter-connected stochastic processes, which is new to the literature of stochastic approximation. |
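For reference, the update analysed above is the tabular double Q-learning rule of Hasselt (2010), which the abstract describes as randomly switching between two Q-estimators. A minimal sketch follows; the environment interface and step size are illustrative, and this is not the authors' code.

```python
# Hedged sketch of one tabular double Q-learning update: two estimators are
# kept, and each step one of them is updated at random using the other's value
# at the argmax action, which combats overestimation.
import numpy as np

rng = np.random.default_rng(0)

def double_q_update(QA, QB, s, a, r, s_next, alpha, gamma):
    """One double Q-learning update; QA and QB are (n_states, n_actions) arrays."""
    if rng.random() < 0.5:
        a_star = np.argmax(QA[s_next])
        QA[s, a] += alpha * (r + gamma * QB[s_next, a_star] - QA[s, a])
    else:
        b_star = np.argmax(QB[s_next])
        QB[s, a] += alpha * (r + gamma * QA[s_next, b_star] - QB[s, a])

# Tiny usage example on a 4-state, 2-action table.
QA = np.zeros((4, 2))
QB = np.zeros((4, 2))
double_q_update(QA, QB, s=0, a=1, r=1.0, s_next=2, alpha=0.5, gamma=0.9)
```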
Learning to Detect Objects with a 1 Megapixel Event Camera | https://papers.nips.cc/paper_files/paper/2020/hash/c213877427b46fa96cff6c39e837ccee-Abstract.html | Etienne Perot, Pierre de Tournemire, Davide Nitti, Jonathan Masci, Amos Sironi | https://papers.nips.cc/paper_files/paper/2020/hash/c213877427b46fa96cff6c39e837ccee-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c213877427b46fa96cff6c39e837ccee-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11120-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c213877427b46fa96cff6c39e837ccee-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c213877427b46fa96cff6c39e837ccee-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c213877427b46fa96cff6c39e837ccee-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c213877427b46fa96cff6c39e837ccee-Supplemental.zip | Event cameras encode visual information with high temporal precision,
low data rate, and high dynamic range. Thanks to these characteristics, event cameras are particularly suited for scenarios involving high-speed motion, challenging lighting conditions, and low-latency requirements. However, due to the novelty of the field, the performance of event-based systems on many vision tasks is still lower than that of conventional frame-based solutions. The main reasons for this performance gap are the lower spatial resolution of event sensors compared to frame cameras, the lack of large-scale training datasets, and the absence of well-established deep learning architectures for event-based processing.
In this paper, we address all of these problems in the context of event-based object detection. First, we publicly release the first high-resolution, large-scale event-based dataset for object detection. The dataset contains more than 14 hours of recordings from a 1 megapixel event camera in automotive scenarios, together with 25M bounding boxes of cars, pedestrians, and two-wheelers, labeled at high frequency.
Second, we introduce a novel recurrent architecture for event-based detection and a temporal consistency loss for better-behaved training. The ability to compactly represent the sequence of events in the internal memory of the model is essential for achieving high accuracy, and our model outperforms feed-forward event-based architectures by a large margin. Moreover, our method does not require any reconstruction of intensity images from events, showing that training directly on raw events is possible, more efficient, and more accurate than passing through an intermediate intensity image. Experiments on the dataset introduced in this work, for which both events and gray-level images are available, show performance on par with that of highly tuned and extensively studied frame-based detectors. |
End-to-End Learning and Intervention in Games | https://papers.nips.cc/paper_files/paper/2020/hash/c21f4ce780c5c9d774f79841b81fdc6d-Abstract.html | Jiayang Li, Jing Yu, Yu Nie, Zhaoran Wang | https://papers.nips.cc/paper_files/paper/2020/hash/c21f4ce780c5c9d774f79841b81fdc6d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c21f4ce780c5c9d774f79841b81fdc6d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11121-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c21f4ce780c5c9d774f79841b81fdc6d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c21f4ce780c5c9d774f79841b81fdc6d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c21f4ce780c5c9d774f79841b81fdc6d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c21f4ce780c5c9d774f79841b81fdc6d-Supplemental.zip | In a social system, the self-interest of agents can be detrimental to the collective good, sometimes leading to social dilemmas. To resolve such a conflict, a central designer may intervene by either redesigning the system or incentivizing the agents to change their behaviors. To be effective, the designer must anticipate how the agents react to the intervention, which is dictated by their often unknown payoff functions. Therefore, learning about the agents is a prerequisite for intervention. In this paper, we provide a unified framework for learning and intervention in games. We cast the equilibria of games as individual layers and integrate them into an end-to-end optimization framework. To enable the backward propagation through the equilibria of games, we propose two approaches, respectively based on explicit and implicit differentiation. Specifically, we cast the equilibria as the solutions to variational inequalities (VIs). The explicit approach unrolls the projection method for solving VIs, while the implicit approach exploits the sensitivity of the solutions to VIs. At the core of both approaches is the differentiation through a projection operator. Moreover, we establish the correctness of both approaches and identify the conditions under which one approach is more desirable than the other. The analytical results are validated using several real-world problems. |
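The key construction above, casting game equilibria as variational-inequality solutions that can be differentiated through, can be summarised by the following standard facts (a hedged background sketch; the paper's concrete sensitivity analysis is not reproduced):

```latex
% Equilibrium as a variational inequality over the strategy set K with
% operator F (e.g., the concatenated negative payoff gradients):
\text{find } x^\ast \in K \;\text{such that}\;
  \langle F(x^\ast),\, x - x^\ast \rangle \ge 0 \quad \forall x \in K.

% Equivalent fixed point of the projection map, for any step size \gamma > 0,
% and the associated projection iteration:
x^\ast = \Pi_K\!\big(x^\ast - \gamma F(x^\ast)\big),
\qquad
x_{k+1} = \Pi_K\!\big(x_k - \gamma F(x_k)\big).
% Unrolling the iteration (the "explicit" approach) or applying the implicit
% function theorem to the fixed-point equation (the "implicit" approach) yields
% gradients of x^\ast with respect to the designer's parameters.
```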
Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms | https://papers.nips.cc/paper_files/paper/2020/hash/c22abfa379f38b5b0411bc11fa9bf92f-Abstract.html | Dheeraj Nagaraj, Xian Wu, Guy Bresler, Prateek Jain, Praneeth Netrapalli | https://papers.nips.cc/paper_files/paper/2020/hash/c22abfa379f38b5b0411bc11fa9bf92f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c22abfa379f38b5b0411bc11fa9bf92f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11122-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c22abfa379f38b5b0411bc11fa9bf92f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c22abfa379f38b5b0411bc11fa9bf92f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c22abfa379f38b5b0411bc11fa9bf92f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c22abfa379f38b5b0411bc11fa9bf92f-Supplemental.pdf | We study the problem of least squares linear regression where the datapoints are dependent and are sampled from a Markov chain. We establish sharp information-theoretic minimax lower bounds for this problem in terms of $\tau_{\mathrm{mix}}$, the mixing time of the underlying Markov chain, under different noise settings. Our results establish that, in general, optimization with Markovian data is strictly harder than optimization with independent data, and that a trivial algorithm (SGD-DD), which works with only one in every $\tau_{\mathrm{mix}}$ samples (which are approximately independent), is minimax optimal. In fact, it is strictly better than the popular Stochastic Gradient Descent (SGD) method with constant step-size, which is otherwise minimax optimal in the regression with independent data setting.
Beyond a worst case analysis, we investigate whether structured datasets seen in practice such as Gaussian auto-regressive dynamics can admit more efficient optimization schemes. Surprisingly, even in this specific and natural setting, Stochastic Gradient Descent (SGD) with constant step-size is still no better than SGD-DD. Instead, we propose an algorithm based on experience replay--a popular reinforcement learning technique--that achieves a significantly better error rate. Our improved rate serves as one of the first results where an algorithm outperforms SGD-DD on an interesting Markov chain and also provides one of the first theoretical analyses to support the use of experience replay in practice. |
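The SGD-DD baseline mentioned above is simple enough to spell out. Below is a hedged sketch (not the authors' code) of constant-step-size SGD that performs an update only on one sample out of every tau_mix consecutive samples of the Markovian stream, so that the samples it uses are approximately independent; the data stream and parameter values are illustrative.

```python
# Hedged sketch of the SGD-DD (data-drop) baseline for least squares.
import numpy as np

def sgd_dd(stream, w0, step_size, tau_mix):
    """stream yields (x, y) pairs from a Markov chain; only every tau_mix-th pair is used."""
    w = w0.copy()
    for i, (x, y) in enumerate(stream):
        if i % tau_mix != 0:
            continue  # drop correlated samples
        grad = (x @ w - y) * x          # single-sample least-squares gradient
        w -= step_size * grad
    return w

# Illustrative Gaussian AR(1) covariate stream with true parameter w_star.
rng = np.random.default_rng(0)
d, n = 3, 5000
w_star = np.ones(d)
X = np.zeros((n, d))
for i in range(1, n):
    X[i] = 0.9 * X[i - 1] + rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)
w_hat = sgd_dd(zip(X, y), np.zeros(d), step_size=0.01, tau_mix=20)  # roughly approaches w_star
```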
Predictive coding in balanced neural networks with noise, chaos and delays | https://papers.nips.cc/paper_files/paper/2020/hash/c236337b043acf93c7df397fdb9082b3-Abstract.html | Jonathan Kadmon, Jonathan Timcheck, Surya Ganguli | https://papers.nips.cc/paper_files/paper/2020/hash/c236337b043acf93c7df397fdb9082b3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c236337b043acf93c7df397fdb9082b3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11123-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c236337b043acf93c7df397fdb9082b3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c236337b043acf93c7df397fdb9082b3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c236337b043acf93c7df397fdb9082b3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c236337b043acf93c7df397fdb9082b3-Supplemental.pdf | Biological neural networks face a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, imprecisely specified synaptic connectivity, and nonnegligible delays in synaptic transmission. A common approach to combating such biological heterogeneity involves averaging over large redundant networks of N neurons, resulting in coding errors that decrease classically as the inverse square root of N. Recent work demonstrated a novel mechanism whereby recurrent spiking networks could efficiently encode dynamic stimuli, achieving a superclassical scaling in which coding errors decrease as 1/N. This specific mechanism involved two key ideas: predictive coding, and a tight balance, or cancellation, between strong feedforward inputs and strong recurrent feedback. However, the theoretical principles governing the efficacy of balanced predictive coding and its robustness to noise, synaptic weight heterogeneity and communication delays remain poorly understood. To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated, unlike in previous balanced network models, and we develop a mean-field theory of coding accuracy. Overall, our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes, reveals the fundamental role that balance plays in achieving superclassical scaling, and unifies previously disparate models in theoretical neuroscience. |
Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs | https://papers.nips.cc/paper_files/paper/2020/hash/c24c65259d90ed4a19ab37b6fd6fe716-Abstract.html | Talgat Daulbaev, Alexandr Katrutsa, Larisa Markeeva, Julia Gusak, Andrzej Cichocki, Ivan Oseledets | https://papers.nips.cc/paper_files/paper/2020/hash/c24c65259d90ed4a19ab37b6fd6fe716-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/c24c65259d90ed4a19ab37b6fd6fe716-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/11124-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/c24c65259d90ed4a19ab37b6fd6fe716-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/c24c65259d90ed4a19ab37b6fd6fe716-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/c24c65259d90ed4a19ab37b6fd6fe716-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/c24c65259d90ed4a19ab37b6fd6fe716-Supplemental.zip | We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method (known in the literature as the “adjoint method”) for training neural ODEs on classification, density estimation and inference approximation tasks.
We also provide a theoretical justification of our approach using the logarithmic norm formalism.
As a result, our method allows faster model training than the reverse dynamic method, as confirmed and validated by extensive numerical experiments on several standard benchmarks. |
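For context, the reverse dynamic (adjoint) method that the approach above is compared against solves the standard backward system below; the comment indicates, as a hedged reading of the abstract, where an interpolation of the stored forward trajectory can replace backward re-integration of the state. The paper's specific interpolation scheme and error analysis are not reproduced here.

```latex
% Forward dynamics and terminal loss:
\frac{dz}{dt} = f(z, t, \theta), \qquad z(t_0) = z_0, \qquad L = L\big(z(t_1)\big).

% Adjoint state and parameter gradient (reverse dynamic / adjoint method):
a(t) = \frac{\partial L}{\partial z(t)}, \qquad
\frac{da}{dt} = -\,a(t)^{\top} \frac{\partial f}{\partial z}, \qquad
\frac{dL}{d\theta} = -\int_{t_1}^{t_0} a(t)^{\top}
  \frac{\partial f}{\partial \theta}\, dt.
% Evaluating the Jacobians along the backward pass requires z(t); an
% interpolation-based scheme approximates z(t) from states stored during the
% forward solve rather than integrating the state equation backward in time.
```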