Each record below contains the following fields: title, url, authors, detail_url, tags, AuthorFeedback, Bibtex, MetaReview, Paper, Review, Supplemental, abstract.
Online Influence Maximization under Linear Threshold Model
https://papers.nips.cc/paper_files/paper/2020/hash/0d352b4d3a317e3eae221199fdb49651-Abstract.html
Shuai Li, Fang Kong, Kejie Tang, Qizhi Li, Wei Chen
https://papers.nips.cc/paper_files/paper/2020/hash/0d352b4d3a317e3eae221199fdb49651-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9825-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d352b4d3a317e3eae221199fdb49651-Supplemental.pdf
Online influence maximization (OIM) is a popular problem in social networks: learning the parameters of the influence propagation model while maximizing the influence spread at the same time. Most previous studies focus on the independent cascade (IC) model under edge-level feedback. In this paper, we address OIM under the linear threshold (LT) model. Because node activations in the LT model are due to the aggregated effect of all active neighbors, it is more natural to model OIM with node-level feedback. This brings a new challenge in online learning, since we only observe the aggregated effect from groups of nodes, and the groups themselves are random. Based on the linear structure in node activations, we incorporate ideas from linear bandits and design an algorithm LT-LinUCB that is consistent with the observed feedback. By proving a group observation modulated (GOM) bounded smoothness property, a novel result on the influence difference in terms of the random observations, we provide a regret of order $\tilde{O}(\mathrm{poly}(m)\sqrt{T})$, where $m$ is the number of edges and $T$ is the number of rounds. This is the first theoretical result of such order for OIM under the LT model. Finally, we also provide an algorithm OIM-ETC with regret bound $O(\mathrm{poly}(m)\, T^{2/3})$, which is model-independent, simple, and has weaker requirements on online feedback and offline computation.
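The linear-bandit machinery this abstract builds on is standard; the sketch below shows a generic LinUCB-style ridge-regression estimate and confidence width in Python (numpy), purely as an illustration of that primitive. It is not the paper's LT-LinUCB: the feature construction, node-level feedback handling, and exploration constant are all simplified assumptions here.

```python
import numpy as np

def linucb_scores(contexts, A, b, alpha=1.0):
    """Upper-confidence scores for each arm's feature vector (standard LinUCB).

    contexts: (n_arms, d) feature matrix; A: (d, d) ridge Gram matrix;
    b: (d,) response vector; alpha: exploration weight (assumed constant).
    """
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b                      # ridge-regression estimate
    means = contexts @ theta
    widths = np.sqrt(np.einsum("id,dk,ik->i", contexts, A_inv, contexts))
    return means + alpha * widths

def linucb_update(A, b, x, reward):
    """Rank-one update after observing a reward for feature vector x."""
    return A + np.outer(x, x), b + reward * x

# toy usage: 3 arms in 2 dimensions
A, b = np.eye(2), np.zeros(2)
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
arm = int(np.argmax(linucb_scores(X, A, b)))
A, b = linucb_update(A, b, X[arm], reward=1.0)
```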
Ensembling geophysical models with Bayesian Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0d5501edb21a59a43435efa67f200828-Abstract.html
Ushnish Sengupta, Matt Amos, Scott Hosking, Carl Edward Rasmussen, Matthew Juniper, Paul Young
https://papers.nips.cc/paper_files/paper/2020/hash/0d5501edb21a59a43435efa67f200828-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9826-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5501edb21a59a43435efa67f200828-Supplemental.pdf
Ensembles of geophysical models improve projection accuracy and express uncertainties. We develop a novel data-driven ensembling strategy for combining geophysical models using Bayesian Neural Networks, which infers spatiotemporally varying model weights and bias while accounting for heteroscedastic uncertainties in the observations. This produces more accurate and uncertainty-aware projections without sacrificing interpretability. Applied to the prediction of total column ozone from an ensemble of 15 chemistry-climate models, we find that the Bayesian neural network ensemble (BayNNE) outperforms existing ensembling methods, achieving a 49.4% reduction in RMSE for temporal extrapolation, and a 67.4% reduction in RMSE for polar data voids, compared to a weighted mean. Uncertainty is also well-characterized, with 90.6% of the data points in our extrapolation validation dataset lying within 2 standard deviations and 98.5% within 3 standard deviations.
Delving into the Cyclic Mechanism in Semi-supervised Video Object Segmentation
https://papers.nips.cc/paper_files/paper/2020/hash/0d5bd023a3ee11c7abca5b42a93c4866-Abstract.html
Yuxi Li, Ning Xu, Jinlong Peng, John See, Weiyao Lin
https://papers.nips.cc/paper_files/paper/2020/hash/0d5bd023a3ee11c7abca5b42a93c4866-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9827-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d5bd023a3ee11c7abca5b42a93c4866-Supplemental.pdf
In this paper, we attempt to incorporate a cyclic mechanism into the task of semi-supervised video object segmentation. By resorting to the accurate reference mask of the first frame, we try to mitigate the error propagation problem present in most current video object segmentation pipelines. First, we propose a cyclic scheme for offline training of segmentation networks. Then, we extend the offline pipeline to an online method by introducing a simple gradient correction module, while keeping efficiency as high as other offline methods. Finally, we develop the cycle effective receptive field (cycle-ERF) from gradient correction to provide a new perspective for analyzing object-specific regions of interest. We conduct comprehensive experiments on the DAVIS17 and Youtube-VOS benchmarks, demonstrating that the introduced cyclic mechanism helps boost segmentation quality.
Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
https://papers.nips.cc/paper_files/paper/2020/hash/0d770c496aa3da6d2c3f2bd19e7b9d6b-Abstract.html
Christopher Frye, Colin Rowat, Ilya Feige
https://papers.nips.cc/paper_files/paper/2020/hash/0d770c496aa3da6d2c3f2bd19e7b9d6b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9828-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d770c496aa3da6d2c3f2bd19e7b9d6b-Supplemental.pdf
Explaining AI systems is fundamental both to the development of high performing models and to the trust placed in them by their users. The Shapley framework for explainability has strength in its general applicability combined with its precise, rigorous foundation: it provides a common, model-agnostic language for AI explainability and uniquely satisfies a set of intuitive mathematical axioms. However, Shapley values are too restrictive in one significant regard: they ignore all causal structure in the data. We introduce a less restrictive framework, Asymmetric Shapley values (ASVs), which are rigorously founded on a set of axioms, applicable to any AI system, and can flexibly incorporate any causal structure known to be respected by the data. We demonstrate that ASVs can (i) improve model explanations by incorporating causal information, (ii) provide an unambiguous test for unfair discrimination in model predictions, (iii) enable sequentially incremental explanations in time-series models, and (iv) support feature-selection studies without the need for model retraining.
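A minimal sketch of the idea behind asymmetric Shapley values: marginal contributions are averaged only over feature orderings consistent with a known causal (partial) order. The value function `v` below is a toy set-function stand-in, not the conditional-expectation value function used in practice, and the brute-force enumeration is only for illustration.

```python
import itertools
import numpy as np

def asymmetric_shapley(v, n_features, precedes):
    """Average marginal contributions over permutations consistent with a
    partial order: precedes(i, j) == True means i must appear before j.

    v: set function mapping a frozenset of feature indices to a real value.
    Brute-force over permutations; real implementations sample them instead.
    """
    phi = np.zeros(n_features)
    count = 0
    for perm in itertools.permutations(range(n_features)):
        pos = {f: k for k, f in enumerate(perm)}
        if any(precedes(i, j) and pos[i] > pos[j]
               for i in range(n_features) for j in range(n_features)):
            continue  # ordering violates the causal partial order
        count += 1
        seen = set()
        for f in perm:
            phi[f] += v(frozenset(seen | {f})) - v(frozenset(seen))
            seen.add(f)
    return phi / count

# toy usage: feature 0 is a causal ancestor of feature 1
v = lambda S: float(0 in S) + 0.5 * float({0, 1} <= S)
print(asymmetric_shapley(v, n_features=2, precedes=lambda i, j: (i, j) == (0, 1)))
```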
Understanding Deep Architecture with Reasoning Layer
https://papers.nips.cc/paper_files/paper/2020/hash/0d82627e10660af39ea7eb69c3568955-Abstract.html
Xinshi Chen, Yufei Zhang, Christoph Reisinger, Le Song
https://papers.nips.cc/paper_files/paper/2020/hash/0d82627e10660af39ea7eb69c3568955-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9829-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d82627e10660af39ea7eb69c3568955-Supplemental.zip
Recently, there has been a surge of interest in combining deep learning models with reasoning in order to handle more sophisticated learning tasks. In many cases, a reasoning task can be solved by an iterative algorithm. This algorithm is often unrolled, truncated, and used as a specialized layer in the deep architecture, which can be trained end-to-end with other neural components. Although such hybrid deep architectures have led to many empirical successes, the theoretical understanding of such architectures, especially the interplay between algorithm layers and other neural layers, remains largely unexplored. In this paper, we take an initial step toward an understanding of such hybrid deep architectures by showing that properties of the algorithm layers, such as convergence, stability, and sensitivity, are intimately related to the approximation and generalization abilities of the end-to-end model. Furthermore, our analysis matches nicely with experimental observations under various conditions, suggesting that our theory can provide useful guidelines for designing deep architectures with reasoning layers.
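As a concrete (hypothetical) instance of the unrolled reasoning layers the abstract refers to, the sketch below unrolls a fixed number of gradient-descent steps on a quadratic subproblem; every step is a differentiable function of the layer's inputs, so the layer could be trained end-to-end in an autodiff framework. It is not one of the specific architectures analyzed in the paper.

```python
import numpy as np

def unrolled_gd_layer(Q, b, n_steps=10, lr=0.1):
    """A 'reasoning layer' that approximately solves min_x 0.5 x^T Q x - b^T x
    by unrolling n_steps of gradient descent from x = 0.

    Each step is differentiable in (Q, b), so the truncated algorithm can sit
    inside a network and be trained jointly with other neural components.
    """
    x = np.zeros_like(b)
    for _ in range(n_steps):
        grad = Q @ x - b          # gradient of the quadratic objective
        x = x - lr * grad         # one unrolled, truncated iteration
    return x

# toy usage: the unrolled output approaches the exact solution Q^{-1} b
Q = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
print(unrolled_gd_layer(Q, b, n_steps=200), np.linalg.solve(Q, b))
```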
Planning in Markov Decision Processes with Gap-Dependent Sample Complexity
https://papers.nips.cc/paper_files/paper/2020/hash/0d85eb24e2add96ff1a7021f83c1abc9-Abstract.html
Anders Jonsson, Emilie Kaufmann, Pierre Menard, Omar Darwiche Domingues, Edouard Leurent, Michal Valko
https://papers.nips.cc/paper_files/paper/2020/hash/0d85eb24e2add96ff1a7021f83c1abc9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9830-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0d85eb24e2add96ff1a7021f83c1abc9-Supplemental.pdf
We propose MDP-GapE, a new trajectory-based Monte-Carlo Tree Search algorithm for planning in a Markov Decision Process in which transitions have a finite support. We prove an upper bound on the number of sampled trajectories needed for MDP-GapE to identify a near-optimal action with high probability. This problem-dependent result is expressed in terms of the sub-optimality gaps of the state-action pairs that are visited during exploration. Our experiments reveal that MDP-GapE is also effective in practice, in contrast with other algorithms with sample complexity guarantees in the fixed-confidence setting, that are mostly theoretical.
Provably Good Batch Off-Policy Reinforcement Learning Without Great Exploration
https://papers.nips.cc/paper_files/paper/2020/hash/0dc23b6a0e4abc39904388dd3ffadcd1-Abstract.html
Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill
https://papers.nips.cc/paper_files/paper/2020/hash/0dc23b6a0e4abc39904388dd3ffadcd1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9831-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0dc23b6a0e4abc39904388dd3ffadcd1-Supplemental.pdf
Batch reinforcement learning (RL) is important for applying RL algorithms to many high-stakes tasks. Doing batch RL in a way that yields a reliable new policy in large domains is challenging: a new decision policy may visit states and actions outside the support of the batch data, and function approximation and optimization with limited samples can further increase the potential of learning policies with overly optimistic estimates of their future performance. Some recent approaches to address these concerns have shown promise, but can still be overly optimistic in their expected outcomes. Theoretical work that provides strong guarantees on the performance of the output policy relies on a strong concentrability assumption, which makes it unsuitable for cases where the ratio between state-action distributions of the behavior policy and some candidate policies is large. This is because, in the traditional analysis, the error bound scales up with this ratio. We show that using \emph{pessimistic value estimates} in the low-data regions in the Bellman optimality and evaluation back-ups can yield more adaptive and stronger guarantees when the concentrability assumption does not hold. In certain settings, they can find the approximately best policy within the state-action space explored by the batch data, without requiring a priori assumptions of concentrability. We highlight the necessity of our pessimistic update and the limitations of previous algorithms and analyses through illustrative MDP examples, and demonstrate an empirical comparison of our algorithm and other state-of-the-art batch RL baselines in standard benchmarks.
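A minimal tabular sketch of the pessimistic-backup idea: state-action pairs whose visit count in the batch falls below a threshold are backed up with a worst-case value instead of the empirical estimate. The threshold rule, the constant `v_min`, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pessimistic_q_backup(Q, counts, r_hat, P_hat, gamma, n_min, v_min):
    """One Bellman optimality backup with pessimism in low-data regions.

    Q:       (S, A) current value estimates
    counts:  (S, A) number of batch transitions observed for each pair
    r_hat:   (S, A) empirical mean rewards
    P_hat:   (S, A, S) empirical transition probabilities
    n_min:   visit-count threshold below which we refuse to trust the data
    v_min:   pessimistic value assigned to poorly covered pairs
    """
    V = Q.max(axis=1)                                   # greedy state values
    backup = r_hat + gamma * np.einsum("sap,p->sa", P_hat, V)
    return np.where(counts >= n_min, backup, v_min)     # pessimism off-support

# toy usage: a 2-state, 2-action MDP where one pair is unsupported by the batch
Q = np.zeros((2, 2))
counts = np.array([[10, 0], [7, 12]])
r_hat = np.array([[1.0, 0.0], [0.0, 0.5]])
P_hat = np.tile(np.eye(2)[:, None, :], (1, 2, 1))       # self-loops, per action
for _ in range(50):
    Q = pessimistic_q_backup(Q, counts, r_hat, P_hat, gamma=0.9, n_min=1, v_min=0.0)
print(Q)
```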
Detection as Regression: Certified Object Detection with Median Smoothing
https://papers.nips.cc/paper_files/paper/2020/hash/0dd1bc593a91620daecf7723d2235624-Abstract.html
Ping-yeh Chiang, Michael Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein
https://papers.nips.cc/paper_files/paper/2020/hash/0dd1bc593a91620daecf7723d2235624-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9832-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0dd1bc593a91620daecf7723d2235624-Supplemental.pdf
Despite the vulnerability of object detectors to adversarial attacks, very few defenses are known to date. While adversarial training can improve the empirical robustness of image classifiers, a direct extension to object detection is very expensive. This work is motivated by recent progress on certified classification by randomized smoothing. We start by presenting a reduction from object detection to a regression problem. Then, to enable certified regression, where standard mean smoothing fails, we propose median smoothing, which is of independent interest. We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell_2$-bounded attacks.
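A minimal sketch of median smoothing for a regression output: draw Gaussian noise around the input, evaluate the base function, and return the per-coordinate median. The empirical percentiles shown are only stand-ins for the order-statistic bounds used for certification in the paper, and the toy base function is assumed.

```python
import numpy as np

def median_smoothed(f, x, sigma=0.25, n_samples=1000, q_lo=45.0, q_hi=55.0, seed=0):
    """Median smoothing of a regression function f: R^d -> R^k.

    Returns the per-coordinate median of f under Gaussian input noise, plus
    empirical lower/upper percentiles (illustrative surrogates for certified bounds).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    outputs = np.stack([f(x + eps) for eps in noise])       # (n_samples, k)
    med = np.median(outputs, axis=0)
    lo, hi = np.percentile(outputs, [q_lo, q_hi], axis=0)
    return med, lo, hi

# toy usage: a discontinuous "detector output" where mean smoothing is uninformative
f = lambda z: np.array([np.floor(3.0 * z[0]), z[1] ** 2])
print(median_smoothed(f, np.array([0.4, 1.0])))
```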
Contextual Reserve Price Optimization in Auctions via Mixed Integer Programming
https://papers.nips.cc/paper_files/paper/2020/hash/0e1bacf07b14673fcdb553da51b999a5-Abstract.html
Joey Huchette, Haihao Lu, Hossein Esfandiari, Vahab Mirrokni
https://papers.nips.cc/paper_files/paper/2020/hash/0e1bacf07b14673fcdb553da51b999a5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9833-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1bacf07b14673fcdb553da51b999a5-Supplemental.pdf
We study the problem of learning a linear model to set the reserve price in an auction, given contextual information, in order to maximize expected revenue from the seller side. First, we show that it is not possible to solve this problem in polynomial time unless the Exponential Time Hypothesis fails. Second, we present a strong mixed-integer programming (MIP) formulation for this problem, which is capable of exactly modeling the nonconvex and discontinuous expected reward function. Moreover, we show that this MIP formulation is ideal (i.e. the strongest possible formulation) for the revenue function of a single impression. Since it can be computationally expensive to exactly solve the MIP formulation in practice, we also study the performance of its linear programming (LP) relaxation. Though it may work well in practice, we show that, unfortunately, in the worst case the optimal objective of the LP relaxation can be O(number of samples) times larger than the optimal objective of the true problem. Finally, we present computational results, showcasing that the MIP formulation, along with its LP relaxation, are able to achieve superior in- and out-of-sample performance, as compared to state-of-the-art algorithms on both real and synthetic datasets. More broadly, we believe this work offers an indication of the strength of optimization methodologies like MIP to exactly model intrinsic discontinuities in machine learning problems.
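To see why the expected reward is discontinuous and nonconvex in the reserve price, a small sketch under the assumption of a standard second-price auction with reserve (revenue is zero if the reserve exceeds the highest bid, otherwise the maximum of the second bid and the reserve). This only illustrates the reward landscape the MIP models exactly; it is not the MIP formulation itself.

```python
import numpy as np

def empirical_revenue(reserve, top_bids, second_bids):
    """Average revenue of one reserve price over logged second-price auctions.

    Per auction: 0 if reserve > highest bid, else max(second bid, reserve).
    As a function of `reserve` this is piecewise, discontinuous, and nonconvex,
    which motivates an exact MIP model rather than a convex surrogate.
    """
    sold = reserve <= top_bids
    return np.mean(np.where(sold, np.maximum(second_bids, reserve), 0.0))

# toy usage: sweep reserve prices over a few logged auctions
top = np.array([1.0, 0.8, 0.5, 1.2])
second = np.array([0.4, 0.6, 0.3, 0.9])
for r in [0.0, 0.45, 0.55, 0.85, 1.1]:
    print(r, empirical_revenue(r, top, second))
```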
ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0e1ebad68af7f0ae4830b7ac92bc3c6f-Abstract.html
Shuxuan Guo, Jose M. Alvarez, Mathieu Salzmann
https://papers.nips.cc/paper_files/paper/2020/hash/0e1ebad68af7f0ae4830b7ac92bc3c6f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9834-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Supplemental.pdf
We introduce an approach to training a given compact network. To this end, we leverage over-parameterization, which typically improves both neural network optimization and generalization. Specifically, we propose to expand each linear layer of the compact network into multiple consecutive linear layers, without adding any nonlinearity. As such, the resulting expanded network, or ExpandNet, can be contracted back to the compact one algebraically at inference. In particular, we introduce two convolutional expansion strategies and demonstrate their benefits on several tasks, including image classification, object detection, and semantic segmentation. As evidenced by our experiments, our approach outperforms both training the compact network from scratch and performing knowledge distillation from a teacher. Furthermore, our linear over-parameterization empirically reduces gradient confusion during training and improves the network generalization.
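The algebraic contraction claim is easy to check for fully connected layers: two stacked linear maps with no intervening nonlinearity compose into a single linear map. The numpy check below verifies only that identity; it is not the paper's training procedure or its convolutional expansion strategies.

```python
import numpy as np

rng = np.random.default_rng(0)

# "expanded" network: two consecutive linear layers, no nonlinearity in between
W1, b1 = rng.normal(size=(64, 16)), rng.normal(size=64)   # 16 -> 64 (over-parameterized)
W2, b2 = rng.normal(size=(10, 64)), rng.normal(size=10)   # 64 -> 10

def expanded(x):
    return W2 @ (W1 @ x + b1) + b2

# contracted compact layer, computed algebraically from the expanded weights
W_c, b_c = W2 @ W1, W2 @ b1 + b2

def compact(x):
    return W_c @ x + b_c

x = rng.normal(size=16)
print(np.allclose(expanded(x), compact(x)))   # True: same function, fewer parameters at inference
```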
FleXOR: Trainable Fractional Quantization
https://papers.nips.cc/paper_files/paper/2020/hash/0e230b1a582d76526b7ad7fc62ae937d-Abstract.html
Dongsoo Lee, Se Jung Kwon, Byeongwook Kim, Yongkweon Jeon, Baeseong Park, Jeongin Yun
https://papers.nips.cc/paper_files/paper/2020/hash/0e230b1a582d76526b7ad7fc62ae937d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9835-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e230b1a582d76526b7ad7fc62ae937d-Supplemental.pdf
Quantization based on binary codes is gaining attention because each quantized bit can be directly utilized for computations without dequantization using look-up tables. Previous attempts, however, only allow for integer numbers of quantization bits, which restricts the search space over compression ratio and accuracy. In this paper, we propose an encryption algorithm/architecture to compress quantized weights so as to achieve fractional numbers of bits per weight. Decryption during inference is implemented by digital XOR-gate networks added into the neural network model, while XOR gates are described using $\tanh(x)$ for backward propagation to enable gradient calculations. We perform experiments using MNIST, CIFAR-10, and ImageNet to show that inserting XOR gates learns quantization/encrypted bit decisions through training and obtains high accuracy even for fractional sub-1-bit weights. As a result, our proposed method yields smaller model size and higher accuracy compared to binary neural networks.
The Implications of Local Correlation on Learning Some Deep Functions
https://papers.nips.cc/paper_files/paper/2020/hash/0e4ceef65add6cf21c0f3f9da53b71c0-Abstract.html
Eran Malach, Shai Shalev-Shwartz
https://papers.nips.cc/paper_files/paper/2020/hash/0e4ceef65add6cf21c0f3f9da53b71c0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9836-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e4ceef65add6cf21c0f3f9da53b71c0-Supplemental.pdf
It is known that learning deep neural networks is computationally hard in the worst case. In fact, the proofs of such hardness results show that even weakly learning deep networks is hard. In other words, no efficient algorithm can find a predictor that is slightly better than a random guess. However, we observe that on natural distributions of images, small patches of the input image are correlated with the target label, which implies that on such natural data, efficient weak learning is trivial. While in the distribution-free setting, the celebrated boosting results show that weak learning implies strong learning, in the distribution-specific setting this is not necessarily the case. We introduce a property of distributions, denoted “local correlation”, which requires that small patches of the input image and of intermediate layers of the target function are correlated with the target label. We empirically demonstrate that this property holds for the CIFAR and ImageNet data sets. The main technical result of the paper is proving that, for some classes of deep functions, weak learning implies efficient strong learning under the “local correlation” assumption.
Learning to search efficiently for causally near-optimal treatments
https://papers.nips.cc/paper_files/paper/2020/hash/0e900ad84f63618452210ab8baae0218-Abstract.html
Samuel Håkansson, Viktor Lindblom, Omer Gottesman, Fredrik D. Johansson
https://papers.nips.cc/paper_files/paper/2020/hash/0e900ad84f63618452210ab8baae0218-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9837-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0e900ad84f63618452210ab8baae0218-Supplemental.pdf
Finding an effective medical treatment often requires a search by trial and error. Making this search more efficient by minimizing the number of unnecessary trials could lower both costs and patient suffering. We formalize this problem as learning a policy for finding a near-optimal treatment in a minimum number of trials using a causal inference framework. We give a model-based dynamic programming algorithm which learns from observational data while being robust to unmeasured confounding. To reduce time complexity, we suggest a greedy algorithm which bounds the near-optimality constraint. The methods are evaluated on synthetic and real-world healthcare data and compared to model-free reinforcement learning. We find that our methods compare favorably to the model-free baseline while offering a more transparent trade-off between search time and treatment efficacy.
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
https://papers.nips.cc/paper_files/paper/2020/hash/0ea6f098a59fcf2462afc50d130ff034-Abstract.html
Ambar Pal, Rene Vidal
https://papers.nips.cc/paper_files/paper/2020/hash/0ea6f098a59fcf2462afc50d130ff034-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9838-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ea6f098a59fcf2462afc50d130ff034-Supplemental.pdf
Research in adversarial learning follows a cat and mouse game between attackers and defenders where attacks are proposed, they are mitigated by new defenses, and subsequently new attacks are proposed that break earlier defenses, and so on. However, it has remained unclear as to whether there are conditions under which no better attacks or defenses can be proposed. In this paper, we propose a game-theoretic framework for studying attacks and defenses which exist in equilibrium. Under a locally linear decision boundary model for the underlying binary classifier, we prove that the Fast Gradient Method attack and a Randomized Smoothing defense form a Nash Equilibrium. We then show how this equilibrium defense can be approximated given finitely many samples from a data-generating distribution, and derive a generalization bound for the performance of our approximation.
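A minimal sketch of the two players in this game on a toy linear binary classifier: an $\ell_2$ fast-gradient-method step, and a randomized-smoothing defense that takes the majority vote of the classifier under Gaussian input noise. The classifier, step size, and noise level are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.5, -2.0]), 0.3            # toy linear binary classifier

def classify(x):
    return int(w @ x + b > 0)

def fgm_attack(x, y, eps=1.0):
    """l2 Fast Gradient Method step on the margin y*(w.x + b) of a linear classifier."""
    grad = (1 if y == 1 else -1) * w          # gradient of the margin w.r.t. x
    return x - eps * grad / np.linalg.norm(grad)

def smoothed_classify(x, sigma=0.5, n=2000):
    """Randomized smoothing: majority vote of the base classifier under Gaussian noise."""
    noise = rng.normal(scale=sigma, size=(n, x.size))
    votes = np.array([classify(x + z) for z in noise])
    return int(votes.mean() > 0.5)

x, y = np.array([1.0, 0.2]), 1
x_adv = fgm_attack(x, y)
print(classify(x), classify(x_adv), smoothed_classify(x), smoothed_classify(x_adv))
```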
Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts
https://papers.nips.cc/paper_files/paper/2020/hash/0eac690d7059a8de4b48e90f14510391-Abstract.html
Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2020/hash/0eac690d7059a8de4b48e90f14510391-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9839-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0eac690d7059a8de4b48e90f14510391-Supplemental.pdf
In this work we propose the Posterior Network (PostNet), which uses Normalizing Flows to predict an individual closed-form posterior distribution over predicted probabilities for any input sample. The posterior distributions learned by PostNet accurately reflect uncertainty for in- and out-of-distribution data -- without requiring access to OOD data at training time. PostNet achieves state-of-the-art results in OOD detection and in uncertainty calibration under dataset shifts.
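A sketch of the density-based pseudo-count idea, assuming Dirichlet concentrations of the form alpha_c = prior + N_c * p(z | c), with a Gaussian density standing in for the per-class normalizing flow. The exact parameterization and names are assumptions, not PostNet's implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def dirichlet_from_densities(z, class_counts, class_dists, prior=1.0):
    """Density-based pseudo-counts: alpha_c = prior + N_c * p(z | c).

    class_dists are stand-ins for the per-class normalizing flows. Far from all
    training data, every density is ~0, so alpha -> prior and the predicted
    Dirichlet collapses to the uncertain prior -- the OOD behaviour described above.
    """
    alphas = np.array([prior + n * d.pdf(z) for n, d in zip(class_counts, class_dists)])
    mean_probs = alphas / alphas.sum()
    total_evidence = alphas.sum()
    return alphas, mean_probs, total_evidence

dists = [multivariate_normal(mean=[0, 0]), multivariate_normal(mean=[3, 3])]
counts = [500, 500]
print(dirichlet_from_densities(np.array([0.1, -0.2]), counts, dists))    # in-distribution
print(dirichlet_from_densities(np.array([20.0, -20.0]), counts, dists))  # far OOD
```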
Recurrent Quantum Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/0ec96be397dd6d3cf2fecb4a2d627c1c-Abstract.html
Johannes Bausch
https://papers.nips.cc/paper_files/paper/2020/hash/0ec96be397dd6d3cf2fecb4a2d627c1c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9840-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ec96be397dd6d3cf2fecb4a2d627c1c-Supplemental.zip
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning, such as machine translation and speech synthesis. With applied quantum computing in its infancy, there already exist quantum machine learning models such as variational quantum eigensolvers which have been used e.g. in the context of energy minimization tasks. Yet, to date, no viable recurrent quantum network has been proposed.
No-Regret Learning and Mixed Nash Equilibria: They Do Not Mix
https://papers.nips.cc/paper_files/paper/2020/hash/0ed9422357395a0d4879191c66f4faa2-Abstract.html
Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Thanasis Lianeas, Panayotis Mertikopoulos, Georgios Piliouras
https://papers.nips.cc/paper_files/paper/2020/hash/0ed9422357395a0d4879191c66f4faa2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9841-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ed9422357395a0d4879191c66f4faa2-Supplemental.pdf
Understanding the behavior of no-regret dynamics in general N-player games is a fundamental question in online learning and game theory. A folk result in the field states that, in finite games, the empirical frequency of play under no-regret learning converges to the game’s set of coarse correlated equilibria. By contrast, our understanding of how the day-to-day behavior of the dynamics correlates to the game’s Nash equilibria is much more limited, and only partial results are known for certain classes of games (such as zero-sum or congestion games). In this paper, we study the dynamics of follow the regularized leader (FTRL), arguably the most well-studied class of no-regret dynamics, and we establish a sweeping negative result showing that the notion of mixed Nash equilibrium is antithetical to no-regret learning. Specifically, we show that any Nash equilibrium which is not strict (in that every player has a unique best response) cannot be stable and attracting under the dynamics of FTRL. This result has significant implications for predicting the outcome of a learning process as it shows unequivocally that only strict (and hence, pure) Nash equilibria can emerge as stable limit points thereof.
A Unifying View of Optimism in Episodic Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/0f0e13216262f4a201bec128044dd30f-Abstract.html
Gergely Neu, Ciara Pike-Burke
https://papers.nips.cc/paper_files/paper/2020/hash/0f0e13216262f4a201bec128044dd30f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9842-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0f0e13216262f4a201bec128044dd30f-Supplemental.pdf
The principle of ``optimism in the face of uncertainty'' underpins many theoretically successful reinforcement learning algorithms. In this paper we provide a general framework for designing, analyzing and implementing such algorithms in the episodic reinforcement learning problem. This framework is built upon Lagrangian duality, and demonstrates that every model-optimistic algorithm that constructs an optimistic MDP has an equivalent representation as a value-optimistic dynamic programming algorithm. Typically, it was thought that these two classes of algorithms were distinct, with model-optimistic algorithms benefiting from a cleaner probabilistic analysis while value-optimistic algorithms are easier to implement and thus more practical. With the framework developed in this paper, we show that it is possible to get the best of both worlds by providing a class of algorithms which have a computationally efficient dynamic-programming implementation and also a simple probabilistic analysis. Besides being able to capture many existing algorithms in the tabular setting, our framework can also address large-scale problems under realizable function approximation, where it enables a simple model-based analysis of some recently proposed methods.
Continuous Submodular Maximization: Beyond DR-Submodularity
https://papers.nips.cc/paper_files/paper/2020/hash/0f34132b15dd02f282a11ea1e322a96d-Abstract.html
Moran Feldman, Amin Karbasi
https://papers.nips.cc/paper_files/paper/2020/hash/0f34132b15dd02f282a11ea1e322a96d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9843-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34132b15dd02f282a11ea1e322a96d-Supplemental.pdf
In this paper, we propose the first continuous optimization algorithms that achieve a constant factor approximation guarantee for the problem of monotone continuous submodular maximization subject to a linear constraint. We first prove that a simple variant of the vanilla coordinate ascent, called Coordinate-Ascent+, achieves a $(\frac{e-1}{2e-1}-\epsilon)$-approximation guarantee while performing $O(n/\epsilon)$ iterations, where the computational complexity of each iteration is roughly $O(n/\sqrt{\epsilon}+n\log n)$ (here, $n$ denotes the dimension of the optimization problem). We then propose Coordinate-Ascent++, which achieves the tight $(1-1/e-\epsilon)$-approximation guarantee while performing the same number of iterations, but at a higher computational complexity of roughly $O(n^3/\epsilon^{2.5} + n^3 \log n / \epsilon^2)$ per iteration. However, the computation of each round of Coordinate-Ascent++ can be easily parallelized so that the computational cost per machine scales as $O(n/\sqrt{\epsilon}+n\log n)$.
An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/0f34314d2dd0c1b9311cb8f40eb4f255-Abstract.html
Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric
https://papers.nips.cc/paper_files/paper/2020/hash/0f34314d2dd0c1b9311cb8f40eb4f255-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9844-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0f34314d2dd0c1b9311cb8f40eb4f255-Supplemental.pdf
In the contextual linear bandit setting, algorithms built on the optimism principle fail to exploit the structure of the problem and have been shown to be asymptotically suboptimal. In this paper, we follow recent approaches of deriving asymptotically optimal algorithms from problem-dependent regret lower bounds and we introduce a novel algorithm improving over the state-of-the-art along multiple dimensions. We build on a reformulation of the lower bound, where context distribution and exploration policy are decoupled, and we obtain an algorithm robust to unbalanced context distributions. Then, using an incremental primal-dual approach to solve the Lagrangian relaxation of the lower bound, we obtain a scalable and computationally efficient algorithm. Finally, we remove forced exploration and build on confidence intervals of the optimization problem to encourage a minimum level of exploration that is better adapted to the problem structure. We demonstrate the asymptotic optimality of our algorithm, while providing both problem-dependent and worst-case finite-time regret guarantees. Our bounds scale with the logarithm of the number of arms, thus avoiding the linear dependence common in all related prior works. Notably, we establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits. Finally, we verify that our algorithm obtains better empirical performance than state-of-the-art baselines.
Assessing SATNet's Ability to Solve the Symbol Grounding Problem
https://papers.nips.cc/paper_files/paper/2020/hash/0ff8033cf9437c213ee13937b1c4c455-Abstract.html
Oscar Chang, Lampros Flokas, Hod Lipson, Michael Spranger
https://papers.nips.cc/paper_files/paper/2020/hash/0ff8033cf9437c213ee13937b1c4c455-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9845-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ff8033cf9437c213ee13937b1c4c455-Supplemental.zip
SATNet is an award-winning MAXSAT solver that can be used to infer logical rules and integrated as a differentiable layer in a deep neural network. It had been shown to solve Sudoku puzzles visually from examples of puzzle digit images, and was heralded as an impressive achievement towards the longstanding AI goal of combining pattern recognition with logical reasoning. In this paper, we clarify SATNet's capabilities by showing that in the absence of intermediate labels that identify individual Sudoku digit images with their logical representations, SATNet completely fails at visual Sudoku (0% test accuracy). More generally, the failure can be pinpointed to its inability to learn to assign symbols to perceptual phenomena, also known as the symbol grounding problem, which has long been thought to be a prerequisite for intelligent agents to perform real-world logical reasoning. We propose an MNIST based test as an easy instance of the symbol grounding problem that can serve as a sanity check for differentiable symbolic solvers in general. Naive applications of SATNet on this test lead to performance worse than that of models without logical reasoning capabilities. We report on the causes of SATNet’s failure and how to prevent them.
A Bayesian Nonparametrics View into Deep Representations
https://papers.nips.cc/paper_files/paper/2020/hash/0ffaca95e3e5242ba1097ad8a9a6e95d-Abstract.html
Michał Jamroż, Marcin Kurdziel, Mateusz Opala
https://papers.nips.cc/paper_files/paper/2020/hash/0ffaca95e3e5242ba1097ad8a9a6e95d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9846-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/0ffaca95e3e5242ba1097ad8a9a6e95d-Supplemental.pdf
We investigate neural network representations from a probabilistic perspective. Specifically, we leverage Bayesian nonparametrics to construct models of neural activations in Convolutional Neural Networks (CNNs) and latent representations in Variational Autoencoders (VAEs). This allows us to formulate a tractable complexity measure for distributions of neural activations and to explore global structure of latent spaces learned by VAEs. We use this machinery to uncover how memorization and two common forms of regularization, i.e. dropout and input augmentation, influence representational complexity in CNNs. We demonstrate that networks that can exploit patterns in data learn vastly less complex representations than networks forced to memorize. We also show marked differences between effects of input augmentation and dropout, with the latter strongly depending on network width. Next, we investigate latent representations learned by standard $\beta$-VAEs and Maximum Mean Discrepancy (MMD) $\beta$-VAEs. We show that aggregated posterior in standard VAEs quickly collapses to the diagonal prior when regularization strength increases. MMD-VAEs, on the other hand, learn more complex posterior distributions, even with strong regularization. While this gives a richer sample space, MMD-VAEs do not exhibit independence of latent dimensions. Finally, we leverage our probabilistic models as an effective sampling strategy for latent codes, improving quality of samples in VAEs with rich posteriors.
On the Similarity between the Laplace and Neural Tangent Kernels
https://papers.nips.cc/paper_files/paper/2020/hash/1006ff12c465532f8c574aeaa4461b16-Abstract.html
Amnon Geifman, Abhay Yadav, Yoni Kasten, Meirav Galun, David Jacobs, Basri Ronen
https://papers.nips.cc/paper_files/paper/2020/hash/1006ff12c465532f8c574aeaa4461b16-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9847-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1006ff12c465532f8c574aeaa4461b16-Supplemental.pdf
Recent theoretical work has shown that massively overparameterized neural networks are equivalent to kernel regressors that use Neural Tangent Kernels (NTKs). Experiments show that these kernel methods perform similarly to real neural networks. Here we show that NTK for fully connected networks with ReLU activation is closely related to the standard Laplace kernel. We show theoretically that for normalized data on the hypersphere both kernels have the same eigenfunctions and their eigenvalues decay polynomially at the same rate, implying that their Reproducing Kernel Hilbert Spaces (RKHS) include the same sets of functions. This means that both kernels give rise to classes of functions with the same smoothness properties. The two kernels differ for data off the hypersphere, but experiments indicate that when data is properly normalized these differences are not significant. Finally, we provide experiments on real data comparing NTK and the Laplace kernel, along with a larger class of $\gamma$-exponential kernels. We show that these perform almost identically. Our results suggest that much insight about neural networks can be obtained from analysis of the well-known Laplace kernel, which has a simple closed form.
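The Laplace and $\gamma$-exponential kernels mentioned above have simple closed forms; the sketch below computes their Gram matrices on unit-normalized data (the NTK itself is omitted, since its closed form is more involved). Note the $\gamma$-exponential kernel reduces to the Laplace kernel at $\gamma = 1$.

```python
import numpy as np

def laplace_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y|| / sigma)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-d / sigma)

def gamma_exponential_kernel(X, Y, sigma=1.0, gamma=1.0):
    """k(x, y) = exp(-(||x - y|| / sigma)^gamma); gamma = 1 gives the Laplace kernel."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-(d / sigma) ** gamma)

# toy usage: data normalized to the unit sphere, as in the paper's comparison
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
print(np.allclose(laplace_kernel(X, X), gamma_exponential_kernel(X, X, gamma=1.0)))
```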
A causal view of compositional zero-shot recognition
https://papers.nips.cc/paper_files/paper/2020/hash/1010cedf85f6a7e24b087e63235dc12e-Abstract.html
Yuval Atzmon, Felix Kreuk, Uri Shalit, Gal Chechik
https://papers.nips.cc/paper_files/paper/2020/hash/1010cedf85f6a7e24b087e63235dc12e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9848-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1010cedf85f6a7e24b087e63235dc12e-Supplemental.pdf
Here we describe an approach for compositional generalization that builds on causal ideas. First, we describe compositional zero-shot learning from a causal perspective, and propose to view zero-shot inference as finding "which intervention caused the image?". Second, we present a causal-inspired embedding model that learns disentangled representations of elementary components of visual objects from correlated (confounded) training data. We evaluate this approach on two datasets for predicting new combinations of attribute-object pairs: A well-controlled synthesized images dataset and a real world dataset which consists of fine-grained types of shoes. We show improvements compared to strong baselines.
HiPPO: Recurrent Memory with Optimal Polynomial Projections
https://papers.nips.cc/paper_files/paper/2020/hash/102f0bb6efb3a6128a3c750dd16729be-Abstract.html
Albert Gu, Tri Dao, Stefano Ermon, Atri Rudra, Christopher Ré
https://papers.nips.cc/paper_files/paper/2020/hash/102f0bb6efb3a6128a3c750dd16729be-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9849-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/102f0bb6efb3a6128a3c750dd16729be-Supplemental.pdf
A central problem in learning from sequential data is representing cumulative history in an incremental fashion as more data is processed. We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases. Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem. As special cases, our framework yields a short derivation of the recent Legendre Memory Unit (LMU) from first principles, and generalizes the ubiquitous gating mechanism of recurrent neural networks such as GRUs. This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale. HiPPO-LegS enjoys the theoretical benefits of timescale robustness, fast updates, and bounded gradients. By incorporating the memory dynamics into recurrent neural networks, HiPPO RNNs can empirically capture complex temporal dependencies. On the benchmark permuted MNIST dataset, HiPPO-LegS sets a new state-of-the-art accuracy of 98.3%. Finally, on a novel trajectory classification task testing robustness to out-of-distribution timescales and missing data, HiPPO-LegS outperforms RNN and neural ODE baselines by 25-40% accuracy.
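HiPPO's actual memory updates are derived in continuous time with measure-dependent dynamics; the sketch below only illustrates the underlying primitive of compressing a signal's history into a fixed number of Legendre coefficients, using a simple least-squares fit over a window. The window handling and names are illustrative, not the paper's online update.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_memory(signal_times, signal_values, order=8):
    """Compress a 1-D signal history into `order`+1 Legendre coefficients.

    Times are rescaled to [-1, 1] (the natural domain of Legendre polynomials);
    the coefficients are a fixed-size summary from which the history can be
    approximately reconstructed.
    """
    t = np.asarray(signal_times, dtype=float)
    t_scaled = 2.0 * (t - t.min()) / (t.max() - t.min()) - 1.0
    coeffs = legendre.legfit(t_scaled, signal_values, deg=order)
    reconstruction = legendre.legval(t_scaled, coeffs)
    return coeffs, reconstruction

# toy usage: 9 coefficients summarize 200 samples of a smooth signal
t = np.linspace(0.0, 5.0, 200)
x = np.sin(2.0 * t) + 0.3 * t
coeffs, x_hat = legendre_memory(t, x, order=8)
print(coeffs.shape, np.max(np.abs(x - x_hat)))
```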
Auto Learning Attention
https://papers.nips.cc/paper_files/paper/2020/hash/103303dd56a731e377d01f6a37badae3-Abstract.html
Benteng Ma, Jing Zhang, Yong Xia, Dacheng Tao
https://papers.nips.cc/paper_files/paper/2020/hash/103303dd56a731e377d01f6a37badae3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9850-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/103303dd56a731e377d01f6a37badae3-Supplemental.pdf
Attention modules have been demonstrated to be effective in strengthening the representation ability of a neural network via reweighting spatial or channel features, or stacking both operations sequentially. However, designing the structures of different attention operations requires a bulk of computation and extensive expertise. In this paper, we devise an Auto Learning Attention (AutoLA) method, which is the first attempt at automatic attention design. Specifically, we define a novel attention module named high order group attention (HOGA) as a directed acyclic graph (DAG) where each group represents a node, and each edge represents an operation of heterogeneous attentions. A typical HOGA architecture can be searched automatically via the differential AutoLA method within 1 GPU day using the ResNet-20 backbone on CIFAR10. Further, the searched attention module can generalize to various backbones as a plug-and-play component and outperforms popular manually designed channel and spatial attentions on many vision tasks, including image classification on CIFAR100 and ImageNet, and object detection and human keypoint detection on the COCO dataset. The code will be released.
CASTLE: Regularization via Auxiliary Causal Graph Discovery
https://papers.nips.cc/paper_files/paper/2020/hash/1068bceb19323fe72b2b344ccf85c254-Abstract.html
Trent Kyono, Yao Zhang, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/1068bceb19323fe72b2b344ccf85c254-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9851-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1068bceb19323fe72b2b344ccf85c254-Supplemental.zip
Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables. CASTLE learns the causal directed acyclic graph (DAG) as an adjacency matrix embedded in the neural network's input layers, thereby facilitating the discovery of optimal predictors. Furthermore, CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features. We provide a theoretical generalization bound for our approach and conduct experiments on a plethora of synthetic and real publicly available datasets demonstrating that CASTLE consistently leads to better out-of-sample predictions as compared to other popular benchmark regularizers.
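Differentiable DAG-learning regularizers of this kind typically enforce acyclicity of the learned adjacency matrix with a trace-of-matrix-exponential penalty (as in NOTEARS). The sketch below shows that penalty, which is zero exactly when the weighted adjacency matrix corresponds to a DAG; whether CASTLE uses precisely this form is not asserted here.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(W):
    """h(W) = tr(exp(W * W)) - d (elementwise square); zero iff W encodes a DAG.

    Adding h(W), or an augmented-Lagrangian version of it, to a training loss is
    the standard way to learn an adjacency matrix jointly with a predictor.
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

dag = np.array([[0.0, 0.8, 0.0],
                [0.0, 0.0, 0.5],
                [0.0, 0.0, 0.0]])            # strictly upper-triangular => acyclic
cyclic = dag + np.array([[0.0, 0.0, 0.0],
                         [0.0, 0.0, 0.0],
                         [0.9, 0.0, 0.0]])   # adds a 3-cycle
print(acyclicity_penalty(dag), acyclicity_penalty(cyclic))
```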
Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect
https://papers.nips.cc/paper_files/paper/2020/hash/1091660f3dff84fd648efe31391c5524-Abstract.html
Kaihua Tang, Jianqiang Huang, Hanwang Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/1091660f3dff84fd648efe31391c5524-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9852-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1091660f3dff84fd648efe31391c5524-Supplemental.pdf
As the class size grows, maintaining a balanced dataset across many classes is challenging because the data are long-tailed in nature; it is even impossible when the samples of interest co-exist with each other in one collectable unit, e.g., multiple visual instances in one image. Therefore, long-tailed classification is the key to deep learning at scale. However, existing methods are mainly based on re-weighting/re-sampling heuristics that lack a fundamental theory. In this paper, we establish a causal inference framework, which not only unravels the whys of previous methods, but also derives a new principled solution. Specifically, our theory shows that the SGD momentum is essentially a confounder in long-tailed classification. On one hand, it has a harmful causal effect that biases the tail prediction towards the head. On the other hand, its induced mediation also benefits the representation learning and head prediction. Our framework elegantly disentangles the paradoxical effects of the momentum by pursuing the direct causal effect caused by an input sample. In particular, we use causal intervention in training, and counterfactual reasoning in inference, to remove the "bad" while keeping the "good". We achieve new state-of-the-art results on three long-tailed visual recognition benchmarks: Long-tailed CIFAR-10/-100 and ImageNet-LT for image classification, and LVIS for instance segmentation.
Explainable Voting
https://papers.nips.cc/paper_files/paper/2020/hash/10c72a9d42dd07a028ee910f7854da5d-Abstract.html
Dominik Peters, Ariel D. Procaccia, Alexandros Psomas, Zixin Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/10c72a9d42dd07a028ee910f7854da5d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9853-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/10c72a9d42dd07a028ee910f7854da5d-Supplemental.pdf
The design of voting rules is traditionally guided by desirable axioms. Recent work shows that, surprisingly, the axiomatic approach can also support the generation of explanations for voting outcomes. However, no bounds on the size of these explanations are given; for all we know, they may be unbearably tedious. We prove, however, that outcomes of the important Borda rule can be explained using $O(m^2)$ steps, where $m$ is the number of alternatives. Our main technical result is a general lower bound that, in particular, implies that the foregoing bound is asymptotically tight. We discuss the significance of our results for AI and machine learning, including their potential to bolster an emerging paradigm of automated decision making called virtual democracy.
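The Borda rule referenced above is simple to state: with $m$ alternatives, each voter gives $m-1$ points to their top choice, $m-2$ to the next, and so on, and the alternative with the highest total score wins. A minimal sketch (the explanation machinery of the paper is not reproduced):

```python
from collections import Counter

def borda_winner(rankings):
    """rankings: list of voter preference orders (most preferred first).
    Returns the alternative with the highest total Borda score, plus all scores."""
    m = len(rankings[0])
    scores = Counter()
    for ranking in rankings:
        for position, alternative in enumerate(ranking):
            scores[alternative] += (m - 1) - position
    return max(scores, key=scores.get), dict(scores)

# toy usage: 3 voters over alternatives a, b, c
print(borda_winner([["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]))
```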
Deep Archimedean Copulas
https://papers.nips.cc/paper_files/paper/2020/hash/10eb6500bd1e4a3704818012a1593cc3-Abstract.html
Chun Kai Ling, Fei Fang, J. Zico Kolter
https://papers.nips.cc/paper_files/paper/2020/hash/10eb6500bd1e4a3704818012a1593cc3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9854-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/10eb6500bd1e4a3704818012a1593cc3-Supplemental.pdf
A central problem in machine learning and statistics is to model joint densities of random variables from data. Copulas are joint cumulative distribution functions with uniform marginal distributions and are used to capture interdependencies in isolation from marginals. Copulas are widely used within statistics, but have not gained traction in the context of modern deep learning. In this paper, we introduce ACNet, a novel differentiable neural network architecture that enforces structural properties and enables one to learn an important class of copulas--Archimedean Copulas. Unlike Generative Adversarial Networks, Variational Autoencoders, or Normalizing Flow methods, which learn either densities or the generative process directly, ACNet learns a generator of the copula, which implicitly defines the cumulative distribution function of a joint distribution. We give a probabilistic interpretation of the network parameters of ACNet and use this to derive a simple but efficient sampling algorithm for the learned copula. Our experiments show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
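As background, an Archimedean copula is built from a generator psi via C(u_1, ..., u_d) = psi(psi^{-1}(u_1) + ... + psi^{-1}(u_d)). ACNet learns the generator with a neural network; the sketch below instead uses the classical Clayton generator to illustrate the same structure. It is a hand-written example under that assumption, not ACNet itself.

```python
import numpy as np

def clayton_generator(t, theta):
    """Clayton generator psi(t) = (1 + theta*t)^(-1/theta)."""
    return (1.0 + theta * t) ** (-1.0 / theta)

def clayton_generator_inv(u, theta):
    """Inverse generator psi^{-1}(u) = (u^{-theta} - 1) / theta."""
    return (u ** (-theta) - 1.0) / theta

def archimedean_cdf(u, theta=2.0):
    """C(u_1,...,u_d) = psi(sum_i psi^{-1}(u_i)) for a batch of points u with shape (n, d)."""
    u = np.asarray(u, dtype=float)
    return clayton_generator(clayton_generator_inv(u, theta).sum(axis=-1), theta)

# Example: joint CDF values of two uniforms under a Clayton copula with theta = 2.
print(archimedean_cdf(np.array([[0.3, 0.7], [0.5, 0.5]])))
```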
Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/10fb6cfa4c990d2bad5ddef4f70e8ba2-Abstract.html
Ben Letham, Roberto Calandra, Akshara Rai, Eytan Bakshy
https://papers.nips.cc/paper_files/paper/2020/hash/10fb6cfa4c990d2bad5ddef4f70e8ba2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9855-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Supplemental.pdf
Bayesian optimization (BO) is a popular approach to optimize expensive-to-evaluate black-box functions. A significant challenge in BO is to scale to high-dimensional parameter spaces while retaining sample efficiency. A solution considered in existing literature is to embed the high-dimensional space in a lower-dimensional manifold, often via a random linear embedding. In this paper, we identify several crucial issues and misconceptions about the use of linear embeddings for BO. We study the properties of linear embeddings from the literature and show that some of the design choices in current approaches adversely impact their performance. We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO on a range of problems, including learning a gait policy for robot locomotion.
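The random linear embedding setup that the paper re-examines can be sketched as follows: candidates are proposed in a low-dimensional space and mapped into the high-dimensional box through a random matrix, with clipping to stay in bounds. This is a simplified sketch under common assumptions (a Gaussian embedding matrix and per-coordinate clipping, plus a toy objective); the paper studies how such design choices affect performance.

```python
import numpy as np

rng = np.random.default_rng(0)

D, d = 100, 4                      # ambient and embedding dimensions
A = rng.normal(size=(D, d))        # random linear embedding (one common choice)

def to_ambient(z, low=-1.0, high=1.0):
    """Map a low-dimensional point z to the high-dimensional box [low, high]^D."""
    return np.clip(A @ z, low, high)

def objective(x):
    """Toy black-box function that only depends on a few ambient coordinates."""
    return np.sum((x[:3] - 0.5) ** 2)

# A BO loop would propose z inside a small box; here we just evaluate one random candidate.
z = rng.uniform(-1, 1, size=d)
print(objective(to_ambient(z)))
```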
UnModNet: Learning to Unwrap a Modulo Image for High Dynamic Range Imaging
https://papers.nips.cc/paper_files/paper/2020/hash/1102a326d5f7c9e04fc3c89d0ede88c9-Abstract.html
Chu Zhou, Hang Zhao, Jin Han, Chang Xu, Chao Xu, Tiejun Huang, Boxin Shi
https://papers.nips.cc/paper_files/paper/2020/hash/1102a326d5f7c9e04fc3c89d0ede88c9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9856-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1102a326d5f7c9e04fc3c89d0ede88c9-Supplemental.pdf
A conventional camera often suffers from over- or under-exposure when recording a real-world scene with a very high dynamic range (HDR). In contrast, a modulo camera with a Markov random field (MRF) based unwrapping algorithm can theoretically achieve unbounded dynamic range, but its performance degrades in the presence of modulus-intensity ambiguity, strong local contrast, and color misalignment. In this paper, we reformulate the modulo image unwrapping problem into a series of binary labeling problems and propose a modulo edge-aware model, named UnModNet, to iteratively estimate the binary rollover masks of the modulo image for unwrapping. Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm thanks to GPU acceleration.
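The modulo-camera measurement model is easy to simulate: the sensor wraps scene intensities by its bit depth, and unwrapping has to recover the lost rollover counts. The toy sketch below assumes the rollover counts are already known, which is exactly the quantity UnModNet has to estimate mask by mask.

```python
import numpy as np

def wrap_to_modulo(hdr, bits=8):
    """Simulate an 8-bit modulo camera observing a higher-dynamic-range image."""
    return np.mod(hdr, 2 ** bits).astype(np.uint8)

def unwrap_with_rollovers(modulo_img, rollover_count, bits=8):
    """Recover the HDR image given (here assumed known) per-pixel rollover counts."""
    return modulo_img.astype(np.int32) + rollover_count.astype(np.int32) * (2 ** bits)

hdr = np.array([[100, 300, 700]])          # 12-bit-style intensities
wrapped = wrap_to_modulo(hdr)              # [[100, 44, 188]] after wrapping by 256
rollovers = hdr // 256                     # the unknown UnModNet estimates in practice
print(unwrap_with_rollovers(wrapped, rollovers))  # recovers [[100, 300, 700]]
```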
Thunder: a Fast Coordinate Selection Solver for Sparse Learning
https://papers.nips.cc/paper_files/paper/2020/hash/11348e03e23b137d55d94464250a67a2-Abstract.html
Shaogang Ren, Weijie Zhao, Ping Li
https://papers.nips.cc/paper_files/paper/2020/hash/11348e03e23b137d55d94464250a67a2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9857-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11348e03e23b137d55d94464250a67a2-Supplemental.pdf
L1 regularization has been broadly employed to pursue model sparsity. Despite the non-smoothness, efficient algorithms have been developed by leveraging the sparsity and convexity of the problems. In this paper, we propose a novel active incremental approach to further improve the efficiency of the solvers. We show that our method performs well even when existing methods fail due to low solution sparsity or a demanding accuracy requirement. Theoretical analysis and experimental results on synthetic and real-world data sets validate the advantages of the method.
Neural Networks Fail to Learn Periodic Functions and How to Fix It
https://papers.nips.cc/paper_files/paper/2020/hash/1160453108d3e537255e9f7b931f4e90-Abstract.html
Liu Ziyin, Tilman Hartwig, Masahito Ueda
https://papers.nips.cc/paper_files/paper/2020/hash/1160453108d3e537255e9f7b931f4e90-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9858-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1160453108d3e537255e9f7b931f4e90-Supplemental.pdf
Previous literature offers limited clues on how to learn a periodic function using modern neural networks. We start with a study of the extrapolation properties of neural networks; we prove and demonstrate experimentally that standard activation functions, such as ReLU, tanh, and sigmoid, along with their variants, all fail to learn to extrapolate simple periodic functions. We hypothesize that this is due to their lack of a "periodic" inductive bias. To fix this problem, we propose a new activation, namely $x + \sin^2(x)$, which achieves the desired periodic inductive bias to learn a periodic function while maintaining the favorable optimization properties of ReLU-based activations. Experimentally, we apply the proposed method to temperature and financial data prediction.
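The proposed activation is simple enough to state directly. Below is a small numpy sketch of x + sin^2(x), with an optional frequency parameter a added purely for illustration; the abstract only states the a = 1 case, so the parameterized form is an assumption of this sketch.

```python
import numpy as np

def periodic_activation(x, a=1.0):
    """x + sin^2(a*x)/a; reduces to the abstract's x + sin^2(x) when a = 1."""
    return x + np.sin(a * x) ** 2 / a

x = np.linspace(-10, 10, 5)
print(periodic_activation(x))          # grows roughly linearly while oscillating, unlike ReLU/tanh
print(periodic_activation(x, a=5.0))   # a larger a gives higher-frequency oscillations
```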
Distribution Matching for Crowd Counting
https://papers.nips.cc/paper_files/paper/2020/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html
Boyu Wang, Huidong Liu, Dimitris Samaras, Minh Hoai Nguyen
https://papers.nips.cc/paper_files/paper/2020/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9859-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/118bd558033a1016fcc82560c65cca5f-Supplemental.pdf
In crowd counting, each training image contains multiple people, where each person is annotated by a dot. Existing crowd counting methods need to use a Gaussian to smooth each annotated dot or to estimate the likelihood of every pixel given the annotated point. In this paper, we show that imposing Gaussians to annotations hurts generalization performance. Instead, we propose to use Distribution Matching for crowd COUNTing (DM-Count). In DM-Count, we use Optimal Transport (OT) to measure the similarity between the normalized predicted density map and the normalized ground truth density map. To stabilize OT computation, we include a Total Variation loss in our model. We show that the generalization error bound of DM-Count is tighter than that of the Gaussian smoothed methods. In terms of Mean Absolute Error, DM-Count outperforms the previous state-of-the-art methods by a large margin on two large-scale counting datasets, UCF-QNRF and NWPU, and achieves the state-of-the-art results on the ShanghaiTech and UCF-CC50 datasets. DM-Count reduced the error of the state-of-the-art published result by approximately 16%. Code is available at https://github.com/cvlab-stonybrook/DM-Count.
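Two of the three ingredients in DM-Count, the count loss and the total-variation loss on normalized density maps, are cheap to write down; the sketch below shows only these two terms and omits the OT term, which needs a Sinkhorn-style solver. It is a rough illustration, not the authors' released implementation linked above.

```python
import numpy as np

def dm_count_partial_loss(pred_density, gt_density, eps=1e-8):
    """Count loss plus total-variation loss on normalized density maps (OT term omitted)."""
    pred_count = pred_density.sum()
    gt_count = gt_density.sum()
    count_loss = abs(pred_count - gt_count)

    pred_norm = pred_density / (pred_count + eps)
    gt_norm = gt_density / (gt_count + eps)
    tv_loss = 0.5 * np.abs(pred_norm - gt_norm).sum()
    return count_loss, tv_loss

pred = np.random.rand(64, 64)                                   # toy predicted density map
gt = np.zeros((64, 64)); gt[10, 20] = 1.0; gt[40, 50] = 1.0     # two annotated dots
print(dm_count_partial_loss(pred, gt))
```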
Correspondence learning via linearly-invariant embedding
https://papers.nips.cc/paper_files/paper/2020/hash/11953163dd7fb12669b41a48f78a29b6-Abstract.html
Riccardo Marin, Marie-Julie Rakotosaona, Simone Melzi, Maks Ovsjanikov
https://papers.nips.cc/paper_files/paper/2020/hash/11953163dd7fb12669b41a48f78a29b6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9860-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11953163dd7fb12669b41a48f78a29b6-Supplemental.pdf
In this paper, we propose a fully differentiable pipeline for estimating accurate dense correspondences between 3D point clouds. The proposed pipeline is an extension and a generalization of the functional maps framework. However, instead of using the Laplace-Beltrami eigenfunctions as done in virtually all previous works in this domain, we demonstrate that learning the basis from data can both improve robustness and lead to better accuracy in challenging settings. We interpret the basis as a learned embedding into a higher dimensional space. Following the functional map paradigm the optimal transformation in this embedding space must be linear and we propose a separate architecture aimed at estimating the transformation by learning optimal descriptor functions. This leads to the first end-to-end trainable functional map-based correspondence approach in which both the basis and the descriptors are learned from data. Interestingly, we also observe that learning a canonical embedding leads to worse results, suggesting that leaving an extra linear degree of freedom to the embedding network gives it more robustness, thereby also shedding light onto the success of previous methods. Finally, we demonstrate that our approach achieves state-of-the-art results in challenging non-rigid 3D point cloud correspondence applications.
Learning to Dispatch for Job Shop Scheduling via Deep Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/11958dfee29b6709f48a9ba0387a2431-Abstract.html
Cong Zhang, Wen Song, Zhiguang Cao, Jie Zhang, Puay Siew Tan, Xu Chi
https://papers.nips.cc/paper_files/paper/2020/hash/11958dfee29b6709f48a9ba0387a2431-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9861-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11958dfee29b6709f48a9ba0387a2431-Supplemental.pdf
Priority dispatching rules (PDRs) are widely used for solving real-world job-shop scheduling problems (JSSP). However, the design of effective PDRs is a tedious task, requiring a great deal of specialized knowledge and often delivering limited performance. In this paper, we propose to automatically learn PDRs via an end-to-end deep reinforcement learning agent. We exploit the disjunctive graph representation of JSSP and propose a Graph Neural Network based scheme to embed the states encountered during solving. The resulting policy network is size-agnostic, effectively enabling generalization to large-scale instances. Experiments show that the agent can learn high-quality PDRs from scratch with elementary raw features, and demonstrates strong performance against the best existing PDRs. The learned policies also perform well on much larger instances that are unseen in training.
On Adaptive Attacks to Adversarial Example Defenses
https://papers.nips.cc/paper_files/paper/2020/hash/11f38f8ecd71867b42433548d1078e38-Abstract.html
Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry
https://papers.nips.cc/paper_files/paper/2020/hash/11f38f8ecd71867b42433548d1078e38-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9862-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/11f38f8ecd71867b42433548d1078e38-Supplemental.zip
While prior evaluation papers focused mainly on the end result---showing that a defense was ineffective---this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
Sinkhorn Natural Gradient for Generative Models
https://papers.nips.cc/paper_files/paper/2020/hash/122e27d57ae8ecb37f3f1da67abb33cb-Abstract.html
Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
https://papers.nips.cc/paper_files/paper/2020/hash/122e27d57ae8ecb37f3f1da67abb33cb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9863-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/122e27d57ae8ecb37f3f1da67abb33cb-Supplemental.pdf
We consider the problem of minimizing a functional over a parametric family of probability measures, where the parameterization is characterized via a push-forward structure. An important application of this problem is in training generative adversarial networks. In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence. We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically with respect to the desired accuracy. This is in sharp contrast to existing natural gradient methods that can only be carried out approximately. Moreover, in practical applications when only Monte-Carlo type integration is available, we design an empirical estimator for SIM and provide a stability analysis. In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
Online Sinkhorn: Optimal Transport distances from sample streams
https://papers.nips.cc/paper_files/paper/2020/hash/123650dd0560587918b3d771cf0c0171-Abstract.html
Arthur Mensch, Gabriel Peyré
https://papers.nips.cc/paper_files/paper/2020/hash/123650dd0560587918b3d771cf0c0171-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9864-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/123650dd0560587918b3d771cf0c0171-Supplemental.pdf
Optimal Transport (OT) distances are now routinely used as loss functions in ML tasks. Yet, computing OT distances between arbitrary (i.e. not necessarily discrete) probability distributions remains an open problem. This paper introduces a new online estimator of entropy-regularized OT distances between two such arbitrary distributions. It uses streams of samples from both distributions to iteratively enrich a non-parametric representation of the transportation plan. Compared to the classic Sinkhorn algorithm, our method leverages new samples at each iteration, which enables a consistent estimation of the true regularized OT distance. We provide a theoretical analysis of the convergence of the online Sinkhorn algorithm, showing a nearly-1/n asymptotic sample complexity for the iterate sequence. We validate our method on synthetic 1-d to 10-d data and on real 3-d shape data.
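For reference, the classic batch Sinkhorn iteration between two discrete measures looks as follows; the paper's online estimator replaces the fixed supports with streams of fresh samples and enriches a non-parametric representation of the potentials. The sketch below is the standard algorithm, not the online variant, and uses a moderate regularization so the plain (non-log-domain) updates stay numerically stable.

```python
import numpy as np

def sinkhorn(a, b, cost, eps=0.5, n_iter=200):
    """Classic Sinkhorn iterations for entropy-regularized OT between discrete measures."""
    K = np.exp(-cost / eps)                 # Gibbs kernel (log-domain versions are used for small eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]      # entropy-regularized transport plan
    return np.sum(plan * cost)              # transport cost under that plan

x = np.random.randn(50, 2)
y = np.random.randn(60, 2) + 1.0
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
a = np.full(50, 1 / 50); b = np.full(60, 1 / 60)
print(sinkhorn(a, b, cost))
```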
Ultrahyperbolic Representation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/123b7f02433572a0a560e620311a469c-Abstract.html
Marc Law, Jos Stam
https://papers.nips.cc/paper_files/paper/2020/hash/123b7f02433572a0a560e620311a469c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9865-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/123b7f02433572a0a560e620311a469c-Supplemental.pdf
In machine learning, data is usually represented in a (flat) Euclidean space where distances between points are along straight lines. Researchers have recently considered more exotic (non-Euclidean) Riemannian manifolds such as hyperbolic space which is well suited for tree-like data. In this paper, we propose a representation living on a pseudo-Riemannian manifold of constant nonzero curvature. It is a generalization of hyperbolic and spherical geometries where the non-degenerate metric tensor need not be positive definite. We provide the necessary learning tools in this geometry and extend gradient method optimization techniques. More specifically, we provide closed-form expressions for distances via geodesics and define a descent direction to minimize some objective function. Our novel framework is applied to graph representations.
Locally-Adaptive Nonparametric Online Learning
https://papers.nips.cc/paper_files/paper/2020/hash/12780ea688a71dabc284b064add459a4-Abstract.html
Ilja Kuzborskij, Nicolò Cesa-Bianchi
https://papers.nips.cc/paper_files/paper/2020/hash/12780ea688a71dabc284b064add459a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9866-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12780ea688a71dabc284b064add459a4-Supplemental.pdf
One of the main strengths of online algorithms is their ability to adapt to arbitrary data sequences. This is especially important in nonparametric settings, where performance is measured against rich classes of comparator functions that are able to fit complex environments. Although such hard comparators and complex environments may exhibit local regularities, efficient algorithms, which can provably take advantage of these local patterns, are hardly known. We fill this gap by introducing efficient online algorithms (based on a single versatile master algorithm) each adapting to one of the following regularities: (i) local Lipschitzness of the competitor function, (ii) local metric dimension of the instance sequence, (iii) local performance of the predictor across different regions of the instance space. Extending previous approaches, we design algorithms that dynamically grow hierarchical ε-nets on the instance space whose prunings correspond to different “locality profiles” for the problem at hand. Using a technique based on tree experts, we simultaneously and efficiently compete against all such prunings, and prove regret bounds each scaling with a quantity associated with a different type of local regularity. When competing against “simple” locality profiles, our technique delivers regret bounds that are significantly better than those proven using the previous approach. On the other hand, the time dependence of our bounds is not worse than that obtained by ignoring any local regularities.
Compositional Generalization via Neural-Symbolic Stack Machines
https://papers.nips.cc/paper_files/paper/2020/hash/12b1e42dc0746f22cf361267de07073f-Abstract.html
Xinyun Chen, Chen Liang, Adams Wei Yu, Dawn Song, Denny Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/12b1e42dc0746f22cf361267de07073f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9867-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12b1e42dc0746f22cf361267de07073f-Supplemental.pdf
Despite achieving tremendous success, existing deep learning models have exposed limitations in compositional generalization, the capability to learn compositional rules and apply them to unseen cases in a systematic manner. To tackle this issue, we propose the Neural-Symbolic Stack Machine (NeSS). It contains a neural network to generate traces, which are then executed by a symbolic stack machine enhanced with sequence manipulation operations. NeSS combines the expressive power of neural sequence models with the recursion supported by the symbolic stack machine. Without training supervision on execution traces, NeSS achieves 100% generalization performance in four domains: the SCAN benchmark of language-driven navigation tasks, the task of few-shot learning of compositional instructions, the compositional machine translation benchmark, and context-free grammar parsing tasks.
Graphon Neural Networks and the Transferability of Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/12bcd658ef0a540cabc36cdf2b1046fd-Abstract.html
Luana Ruiz, Luiz Chamon, Alejandro Ribeiro
https://papers.nips.cc/paper_files/paper/2020/hash/12bcd658ef0a540cabc36cdf2b1046fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9868-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12bcd658ef0a540cabc36cdf2b1046fd-Supplemental.pdf
Graph neural networks (GNNs) rely on graph convolutions to extract local features from network data. These graph convolutions combine information from adjacent nodes using coefficients that are shared across all nodes. Since these coefficients are shared and do not depend on the graph, one can envision using the same coefficients to define a GNN on another graph. This motivates analyzing the transferability of GNNs across graphs. In this paper we introduce graphon NNs as limit objects of GNNs and prove a bound on the difference between the output of a GNN and its limit graphon-NN. This bound vanishes with growing number of nodes if the graph convolutional filters are bandlimited in the graph spectral domain. This result establishes a tradeoff between discriminability and transferability of GNNs.
Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
https://papers.nips.cc/paper_files/paper/2020/hash/12d16adf4a9355513f9d574b76087a08-Abstract.html
Mohsen Bayati, Nima Hamidi, Ramesh Johari, Khashayar Khosravi
https://papers.nips.cc/paper_files/paper/2020/hash/12d16adf4a9355513f9d574b76087a08-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9869-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12d16adf4a9355513f9d574b76087a08-Review.html
null
We study the structure of regret-minimizing policies in the {\em many-armed} Bayesian multi-armed bandit problem: in particular, with $k$ the number of arms and $T$ the time horizon, we consider the case where $k \geq \sqrt{T}$. We first show that {\em subsampling} is a critical step for designing optimal policies. In particular, the standard UCB algorithm leads to sub-optimal regret bounds in the many-armed regime. However, a subsampled UCB (SS-UCB), which samples $\Theta(\sqrt{T})$ arms and executes UCB only on that subset, is rate-optimal. Despite theoretically optimal regret, even SS-UCB performs poorly due to excessive exploration of suboptimal arms. In particular, in numerical experiments SS-UCB performs worse than a simple greedy algorithm (and its subsampled version) that pulls the current empirical best arm at every time period. We show that these insights hold even in a contextual setting, using real-world data. These empirical results suggest a novel form of {\em free exploration} in the many-armed regime that benefits greedy algorithms. We theoretically study this new source of free exploration and find that it is deeply connected to the distribution of a certain tail event for the prior distribution of arm rewards. This is a fundamentally distinct phenomenon from free exploration as discussed in the recent literature on contextual bandits, where free exploration arises due to variation in contexts. We use this insight to prove that the subsampled greedy algorithm is rate-optimal for Bernoulli bandits when $k > \sqrt{T}$, and achieves sublinear regret with more general distributions. This is a case where theoretical rate optimality does not tell the whole story: when complemented by the empirical observations of our paper, the power of greedy algorithms becomes quite evident. Taken together, from a practical standpoint, our results suggest that in applications it may be preferable to use a variant of the greedy algorithm in the many-armed regime.
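The subsampled greedy strategy discussed above is easy to simulate: sample on the order of sqrt(T) arms uniformly at random, then always pull the arm with the best empirical mean among them. The following is a toy Bernoulli simulation with made-up parameters, not the authors' experimental code.

```python
import numpy as np

def subsampled_greedy(arm_means, T, rng):
    """Subsample ~sqrt(T) arms, then greedily pull the empirically best one among them."""
    k = len(arm_means)
    sub = rng.choice(k, size=min(k, int(np.sqrt(T))), replace=False)
    pulls = np.zeros(len(sub)); rewards = np.zeros(len(sub))
    total = 0.0
    for t in range(T):
        if t < len(sub):
            i = t                                    # pull each subsampled arm once first
        else:
            i = np.argmax(rewards / np.maximum(pulls, 1))
        r = rng.random() < arm_means[sub[i]]         # Bernoulli reward
        pulls[i] += 1; rewards[i] += r; total += r
    return total

rng = np.random.default_rng(0)
means = rng.uniform(size=10_000)                      # many-armed regime: k >> sqrt(T)
print(subsampled_greedy(means, T=5_000, rng=rng))     # cumulative reward of the greedy policy
```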
Gamma-Models: Generative Temporal Difference Learning for Infinite-Horizon Prediction
https://papers.nips.cc/paper_files/paper/2020/hash/12ffb0968f2f56e51a59a6beb37b2859-Abstract.html
Michael Janner, Igor Mordatch, Sergey Levine
https://papers.nips.cc/paper_files/paper/2020/hash/12ffb0968f2f56e51a59a6beb37b2859-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9870-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/12ffb0968f2f56e51a59a6beb37b2859-Supplemental.pdf
We introduce the gamma-model, a predictive model of environment dynamics with an infinite, probabilistic horizon. Replacing standard single-step models with gamma-models leads to generalizations of the procedures that form the foundation of model-based control, including the model rollout and model-based value estimation. The gamma-model, trained with a generative reinterpretation of temporal difference learning, is a natural continuous analogue of the successor representation and a hybrid between model-free and model-based mechanisms. Like a value function, it contains information about the long-term future; like a standard predictive model, it is independent of task reward. We instantiate the gamma-model as both a generative adversarial network and normalizing flow, discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors, and empirically investigate its utility for prediction and control.
Deep Transformers with Latent Depth
https://papers.nips.cc/paper_files/paper/2020/hash/1325cdae3b6f0f91a1b629307bf2d498-Abstract.html
Xian Li, Asa Cooper Stickland, Yuqing Tang, Xiang Kong
https://papers.nips.cc/paper_files/paper/2020/hash/1325cdae3b6f0f91a1b629307bf2d498-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9871-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1325cdae3b6f0f91a1b629307bf2d498-Supplemental.pdf
The Transformer model has achieved state-of-the-art performance in many sequence modeling tasks. However, how to leverage model capacity with large or variable depths is still an open challenge. We present a probabilistic framework to automatically learn which layer(s) to use by learning the posterior distributions of layer selection. As an extension of this framework, we propose a novel method to train one shared Transformer network for multilingual machine translation with different layer selection posteriors for each language pair. The proposed method alleviates the vanishing gradient issue and enables stable training of deep Transformers (e.g. 100 layers). We evaluate on WMT English-German machine translation and masked language modeling tasks, where our method outperforms existing approaches for training deeper Transformers. Experiments on multilingual machine translation demonstrate that this approach can effectively leverage increased model capacity and bring universal improvement for both many-to-one and one-to-many translation with diverse language pairs.
Neural Mesh Flow: 3D Manifold Mesh Generation via Diffeomorphic Flows
https://papers.nips.cc/paper_files/paper/2020/hash/1349b36b01e0e804a6c2909a6d0ec72a-Abstract.html
Kunal Gupta, Manmohan Chandraker
https://papers.nips.cc/paper_files/paper/2020/hash/1349b36b01e0e804a6c2909a6d0ec72a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9872-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1349b36b01e0e804a6c2909a6d0ec72a-Supplemental.zip
Meshes are important representations of physical 3D entities in the virtual world. Applications like rendering, simulations and 3D printing require meshes to be manifold so that they can interact with the world like the real objects they represent. Prior methods generate meshes with great geometric accuracy but poor manifoldness. In this work, we propose NeuralMeshFlow (NMF) to generate two-manifold meshes for genus-0 shapes. Specifically, NMF is a shape auto-encoder consisting of several Neural Ordinary Differential Equation (NODE) [1] blocks that learn accurate mesh geometry by progressively deforming a spherical mesh. Training NMF is simpler compared to state-of-the-art methods since it does not require any explicit mesh-based regularization. Our experiments demonstrate that NMF facilitates several applications such as single-view mesh reconstruction, global shape parameterization, texture mapping, shape deformation and correspondence. Importantly, we demonstrate that manifold meshes generated using NMF are better-suited for physically-based rendering and simulation compared to prior works.
Statistical control for spatio-temporal MEG/EEG source imaging with desparsified mutli-task Lasso
https://papers.nips.cc/paper_files/paper/2020/hash/1359aa933b48b754a2f54adb688bfa77-Abstract.html
Jerome-Alexis Chevalier, Joseph Salmon, Alexandre Gramfort, Bertrand Thirion
https://papers.nips.cc/paper_files/paper/2020/hash/1359aa933b48b754a2f54adb688bfa77-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9873-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1359aa933b48b754a2f54adb688bfa77-Supplemental.zip
Detecting where and when brain regions activate in a cognitive task or in a given clinical condition is the promise of non-invasive techniques like magnetoencephalography (MEG) or electroencephalography (EEG). This problem, referred to as source localization or source imaging, however poses a high-dimensional statistical inference challenge. While sparsity-promoting regularizations have been proposed to address the regression problem, it remains unclear how to ensure statistical control of false detections in this setting. Moreover, MEG/EEG source imaging requires working with spatio-temporal data and autocorrelated noise. To deal with this, we adapt the desparsified Lasso estimator ---an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature correlation assumptions--- to temporal data corrupted with autocorrelated noise. We call it the desparsified multi-task Lasso (d-MTLasso). We combine d-MTLasso with spatially constrained clustering to reduce data dimension and with ensembling to mitigate the arbitrary choice of clustering; the resulting estimator is called ensemble of clustered desparsified multi-task Lasso (ecd-MTLasso). With respect to current procedures, the two advantages of ecd-MTLasso are that i) it offers statistical guarantees and ii) it allows trading spatial specificity for sensitivity, leading to a powerful adaptive method. Extensive simulations on realistic head geometries, as well as empirical results on various MEG datasets, demonstrate the high recovery performance of ecd-MTLasso and its primary practical benefit: offering a statistically principled way to threshold MEG/EEG source maps.
A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees
https://papers.nips.cc/paper_files/paper/2020/hash/1373b284bc381890049e92d324f56de0-Abstract.html
Haoran Zhu, Pavankumar Murali, Dzung Phan, Lam Nguyen, Jayant Kalagnanam
https://papers.nips.cc/paper_files/paper/2020/hash/1373b284bc381890049e92d324f56de0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9874-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1373b284bc381890049e92d324f56de0-Supplemental.pdf
Several recent publications report advances in training optimal decision trees (ODTs) using mixed-integer programs (MIPs), due to algorithmic advances in integer programming and a growing interest in addressing the inherent suboptimality of heuristic approaches such as CART. In this paper, we propose a novel MIP formulation, based on the 1-norm support vector machine model, to train a binary oblique ODT for classification problems. We further present techniques, such as cutting planes, to tighten its linear relaxation and improve run times to reach optimality. Using 36 datasets from the University of California Irvine Machine Learning Repository, we demonstrate that our training approach outperforms its counterparts from the literature in terms of out-of-sample performance (around 10% improvement in mean out-of-sample testing accuracy). Towards our goal of developing a scalable framework to train multivariate ODTs on large datasets, we propose a new linear programming based data selection method to choose a subset of the data, and use it to train a decision tree through our proposed MIP model. We conclude this paper with extensive numerical testing results that showcase the generalization performance of our new MIP formulation and the improvement in mean out-of-sample accuracy on large datasets.
Efficient Exact Verification of Binarized Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/1385974ed5904a438616ff7bdb3f7439-Abstract.html
Kai Jia, Martin Rinard
https://papers.nips.cc/paper_files/paper/2020/hash/1385974ed5904a438616ff7bdb3f7439-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9875-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1385974ed5904a438616ff7bdb3f7439-Supplemental.pdf
Concerned with the reliability of neural networks, researchers have developed verification techniques to prove their robustness. Most verifiers work with real-valued networks. Unfortunately, the exact (complete and sound) verifiers face scalability challenges and provide no correctness guarantees due to floating point errors. We argue that Binarized Neural Networks (BNNs) provide comparable robustness and allow exact and significantly more efficient verification. We present a new system, EEV, for efficient and exact verification of BNNs. EEV consists of two parts: (i) a novel SAT solver that speeds up BNN verification by natively handling the reified cardinality constraints arising in BNN encodings; and (ii) strategies to train solver-friendly robust BNNs by inducing balanced layer-wise sparsity and low cardinality bounds, and adaptively cancelling the gradients. We demonstrate the effectiveness of EEV by presenting the first exact verification results for L-inf-bounded adversarial robustness of nontrivial convolutional BNNs on the MNIST and CIFAR10 datasets. Compared to exact verification of real-valued networks of the same architectures on the same tasks, EEV verifies BNNs hundreds to thousands of times faster, while delivering comparable verifiable accuracy in most cases.
Ultra-Low Precision 4-bit Training of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/13b919438259814cd5be8cb45877d577-Abstract.html
Xiao Sun, Naigang Wang, Chia-Yu Chen, Jiamin Ni, Ankur Agrawal, Xiaodong Cui, Swagath Venkataramani, Kaoutar El Maghraoui, Vijayalakshmi (Viji) Srinivasan, Kailash Gopalakrishnan
https://papers.nips.cc/paper_files/paper/2020/hash/13b919438259814cd5be8cb45877d577-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9876-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13b919438259814cd5be8cb45877d577-Supplemental.pdf
In this paper, we propose a number of novel techniques and numerical representation formats that enable, for the very first time, the precision of training systems to be aggressively scaled from 8-bits to 4-bits. To enable this advance, we explore a novel adaptive Gradient Scaling technique (Gradscale) that addresses the challenges of insufficient range and resolution in quantized gradients as well as explores the impact of quantization errors observed during model training. We theoretically analyze the role of bias in gradient quantization and propose solutions that mitigate the impact of this bias on model convergence. Finally, we examine our techniques on a spectrum of deep learning models in computer vision, speech, and NLP. In combination with previously proposed solutions for 4-bit quantization of weight and activation tensors, 4-bit training shows a non-significant loss in accuracy across application domains while enabling significant hardware acceleration (> 7X over state-of-the-art FP16 systems).
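The general recipe of scaling gradients into a narrow representable range before quantizing can be illustrated with a per-tensor scale and a uniform signed 4-bit grid. Note that the uniform int4 grid and max-based scale below are assumptions made purely for illustration; the paper's GradScale technique and its numerical formats are more elaborate.

```python
import numpy as np

def quantize_int4_with_scale(grad, eps=1e-12):
    """Scale a gradient tensor into a symmetric 4-bit integer range, quantize, then rescale."""
    levels = 7                                   # symmetric signed 4-bit grid: codes in [-7, 7]
    scale = np.max(np.abs(grad)) / levels + eps  # per-tensor scale (one simple choice)
    codes = np.clip(np.round(grad / scale), -levels, levels)
    return codes * scale, codes.astype(np.int8)

grad = np.random.randn(4, 4) * 1e-3              # small gradients, as is typical late in training
dequantized, codes = quantize_int4_with_scale(grad)
print(np.abs(grad - dequantized).max())          # worst-case quantization error after rescaling
```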
Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS
https://papers.nips.cc/paper_files/paper/2020/hash/13d4635deccc230c944e4ff6e03404b5-Abstract.html
Han Shi, Renjie Pi, Hang Xu, Zhenguo Li, James Kwok, Tong Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/13d4635deccc230c944e4ff6e03404b5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9877-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13d4635deccc230c944e4ff6e03404b5-Supplemental.zip
Neural Architecture Search (NAS) has shown great potential in finding better neural network designs. Sample-based NAS is the most reliable approach, aiming to explore the search space and evaluate the most promising architectures, but it is computationally very costly. As a remedy, the one-shot approach has emerged as a popular technique for accelerating NAS via weight-sharing. However, due to weight-sharing across vastly different networks, the one-shot approach is less reliable than the sample-based approach. In this work, we propose BONAS (Bayesian Optimized Neural Architecture Search), a sample-based NAS framework that is accelerated using weight-sharing to evaluate multiple related architectures simultaneously. Specifically, we apply a Graph Convolutional Network predictor as a surrogate model for Bayesian Optimization to select multiple related candidate models in each iteration. We then apply weight-sharing to train multiple candidate models simultaneously. This approach not only accelerates the traditional sample-based approach significantly, but also keeps its reliability, because weight-sharing among related architectures is more reliable than weight-sharing in the one-shot approach. Extensive experiments are conducted to verify the effectiveness of our method over many competing algorithms.
On Numerosity of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/13e36f06c66134ad65f532e90d898545-Abstract.html
Xi Zhang, Xiaolin Wu
https://papers.nips.cc/paper_files/paper/2020/hash/13e36f06c66134ad65f532e90d898545-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9878-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13e36f06c66134ad65f532e90d898545-Supplemental.pdf
Recently, a provocative claim was published that number sense spontaneously emerges in a deep neural network trained merely for visual object recognition. This would, if true, have far-reaching significance for the fields of machine learning and cognitive science alike. In this paper, we prove the above claim to be unfortunately incorrect. The statistical analysis supporting the claim is flawed in that the sample set used to identify number-aware neurons is too small compared to the huge number of neurons in the object recognition network. By this flawed analysis one could mistakenly identify number-sensing neurons in any randomly initialized deep neural network that is not trained at all. With the above critique we ask: what if a deep convolutional neural network is carefully trained for numerosity? Our findings are mixed. Even after being trained with number-depicting images, the deep learning approach still has difficulty acquiring the abstract concept of numbers, a cognitive task that preschoolers perform with ease. On the other hand, we do find some encouraging evidence suggesting that deep neural networks are more robust to distribution shift for small numbers than for large numbers.
Outlier Robust Mean Estimation with Subgaussian Rates via Stability
https://papers.nips.cc/paper_files/paper/2020/hash/13ec9935e17e00bed6ec8f06230e33a9-Abstract.html
Ilias Diakonikolas, Daniel M. Kane, Ankit Pensia
https://papers.nips.cc/paper_files/paper/2020/hash/13ec9935e17e00bed6ec8f06230e33a9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9879-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13ec9935e17e00bed6ec8f06230e33a9-Supplemental.pdf
We study the problem of outlier robust high-dimensional mean estimation under a bounded covariance assumption, and more broadly under bounded low-degree moment assumptions. We consider a standard stability condition from the recent robust statistics literature and prove that, except with exponentially small failure probability, there exists a large fraction of the inliers satisfying this condition. As a corollary, it follows that a number of recently developed algorithms for robust mean estimation, including iterative filtering and non-convex gradient descent, give optimal error estimators with (near-)subgaussian rates. Previous analyses of these algorithms gave significantly suboptimal rates. As a corollary of our approach, we obtain the first computationally efficient algorithm for outlier robust mean estimation with subgaussian rates under a bounded covariance assumption.
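Iterative filtering, one of the algorithms whose guarantees are sharpened here, can be caricatured as: while the empirical covariance has a large top eigenvalue, remove the points that project farthest along the top eigenvector. The sketch below uses hard removal and ad hoc thresholds purely for illustration; it is not the analyzed algorithm in its precise form.

```python
import numpy as np

def filtered_mean(X, threshold=2.0, max_rounds=20, remove_frac=0.02):
    """Iterative-filtering sketch: drop points with large projections on the top eigenvector."""
    X = X.copy()
    for _ in range(max_rounds):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= threshold:              # covariance already looks bounded
            break
        scores = ((X - mu) @ eigvecs[:, -1]) ** 2  # squared projection along top eigenvector
        keep = scores <= np.quantile(scores, 1 - remove_frac)
        X = X[keep]
    return X.mean(axis=0)

rng = np.random.default_rng(0)
inliers = rng.normal(size=(950, 10))
outliers = rng.normal(loc=8.0, size=(50, 10))     # 5% far-away outliers
print(np.linalg.norm(filtered_mean(np.vstack([inliers, outliers]))))  # close to the true mean 0
```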
Self-Supervised Relationship Probing
https://papers.nips.cc/paper_files/paper/2020/hash/13f320e7b5ead1024ac95c3b208610db-Abstract.html
Jiuxiang Gu, Jason Kuen, Shafiq Joty, Jianfei Cai, Vlad Morariu, Handong Zhao, Tong Sun
https://papers.nips.cc/paper_files/paper/2020/hash/13f320e7b5ead1024ac95c3b208610db-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9880-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13f320e7b5ead1024ac95c3b208610db-Supplemental.pdf
Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current human-annotated visual relationship datasets suffer from the long-tailed predicate distribution problem which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns the visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intra- and inter-modality encodings to respectively model relationships within each modality separately and jointly, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of our proposed method on various vision-language tasks that benefit from improved visual relationship understanding.
Information Theoretic Counterfactual Learning from Missing-Not-At-Random Feedback
https://papers.nips.cc/paper_files/paper/2020/hash/13f3cf8c531952d72e5847c4183e6910-Abstract.html
Zifeng Wang, Xi Chen, Rui Wen, Shao-Lun Huang, Ercan Kuruoglu, Yefeng Zheng
https://papers.nips.cc/paper_files/paper/2020/hash/13f3cf8c531952d72e5847c4183e6910-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9881-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13f3cf8c531952d72e5847c4183e6910-Supplemental.zip
Counterfactual learning for dealing with missing-not-at-random data (MNAR) is an intriguing topic in the recommendation literature, since MNAR data are ubiquitous in modern recommender systems. Instead, missing-at-random (MAR) data, namely randomized controlled trials (RCTs), are usually required by most previous counterfactual learning methods. However, the execution of RCTs is extraordinarily expensive in practice. To circumvent the use of RCTs, we build an information theoretic counterfactual variational information bottleneck (CVIB), as an alternative for debiasing learning without RCTs. By separating the task-aware mutual information term in the original information bottleneck Lagrangian into factual and counterfactual parts, we derive a contrastive information loss and an additional output confidence penalty, which facilitates balanced learning between the factual and counterfactual domains. Empirical evaluation on real-world datasets shows that our CVIB significantly enhances both shallow and deep models, which sheds light on counterfactual learning in recommendation that goes beyond RCTs.
Prophet Attention: Predicting Attention with Future Attention
https://papers.nips.cc/paper_files/paper/2020/hash/13fe9d84310e77f13a6d184dbf1232f3-Abstract.html
Fenglin Liu, Xuancheng Ren, Xian Wu, Shen Ge, Wei Fan, Yuexian Zou, Xu Sun
https://papers.nips.cc/paper_files/paper/2020/hash/13fe9d84310e77f13a6d184dbf1232f3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9882-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/13fe9d84310e77f13a6d184dbf1232f3-Supplemental.pdf
Recently, attention based models have been used extensively in many sequence-to-sequence learning systems. Especially for image captioning, attention based models are expected to ground correct image regions with the proper generated words. However, for each time step in the decoding process, the attention based models usually use the hidden state of the current input to attend to the image regions. Under this setting, these attention models have a "deviated focus" problem: they calculate the attention weights based on previous words instead of the one to be generated, impairing the performance of both grounding and captioning. In this paper, we propose Prophet Attention, similar in form to self-supervision. In the training stage, this module utilizes future information to calculate the "ideal" attention weights towards image regions. These calculated "ideal" weights are further used to regularize the "deviated" attention. In this manner, image regions are grounded with the correct words. The proposed Prophet Attention can be easily incorporated into existing image captioning models to improve their performance of both grounding and captioning. Experiments on the Flickr30k Entities and MSCOCO datasets show that the proposed Prophet Attention consistently outperforms baselines in both automatic metrics and human evaluations. It is worth noting that we set new state-of-the-art results on the two benchmark datasets and achieve 1st place on the leaderboard of the online MSCOCO benchmark in terms of the default ranking score, i.e., CIDEr-c40.
Language Models are Few-Shot Learners
https://papers.nips.cc/paper_files/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
https://papers.nips.cc/paper_files/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9883-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Supplemental.pdf
We demonstrate that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even becoming competitive with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks. We also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora.
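As an illustration of few-shot conditioning purely via text, a prompt can be assembled as a plain string of demonstrations followed by the query (toy translation example; access to the model itself is assumed and not shown):

# Build a few-shot prompt: a task description, k demonstrations, then the query, with no gradient updates.
demonstrations = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
query = "mint"

prompt = "Translate English to French.\n\n"
for english, french in demonstrations:
    prompt += f"{english} => {french}\n"
prompt += f"{query} =>"

print(prompt)  # this string would be sent to the language model as-is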
Margins are Insufficient for Explaining Gradient Boosting
https://papers.nips.cc/paper_files/paper/2020/hash/146f7dd4c91bc9d80cf4458ad6d6cd1b-Abstract.html
Allan Grønlund, Lior Kamma, Kasper Green Larsen
https://papers.nips.cc/paper_files/paper/2020/hash/146f7dd4c91bc9d80cf4458ad6d6cd1b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9884-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/146f7dd4c91bc9d80cf4458ad6d6cd1b-Supplemental.pdf
Boosting is one of the most successful ideas in machine learning, achieving great practical performance with little fine-tuning. The success of boosted classifiers is most often attributed to improvements in margins. The focus on margin explanations was pioneered in the seminal work by Schapire et al. (1998) and has culminated in the $k$'th margin generalization bound by Gao and Zhou (2013), which was recently proved to be near-tight for some data distributions (Grønlund et al. 2019). In this work, we first demonstrate that the $k$'th margin bound is inadequate in explaining the performance of state-of-the-art gradient boosters. We then explain the shortcomings of the $k$'th margin bound and prove a stronger and more refined margin-based generalization bound that indeed succeeds in explaining the performance of modern gradient boosters. Finally, we improve upon the recent generalization lower bound by Grønlund et al. (2019).
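For reference, the margin of a voting classifier on an example is its normalized signed vote; a small numpy sketch with made-up base classifiers:

import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 10
labels = rng.choice([-1, 1], size=n)          # true labels y_i in {-1, +1}
votes = rng.choice([-1, 1], size=(T, n))      # h_t(x_i) for T base classifiers
alphas = rng.random(T)                        # non-negative voting weights

# margin(x_i) = y_i * sum_t alpha_t h_t(x_i) / sum_t alpha_t, a value in [-1, 1]
f = (alphas @ votes) / alphas.sum()
margins = labels * f

# The k'th margin bound is driven by the k'th smallest of these values.
print(np.sort(margins)[:5])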
Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics
https://papers.nips.cc/paper_files/paper/2020/hash/1487987e862c44b91a0296cf3866387e-Abstract.html
Alex Tseng, Avanti Shrikumar, Anshul Kundaje
https://papers.nips.cc/paper_files/paper/2020/hash/1487987e862c44b91a0296cf3866387e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9885-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1487987e862c44b91a0296cf3866387e-Supplemental.zip
Deep learning models can accurately map genomic DNA sequences to associated functional molecular readouts such as protein-DNA binding data. Base-resolution importance (i.e. "attribution") scores inferred from these models can highlight predictive sequence motifs and syntax. Unfortunately, these models are prone to overfitting and are sensitive to random initializations, often resulting in noisy and irreproducible attributions that obfuscate underlying motifs. To address these shortcomings, we propose a novel attribution prior, where the Fourier transform of input-level attribution scores are computed at training-time, and high-frequency components of the Fourier spectrum are penalized. We evaluate different model architectures with and without our attribution prior, training on genome-wide binary labels or continuous molecular profiles. We show that our attribution prior significantly improves models' stability, interpretability, and performance on held-out data, especially when training data is severely limited. Our attribution prior also allows models to identify biologically meaningful sequence motifs more sensitively and precisely within individual regulatory elements. The prior is agnostic to the model architecture or predicted experimental assay, yet provides similar gains across all experiments. This work represents an important advancement in improving the reliability of deep learning models for deciphering the regulatory code of the genome.
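A simplified numpy sketch of such a prior (the exact weighting and smoothing used in the paper may differ): take the attribution scores along the input sequence, apply the real FFT, and penalize the fraction of spectral mass above a cutoff frequency.

import numpy as np

def fourier_attribution_prior(attributions, freq_cutoff):
    # attributions: 1D array of per-base attribution scores along the sequence.
    # freq_cutoff: index above which Fourier magnitudes are penalized.
    spectrum = np.abs(np.fft.rfft(attributions))
    spectrum = spectrum / (spectrum.sum() + 1e-8)      # normalize magnitudes to a distribution
    return spectrum[freq_cutoff:].sum()                # mass sitting in high frequencies

rng = np.random.default_rng(0)
noisy_attr = rng.standard_normal(1000)                                   # noisy attributions -> large penalty
smooth_attr = np.convolve(noisy_attr, np.ones(50) / 50, mode="same")     # smooth attributions -> small penalty
print(fourier_attribution_prior(noisy_attr, 100), fourier_attribution_prior(smooth_attr, 100))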
MomentumRNN: Integrating Momentum into Recurrent Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/149ef6419512be56a93169cd5e6fa8fd-Abstract.html
Tan Nguyen, Richard Baraniuk, Andrea Bertozzi, Stanley Osher, Bao Wang
https://papers.nips.cc/paper_files/paper/2020/hash/149ef6419512be56a93169cd5e6fa8fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9886-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/149ef6419512be56a93169cd5e6fa8fd-Supplemental.pdf
Designing deep neural networks is an art that often involves an expensive search over candidate architectures. To overcome this for recurrent neural nets (RNNs), we establish a connection between the hidden state dynamics in an RNN and gradient descent (GD). We then integrate momentum into this framework and propose a new family of RNNs, called {\em MomentumRNNs}. We theoretically prove and numerically demonstrate that MomentumRNNs alleviate the vanishing gradient issue in training RNNs. We study the momentum long-short term memory (MomentumLSTM) and verify its advantages in convergence speed and accuracy over its LSTM counterpart across a variety of benchmarks. We also demonstrate that MomentumRNN is applicable to many types of recurrent cells, including those in the state-of-the-art orthogonal RNNs. Finally, we show that other advanced momentum-based optimization methods, such as Adam and Nesterov accelerated gradients with a restart, can be easily incorporated into the MomentumRNN framework for designing new recurrent cells with even better performance.
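One plausible reading of integrating momentum into the hidden-state dynamics is the toy cell below, which accumulates the input-driven update in a momentum buffer (a sketch under our own assumptions, not necessarily the paper's exact cell):

import numpy as np

def momentum_rnn_forward(x_seq, Wx, Wh, mu=0.9, step=0.6):
    # Toy recurrent cell with a momentum buffer on the input-driven update.
    hidden = np.zeros(Wh.shape[0])
    velocity = np.zeros_like(hidden)
    outputs = []
    for x_t in x_seq:
        velocity = mu * velocity + step * (Wx @ x_t)   # momentum accumulation
        hidden = np.tanh(Wh @ hidden + velocity)       # recurrent update uses the velocity
        outputs.append(hidden)
    return np.stack(outputs)

rng = np.random.default_rng(0)
x_seq = rng.standard_normal((20, 4))                   # 20 time steps, 4 input features
Wx = rng.standard_normal((8, 4)) * 0.1
Wh = rng.standard_normal((8, 8)) * 0.1
print(momentum_rnn_forward(x_seq, Wx, Wh).shape)       # (20, 8)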
Marginal Utility for Planning in Continuous or Large Discrete Action Spaces
https://papers.nips.cc/paper_files/paper/2020/hash/14da15db887a4b50efe5c1bc66537089-Abstract.html
Zaheen Ahmad, Levi Lelis, Michael Bowling
https://papers.nips.cc/paper_files/paper/2020/hash/14da15db887a4b50efe5c1bc66537089-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9887-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/14da15db887a4b50efe5c1bc66537089-Supplemental.pdf
Sample-based planning is a powerful family of algorithms for generating intelligent behavior from a model of the environment. Generating good candidate actions is critical to the success of sample-based planners, particularly in continuous or large action spaces. Typically, candidate action generation exhausts the action space, uses domain knowledge, or more recently, involves learning a stochastic policy to provide such search guidance. In this paper we explore explicitly learning a candidate action generator by optimizing a novel objective, marginal utility. The marginal utility of an action generator measures the increase in value of an action over previously generated actions. We validate our approach in both curling, a challenging stochastic domain with continuous state and action spaces, and a location game with a discrete but large action space. We show that a generator trained with the marginal utility objective outperforms hand-coded schemes built on substantial domain knowledge, trained stochastic policies, and other natural objectives for generating actions for sampled-based planners.
Projected Stein Variational Gradient Descent
https://papers.nips.cc/paper_files/paper/2020/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html
Peng Chen, Omar Ghattas
https://papers.nips.cc/paper_files/paper/2020/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9888-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/14faf969228fc18fcd4fcf59437b0c97-Supplemental.pdf
The curse of dimensionality is a longstanding challenge in Bayesian inference in high dimensions. In this work, we propose a projected Stein variational gradient descent (pSVGD) method to overcome this challenge by exploiting the fundamental property of intrinsic low dimensionality of the data-informed subspace stemming from the ill-posedness of such problems. We adaptively construct the subspace using a gradient information matrix of the log-likelihood, and apply pSVGD to the much lower-dimensional coefficients of the parameter projection. The method is demonstrated to be more accurate and efficient than SVGD. It is also shown to be more scalable with respect to the number of parameters, samples, data points, and processor cores via experiments with parameter dimensions ranging from the hundreds to the tens of thousands.
Minimax Lower Bounds for Transfer Learning with Linear and One-hidden Layer Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/151d21647527d1079781ba6ae6571ffd-Abstract.html
Mohammadreza Mousavi Kalan, Zalan Fabian, Salman Avestimehr, Mahdi Soltanolkotabi
https://papers.nips.cc/paper_files/paper/2020/hash/151d21647527d1079781ba6ae6571ffd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9889-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/151d21647527d1079781ba6ae6571ffd-Supplemental.pdf
Transfer learning has emerged as a powerful technique for improving the performance of machine learning models on new domains where labeled training data may be scarce. In this approach a model trained for a source task, where plenty of labeled training data is available, is used as a starting point for training a model on a related target task with only few labeled training data. Despite recent empirical success of transfer learning approaches, the benefits and fundamental limits of transfer learning are poorly understood. In this paper we develop a statistical minimax framework to characterize the fundamental limits of transfer learning in the context of regression with linear and one-hidden layer neural network models. Specifically, we derive a lower-bound for the target generalization error achievable by any algorithm as a function of the number of labeled source and target data as well as appropriate notions of similarity between the source and target tasks. Our lower bound provides new insights into the benefits and limitations of transfer learning. We further corroborate our theoretical finding with various experiments.
SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks
https://papers.nips.cc/paper_files/paper/2020/hash/15231a7ce4ba789d13b722cc5c955834-Abstract.html
Fabian Fuchs, Daniel Worrall, Volker Fischer, Max Welling
https://papers.nips.cc/paper_files/paper/2020/hash/15231a7ce4ba789d13b722cc5c955834-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9890-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15231a7ce4ba789d13b722cc5c955834-Supplemental.pdf
We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point-clouds, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the data input. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds with varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
On the equivalence of molecular graph convolution and molecular wave function with poor basis set
https://papers.nips.cc/paper_files/paper/2020/hash/1534b76d325a8f591b52d302e7181331-Abstract.html
Masashi Tsubaki, Teruyasu Mizoguchi
https://papers.nips.cc/paper_files/paper/2020/hash/1534b76d325a8f591b52d302e7181331-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9891-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1534b76d325a8f591b52d302e7181331-Supplemental.pdf
In this study, we demonstrate that the linear combination of atomic orbitals (LCAO), an approximation introduced by Pauling and Lennard-Jones in the 1920s, corresponds to graph convolutional networks (GCNs) for molecules. However, GCNs involve unnecessary nonlinearity and deep architecture. We also verify that molecular GCNs are based on a poor basis function set compared with the standard one used in theoretical calculations or quantum chemical simulations. From these observations, we describe the quantum deep field (QDF), a machine learning (ML) model based on underlying quantum physics, in particular the density functional theory (DFT). We believe that the QDF model can be easily understood because it can be regarded as a single linear layer GCN. Moreover, it uses two vanilla feedforward neural networks to learn an energy functional and a Hohenberg-Kohn map that have nonlinearities inherent in quantum physics and the DFT. For molecular energy prediction tasks, we demonstrated the viability of an "extrapolation," in which we trained a QDF model with small molecules, tested it with large molecules, and achieved high extrapolation performance. We believe that we should move away from the competition of interpolation accuracy within benchmark datasets and evaluate ML models based on physics using an extrapolation setting; this will lead to reliable and practical applications, such as fast, large-scale molecular screening for discovering effective materials.
The Power of Predictions in Online Control
https://papers.nips.cc/paper_files/paper/2020/hash/155fa09596c7e18e50b58eb7e0c6ccb4-Abstract.html
Chenkai Yu, Guanya Shi, Soon-Jo Chung, Yisong Yue, Adam Wierman
https://papers.nips.cc/paper_files/paper/2020/hash/155fa09596c7e18e50b58eb7e0c6ccb4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9892-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/155fa09596c7e18e50b58eb7e0c6ccb4-Supplemental.pdf
We study the impact of predictions in online Linear Quadratic Regulator control with both stochastic and adversarial disturbances in the dynamics. In both settings, we characterize the optimal policy and derive tight bounds on the minimum cost and dynamic regret. Perhaps surprisingly, our analysis shows that the conventional greedy MPC approach is a near-optimal policy in both stochastic and adversarial settings. Specifically, for length-$T$ problems, MPC requires only $O(\log T)$ predictions to reach $O(1)$ dynamic regret, which matches (up to lower-order terms) our lower bound on the required prediction horizon for constant regret.
Learning Affordance Landscapes for Interaction Exploration in 3D Environments
https://papers.nips.cc/paper_files/paper/2020/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html
Tushar Nagarajan, Kristen Grauman
https://papers.nips.cc/paper_files/paper/2020/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9893-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15825aee15eb335cc13f9b559f166ee8-Supplemental.zip
Embodied agents operating in human spaces must be able to master how their environment works: what objects can the agent use, and how can it use them? We introduce a reinforcement learning approach for exploration for interaction, whereby an embodied agent autonomously discovers the affordance landscape of a new unmapped 3D environment (such as an unfamiliar kitchen). Given an egocentric RGB-D camera and a high-level action space, the agent is rewarded for maximizing successful interactions while simultaneously training an image-based affordance segmentation model. The former yields a policy for acting efficiently in new environments to prepare for downstream interaction tasks, while the latter yields a convolutional neural network that maps image regions to the likelihood they permit each action, densifying the rewards for exploration. We demonstrate our idea with AI2-iTHOR. The results show agents can learn how to use new home environments intelligently and that it prepares them to rapidly address various downstream tasks like "find a knife and put it in the drawer." Project page: http://vision.cs.utexas.edu/projects/interaction-exploration/
Cooperative Multi-player Bandit Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/15ae3b9d6286f1b2a489ea4f3f4abaed-Abstract.html
Ilai Bistritz, Nicholas Bambos
https://papers.nips.cc/paper_files/paper/2020/hash/15ae3b9d6286f1b2a489ea4f3f4abaed-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9894-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15ae3b9d6286f1b2a489ea4f3f4abaed-Supplemental.pdf
Consider a team of cooperative players that take actions in a networked-environment. At each turn, each player chooses an action and receives a reward that is an unknown function of all the players' actions. The goal of the team of players is to learn to play together the action profile that maximizes the sum of their rewards. However, players cannot observe the actions or rewards of other players, and can only get this information by communicating with their neighbors. We design a distributed learning algorithm that overcomes the informational bias players have towards maximizing the rewards of nearby players they got more information about. We assume twice continuously differentiable reward functions and constrained convex and compact action sets. Our communication graph is a random time-varying graph that follows an ergodic Markov chain. We prove that even if at every turn players take actions based only on the small random subset of the players' rewards that they know, our algorithm converges with probability 1 to the set of stationary points of (projected) gradient ascent on the sum of rewards function. Hence, if the sum of rewards is concave, then the algorithm converges with probability 1 to the optimal action profile.
Tight First- and Second-Order Regret Bounds for Adversarial Linear Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/15bb63b28926cd083b15e3b97567bbea-Abstract.html
Shinji Ito, Shuichi Hirahara, Tasuku Soma, Yuichi Yoshida
https://papers.nips.cc/paper_files/paper/2020/hash/15bb63b28926cd083b15e3b97567bbea-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9895-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/15bb63b28926cd083b15e3b97567bbea-Supplemental.pdf
We propose novel algorithms with first- and second-order regret bounds for adversarial linear bandits. These regret bounds imply that our algorithms perform well when there is an action achieving a small cumulative loss or the loss has a small variance. In addition, we need only assumptions weaker than those of existing algorithms; our algorithms work on discrete action sets as well as continuous ones without a priori knowledge about losses, and they run efficiently if a linear optimization oracle for the action set is available. These results are obtained by combining optimistic online optimization, continuous multiplicative weight update methods, and a novel technique that we refer to as distribution truncation. We also show that the regret bounds of our algorithms are tight up to polylogarithmic factors.
Just Pick a Sign: Optimizing Deep Multitask Models with Gradient Sign Dropout
https://papers.nips.cc/paper_files/paper/2020/hash/16002f7a455a94aa4e91cc34ebdb9f2d-Abstract.html
Zhao Chen, Jiquan Ngiam, Yanping Huang, Thang Luong, Henrik Kretzschmar, Yuning Chai, Dragomir Anguelov
https://papers.nips.cc/paper_files/paper/2020/hash/16002f7a455a94aa4e91cc34ebdb9f2d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9896-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/16002f7a455a94aa4e91cc34ebdb9f2d-Supplemental.pdf
The vast majority of deep models use multiple gradient signals, typically corresponding to a sum of multiple loss terms, to update a shared set of trainable weights. However, these multiple updates can impede optimal training by pulling the model in conflicting directions. We present Gradient Sign Dropout (GradDrop), a probabilistic masking procedure which samples gradients at an activation layer based on their level of consistency. GradDrop is implemented as a simple deep layer that can be used in any deep net and synergizes with other gradient balancing approaches. We show that GradDrop outperforms the state-of-the-art multiloss methods within traditional multitask and transfer learning settings, and we discuss how GradDrop reveals links between optimal multiloss training and gradient stochasticity.
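A simplified numpy sketch of the sign-consistency idea (the paper applies it at an activation layer inside the network; here the per-task gradients for a single tensor are assumed to be given):

import numpy as np

def grad_drop(task_grads, rng):
    # Keep, per element, only gradients whose sign matches a sampled consensus sign.
    task_grads = np.asarray(task_grads)                     # shape: (num_tasks, dim)
    pos = np.clip(task_grads, 0, None).sum(axis=0)          # total positive pull per element
    neg = np.clip(-task_grads, 0, None).sum(axis=0)         # total negative pull per element
    p_positive = pos / (pos + neg + 1e-12)                  # consistency-based probability of keeping +
    sign = np.where(rng.random(task_grads.shape[1]) < p_positive, 1.0, -1.0)
    mask = (np.sign(task_grads) == sign)                    # drop gradients with the opposite sign
    return (task_grads * mask).sum(axis=0)

rng = np.random.default_rng(0)
grads = [np.array([0.5, -0.2, 0.1]), np.array([0.4, 0.3, -0.1])]
print(grad_drop(grads, rng))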
A Loss Function for Generative Neural Networks Based on Watson’s Perceptual Model
https://papers.nips.cc/paper_files/paper/2020/hash/165a59f7cf3b5c4396ba65953d679f17-Abstract.html
Steffen Czolbe, Oswin Krause, Ingemar Cox, Christian Igel
https://papers.nips.cc/paper_files/paper/2020/hash/165a59f7cf3b5c4396ba65953d679f17-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9897-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/165a59f7cf3b5c4396ba65953d679f17-Supplemental.pdf
To train Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity. We propose such a loss function based on Watson's perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking. We extend the model to color images, increase its robustness to translation by using the Fourier Transform, remove artifacts due to splitting the image into blocks, and make it differentiable. In experiments, VAEs trained with the new loss function generated realistic, high-quality image samples. Compared to using the Euclidean distance and the Structural Similarity Index, the images were less blurry; compared to deep neural network based losses, the new approach required less computational resources and generated images with less artifacts.
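The flavor of a frequency-space weighted distance can be shown in a few lines of numpy (a generic FFT-based weighting; the actual loss works on image blocks and additionally models luminance and contrast masking):

import numpy as np

def frequency_weighted_distance(img_a, img_b, weights):
    # Weighted squared distance between the 2D FFTs of two grayscale images.
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    return np.sum(weights * np.abs(fa - fb) ** 2) / img_a.size

rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = a + 0.05 * rng.standard_normal((32, 32))
# Weights that favor low frequencies, roughly mimicking lower sensitivity to high-frequency error.
w = 1.0 / (1.0 + np.hypot(*np.meshgrid(np.fft.fftfreq(32), np.fft.fftfreq(32))))
print(frequency_weighted_distance(a, b, w))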
Dynamic Fusion of Eye Movement Data and Verbal Narrations in Knowledge-rich Domains
https://papers.nips.cc/paper_files/paper/2020/hash/16837163fee34175358a47e0b51485ff-Abstract.html
Ervine Zheng, Qi Yu, Rui Li, Pengcheng Shi, Anne Haake
https://papers.nips.cc/paper_files/paper/2020/hash/16837163fee34175358a47e0b51485ff-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9898-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/16837163fee34175358a47e0b51485ff-Supplemental.pdf
We propose to jointly analyze experts' eye movements and verbal narrations to discover important and interpretable knowledge patterns to better understand their decision-making processes. The discovered patterns can further enhance data-driven statistical models by fusing experts' domain knowledge to support complex human-machine collaborative decision-making. Our key contribution is a novel dynamic Bayesian nonparametric model that assigns latent knowledge patterns into key phases involved in complex decision-making. Each phase is characterized by a unique distribution of word topics discovered from verbal narrations and their dynamic interactions with eye movement patterns, indicating experts' special perceptual behavior within a given decision-making stage. A new split-merge-switch sampler is developed to efficiently explore the posterior state space with an improved mixing rate. Case studies on diagnostic error prediction and disease morphology categorization help demonstrate the effectiveness of the proposed model and discovered knowledge patterns.
Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward
https://papers.nips.cc/paper_files/paper/2020/hash/168efc366c449fab9c2843e9b54e2a18-Abstract.html
Guannan Qu, Yiheng Lin, Adam Wierman, Na Li
https://papers.nips.cc/paper_files/paper/2020/hash/168efc366c449fab9c2843e9b54e2a18-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9899-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/168efc366c449fab9c2843e9b54e2a18-Supplemental.pdf
It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues because the sizes of the state and action spaces grow exponentially with the number of agents. In this paper, we identify a rich class of networked MARL problems where the model exhibits a local dependence structure that allows it to be solved in a scalable manner. Specifically, we propose a Scalable Actor-Critic (SAC) method that can learn a near-optimal localized policy for optimizing the average reward, with complexity scaling with the state-action space size of local neighborhoods, as opposed to the entire network. Our result centers around identifying and exploiting an exponential decay property that ensures the effect of agents on each other decays exponentially fast in their graph distance.
Optimizing Neural Networks via Koopman Operator Theory
https://papers.nips.cc/paper_files/paper/2020/hash/169806bb68ccbf5e6f96ddc60c40a044-Abstract.html
Akshunna S. Dogra, William Redman
https://papers.nips.cc/paper_files/paper/2020/hash/169806bb68ccbf5e6f96ddc60c40a044-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9900-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/169806bb68ccbf5e6f96ddc60c40a044-Supplemental.pdf
Koopman operator theory, a powerful framework for discovering the underlying dynamics of nonlinear dynamical systems, was recently shown to be intimately connected with neural network training. In this work, we take the first steps in making use of this connection. As Koopman operator theory is a linear theory, a successful implementation of it in evolving network weights and biases offers the promise of accelerated training, especially in the context of deep networks, where optimization is inherently a non-convex problem. We show that Koopman operator theoretic methods allow for accurate predictions of weights and biases of feedforward, fully connected deep networks over a non-trivial range of training time. During this window, we find that our approach is >10x faster than various gradient descent based methods (e.g. Adam, Adadelta, Adagrad), in line with our complexity analysis. We end by highlighting open questions in this exciting intersection between dynamical systems and neural network theory. We highlight additional methods by which our results could be expanded to broader classes of networks and larger training intervals, which shall be the focus of future work.
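A minimal illustration of fitting a linear operator to weight trajectories and extrapolating them, in the spirit of dynamic mode decomposition (synthetic snapshots; the paper's Koopman machinery is richer):

import numpy as np

rng = np.random.default_rng(0)
true_A = 0.95 * np.eye(3) + 0.01 * rng.standard_normal((3, 3))
snapshots = [rng.standard_normal(3)]
for _ in range(50):                          # pretend these are flattened weights recorded over training steps
    snapshots.append(true_A @ snapshots[-1])
snapshots = np.array(snapshots)

X, Y = snapshots[:-1], snapshots[1:]
K = np.linalg.lstsq(X, Y, rcond=None)[0].T               # least-squares linear operator: Y ≈ X K^T
predicted = np.linalg.matrix_power(K, 10) @ snapshots[-1]  # extrapolate the weights 10 steps ahead
print(predicted)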
SVGD as a kernelized Wasserstein gradient flow of the chi-squared divergence
https://papers.nips.cc/paper_files/paper/2020/hash/16f8e136ee5693823268874e58795216-Abstract.html
Sinho Chewi, Thibaut Le Gouic, Chen Lu, Tyler Maunu, Philippe Rigollet
https://papers.nips.cc/paper_files/paper/2020/hash/16f8e136ee5693823268874e58795216-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9901-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/16f8e136ee5693823268874e58795216-Supplemental.pdf
Stein Variational Gradient Descent (SVGD), a popular sampling algorithm, is often described as the kernelized gradient flow for the Kullback-Leibler divergence in the geometry of optimal transport. We introduce a new perspective on SVGD that instead views SVGD as the kernelized gradient flow of the chi-squared divergence. Motivated by this perspective, we provide a convergence analysis of the chi-squared gradient flow. We also show that our new perspective provides better guidelines for choosing effective kernels for SVGD.
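For context, the standard SVGD particle update with an RBF kernel looks as follows; the paper changes the interpretation of this update, not the update itself:

import numpy as np

def svgd_step(particles, grad_log_p, bandwidth=1.0, step=0.5):
    # phi(x_i) = mean_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    diffs = particles[:, None, :] - particles[None, :, :]        # x_i - x_j, shape (n, n, d)
    kernel = np.exp(-(diffs ** 2).sum(-1) / (2 * bandwidth ** 2))
    drift = kernel @ grad_log_p(particles)                        # pulls particles towards high density
    repulsion = (kernel[:, :, None] * diffs).sum(axis=1) / bandwidth ** 2  # keeps particles spread out
    return particles + step * (drift + repulsion) / len(particles)

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 2)) * 3 + 5          # particles initialized far from the target
for _ in range(500):
    x = svgd_step(x, lambda p: -p)                # target N(0, I): grad log p(x) = -x
print(x.mean(axis=0), x.std(axis=0))              # particles drift towards mean 0, std near 1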
Adversarial Robustness of Supervised Sparse Coding
https://papers.nips.cc/paper_files/paper/2020/hash/170f6aa36530c364b77ddf83a84e7351-Abstract.html
Jeremias Sulam, Ramchandran Muthukumar, Raman Arora
https://papers.nips.cc/paper_files/paper/2020/hash/170f6aa36530c364b77ddf83a84e7351-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9902-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/170f6aa36530c364b77ddf83a84e7351-Supplemental.pdf
Several recent results provide theoretical insights into the phenomena of adversarial examples. Existing results, however, are often limited due to a gap between the simplicity of the models studied and the complexity of those deployed in practice. In this work, we strike a better balance by considering a model that involves learning a representation while at the same time giving a precise generalization bound and a robustness certificate. We focus on the hypothesis class obtained by combining a sparsity-promoting encoder coupled with a linear classifier, and show an interesting interplay between the expressivity and stability of the (supervised) representation map and a notion of margin in the feature space. We bound the robust risk (to $\ell_2$-bounded perturbations) of hypotheses parameterized by dictionaries that achieve a mild encoder gap on training data. Furthermore, we provide a robustness certificate for end-to-end classification. We demonstrate the applicability of our analysis by computing certified accuracy on real data, and compare with other alternatives for certified robustness.
Differentiable Meta-Learning of Bandit Policies
https://papers.nips.cc/paper_files/paper/2020/hash/171ae1bbb81475eb96287dd78565b38b-Abstract.html
Craig Boutilier, Chih-wei Hsu, Branislav Kveton, Martin Mladenov, Csaba Szepesvari, Manzil Zaheer
https://papers.nips.cc/paper_files/paper/2020/hash/171ae1bbb81475eb96287dd78565b38b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9903-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/171ae1bbb81475eb96287dd78565b38b-Supplemental.pdf
Exploration policies in Bayesian bandits maximize the average reward over problem instances drawn from some distribution P. In this work, we learn such policies for an unknown distribution P using samples from P. Our approach is a form of meta-learning and exploits properties of P without making strong assumptions about its form. To do this, we parameterize our policies in a differentiable way and optimize them by policy gradients, an approach that is pleasantly general and easy to implement. We derive effective gradient estimators and propose novel variance reduction techniques. We also analyze and experiment with various bandit policy classes, including neural networks and a novel softmax policy. The latter has regret guarantees and is a natural starting point for our optimization. Our experiments show the versatility of our approach. We also observe that neural network policies can learn implicit biases expressed only through the sampled instances.
Biologically Inspired Mechanisms for Adversarial Robustness
https://papers.nips.cc/paper_files/paper/2020/hash/17256f049f1e3fede17c7a313f7657f4-Abstract.html
Manish Reddy Vuyyuru, Andrzej Banburski, Nishka Pant, Tomaso Poggio
https://papers.nips.cc/paper_files/paper/2020/hash/17256f049f1e3fede17c7a313f7657f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9904-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17256f049f1e3fede17c7a313f7657f4-Supplemental.pdf
A convolutional neural network strongly robust to adversarial perturbations at reasonable computational and performance cost has not yet been demonstrated. The primate visual ventral stream seems to be robust to small perturbations in visual stimuli but the underlying mechanisms that give rise to this robust perception are not understood. In this work, we investigate the role of two biologically plausible mechanisms in adversarial robustness. We demonstrate that the non-uniform sampling performed by the primate retina and the presence of multiple receptive fields with a range of receptive field sizes at each eccentricity improve the robustness of neural networks to small adversarial perturbations. We verify that these two mechanisms do not suffer from gradient obfuscation and study their contribution to adversarial robustness through ablation studies.
Statistical-Query Lower Bounds via Functional Gradients
https://papers.nips.cc/paper_files/paper/2020/hash/17257e81a344982579af1ae6415a7b8c-Abstract.html
Surbhi Goel, Aravind Gollakota, Adam Klivans
https://papers.nips.cc/paper_files/paper/2020/hash/17257e81a344982579af1ae6415a7b8c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9905-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17257e81a344982579af1ae6415a7b8c-Supplemental.pdf
We give the first statistical-query lower bounds for agnostically learning any non-polynomial activation with respect to Gaussian marginals (e.g., ReLU, sigmoid, sign). For the specific problem of ReLU regression (equivalently, agnostically learning a ReLU), we show that any statistical-query algorithm with tolerance $n^{-(1/\epsilon)^b}$ must use at least $2^{n^c} \epsilon$ queries for some constants $b, c > 0$, where $n$ is the dimension and $\epsilon$ is the accuracy parameter. Our results rule out general (as opposed to correlational) SQ learning algorithms, which is unusual for real-valued learning problems. Our techniques involve a gradient boosting procedure for "amplifying" recent lower bounds due to Diakonikolas et al. (COLT 2020) and Goel et al. (ICML 2020) on the SQ dimension of functions computed by two-layer neural networks. The crucial new ingredient is the use of a nonstandard convex functional during the boosting procedure. This also yields a best-possible reduction between two commonly studied models of learning: agnostic learning and probabilistic concepts.
Near-Optimal Reinforcement Learning with Self-Play
https://papers.nips.cc/paper_files/paper/2020/hash/172ef5a94b4dd0aa120c6878fc29f70c-Abstract.html
Yu Bai, Chi Jin, Tiancheng Yu
https://papers.nips.cc/paper_files/paper/2020/hash/172ef5a94b4dd0aa120c6878fc29f70c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9906-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/172ef5a94b4dd0aa120c6878fc29f70c-Supplemental.pdf
This paper considers the problem of designing optimal algorithms for reinforcement learning in two-player zero-sum games. We focus on self-play algorithms which learn the optimal policy by playing against themselves without any direct supervision. In a tabular episodic Markov game with $S$ states, $A$ max-player actions and $B$ min-player actions, the best existing algorithm for finding an approximate Nash equilibrium requires $\tilde{O}(S^2AB)$ steps of game playing, when only highlighting the dependency on $(S,A,B)$. In contrast, the best existing lower bound scales as $\Omega(S(A+B))$ and has a significant gap from the upper bound. This paper closes this gap for the first time: we propose an optimistic variant of the Nash Q-learning algorithm with sample complexity $\tilde{O}(SAB)$, and a new Nash V-learning algorithm with sample complexity $\tilde{O}(S(A+B))$. The latter result matches the information-theoretic lower bound in all problem-dependent parameters except for a polynomial factor of the length of each episode. In addition, we present a computational hardness result for learning the best responses against a fixed opponent in Markov games, a learning objective different from finding the Nash equilibrium.
Network Diffusions via Neural Mean-Field Dynamics
https://papers.nips.cc/paper_files/paper/2020/hash/1730f69e6f66d5f0c741799e82351f81-Abstract.html
Shushan He, Hongyuan Zha, Xiaojing Ye
https://papers.nips.cc/paper_files/paper/2020/hash/1730f69e6f66d5f0c741799e82351f81-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9907-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1730f69e6f66d5f0c741799e82351f81-Supplemental.pdf
We propose a novel learning framework based on neural mean-field dynamics for inference and estimation problems of diffusion on networks. Our new framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities, which yields a delay differential equation whose memory integral is approximated by learnable time convolution operators, resulting in a highly structured and interpretable RNN. Directly using cascade data, our framework can jointly learn the structure of the diffusion network and the evolution of infection probabilities, which are cornerstones of important downstream applications such as influence maximization. Connections between parameter learning and optimal control are also established. Empirical study shows that our approach is versatile and robust to variations of the underlying diffusion network models, and significantly outperforms existing approaches in accuracy and efficiency on both synthetic and real-world data.
Self-Distillation as Instance-Specific Label Smoothing
https://papers.nips.cc/paper_files/paper/2020/hash/1731592aca5fb4d789c4119c65c10b4b-Abstract.html
Zhilu Zhang, Mert Sabuncu
https://papers.nips.cc/paper_files/paper/2020/hash/1731592aca5fb4d789c4119c65c10b4b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9908-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1731592aca5fb4d789c4119c65c10b4b-Supplemental.pdf
It has been recently demonstrated that multi-generational self-distillation can improve generalization. Despite this intriguing observation, reasons for the enhancement remain poorly understood. In this paper, we first demonstrate experimentally that the improved performance of multi-generational self-distillation is in part associated with the increasing diversity in teacher predictions. With this in mind, we offer a new interpretation for teacher-student training as amortized MAP estimation, such that teacher predictions enable instance-specific regularization. Our framework allows us to theoretically relate self-distillation to label smoothing, a commonly used technique that regularizes predictive uncertainty, and suggests the importance of predictive diversity in addition to predictive uncertainty. We present experimental results using multiple datasets and neural network architectures that, overall, demonstrate the utility of predictive diversity. Finally, we propose a novel instance-specific label smoothing technique that promotes predictive diversity without the need for a separately trained teacher model. We provide an empirical evaluation of the proposed method, which, we find, often outperforms classical label smoothing.
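Under this reading, the training target becomes a per-example mixture of the hard label and the teacher's prediction; a minimal numpy sketch (our simplified reading, not the paper's exact estimator):

import numpy as np

def smoothed_targets(onehot, teacher_probs, alpha=0.3):
    # Instance-specific smoothing: mix the hard label with the teacher's (diverse) prediction.
    return (1 - alpha) * onehot + alpha * teacher_probs

def cross_entropy(targets, student_logits):
    log_probs = student_logits - np.log(np.exp(student_logits).sum(axis=1, keepdims=True))
    return -(targets * log_probs).sum(axis=1).mean()

onehot = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
teacher = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])   # varies per example, unlike uniform label smoothing
logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.4]])
print(cross_entropy(smoothed_targets(onehot, teacher), logits))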
Towards Problem-dependent Optimal Learning Rates
https://papers.nips.cc/paper_files/paper/2020/hash/174f8f613332b27e9e8a5138adb7e920-Abstract.html
Yunbei Xu, Assaf Zeevi
https://papers.nips.cc/paper_files/paper/2020/hash/174f8f613332b27e9e8a5138adb7e920-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9909-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/174f8f613332b27e9e8a5138adb7e920-Supplemental.pdf
We study problem-dependent rates, i.e., generalization errors that scale tightly with the variance or the effective loss at the "best hypothesis." Existing uniform convergence and localization frameworks, the most widely used tools to study this problem, often fail to simultaneously provide parameter localization and optimal dependence on the sample size. As a result, existing problem-dependent rates are often rather weak when the hypothesis class is "rich" and the worst-case bound of the loss is large. In this paper we propose a new framework based on a "uniform localized convergence" principle. We provide the first (moment-penalized) estimator that achieves the optimal variance-dependent rate for general "rich" classes; we also establish improved loss-dependent rate for standard empirical risk minimization.
Cross-lingual Retrieval for Iterative Self-Supervised Training
https://papers.nips.cc/paper_files/paper/2020/hash/1763ea5a7e72dd7ee64073c2dda7a7a8-Abstract.html
Chau Tran, Yuqing Tang, Xian Li, Jiatao Gu
https://papers.nips.cc/paper_files/paper/2020/hash/1763ea5a7e72dd7ee64073c2dda7a7a8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9910-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1763ea5a7e72dd7ee64073c2dda7a7a8-Supplemental.pdf
Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models. In this work, we found that the cross-lingual alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs. We utilized these findings to develop a new approach --- cross-lingual retrieval for iterative self-supervised training (CRISS), where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. Using this method, we achieved state-of-the-art unsupervised machine translation results on 9 language directions with an average improvement of 2.4 BLEU, and on the Tatoeba sentence retrieval task in the XTREME benchmark on 16 languages with an average improvement of 21.5% in absolute accuracy. Furthermore, CRISS also brings an additional 1.8 BLEU improvement on average compared to mBART, when finetuned on supervised machine translation downstream tasks.
Rethinking pooling in graph neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/1764183ef03fc7324eb58c3842bd9a57-Abstract.html
Diego Mesquita, Amauri Souza, Samuel Kaski
https://papers.nips.cc/paper_files/paper/2020/hash/1764183ef03fc7324eb58c3842bd9a57-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9911-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Supplemental.pdf
Graph pooling is a central component of a myriad of graph neural network (GNN) architectures. As an inheritance from traditional CNNs, most approaches formulate graph pooling as a cluster assignment problem, extending the idea of local patches in regular grids to graphs. Despite the wide adherence to this design choice, no work has rigorously evaluated its influence on the success of GNNs. In this paper, we build upon representative GNNs and introduce variants that challenge the need for locality-preserving representations, either using randomization or clustering on the complement graph. Strikingly, our experiments demonstrate that using these variants does not result in any decrease in performance. To understand this phenomenon, we study the interplay between convolutional layers and the subsequent pooling ones. We show that the convolutions play a leading role in the learned representations. In contrast to the common belief, local pooling is not responsible for the success of GNNs on relevant and widely-used benchmarks.
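The non-local variants can be illustrated in a few lines: pooling simply aggregates node features with an arbitrary (here random) cluster-assignment matrix, ignoring graph locality (hypothetical shapes):

import numpy as np

def random_pooling(node_features, num_clusters, rng):
    # Pool node features using a random (non-local) soft cluster-assignment matrix S.
    n = node_features.shape[0]
    assignment = rng.random((n, num_clusters))
    assignment = assignment / assignment.sum(axis=1, keepdims=True)   # rows sum to 1
    return assignment.T @ node_features                                # (num_clusters, feature_dim)

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 16))        # 30 nodes with 16-dim embeddings produced by the convolutions
print(random_pooling(X, 5, rng).shape)   # (5, 16)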
Pointer Graph Networks
https://papers.nips.cc/paper_files/paper/2020/hash/176bf6219855a6eb1f3a30903e34b6fb-Abstract.html
Petar Veličković, Lars Buesing, Matthew Overlan, Razvan Pascanu, Oriol Vinyals, Charles Blundell
https://papers.nips.cc/paper_files/paper/2020/hash/176bf6219855a6eb1f3a30903e34b6fb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9912-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/176bf6219855a6eb1f3a30903e34b6fb-Supplemental.zip
Graph neural networks (GNNs) are typically applied to static graphs that are assumed to be known upfront. This static input structure is often informed purely by insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving. In absence of reliable domain expertise, one might resort to inferring the latent graph structure, which is often difficult due to the vast search space of possible graphs. Here we introduce Pointer Graph Networks (PGNs) which augment sets or graphs with additional inferred edges for improved model generalisation ability. PGNs allow each node to dynamically point to another node, followed by message passing over these pointers. The sparsity of this adaptable graph structure makes learning tractable while still being sufficiently expressive to simulate complex algorithms. Critically, the pointing mechanism is directly supervised to model long-term sequences of operations on classical data structures, incorporating useful structural inductive biases from theoretical computer science. Qualitatively, we demonstrate that PGNs can learn parallelisable variants of pointer-based data structures, namely disjoint set unions and link/cut trees. PGNs generalise out-of-distribution to 5x larger test inputs on dynamic graph connectivity tasks, outperforming unrestricted GNNs and Deep Sets.
Gradient Regularized V-Learning for Dynamic Treatment Regimes
https://papers.nips.cc/paper_files/paper/2020/hash/17b3c7061788dbe82de5abe9f6fe22b3-Abstract.html
Yao Zhang, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/17b3c7061788dbe82de5abe9f6fe22b3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9913-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17b3c7061788dbe82de5abe9f6fe22b3-Supplemental.zip
Deciding how to optimally treat a patient, including how to select treatments over time among multiple available treatments, represents one of the most important issues that need to be addressed in medicine today. A dynamic treatment regime (DTR) is a sequence of treatment rules indicating how to individualize treatments for a patient based on the previously assigned treatments and the evolving covariate history. However, DTR evaluation and learning based on offline data remain challenging problems due to the bias introduced by time-varying confounders that affect treatment assignment over time; this may lead to suboptimal treatment rules being used in practice. In this paper, we introduce Gradient Regularized V-learning (GRV), a novel method for estimating the value function of a DTR. GRV regularizes the underlying outcome and propensity score models with respect to the optimality condition in semiparametric estimation theory. On the basis of this design, we construct estimators that are efficient and stable in the finite-sample regime. Using multiple simulation studies and one real-world medical dataset, we demonstrate that our method is superior in DTR evaluation and learning, thereby providing improved treatment options over time for patients.
Faster Wasserstein Distance Estimation with the Sinkhorn Divergence
https://papers.nips.cc/paper_files/paper/2020/hash/17f98ddf040204eda0af36a108cbdea4-Abstract.html
Lénaïc Chizat, Pierre Roussillon, Flavien Léger, François-Xavier Vialard, Gabriel Peyré
https://papers.nips.cc/paper_files/paper/2020/hash/17f98ddf040204eda0af36a108cbdea4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9914-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/17f98ddf040204eda0af36a108cbdea4-Supplemental.pdf
The squared Wasserstein distance is a natural quantity to compare probability distributions in a non-parametric setting. This quantity is usually estimated with the plug-in estimator, defined via a discrete optimal transport problem which can be solved to $\epsilon$-accuracy by adding an entropic regularization of order $\epsilon$ and using for instance Sinkhorn's algorithm. In this work, we propose instead to estimate it with the Sinkhorn divergence, which is also built on entropic regularization but includes debiasing terms. We show that, for smooth densities, this estimator has a comparable sample complexity but allows higher regularization levels, of order $\epsilon^{1/2}$, which leads to improved computational complexity bounds and a strong speedup in practice. Our theoretical analysis covers the case of both randomly sampled densities and deterministic discretizations on uniform grids. We also propose and analyze an estimator based on Richardson extrapolation of the Sinkhorn divergence which enjoys improved statistical and computational efficiency guarantees, under a condition on the regularity of the approximation error, which is in particular satisfied for Gaussian densities. We finally demonstrate the efficiency of the proposed estimators with numerical experiments.
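For reference, a small NumPy sketch of the debiased estimator the abstract describes: the Sinkhorn divergence $S_\epsilon(\alpha,\beta) = OT_\epsilon(\alpha,\beta) - \tfrac12 OT_\epsilon(\alpha,\alpha) - \tfrac12 OT_\epsilon(\beta,\beta)$, computed here with the primal transport cost of the entropic plan (conventions for the regularized cost vary, and this is only an illustration).

    import numpy as np

    def entropic_ot_cost(x, y, a, b, eps, n_iter=500):
        # Entropic OT between discrete measures (weights a on points x, b on y),
        # squared Euclidean ground cost, plain Sinkhorn iterations on the kernel K.
        C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        K = np.exp(-C / eps)
        u, v = np.ones_like(a), np.ones_like(b)
        for _ in range(n_iter):
            u = a / (K @ v)
            v = b / (K.T @ u)
        P = u[:, None] * K * v[None, :]      # approximate optimal plan
        return (P * C).sum()

    def sinkhorn_divergence(x, y, a, b, eps):
        # The two self-transport terms debias the plug-in entropic estimator.
        return (entropic_ot_cost(x, y, a, b, eps)
                - 0.5 * entropic_ot_cost(x, x, a, a, eps)
                - 0.5 * entropic_ot_cost(y, y, b, b, eps))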
Forethought and Hindsight in Credit Assignment
https://papers.nips.cc/paper_files/paper/2020/hash/18064d61b6f93dab8681a460779b8429-Abstract.html
Veronica Chelu, Doina Precup, Hado P. van Hasselt
https://papers.nips.cc/paper_files/paper/2020/hash/18064d61b6f93dab8681a460779b8429-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9915-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18064d61b6f93dab8681a460779b8429-Supplemental.pdf
We address the problem of credit assignment in reinforcement learning and explore fundamental questions regarding the way in which an agent can best use additional computation to propagate new information, by planning with internal models of the world to improve its predictions. In particular, we aim to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models. We establish the relative merits, limitations and complementary properties of both planning mechanisms in carefully constructed scenarios. Further, we investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated. Lastly, we discuss the issue of model estimation and highlight a spectrum of methods that stretch from environment dynamics predictors to planner-aware models.
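As a concrete stand-in for "hindsight" planning with a backward model, the sketch below re-evaluates the value estimates of predecessor states proposed by a backward model once new information arrives at a state, in the spirit of prioritized sweeping; the tabular setting and this particular update rule are illustrative assumptions, not the paper's algorithm.

    def hindsight_planning_update(Q, backward_model, s_next, gamma=0.99, alpha=0.1):
        # Q: dict state -> dict action -> value estimate.
        # backward_model(s_next) yields predecessor (state, action, reward) triples,
        # i.e. transitions believed to lead into s_next.
        for (s, a, r) in backward_model(s_next):
            target = r + gamma * max(Q[s_next].values())
            Q[s][a] += alpha * (target - Q[s][a])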
Robust Recursive Partitioning for Heterogeneous Treatment Effects with Uncertainty Quantification
https://papers.nips.cc/paper_files/paper/2020/hash/1819020b02e926785cf3be594d957696-Abstract.html
Hyun-Suk Lee, Yao Zhang, William Zame, Cong Shen, Jang-Won Lee, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/1819020b02e926785cf3be594d957696-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9916-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1819020b02e926785cf3be594d957696-Supplemental.pdf
Subgroup analysis of treatment effects plays an important role in applications from medicine to public policy to recommender systems. It allows physicians (for example) to identify groups of patients for whom a given drug or treatment is likely to be effective and groups of patients for whom it is not. Most current methods of subgroup analysis begin with a particular algorithm for estimating individualized treatment effects (ITE) and identify subgroups by maximizing the difference across subgroups of the average treatment effect in each subgroup. These approaches have several weaknesses: they rely on a particular algorithm for estimating ITE, they ignore (in)homogeneity within identified subgroups, and they do not produce good confidence estimates. This paper develops a new method for subgroup analysis, R2P, that addresses all these weaknesses. R2P uses an arbitrary, exogenously prescribed algorithm for estimating ITE and quantifies the uncertainty of the ITE estimation, using a construction that is more robust than other methods. Experiments using synthetic and semi-synthetic datasets (based on real data) demonstrate that R2P constructs partitions that are simultaneously more homogeneous within groups and more heterogeneous across groups than the partitions produced by other methods. Moreover, because R2P can employ any ITE estimator, it also produces much narrower confidence intervals than other methods, while maintaining a prescribed coverage guarantee.
Rescuing neural spike train models from bad MLE
https://papers.nips.cc/paper_files/paper/2020/hash/186b690e29892f137b4c34cfa40a3a4d-Abstract.html
Diego Arribas, Yuan Zhao, Il Memming Park
https://papers.nips.cc/paper_files/paper/2020/hash/186b690e29892f137b4c34cfa40a3a4d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9917-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/186b690e29892f137b4c34cfa40a3a4d-Supplemental.pdf
The standard approach to fitting an autoregressive spike train model is to maximize the likelihood for one-step prediction. This maximum likelihood estimation (MLE) often leads to models that perform poorly when generating samples recursively for more than one time step. Moreover, the generated spike trains can fail to capture important features of the data and even show diverging firing rates. To alleviate this, we propose to directly minimize the divergence between recorded and model-generated spike trains using spike train kernels. We develop a method that stochastically optimizes the maximum mean discrepancy induced by the kernel. Experiments performed on both real and synthetic neural data validate the proposed approach, showing that it leads to well-behaved models. Using different combinations of spike train kernels, we show that we can control the trade-off between different features, which is critical for dealing with model mismatch.
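A minimal sketch of the objective, assuming spike trains have been binned into count vectors and using a Gaussian kernel as a stand-in for the spike-train kernels discussed in the paper:

    import numpy as np

    def gaussian_gram(X, Y, sigma):
        # Gram matrix of a Gaussian kernel between two sets of binned spike trains.
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def mmd2(recorded, generated, sigma=1.0):
        # Biased estimate of MMD^2 between recorded (n, T) and model-generated
        # (m, T) binned spike trains; this quantity is minimized in place of the
        # one-step-ahead likelihood.
        Kxx = gaussian_gram(recorded, recorded, sigma)
        Kyy = gaussian_gram(generated, generated, sigma)
        Kxy = gaussian_gram(recorded, generated, sigma)
        return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()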
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
https://papers.nips.cc/paper_files/paper/2020/hash/187acf7982f3c169b3075132380986e4-Abstract.html
Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtarik
https://papers.nips.cc/paper_files/paper/2020/hash/187acf7982f3c169b3075132380986e4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9918-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/187acf7982f3c169b3075132380986e4-Supplemental.zip
In this work, we consider the optimization formulation of personalized federated learning recently introduced by Hanzely & Richtarik (2020) which was shown to give an alternative explanation to the workings of local SGD methods. Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity. Our second contribution is the design of several optimal methods matching these lower bounds in almost all regimes. These are the first provably optimal methods for personalized federated learning. Our optimal methods include an accelerated variant of FedProx, and an accelerated variance-reduced version of FedAvg/Local SGD. We demonstrate the practical superiority of our methods through extensive numerical experiments.
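For context, the personalized objective of Hanzely & Richtarik (2020) that this abstract builds on can be written (up to scaling conventions) as

$$\min_{x_1,\dots,x_n \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} f_i(x_i) \;+\; \frac{\lambda}{2n}\sum_{i=1}^{n} \bigl\lVert x_i - \bar{x} \bigr\rVert^2, \qquad \bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i,$$

where $\lambda = 0$ recovers purely local models and $\lambda \to \infty$ recovers the usual single-model federated learning objective.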
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
https://papers.nips.cc/paper_files/paper/2020/hash/1896a3bf730516dd643ba67b4c447d36-Abstract.html
Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/1896a3bf730516dd643ba67b4c447d36-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9919-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1896a3bf730516dd643ba67b4c447d36-Supplemental.pdf
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for $\ell_2$ perturbation. We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness via designing smoothing distributions, helping to design new families of non-Gaussian smoothing distributions that work more efficiently for different $\ell_p$ settings, including $\ell_1$, $\ell_2$ and $\ell_\infty$ attacks. Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
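A minimal sketch of the smoothed-classifier prediction step that such certificates are built on, with a pluggable noise sampler; the Gaussian case corresponds to the standard $\ell_2$ setting, and the paper's contribution is precisely the design of other smoothing distributions, which is not reproduced here.

    import numpy as np

    def smoothed_predict(base_classifier, x, sample_noise, n_samples=1000):
        # Majority vote of the base classifier over noisy copies of x.
        # Swapping `sample_noise` changes the smoothing distribution and hence
        # the l_p geometry of the resulting robustness certificate.
        counts = {}
        for _ in range(n_samples):
            label = base_classifier(x + sample_noise(x.shape))
            counts[label] = counts.get(label, 0) + 1
        return max(counts, key=counts.get)

    # e.g. Gaussian smoothing (l_2 certificates):
    # y_hat = smoothed_predict(f, x, lambda shape: np.random.normal(0.0, 0.25, shape))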
Deep Imitation Learning for Bimanual Robotic Manipulation
https://papers.nips.cc/paper_files/paper/2020/hash/18a010d2a9813e91907ce88cd9143fdf-Abstract.html
Fan Xie, Alexander Chowdhury, M. Clara De Paolis Kaluza, Linfeng Zhao, Lawson Wong, Rose Yu
https://papers.nips.cc/paper_files/paper/2020/hash/18a010d2a9813e91907ce88cd9143fdf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9920-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18a010d2a9813e91907ce88cd9143fdf-Supplemental.zip
We present a deep imitation learning framework for robotic bimanual manipulation in a continuous state-action space. A core challenge is to generalize the manipulation skills to objects in different locations. We hypothesize that modeling the relational information in the environment can significantly improve generalization. To achieve this, we propose to (i) decompose the multi-modal dynamics into elemental movement primitives, (ii) parameterize each primitive using a recurrent graph neural network to capture interactions, and (iii) integrate a high-level planner that composes primitives sequentially and a low-level controller to combine primitive dynamics and inverse kinematics control. Our model is a deep, hierarchical, modular architecture. Compared to baselines, our model generalizes better and achieves higher success rates on several simulated bimanual robotic manipulation tasks. We open source the code for simulation, data, and models at: https://github.com/Rose-STL-Lab/HDR-IL.
Stationary Activations for Uncertainty Calibration in Deep Learning
https://papers.nips.cc/paper_files/paper/2020/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html
Lassi Meronen, Christabella Irwanto, Arno Solin
https://papers.nips.cc/paper_files/paper/2020/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9921-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18a411989b47ed75a60ac69d9da05aa5-Supplemental.pdf
We introduce a new family of non-linear neural network activation functions that mimic the properties induced by the widely-used Matérn family of kernels in Gaussian process (GP) models. This class spans a range of locally stationary models of various degrees of mean-square differentiability. We show an explicit link to the corresponding GP models in the case that the network consists of one infinitely wide hidden layer. In the limit of infinite smoothness the Matérn family results in the RBF kernel, and in this case we recover RBF activations. Matérn activation functions result in similar appealing properties to their counterparts in GP models, and we demonstrate that the local stationarity property together with limited mean-square differentiability shows both good performance and uncertainty calibration in Bayesian deep learning tasks. In particular, local stationarity helps calibrate out-of-distribution (OOD) uncertainty. We demonstrate these properties on classification and regression benchmarks and a radar emitter classification task.
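For reference, the Matérn covariance family mentioned above, with input distance $r = \lVert x - x' \rVert$, lengthscale $\ell$, smoothness $\nu$ and $K_\nu$ the modified Bessel function of the second kind, is

$$k_\nu(r) \;=\; \sigma^2\,\frac{2^{1-\nu}}{\Gamma(\nu)}\left(\frac{\sqrt{2\nu}\,r}{\ell}\right)^{\nu} K_\nu\!\left(\frac{\sqrt{2\nu}\,r}{\ell}\right),$$

so $\nu$ controls the mean-square differentiability of the process and $\nu \to \infty$ recovers the RBF kernel, matching the limiting behaviour described in the abstract.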
Ensemble Distillation for Robust Model Fusion in Federated Learning
https://papers.nips.cc/paper_files/paper/2020/hash/18df51b97ccd68128e994804f3eccc87-Abstract.html
Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi
https://papers.nips.cc/paper_files/paper/2020/hash/18df51b97ccd68128e994804f3eccc87-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9922-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/18df51b97ccd68128e994804f3eccc87-Supplemental.pdf
In this work we investigate more powerful and more flexible aggregation schemes for federated learning (FL). Specifically, we propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data using the outputs of the client models. This knowledge distillation technique mitigates privacy risk and cost to the same extent as the baseline FL algorithms, but allows flexible aggregation over heterogeneous client models that can differ, e.g., in size, numerical precision or structure. We show in extensive empirical experiments on various CV/NLP datasets (CIFAR-10/100, ImageNet, AG News, SST2) and settings (heterogeneous models/data) that the server model can be trained much faster, requiring fewer communication rounds than existing FL techniques.
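A minimal PyTorch-style sketch of the server-side fusion step described above; the temperature, the use of averaged probabilities as targets, and the function names are illustrative assumptions rather than the exact training loop of the paper.

    import torch
    import torch.nn.functional as F

    def distillation_step(server_model, client_models, x_unlabeled, optimizer, T=1.0):
        # Ensemble the (possibly heterogeneous) client models on unlabeled data
        # and distill their averaged predictions into the server model.
        with torch.no_grad():
            targets = torch.stack(
                [F.softmax(m(x_unlabeled) / T, dim=-1) for m in client_models]
            ).mean(dim=0)
        optimizer.zero_grad()
        log_probs = F.log_softmax(server_model(x_unlabeled) / T, dim=-1)
        loss = F.kl_div(log_probs, targets, reduction="batchmean")
        loss.backward()
        optimizer.step()
        return loss.item()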
Falcon: Fast Spectral Inference on Encrypted Data
https://papers.nips.cc/paper_files/paper/2020/hash/18fc72d8b8aba03a4d84f66efabce82e-Abstract.html
Qian Lou, Wen-jie Lu, Cheng Hong, Lei Jiang
https://papers.nips.cc/paper_files/paper/2020/hash/18fc72d8b8aba03a4d84f66efabce82e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9923-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/18fc72d8b8aba03a4d84f66efabce82e-Review.html
null
Homomorphic Encryption (HE) based secure Neural Network (NN) inference is one of the most promising security solutions for emerging Machine Learning as a Service (MLaaS). In the HE-based MLaaS setting, a client encrypts its sensitive data and uploads the encrypted data to the server, which processes the encrypted data directly, without decryption, and returns the encrypted result to the client. The client's data privacy is preserved since only the client holds the private key. Existing HE-enabled Neural Networks (HENNs), however, suffer from heavy computational overheads. State-of-the-art HENNs adopt ciphertext packing techniques to reduce homomorphic multiplications by packing multiple messages into one single ciphertext. Nevertheless, rotations are required in these HENNs to implement the sum of the elements within the same ciphertext. We observe that HENNs pay a significant computational overhead for rotations, and each rotation is $\sim 10\times$ more expensive than a homomorphic multiplication between a ciphertext and a plaintext, so these massive rotations have become a primary obstacle to efficient HENNs. In this paper, we propose Falcon, a frequency-domain deep neural network for fast inference on encrypted data. Falcon includes a fast Homomorphic Discrete Fourier Transform (HDFT) using block-circulant matrices to homomorphically support spectral operations. We also propose several efficient methods to reduce inference latency, including Homomorphic Spectral Convolution and Homomorphic Spectral Fully Connected operations, by combining batched HE and block-circulant matrices. Our experimental results show that Falcon achieves state-of-the-art inference accuracy and reduces inference latency by $45.45\%\sim 85.34\%$ over prior HENNs on MNIST and CIFAR-10.
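The computational appeal of (block-)circulant structure comes from the convolution theorem: multiplying by a circulant matrix reduces to elementwise products in the Fourier domain. The plain, unencrypted identity is sketched below; it shows only the arithmetic skeleton, not the homomorphic implementation described in the paper.

    import numpy as np

    def circulant_matvec_fft(c, x):
        # C x = IFFT( FFT(c) * FFT(x) ) for the circulant matrix C whose
        # first column is c -- an O(n log n) product instead of O(n^2).
        return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

    n = 8
    c, x = np.random.randn(n), np.random.randn(n)
    C = np.column_stack([np.roll(c, k) for k in range(n)])   # explicit circulant
    assert np.allclose(C @ x, circulant_matvec_fft(c, x))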
On Power Laws in Deep Ensembles
https://papers.nips.cc/paper_files/paper/2020/hash/191595dc11b4d6e54f01504e3aa92f96-Abstract.html
Ekaterina Lobacheva, Nadezhda Chirkova, Maxim Kodryan, Dmitry P. Vetrov
https://papers.nips.cc/paper_files/paper/2020/hash/191595dc11b4d6e54f01504e3aa92f96-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9924-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/191595dc11b4d6e54f01504e3aa92f96-Supplemental.pdf
Ensembles of deep neural networks are known to achieve state-of-the-art performance in uncertainty estimation and lead to accuracy improvement. In this work, we focus on a classification problem and investigate the behavior of both non-calibrated and calibrated negative log-likelihood (CNLL) of a deep ensemble as a function of the ensemble size and the member network size. We indicate the conditions under which CNLL follows a power law with respect to the ensemble size or the member network size, and analyze the dynamics of the parameters of the discovered power laws. An important practical finding is that one large network may perform worse than an ensemble of several medium-size networks with the same total number of parameters (we call such an ensemble a memory split). Using the detected power-law-like dependencies, we can predict, based on a relatively small number of trained networks, (1) the possible gain from ensembling networks of a given structure, and (2) the optimal memory split given a memory budget.
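To make the "predict the gain" step concrete, a saturating power law such as $\mathrm{CNLL}(n) \approx c\,n^{-\alpha} + b$ can be fitted to a few measured ensemble sizes and then extrapolated; the numbers below are placeholders, not measurements from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(n, c, alpha, b):
        # saturating power law in the ensemble size n
        return c * n ** (-alpha) + b

    sizes = np.array([1, 2, 4, 8, 16, 32], dtype=float)      # ensembles trained
    cnll = np.array([0.90, 0.78, 0.71, 0.67, 0.65, 0.64])    # placeholder CNLLs

    (c, alpha, b), _ = curve_fit(power_law, sizes, cnll, p0=[0.5, 1.0, 0.6])
    print(f"fit: c={c:.3f}, alpha={alpha:.3f}, asymptote b={b:.3f}")
    print("predicted CNLL for a 64-member ensemble:", power_law(64.0, c, alpha, b))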