Dataset schema (one record per paper; field name, type, and value-length range):
- title: stringlengths, 13 to 150
- url: stringlengths, 97 to 97
- authors: stringlengths, 8 to 467
- detail_url: stringlengths, 97 to 97
- tags: stringclasses, 1 value
- AuthorFeedback: stringlengths, 102 to 102
- Bibtex: stringlengths, 53 to 54
- MetaReview: stringlengths, 99 to 99
- Paper: stringlengths, 93 to 93
- Review: stringlengths, 95 to 95
- Supplemental: stringlengths, 100 to 100
- abstract: stringlengths, 53 to 2k
Self-Learning Transformations for Improving Gaze and Head Redirection
https://papers.nips.cc/paper_files/paper/2020/hash/98f2d76d4d9caf408180b5abfa83ae87-Abstract.html
Yufeng Zheng, Seonwook Park, Xucong Zhang, Shalini De Mello, Otmar Hilliges
https://papers.nips.cc/paper_files/paper/2020/hash/98f2d76d4d9caf408180b5abfa83ae87-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/98f2d76d4d9caf408180b5abfa83ae87-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10825-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/98f2d76d4d9caf408180b5abfa83ae87-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/98f2d76d4d9caf408180b5abfa83ae87-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/98f2d76d4d9caf408180b5abfa83ae87-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/98f2d76d4d9caf408180b5abfa83ae87-Supplemental.zip
Many computer vision tasks rely on labeled data. Rapid progress in generative modeling has led to the ability to synthesize photorealistic images. However, controlling specific aspects of the generation process such that the data can be used for supervision of downstream tasks remains challenging. In this paper we propose a novel generative model for images of faces that is capable of producing high-quality images under fine-grained control over eye gaze and head orientation angles. This requires disentangling many appearance-related factors, including not only gaze and head orientation but also lighting, hue, etc. We propose a novel architecture which learns to discover, disentangle and encode these extraneous variations in a self-learned manner. We further show that explicitly disentangling task-irrelevant factors results in more accurate modelling of gaze and head orientation. A novel evaluation scheme shows that our method improves upon the state-of-the-art in redirection accuracy and disentanglement between gaze direction and head orientation changes. Furthermore, we show that in the presence of limited amounts of real-world training data, our method allows for improvements in the downstream task of semi-supervised cross-dataset gaze estimation. Please check our project page at: https://ait.ethz.ch/projects/2020/STED-gaze/
Language-Conditioned Imitation Learning for Robot Manipulation Tasks
https://papers.nips.cc/paper_files/paper/2020/hash/9909794d52985cbc5d95c26e31125d1a-Abstract.html
Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, Heni Ben Amor
https://papers.nips.cc/paper_files/paper/2020/hash/9909794d52985cbc5d95c26e31125d1a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9909794d52985cbc5d95c26e31125d1a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10826-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9909794d52985cbc5d95c26e31125d1a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9909794d52985cbc5d95c26e31125d1a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9909794d52985cbc5d95c26e31125d1a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9909794d52985cbc5d95c26e31125d1a-Supplemental.zip
Imitation learning is a popular approach for teaching motor skills to robots. However, most approaches focus on extracting policy parameters from execution traces alone (i.e., motion trajectories and perceptual data). No adequate communication channel exists between the human expert and the robot to describe critical aspects of the task, such as the properties of the target object or the intended shape of the motion. Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent (e.g., "go to the large green bowl"). The training process then interrelates these two modalities to encode the correlations between language, perception, and motion. The resulting language-conditioned visuomotor policies can be conditioned at runtime on new human commands and instructions, which allows for more fine-grained control over the trained policies while also reducing situational ambiguity. We demonstrate in a set of simulation experiments how our approach can learn language-conditioned manipulation policies for a seven-degree-of-freedom robot arm and compare the results to a variety of alternative methods.
POMDPs in Continuous Time and Discrete Spaces
https://papers.nips.cc/paper_files/paper/2020/hash/992f0fed0720dbb9d4e060d03ed531ba-Abstract.html
Bastian Alt, Matthias Schultheis, Heinz Koeppl
https://papers.nips.cc/paper_files/paper/2020/hash/992f0fed0720dbb9d4e060d03ed531ba-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/992f0fed0720dbb9d4e060d03ed531ba-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10827-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/992f0fed0720dbb9d4e060d03ed531ba-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/992f0fed0720dbb9d4e060d03ed531ba-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/992f0fed0720dbb9d4e060d03ed531ba-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/992f0fed0720dbb9d4e060d03ed531ba-Supplemental.pdf
Many processes, such as discrete event systems in engineering or population dynamics in biology, evolve in discrete space and continuous time. We consider the problem of optimal decision making in such discrete state and action space systems under partial observability. This places our work at the intersection of optimal filtering and optimal control. At the current state of research, a mathematical description for simultaneous decision making and filtering in continuous time with finite state and action spaces is still missing. In this paper, we give a mathematical description of a continuous-time partially observable Markov decision process (POMDP). By leveraging optimal filtering theory we derive a Hamilton-Jacobi-Bellman (HJB) type equation that characterizes the optimal solution. Using techniques from deep learning we approximately solve the resulting partial integro-differential equation. We present (i) an approach solving the decision problem offline by learning an approximation of the value function and (ii) an online algorithm which provides a solution in belief space using deep reinforcement learning. We show the applicability on a set of toy examples, which pave the way for future methods providing solutions for high-dimensional problems.
Exemplar Guided Active Learning
https://papers.nips.cc/paper_files/paper/2020/hash/993edc98ca87f7e08494eec37fa836f7-Abstract.html
Jason S. Hartford, Kevin Leyton-Brown, Hadas Raviv, Dan Padnos, Shahar Lev, Barak Lenz
https://papers.nips.cc/paper_files/paper/2020/hash/993edc98ca87f7e08494eec37fa836f7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/993edc98ca87f7e08494eec37fa836f7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10828-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/993edc98ca87f7e08494eec37fa836f7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/993edc98ca87f7e08494eec37fa836f7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/993edc98ca87f7e08494eec37fa836f7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/993edc98ca87f7e08494eec37fa836f7-Supplemental.zip
We consider the problem of wisely using a limited budget to label a small subset of a large unlabeled dataset. For example, consider the NLP problem of word sense disambiguation. For any word, we have a set of candidate labels from a knowledge base, but the label set is not necessarily representative of what occurs in the data: there may exist labels in the knowledge base that very rarely occur in the corpus because the sense is rare in modern English; and conversely there may exist true labels that do not exist in our knowledge base. Our aim is to obtain a classifier that performs as well as possible on examples of each “common class” that occurs with frequency above a given threshold in the unlabeled set while annotating as few examples as possible from “rare classes” whose labels occur with less than this frequency. The challenge is that we are not informed which labels are common and which are rare, and the true label distribution may exhibit extreme skew. We describe an active learning approach that (1) explicitly searches for rare classes by leveraging the contextual embedding spaces provided by modern language models, and (2) incorporates a stopping rule that ignores classes once we prove that they occur below our target threshold with high probability. We prove that our algorithm only costs logarithmically more than a hypothetical approach that knows all true label frequencies and show experimentally that incorporating automated search can significantly reduce the number of samples needed to reach target accuracy levels.
Grasp Proposal Networks: An End-to-End Solution for Visual Learning of Robotic Grasps
https://papers.nips.cc/paper_files/paper/2020/hash/994d1cad9132e48c993d58b492f71fc1-Abstract.html
Chaozheng Wu, Jian Chen, Qiaoyu Cao, Jianchi Zhang, Yunxin Tai, Lin Sun, Kui Jia
https://papers.nips.cc/paper_files/paper/2020/hash/994d1cad9132e48c993d58b492f71fc1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/994d1cad9132e48c993d58b492f71fc1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10829-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/994d1cad9132e48c993d58b492f71fc1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/994d1cad9132e48c993d58b492f71fc1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/994d1cad9132e48c993d58b492f71fc1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/994d1cad9132e48c993d58b492f71fc1-Supplemental.zip
Learning robotic grasps from visual observations is a promising yet challenging task. Recent research shows its great potential by preparing and learning from large-scale synthetic datasets. For the popular 6 degree-of-freedom (6-DOF) grasp setting of a parallel-jaw gripper, most existing methods take the strategy of heuristically sampling grasp candidates and then evaluating them using learned scoring functions. This strategy is limited by the conflict between sampling efficiency and coverage of optimal grasps. To this end, we propose in this work a novel, end-to-end \emph{Grasp Proposal Network (GPNet)} to predict a diverse set of 6-DOF grasps for an unseen object observed from a single and unknown camera view. GPNet builds on a key design of its grasp proposal module that defines \emph{anchors of grasp centers} at discrete but regular 3D grid corners, which is flexible enough to support either more precise or more diverse grasp predictions. To test GPNet, we contribute a synthetic dataset of 6-DOF object grasps; evaluation is conducted using rule-based criteria, simulation tests, and real tests. Comparative results show the advantage of our method over existing ones. Notably, GPNet achieves better simulation results thanks to its specified coverage, which also helps it translate readily to the real test. Our code and dataset are available at \url{https://github.com/CZ-Wu/GPNet}.
Node Embeddings and Exact Low-Rank Representations of Complex Networks
https://papers.nips.cc/paper_files/paper/2020/hash/99503bdd3c5a4c4671ada72d6fd81433-Abstract.html
Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, Charalampos Tsourakakis
https://papers.nips.cc/paper_files/paper/2020/hash/99503bdd3c5a4c4671ada72d6fd81433-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99503bdd3c5a4c4671ada72d6fd81433-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10830-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99503bdd3c5a4c4671ada72d6fd81433-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99503bdd3c5a4c4671ada72d6fd81433-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99503bdd3c5a4c4671ada72d6fd81433-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99503bdd3c5a4c4671ada72d6fd81433-Supplemental.pdf
In this work we show that the results of Seshadhri et al. are intimately connected to the model they use rather than the low-dimensional structure of complex networks. Specifically, we prove that a minor relaxation of their model can generate sparse graphs with high triangle density. Surprisingly, we show that this same model leads to exact low-dimensional factorizations of many real-world networks. We give a simple algorithm based on logistic principal component analysis (LPCA) that succeeds in finding such exact embeddings. Finally, we perform a large number of experiments that verify the ability of very low-dimensional embeddings to capture local structure in real-world networks.
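A minimal sketch of the logistic-PCA idea described above: fit two thin factor matrices so that the element-wise sigmoid of their product reproduces the adjacency matrix. The full-batch gradient descent, learning rate, and toy random graph below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lpca_embed(A, dim=8, lr=0.1, steps=2000, seed=0):
    """Fit factors U, V so that sigmoid(U @ V.T) approximates the adjacency matrix A.

    A minimal logistic-PCA-style sketch (full-batch gradient descent on the
    logistic loss); hyperparameters are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = 0.1 * rng.standard_normal((n, dim))
    V = 0.1 * rng.standard_normal((n, dim))
    for _ in range(steps):
        P = sigmoid(U @ V.T)          # predicted edge probabilities
        G = P - A                     # gradient of the logistic loss w.r.t. the logits
        U, V = U - lr * (G @ V) / n, V - lr * (G.T @ U) / n
    return U, V

# Toy usage: a small random symmetric graph.
rng = np.random.default_rng(1)
A = (rng.random((30, 30)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T
U, V = lpca_embed(A, dim=8)
recon = (sigmoid(U @ V.T) > 0.5).astype(float)
print("edge reconstruction accuracy:", (recon == A).mean())
```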
Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications
https://papers.nips.cc/paper_files/paper/2020/hash/995ca733e3657ff9f5f3c823d73371e1-Abstract.html
Sarah Perrin, Julien Perolat, Mathieu Lauriere, Matthieu Geist, Romuald Elie, Olivier Pietquin
https://papers.nips.cc/paper_files/paper/2020/hash/995ca733e3657ff9f5f3c823d73371e1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/995ca733e3657ff9f5f3c823d73371e1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10831-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/995ca733e3657ff9f5f3c823d73371e1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/995ca733e3657ff9f5f3c823d73371e1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/995ca733e3657ff9f5f3c823d73371e1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/995ca733e3657ff9f5f3c823d73371e1-Supplemental.zip
In this paper, we deepen the analysis of the continuous-time Fictitious Play learning algorithm by considering various finite-state Mean Field Game settings (finite horizon, $\gamma$-discounted), allowing in particular for the introduction of an additional common noise. We first present a theoretical convergence analysis of the continuous-time Fictitious Play process and prove that the induced exploitability decreases at a rate $O(\frac{1}{t})$. This analysis emphasizes the use of exploitability as a relevant metric for evaluating convergence towards a Nash equilibrium in the context of Mean Field Games. These theoretical contributions are supported by numerical experiments provided in either model-based or model-free settings. We thereby provide, for the first time, converging learning dynamics for Mean Field Games in the presence of common noise.
Steering Distortions to Preserve Classes and Neighbors in Supervised Dimensionality Reduction
https://papers.nips.cc/paper_files/paper/2020/hash/99607461cdb9c26e2bd5f31b12dcf27a-Abstract.html
Benoît Colange, Jaakko Peltonen, Michael Aupetit, Denys Dutykh, Sylvain Lespinats
https://papers.nips.cc/paper_files/paper/2020/hash/99607461cdb9c26e2bd5f31b12dcf27a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99607461cdb9c26e2bd5f31b12dcf27a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10832-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99607461cdb9c26e2bd5f31b12dcf27a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99607461cdb9c26e2bd5f31b12dcf27a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99607461cdb9c26e2bd5f31b12dcf27a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99607461cdb9c26e2bd5f31b12dcf27a-Supplemental.pdf
Nonlinear dimensionality reduction of high-dimensional data is challenging as the low-dimensional embedding will necessarily contain distortions, and it can be hard to determine which distortions are the most important to avoid. When annotation of data into known relevant classes is available, it can be used to guide the embedding to avoid distortions that worsen class separation. The supervised mapping method introduced in the present paper, called ClassNeRV, proposes an original stress function that takes class annotation into account and evaluates embedding quality both in terms of false neighbors and missed neighbors. ClassNeRV shares the theoretical framework of a family of methods descended from Stochastic Neighbor Embedding (SNE). Our approach has a key advantage over previous ones: in the literature supervised methods often emphasize class separation at the price of distorting the data neighbors' structure; conversely, unsupervised methods provide better preservation of structure at the price of often mixing classes. Experiments show that ClassNeRV can preserve both neighbor structure and class separation, outperforming nine state of the art alternatives.
On Infinite-Width Hypernetworks
https://papers.nips.cc/paper_files/paper/2020/hash/999df4ce78b966de17aee1dc87111044-Abstract.html
Etai Littwin, Tomer Galanti, Lior Wolf, Greg Yang
https://papers.nips.cc/paper_files/paper/2020/hash/999df4ce78b966de17aee1dc87111044-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/999df4ce78b966de17aee1dc87111044-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10833-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/999df4ce78b966de17aee1dc87111044-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/999df4ce78b966de17aee1dc87111044-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/999df4ce78b966de17aee1dc87111044-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/999df4ce78b966de17aee1dc87111044-Supplemental.pdf
{\em Hypernetworks} are architectures that produce the weights of a task-specific {\em primary network}. A notable application of hypernetworks in the recent literature involves learning to output functional representations. In these scenarios, the hypernetwork learns a representation corresponding to the weights of a shallow MLP, which typically encodes shape or image information. While such representations have seen considerable success in practice, they still lack the theoretical guarantees that standard architectures enjoy in the wide regime. In this work, we study wide over-parameterized hypernetworks. We show that, unlike typical architectures, infinitely wide hypernetworks do not guarantee convergence to a global minimum under gradient descent. We further show that convexity can be achieved by increasing the dimensionality of the hypernetwork's output so as to represent wide MLPs. In the dually infinite-width regime, we identify the functional priors of these architectures by deriving their corresponding GP and NTK kernels, the latter of which we refer to as the {\em hyperkernel}. As part of this study, we make a mathematical contribution by deriving tight bounds on high-order Taylor expansion terms of standard fully connected ReLU networks.
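For readers unfamiliar with the setup, here is a tiny numpy sketch of a hypernetwork emitting the weights of a one-hidden-layer primary MLP. All sizes and the random, untrained hypernetwork weights are illustrative assumptions and unrelated to the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shapes of the primary network: a one-hidden-layer MLP f(x) = W2 @ relu(W1 @ x).
d_in, d_hidden, d_out = 3, 16, 1
n_primary = d_hidden * d_in + d_out * d_hidden   # number of weights the hypernetwork must emit

# Hypernetwork: maps a task/shape embedding z to the flattened primary weights.
d_z, d_h = 4, 32
H1 = rng.standard_normal((d_h, d_z)) / np.sqrt(d_z)
H2 = rng.standard_normal((n_primary, d_h)) / np.sqrt(d_h)

def primary_forward(z, x):
    """Generate the primary weights from z, then evaluate the primary MLP on x."""
    w = H2 @ relu(H1 @ z)                         # hypernetwork output
    W1 = w[: d_hidden * d_in].reshape(d_hidden, d_in)
    W2 = w[d_hidden * d_in :].reshape(d_out, d_hidden)
    return W2 @ relu(W1 @ x)

z = rng.standard_normal(d_z)   # e.g. an embedding of one shape or image
x = rng.standard_normal(d_in)  # e.g. a 3D coordinate query
print(primary_forward(z, x))
```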
Interferobot: aligning an optical interferometer by a reinforcement learning agent
https://papers.nips.cc/paper_files/paper/2020/hash/99ba5c4097c6b8fef5ed774a1a6714b8-Abstract.html
Dmitry Sorokin, Alexander Ulanov, Ekaterina Sazhina, Alexander Lvovsky
https://papers.nips.cc/paper_files/paper/2020/hash/99ba5c4097c6b8fef5ed774a1a6714b8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99ba5c4097c6b8fef5ed774a1a6714b8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10834-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99ba5c4097c6b8fef5ed774a1a6714b8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99ba5c4097c6b8fef5ed774a1a6714b8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99ba5c4097c6b8fef5ed774a1a6714b8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99ba5c4097c6b8fef5ed774a1a6714b8-Supplemental.zip
Limitations in acquiring training data restrict potential applications of deep reinforcement learning (RL) methods to the training of real-world robots. Here we train an RL agent to align a Mach-Zehnder interferometer, which is an essential part of many optical experiments, based on images of interference fringes acquired by a monocular camera. The agent is trained in a simulated environment, without any hand-coded features or a priori information about the physics, and subsequently transferred to a physical interferometer. Thanks to a set of domain randomizations simulating uncertainties in physical measurements, the agent successfully aligns this interferometer without any fine-tuning, achieving a performance level of a human expert.
Program Synthesis with Pragmatic Communication
https://papers.nips.cc/paper_files/paper/2020/hash/99c83c904d0d64fbef50d919a5c66a80-Abstract.html
Yewen Pu, Kevin Ellis, Marta Kryven, Josh Tenenbaum, Armando Solar-Lezama
https://papers.nips.cc/paper_files/paper/2020/hash/99c83c904d0d64fbef50d919a5c66a80-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99c83c904d0d64fbef50d919a5c66a80-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10835-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99c83c904d0d64fbef50d919a5c66a80-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99c83c904d0d64fbef50d919a5c66a80-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99c83c904d0d64fbef50d919a5c66a80-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99c83c904d0d64fbef50d919a5c66a80-Supplemental.zip
Program synthesis techniques construct or infer programs from user-provided specifications, such as input-output examples. Yet most specifications, especially those given by end-users, leave the synthesis problem radically ill-posed, because many programs may simultaneously satisfy the specification. Prior work resolves this ambiguity by using various inductive biases, such as a preference for simpler programs. This work introduces a new inductive bias derived by modeling the program synthesis task as rational communication, drawing insights from recursive reasoning models of pragmatics. Given a specification, we score a candidate program both on its consistency with the specification and on whether a rational speaker would choose this particular specification to communicate that program. We develop efficient algorithms for such an approach when learning from input-output examples, and build a pragmatic program synthesizer over a simple grid-like layout domain. A user study finds that end-user participants communicate more effectively with the pragmatic program synthesizer than with a non-pragmatic one.
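The recursive-reasoning scoring can be illustrated on a toy consistency matrix: a literal listener normalizes over programs, a pragmatic speaker normalizes over specifications, and a pragmatic listener re-ranks programs accordingly. The 3x3 matrix and the single level of recursion below are hypothetical stand-ins for the paper's grid domain and synthesizer.

```python
import numpy as np

# Rows: candidate programs, columns: possible example specifications.
# consistent[p, e] = 1 if program p satisfies example e.
consistent = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
], dtype=float)

def normalize(M, axis):
    s = M.sum(axis=axis, keepdims=True)
    return np.divide(M, s, out=np.zeros_like(M), where=s > 0)

# Literal listener: P(program | example) proportional to consistency.
L0 = normalize(consistent, axis=0)
# Pragmatic speaker: P(example | program) proportional to the literal listener.
S1 = normalize(L0, axis=1)
# Pragmatic listener: P(program | example) proportional to the speaker model.
L1 = normalize(S1, axis=0)

print("literal ranking for example 0:  ", L0[:, 0])
print("pragmatic ranking for example 0:", L1[:, 0])
```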
Principal Neighbourhood Aggregation for Graph Nets
https://papers.nips.cc/paper_files/paper/2020/hash/99cad265a1768cc2dd013f0e740300ae-Abstract.html
Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, Petar Veličković
https://papers.nips.cc/paper_files/paper/2020/hash/99cad265a1768cc2dd013f0e740300ae-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99cad265a1768cc2dd013f0e740300ae-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10836-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99cad265a1768cc2dd013f0e740300ae-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99cad265a1768cc2dd013f0e740300ae-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99cad265a1768cc2dd013f0e740300ae-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99cad265a1768cc2dd013f0e740300ae-Supplemental.pdf
Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data. Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces. We extend this theoretical framework to include continuous features---which occur regularly in real-world input domains and within the hidden layers of GNNs---and we demonstrate the requirement for multiple aggregation functions in this context. Accordingly, we propose Principal Neighbourhood Aggregation (PNA), a novel architecture combining multiple aggregators with degree-scalers (which generalize the sum aggregator). Finally, we compare the capacity of different models to capture and exploit the graph structure via a novel benchmark containing multiple tasks taken from classical graph theory, alongside existing benchmarks from real-world domains, all of which demonstrate the strength of our model. With this work we hope to steer some of the GNN research towards new aggregation methods which we believe are essential in the search for powerful and robust models.
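A simplified numpy sketch of the aggregation scheme for a single node: several aggregators (mean, max, min, std) are concatenated and modulated by logarithmic degree-scalers. The scaler form and the average-degree constant are assumptions for illustration; the actual layer wraps this in learned transformations before and after aggregation.

```python
import numpy as np

def pna_aggregate(neighbor_feats, avg_log_degree=np.log(4.0)):
    """Combine several aggregators with degree-based scalers for one node.

    A simplified sketch of the PNA idea (multiple aggregators x scalers);
    avg_log_degree would normally be estimated from the training graph.
    """
    d = neighbor_feats.shape[0]                      # node degree
    aggs = np.concatenate([
        neighbor_feats.mean(axis=0),
        neighbor_feats.max(axis=0),
        neighbor_feats.min(axis=0),
        neighbor_feats.std(axis=0),
    ])
    s = np.log(d + 1.0) / avg_log_degree             # amplification scaler
    scalers = np.array([1.0, s, 1.0 / s])            # identity, amplify, attenuate
    return np.concatenate([c * aggs for c in scalers])

neigh = np.random.default_rng(0).standard_normal((5, 8))   # 5 neighbors, 8 features each
print(pna_aggregate(neigh).shape)                            # 3 scalers * 4 aggregators * 8 dims
```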
Reliable Graph Neural Networks via Robust Aggregation
https://papers.nips.cc/paper_files/paper/2020/hash/99e314b1b43706773153e7ef375fc68c-Abstract.html
Simon Geisler, Daniel Zügner, Stephan Günnemann
https://papers.nips.cc/paper_files/paper/2020/hash/99e314b1b43706773153e7ef375fc68c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99e314b1b43706773153e7ef375fc68c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10837-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99e314b1b43706773153e7ef375fc68c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99e314b1b43706773153e7ef375fc68c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99e314b1b43706773153e7ef375fc68c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99e314b1b43706773153e7ef375fc68c-Supplemental.pdf
Perturbations targeting the graph structure have proven to be extremely effective in reducing the performance of Graph Neural Networks (GNNs), and traditional defenses such as adversarial training do not seem to be able to improve robustness. This work is motivated by the observation that adversarially injected edges effectively can be viewed as additional samples to a node's neighborhood aggregation function, which results in distorted aggregations accumulating over the layers. Conventional GNN aggregation functions, such as a sum or mean, can be distorted arbitrarily by a single outlier. We propose a robust aggregation function motivated by the field of robust statistics. Our approach exhibits the largest possible breakdown point of 0.5, which means that the bias of the aggregation is bounded as long as the fraction of adversarial edges of a node is less than 50%. Our novel aggregation function, Soft Medoid, is a fully differentiable generalization of the Medoid and therefore lends itself well for end-to-end deep learning. Equipping a GNN with our aggregation improves the robustness with respect to structure perturbations on Cora ML by a factor of 3 (and 5.5 on Citeseer) and by a factor of 8 for low-degree nodes.
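A minimal, unweighted sketch of the soft medoid idea: each neighbor is weighted by a softmax over its (negative) total distance to the others, so a single outlier barely moves the aggregate. The paper's version additionally handles edge weights and rescaling; the temperature below is illustrative.

```python
import numpy as np

def soft_medoid(X, T=1.0):
    """Differentiable soft medoid of the rows of X.

    Each row is weighted by a softmax over its negative total distance to all
    other rows, so outliers receive small weight. A simplified, unweighted
    sketch of the idea.
    """
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    scores = -dists.sum(axis=1) / T                                  # robust "centrality" score
    w = np.exp(scores - scores.max())
    w = w / w.sum()
    return w @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
X[0] += 100.0                       # inject a single outlier "neighbor"
print("mean is pulled away:  ", X.mean(axis=0)[:2])
print("soft medoid stays put:", soft_medoid(X, T=1.0)[:2])
```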
Instance Selection for GANs
https://papers.nips.cc/paper_files/paper/2020/hash/99f6a934a7cf277f2eaece8e3ce619b2-Abstract.html
Terrance DeVries, Michal Drozdzal, Graham W. Taylor
https://papers.nips.cc/paper_files/paper/2020/hash/99f6a934a7cf277f2eaece8e3ce619b2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/99f6a934a7cf277f2eaece8e3ce619b2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10838-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/99f6a934a7cf277f2eaece8e3ce619b2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/99f6a934a7cf277f2eaece8e3ce619b2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/99f6a934a7cf277f2eaece8e3ce619b2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/99f6a934a7cf277f2eaece8e3ce619b2-Supplemental.pdf
Recent advances in Generative Adversarial Networks (GANs) have led to their widespread adoption for the purposes of generating high quality synthetic imagery. While capable of generating photo-realistic images, these models often produce unrealistic samples which fall outside of the data manifold. Several recently proposed techniques attempt to avoid spurious samples, either by rejecting them after generation, or by truncating the model's latent space. While effective, these methods are inefficient, as a large fraction of training time and model capacity are dedicated towards samples that will ultimately go unused. In this work we propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place. By refining the empirical data distribution before training, we redirect model capacity towards high-density regions, which ultimately improves sample fidelity, lowers model capacity requirements, and significantly reduces training time. Code is available at https://github.com/uoguelph-mlrg/instance_selection_for_gans.
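A hedged sketch of the dataset-curation step: score each sample by the density of its embedding under a simple Gaussian model and keep only the densest fraction before GAN training. The Gaussian density model and the random "embeddings" are stand-ins; the paper's embedding network and density estimator may differ.

```python
import numpy as np

def select_instances(embeddings, keep_frac=0.5):
    """Keep the densest fraction of the dataset, scored by a Gaussian fit to the
    embedding distribution (a simple stand-in for a learned density model)."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    inv = np.linalg.inv(cov)
    diff = embeddings - mu
    scores = -np.einsum("ij,jk,ik->i", diff, inv, diff)   # log-density up to constants
    k = int(keep_frac * len(embeddings))
    return np.argsort(scores)[::-1][:k]                    # indices of the densest samples

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 16))            # pretend these are image embeddings
kept = select_instances(emb, keep_frac=0.5)
print(len(kept), "of", len(emb), "instances kept for GAN training")
```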
Linear Disentangled Representations and Unsupervised Action Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/9a02387b02ce7de2dac4b925892f68fb-Abstract.html
Matthew Painter, Adam Prugel-Bennett, Jonathon Hare
https://papers.nips.cc/paper_files/paper/2020/hash/9a02387b02ce7de2dac4b925892f68fb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9a02387b02ce7de2dac4b925892f68fb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10839-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9a02387b02ce7de2dac4b925892f68fb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9a02387b02ce7de2dac4b925892f68fb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9a02387b02ce7de2dac4b925892f68fb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9a02387b02ce7de2dac4b925892f68fb-Supplemental.pdf
Disentangled representation learning has seen a surge in interest over recent times, generally focusing on new models which optimise one of many disparate disentanglement metrics. Symmetry Based Disentangled Representation learning introduced a robust mathematical framework that defined precisely what is meant by a ``linear disentangled representation''. This framework determined that such representations would depend on a particular decomposition of the symmetry group acting on the data, showing that actions would manifest through irreducible group representations acting on independent representational subspaces. \citet{forwardvae} subsequently proposed the first model to induce and demonstrate a linear disentangled representation in a VAE model. In this work we empirically show that linear disentangled representations are not present in standard VAE models and that they instead require altering the loss landscape to induce them. We proceed to show that such representations are a desirable property with regard to classical disentanglement metrics. Finally we propose a method to induce irreducible representations which forgoes the need for labelled action sequences, as was required by prior work. We explore a number of properties of this method, including the ability to learn from action sequences without knowledge of intermediate states and robustness under visual noise. We also demonstrate that it can successfully learn 4 independent symmetries directly from pixels.
Video Frame Interpolation without Temporal Priors
https://papers.nips.cc/paper_files/paper/2020/hash/9a11883317fde3aef2e2432a58c86779-Abstract.html
Youjian Zhang, Chaoyue Wang, Dacheng Tao
https://papers.nips.cc/paper_files/paper/2020/hash/9a11883317fde3aef2e2432a58c86779-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9a11883317fde3aef2e2432a58c86779-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10840-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9a11883317fde3aef2e2432a58c86779-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9a11883317fde3aef2e2432a58c86779-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9a11883317fde3aef2e2432a58c86779-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9a11883317fde3aef2e2432a58c86779-Supplemental.zip
Video frame interpolation, which aims to synthesize non-existent intermediate frames in a video sequence, is an important research topic in computer vision. Existing video frame interpolation methods have achieved remarkable results under specific assumptions, such as instant or known exposure time. However, in complicated real-world situations, the temporal priors of videos, i.e. frames per second (FPS) and frame exposure time, may vary across different camera sensors. When test videos are taken under exposure settings different from those of the training ones, the interpolated frames will suffer significant misalignment problems. In this work, we solve the video frame interpolation problem in a general situation, where input frames can be acquired under uncertain exposure (and interval) time. Unlike previous methods that can only be applied to a specific temporal prior, we derive a general curvilinear motion trajectory formula from four consecutive sharp frames or two consecutive blurry frames without temporal priors. Moreover, utilizing constraints within adjacent motion trajectories, we devise a novel optical flow refinement strategy for better interpolation results. Finally, experiments demonstrate that one well-trained model is enough for synthesizing high-quality slow-motion videos under complicated real-world situations. Codes are available at https://github.com/yjzhang96/UTI-VFI.
Learning compositional functions via multiplicative weight updates
https://papers.nips.cc/paper_files/paper/2020/hash/9a32ef65c42085537062753ec435750f-Abstract.html
Jeremy Bernstein, Jiawei Zhao, Markus Meister, Ming-Yu Liu, Anima Anandkumar, Yisong Yue
https://papers.nips.cc/paper_files/paper/2020/hash/9a32ef65c42085537062753ec435750f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9a32ef65c42085537062753ec435750f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10841-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9a32ef65c42085537062753ec435750f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9a32ef65c42085537062753ec435750f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9a32ef65c42085537062753ec435750f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9a32ef65c42085537062753ec435750f-Supplemental.pdf
Compositionality is a basic structural feature of both biological and artificial neural networks. Learning compositional functions via gradient descent incurs well known problems like vanishing and exploding gradients, making careful learning rate tuning essential for real-world applications. This paper proves that multiplicative weight updates satisfy a descent lemma tailored to compositional functions. Based on this lemma, we derive Madam---a multiplicative version of the Adam optimiser---and show that it can train state of the art neural network architectures without learning rate tuning. We further show that Madam is easily adapted to train natively compressed neural networks by representing their weights in a logarithmic number system. We conclude by drawing connections between multiplicative weight updates and recent findings about synapses in biology.
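As a rough illustration of a multiplicative weight update (not the exact Madam rule, whose gradient normalization differs), each weight is scaled by an exponential factor driven by the normalized gradient, so the update effectively acts on the logarithm of the weight magnitude. The toy quadratic problem and hyperparameters are illustrative assumptions.

```python
import numpy as np

def multiplicative_update(w, grad, lr=0.01):
    """One multiplicative weight update: scale each weight by
    exp(-lr * sign(w) * g_hat), where g_hat is the gradient normalized to unit
    average magnitude. A hedged sketch of the idea behind Madam, not the exact rule."""
    g_hat = grad / (np.abs(grad).mean() + 1e-12)
    return w * np.exp(-lr * np.sign(w) * g_hat)

# Toy problem: fit positive weights w to a positive target vector under squared loss.
rng = np.random.default_rng(0)
target = rng.uniform(0.5, 2.0, size=10)
w = np.ones(10)
for step in range(2000):
    grad = 2 * (w - target)
    w = multiplicative_update(w, grad, lr=0.01)
print("max error after training:", np.abs(w - target).max())
```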
Sample Complexity of Uniform Convergence for Multicalibration
https://papers.nips.cc/paper_files/paper/2020/hash/9a96876e2f8f3dc4f3cf45f02c61c0c1-Abstract.html
Eliran Shabat, Lee Cohen, Yishay Mansour
https://papers.nips.cc/paper_files/paper/2020/hash/9a96876e2f8f3dc4f3cf45f02c61c0c1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9a96876e2f8f3dc4f3cf45f02c61c0c1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10842-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9a96876e2f8f3dc4f3cf45f02c61c0c1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9a96876e2f8f3dc4f3cf45f02c61c0c1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9a96876e2f8f3dc4f3cf45f02c61c0c1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9a96876e2f8f3dc4f3cf45f02c61c0c1-Supplemental.pdf
There is a growing interest in societal concerns in machine learning systems, especially in fairness. Multicalibration gives a comprehensive methodology to address group fairness. In this work, we address the multicalibration error and decouple it from the prediction error. The importance of decoupling the fairness metric (multicalibration) and the accuracy (prediction error) is due to the inherent trade-off between the two, and the societal decision regarding the ``right tradeoff'' (as imposed many times by regulators). Our work gives sample complexity bounds for uniform convergence guarantees of the multicalibration error, which imply that regardless of the accuracy, we can guarantee that the empirical and (true) multicalibration errors are close. We emphasize that our results: (1) are more general than previous bounds, as they apply to both agnostic and realizable settings, and do not rely on a specific type of algorithm (such as differentially private ones), (2) improve over previous multicalibration sample complexity bounds, and (3) imply uniform convergence guarantees for the classical calibration error.
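A simple empirical sketch of a multicalibration-style error: for every (group, prediction-bin) cell, compare the mean predicted probability with the empirical label frequency and report the worst gap. The binning, the minimum cell size, and the synthetic data are illustrative assumptions, not the paper's formal definition.

```python
import numpy as np

def multicalibration_error(probs, labels, groups, n_bins=10):
    """Worst-case empirical calibration gap over (group, prediction-bin) cells.

    A simple empirical sketch; the formal definition also thresholds on the
    probability mass of each cell.
    """
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    worst = 0.0
    for g in np.unique(groups):
        for b in range(n_bins):
            cell = (groups == g) & (bins == b)
            if cell.sum() < 10:          # skip near-empty cells
                continue
            gap = abs(probs[cell].mean() - labels[cell].mean())
            worst = max(worst, gap)
    return worst

rng = np.random.default_rng(0)
n = 5000
groups = rng.integers(0, 3, size=n)
probs = rng.uniform(0, 1, size=n)
labels = (rng.uniform(0, 1, size=n) < probs).astype(float)   # calibrated by construction
print("empirical multicalibration error:", round(multicalibration_error(probs, labels, groups), 3))
```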
Differentiable Neural Architecture Search in Equivalent Space with Exploration Enhancement
https://papers.nips.cc/paper_files/paper/2020/hash/9a96a2c73c0d477ff2a6da3bf538f4f4-Abstract.html
Miao Zhang, Huiqi Li, Shirui Pan, Xiaojun Chang, Zongyuan Ge, Steven Su
https://papers.nips.cc/paper_files/paper/2020/hash/9a96a2c73c0d477ff2a6da3bf538f4f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9a96a2c73c0d477ff2a6da3bf538f4f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10843-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9a96a2c73c0d477ff2a6da3bf538f4f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9a96a2c73c0d477ff2a6da3bf538f4f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9a96a2c73c0d477ff2a6da3bf538f4f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9a96a2c73c0d477ff2a6da3bf538f4f4-Supplemental.pdf
Recent works on One-Shot Neural Architecture Search (NAS) mostly adopt a bilevel optimization scheme to alternately optimize the supernet weights and architecture parameters after relaxing the discrete search space into a differentiable space. However, the non-negligible incongruence in their relaxation methods makes it hard to guarantee that the differentiable optimization in the continuous space is equivalent to the optimization in the discrete space. In contrast, this paper utilizes a variational graph autoencoder to injectively transform the discrete architecture space into an equivalent continuous latent space, resolving the incongruence. A probabilistic exploration enhancement method is accordingly devised to encourage intelligent exploration during the architecture search in the latent space and to avoid local optima. As catastrophic forgetting in differentiable One-Shot NAS deteriorates the supernet's predictive ability and makes the bilevel optimization inefficient, this paper further proposes an architecture complementation method to relieve this deficiency. We analyze the effectiveness of the proposed method, and a series of experiments has been conducted to compare it with state-of-the-art One-Shot NAS methods.
The interplay between randomness and structure during learning in RNNs
https://papers.nips.cc/paper_files/paper/2020/hash/9ac1382fd8fc4b631594aa135d16ad75-Abstract.html
Friedrich Schuessler, Francesca Mastrogiuseppe, Alexis Dubreuil, Srdjan Ostojic, Omri Barak
https://papers.nips.cc/paper_files/paper/2020/hash/9ac1382fd8fc4b631594aa135d16ad75-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9ac1382fd8fc4b631594aa135d16ad75-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10844-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9ac1382fd8fc4b631594aa135d16ad75-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9ac1382fd8fc4b631594aa135d16ad75-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9ac1382fd8fc4b631594aa135d16ad75-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9ac1382fd8fc4b631594aa135d16ad75-Supplemental.pdf
Training recurrent neural networks (RNNs) on low-dimensional tasks has been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices. This observation holds even in the presence of random initial connectivity, although this initial connectivity has full rank and significantly accelerates training. To understand the origin of these observations, we turn to an analytically tractable setting: training a linear RNN on a simpler task. We show how the low-dimensional task structure leads to low-rank changes to connectivity, and how random initial connectivity facilitates learning. Altogether, our study opens a new perspective to understand learning in RNNs in light of low-rank connectivity changes and the synergistic role of random initialization.
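The low-rank observation can be checked directly from a trained network by inspecting the singular value spectrum of the connectivity change. The snippet below sketches that diagnostic; since no training loop is included here, the change matrix is simulated as rank-2 plus noise, a stand-in for W_trained - W_init from an actual run.

```python
import numpy as np

def effective_rank_profile(delta_W, top=10):
    """Fraction of the connectivity change captured by the top singular directions."""
    s = np.linalg.svd(delta_W, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return energy[:top]

# Stand-in for W_trained - W_init: a rank-2 update plus small noise
# (in practice delta_W would come from an actual training run).
rng = np.random.default_rng(0)
N = 256
low_rank = rng.standard_normal((N, 2)) @ rng.standard_normal((2, N)) / np.sqrt(N)
delta_W = low_rank + 0.01 * rng.standard_normal((N, N))
print(np.round(effective_rank_profile(delta_W, top=5), 3))
```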
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/9afe487de556e59e6db6c862adfe25a4-Abstract.html
Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/9afe487de556e59e6db6c862adfe25a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9afe487de556e59e6db6c862adfe25a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10845-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9afe487de556e59e6db6c862adfe25a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9afe487de556e59e6db6c862adfe25a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9afe487de556e59e6db6c862adfe25a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9afe487de556e59e6db6c862adfe25a4-Supplemental.pdf
A recent breakthrough in deep learning theory shows that the training of over-parameterized deep neural networks can be characterized by a kernel function called the \textit{neural tangent kernel} (NTK). However, it is known that this type of result does not perfectly match practice, as NTK-based analysis requires the network weights to stay very close to their initialization throughout training, and cannot handle regularizers or gradient noise. In this paper, we provide a generalized neural tangent kernel analysis and show that noisy gradient descent with weight decay can still exhibit a ``kernel-like'' behavior. This implies that the training loss converges linearly up to a certain accuracy. We also establish a novel generalization error bound for two-layer neural networks trained by noisy gradient descent with weight decay.
Instance-wise Feature Grouping
https://papers.nips.cc/paper_files/paper/2020/hash/9b10a919ddeb07e103dc05ff523afe38-Abstract.html
Aria Masoomi, Chieh Wu, Tingting Zhao, Zifeng Wang, Peter Castaldi, Jennifer Dy
https://papers.nips.cc/paper_files/paper/2020/hash/9b10a919ddeb07e103dc05ff523afe38-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b10a919ddeb07e103dc05ff523afe38-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10846-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b10a919ddeb07e103dc05ff523afe38-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b10a919ddeb07e103dc05ff523afe38-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b10a919ddeb07e103dc05ff523afe38-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b10a919ddeb07e103dc05ff523afe38-Supplemental.zip
In many learning problems, the domain scientist is often interested in discovering the groups of features that are redundant and are important for classification. Moreover, the features that belong to each group, and the important feature groups may vary per sample. But what do we mean by feature redundancy? In this paper, we formally define two types of redundancies using information theory: \textit{Representation} and \textit{Relevant redundancies}. We leverage these redundancies to design a formulation for instance-wise feature group discovery and reveal a theoretical guideline to help discover the appropriate number of groups. We approximate mutual information via a variational lower bound and learn the feature group and selector indicators with Gumbel-Softmax in optimizing our formulation. Experiments on synthetic data validate our theoretical claims. Experiments on MNIST, Fashion MNIST, and gene expression datasets show that our method discovers feature groups with high classification accuracies.
Robust Disentanglement of a Few Factors at a Time using rPU-VAE
https://papers.nips.cc/paper_files/paper/2020/hash/9b22a40256b079f338827b0ff1f4792b-Abstract.html
Benjamin Estermann, Markus Marks, Mehmet Fatih Yanik
https://papers.nips.cc/paper_files/paper/2020/hash/9b22a40256b079f338827b0ff1f4792b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b22a40256b079f338827b0ff1f4792b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10847-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b22a40256b079f338827b0ff1f4792b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b22a40256b079f338827b0ff1f4792b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b22a40256b079f338827b0ff1f4792b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b22a40256b079f338827b0ff1f4792b-Supplemental.pdf
Disentanglement is at the forefront of unsupervised learning, as disentangled representations of data improve generalization, interpretability, and performance in downstream tasks. Current unsupervised approaches remain inapplicable for real-world datasets since they are highly variable in their performance and fail to reach levels of disentanglement of (semi-)supervised approaches. We introduce population-based training (PBT) for improving consistency in training variational autoencoders (VAEs) and demonstrate the validity of this approach in a supervised setting (PBT-VAE). We then use Unsupervised Disentanglement Ranking (UDR) as an unsupervised heuristic to score models in our PBT-VAE training and show how models trained this way tend to consistently disentangle only a subset of the generative factors. Building on top of this observation we introduce the recursive rPU-VAE approach. We train the model until convergence, remove the learned factors from the dataset and reiterate. In doing so, we can label subsets of the dataset with the learned factors and consecutively use these labels to train one model that fully disentangles the whole dataset. With this approach, we show striking improvement in state-of-the-art unsupervised disentanglement performance and robustness across multiple datasets and metrics.
PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning
https://papers.nips.cc/paper_files/paper/2020/hash/9b3a9fb4db30fc6594ec3990cbc09932-Abstract.html
Alekh Agarwal, Mikael Henaff, Sham Kakade, Wen Sun
https://papers.nips.cc/paper_files/paper/2020/hash/9b3a9fb4db30fc6594ec3990cbc09932-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b3a9fb4db30fc6594ec3990cbc09932-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10848-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b3a9fb4db30fc6594ec3990cbc09932-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b3a9fb4db30fc6594ec3990cbc09932-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b3a9fb4db30fc6594ec3990cbc09932-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b3a9fb4db30fc6594ec3990cbc09932-Supplemental.pdf
Direct policy gradient methods for reinforcement learning are a successful approach for a variety of reasons: they are model free, they directly optimize the performance metric of interest, and they allow for richly parameterized policies. Their primary drawback is that, by being local in nature, they fail to adequately explore the environment. In contrast, while model-based approaches and Q-learning can, at least in theory, directly handle exploration through the use of optimism, their ability to handle model misspecification and function approximation is far less evident. This work introduces the POLICY COVER GUIDED POLICY GRADIENT (PC-PG) algorithm, which provably balances the exploration vs. exploitation tradeoff using an ensemble of learned policies (the policy cover). PC-PG enjoys polynomial sample complexity and run time for both tabular MDPs and, more generally, linear MDPs in an infinite-dimensional RKHS. Furthermore, PC-PG also has strong guarantees under model misspecification that go beyond the standard worst-case $\ell_\infty$ assumptions; these include approximation guarantees for state aggregation under an average-case error assumption, along with guarantees under a more general assumption where the approximation error under distribution shift is controlled. We complement the theory with empirical evaluation across a variety of domains in both reward-free and reward-driven settings.
Group Contextual Encoding for 3D Point Clouds
https://papers.nips.cc/paper_files/paper/2020/hash/9b72e31dac81715466cd580a448cf823-Abstract.html
Xu Liu, Chengtao Li, Jian Wang, Jingbo Wang, Boxin Shi, Xiaodong He
https://papers.nips.cc/paper_files/paper/2020/hash/9b72e31dac81715466cd580a448cf823-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b72e31dac81715466cd580a448cf823-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10849-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b72e31dac81715466cd580a448cf823-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b72e31dac81715466cd580a448cf823-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b72e31dac81715466cd580a448cf823-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b72e31dac81715466cd580a448cf823-Supplemental.pdf
Global context is crucial for 3D point cloud scene understanding tasks. In this work, we extend the contextual encoding layer that was originally designed for 2D tasks to 3D point cloud scenarios. The encoding layer learns a set of code words in the feature space of the 3D point cloud to characterize the global semantic context, and then, based on these code words, learns a global contextual descriptor to reweight the feature maps accordingly. Moreover, compared to 2D scenarios, data sparsity becomes a major issue in 3D point cloud scenarios, and the performance of contextual encoding quickly saturates when the number of code words increases. To mitigate this problem, we further propose a group contextual encoding method, which divides the channels into groups and then performs encoding on the group-divided feature vectors. This method facilitates learning of the global context in grouped subspaces for 3D point clouds. We evaluate the effectiveness and generalizability of our method on three widely studied 3D point cloud tasks. Experimental results show that the proposed method outperforms VoteNet remarkably, by 3 mAP on the SUN-RGBD benchmark (mAP@0.25) and by a much greater margin of 6.57 mAP on ScanNet (mAP@0.5). Compared to the PointNet++ baseline, the proposed method achieves an accuracy of 86%, outperforming the baseline by 1.5%. Our proposed method outperforms the non-grouping baseline methods across the board and establishes a new state of the art on these benchmarks.
Latent Bandits Revisited
https://papers.nips.cc/paper_files/paper/2020/hash/9b7c8d13e4b2f08895fb7bcead930b46-Abstract.html
Joey Hong, Branislav Kveton, Manzil Zaheer, Yinlam Chow, Amr Ahmed, Craig Boutilier
https://papers.nips.cc/paper_files/paper/2020/hash/9b7c8d13e4b2f08895fb7bcead930b46-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b7c8d13e4b2f08895fb7bcead930b46-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10850-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b7c8d13e4b2f08895fb7bcead930b46-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b7c8d13e4b2f08895fb7bcead930b46-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b7c8d13e4b2f08895fb7bcead930b46-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b7c8d13e4b2f08895fb7bcead930b46-Supplemental.pdf
A latent bandit is a bandit problem where the learning agent knows the reward distributions of the arms conditioned on an unknown discrete latent state. The goal of the agent is to identify the latent state, after which it can act optimally. This setting is a natural midpoint between online and offline learning, where complex models can be learned offline and the agent identifies the latent state online. This is of high practical relevance, for instance in recommender systems. In this work, we propose general algorithms for latent bandits, based on both upper confidence bounds and Thompson sampling. The algorithms are contextual, and aware of model uncertainty and misspecification. We provide a unified theoretical analysis of our algorithms, which have lower regret than classic bandit policies when the number of latent states is smaller than the number of actions. A comprehensive empirical study showcases the advantages of our approach.
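A toy Thompson-sampling-flavored sketch of the setting: per-arm reward means conditioned on each latent state are known offline, a posterior over latent states is updated from Gaussian reward observations, and the agent acts optimally for a sampled state. The reward model, noise level, and update rule below are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline model: mean reward of each arm conditioned on each latent state.
means = np.array([[0.9, 0.1, 0.2],     # latent state 0
                  [0.1, 0.8, 0.3],     # latent state 1
                  [0.2, 0.2, 0.7]])    # latent state 2
n_states, n_arms = means.shape
true_state, noise = 1, 0.3

log_post = np.zeros(n_states)           # uniform prior over latent states
total_reward = 0.0
for t in range(500):
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    s = rng.choice(n_states, p=post)    # Thompson sampling over latent states
    arm = int(np.argmax(means[s]))      # act optimally for the sampled state
    r = means[true_state, arm] + noise * rng.standard_normal()
    total_reward += r
    # Gaussian likelihood update of the latent-state posterior.
    log_post += -0.5 * ((r - means[:, arm]) / noise) ** 2

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("average reward:", round(total_reward / 500, 3), "| posterior over latent states:", np.round(post, 3))
```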
Is normalization indispensable for training deep neural network?
https://papers.nips.cc/paper_files/paper/2020/hash/9b8619251a19057cff70779273e95aa6-Abstract.html
Jie Shao, Kai Hu, Changhu Wang, Xiangyang Xue, Bhiksha Raj
https://papers.nips.cc/paper_files/paper/2020/hash/9b8619251a19057cff70779273e95aa6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b8619251a19057cff70779273e95aa6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10851-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b8619251a19057cff70779273e95aa6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b8619251a19057cff70779273e95aa6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b8619251a19057cff70779273e95aa6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b8619251a19057cff70779273e95aa6-Supplemental.pdf
Normalization operations are widely used to train deep neural networks, and they can improve both convergence and generalization in most tasks. Theories for normalization's effectiveness and new forms of normalization have always been hot research topics. To better understand normalization, one natural question is whether normalization is indispensable for training deep neural networks. In this paper, we study what happens when normalization layers are removed from the network, and show how to train deep neural networks without normalization layers and without performance degradation. Our proposed method can achieve the same or even slightly better performance in a variety of tasks: image classification on ImageNet, object detection and segmentation on MS-COCO, video classification on Kinetics, machine translation on WMT English-German, etc. Our study may help better understand the role of normalization layers, and our method can serve as a competitive alternative to them. Code is available.
Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions
https://papers.nips.cc/paper_files/paper/2020/hash/9b8b50fb590c590ffbf1295ce92258dc-Abstract.html
Stefano Sarao Mannelli, Eric Vanden-Eijnden, Lenka Zdeborová
https://papers.nips.cc/paper_files/paper/2020/hash/9b8b50fb590c590ffbf1295ce92258dc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9b8b50fb590c590ffbf1295ce92258dc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10852-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9b8b50fb590c590ffbf1295ce92258dc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9b8b50fb590c590ffbf1295ce92258dc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9b8b50fb590c590ffbf1295ce92258dc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9b8b50fb590c590ffbf1295ce92258dc-Supplemental.pdf
These results are confirmed by numerical experiments.
Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/9bc99c590be3511b8d53741684ef574c-Abstract.html
Amir Rahimi, Amirreza Shaban, Ching-An Cheng, Richard Hartley, Byron Boots
https://papers.nips.cc/paper_files/paper/2020/hash/9bc99c590be3511b8d53741684ef574c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9bc99c590be3511b8d53741684ef574c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10853-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9bc99c590be3511b8d53741684ef574c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9bc99c590be3511b8d53741684ef574c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9bc99c590be3511b8d53741684ef574c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9bc99c590be3511b8d53741684ef574c-Supplemental.pdf
Predicting calibrated confidence scores for multi-class deep networks is important for avoiding rare but costly mistakes. A common approach is to learn a post-hoc calibration function that transforms the output of the original network into calibrated confidence scores while maintaining the network's accuracy. However, previous post-hoc calibration techniques work only with simple calibration functions, potentially lacking sufficient representation to calibrate the complex function landscape of deep networks. In this work, we aim to learn general post-hoc calibration functions that can preserve the top-k predictions of any deep network. We call this family of functions intra order-preserving functions. We propose a new neural network architecture that represents a class of intra order-preserving functions by combining common neural network components. Additionally, we introduce order-invariant and diagonal sub-families, which can act as regularization for better generalization when the training data size is small. We show the effectiveness of the proposed method across a wide range of datasets and classifiers. Our method outperforms state-of-the-art post-hoc calibration methods, namely temperature scaling and Dirichlet calibration, in several evaluation metrics for the task.
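For context, the sketch below implements temperature scaling, one of the post-hoc calibration baselines named above; it is itself a single-parameter intra order-preserving map, since dividing logits by a positive scalar never changes the top-k ranking. The synthetic logits and labels are placeholders.

```python
# Temperature-scaling calibration sketch: fit a single scalar T on held-out data.
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Fit T > 0 on a validation set by minimizing the negative log-likelihood."""
    res = minimize_scalar(nll, bounds=(0.05, 20.0), args=(logits, labels), method="bounded")
    return res.x

rng = np.random.default_rng(0)
logits = rng.normal(size=(500, 10)) * 3.0        # over-confident synthetic logits
labels = rng.integers(0, 10, size=500)
print("fitted T:", fit_temperature(logits, labels))
```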
Linear Time Sinkhorn Divergences using Positive Features
https://papers.nips.cc/paper_files/paper/2020/hash/9bde76f262285bb1eaeb7b40c758b53e-Abstract.html
Meyer Scetbon, Marco Cuturi
https://papers.nips.cc/paper_files/paper/2020/hash/9bde76f262285bb1eaeb7b40c758b53e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9bde76f262285bb1eaeb7b40c758b53e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10854-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9bde76f262285bb1eaeb7b40c758b53e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9bde76f262285bb1eaeb7b40c758b53e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9bde76f262285bb1eaeb7b40c758b53e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9bde76f262285bb1eaeb7b40c758b53e-Supplemental.pdf
Although Sinkhorn divergences are now routinely used in data sciences to compare probability distributions, the computational effort required to compute them remains expensive, growing in general quadratically in the size $n$ of the support of these distributions. Indeed, solving optimal transport (OT) with an entropic regularization requires computing an $n\times n$ kernel matrix (the neg-exponential of an $n\times n$ pairwise ground cost matrix) that is repeatedly applied to a vector. We propose to use instead ground costs of the form $c(x,y)=-\log\langle\varphi(x),\varphi(y)\rangle$ where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r\ll n$. This choice yields, equivalently, a kernel $k(x,y)=\langle\varphi(x),\varphi(y)\rangle$, and ensures that the cost of Sinkhorn iterations scales as $O(nr)$. We show that usual cost functions can be approximated using this form. Additionally, we take advantage of the fact that our approach yields approximations that remain fully differentiable with respect to input distributions, as opposed to previously proposed adaptive low-rank approximations of the kernel matrix, to train a faster variant of OT-GAN~\cite{salimans2018improving}.
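A minimal sketch of the computational point made above: when the kernel factorizes through nonnegative features of dimension $r \ll n$, each Sinkhorn iteration only needs matrix-vector products with the factors. The random nonnegative features below are purely illustrative, not the feature maps proposed in the paper.

```python
# Sinkhorn iterations with a factored kernel K = Phi_x @ Phi_y.T, cost O(nr) per step.
import numpy as np

def lowrank_sinkhorn(phi_x, phi_y, a, b, n_iter=200):
    """phi_x: (n, r), phi_y: (m, r) nonnegative feature maps; a, b: marginals."""
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        Kv = phi_x @ (phi_y.T @ v)       # O(nr + mr) instead of O(nm)
        u = a / Kv
        Ktu = phi_y @ (phi_x.T @ u)
        v = b / Ktu
    return u, v

rng = np.random.default_rng(0)
n, m, r = 1000, 800, 16
phi_x = np.abs(rng.normal(size=(n, r)))   # positive features (illustrative only)
phi_y = np.abs(rng.normal(size=(m, r)))
a = np.full(n, 1.0 / n)
b = np.full(m, 1.0 / m)
u, v = lowrank_sinkhorn(phi_x, phi_y, a, b)
print("marginal error:", np.abs(u * (phi_x @ (phi_y.T @ v)) - a).max())
```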
VarGrad: A Low-Variance Gradient Estimator for Variational Inference
https://papers.nips.cc/paper_files/paper/2020/hash/9c22c0b51b3202246463e986c7e205df-Abstract.html
Lorenz Richter, Ayman Boustati, Nikolas Nüsken, Francisco Ruiz, Omer Deniz Akyildiz
https://papers.nips.cc/paper_files/paper/2020/hash/9c22c0b51b3202246463e986c7e205df-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9c22c0b51b3202246463e986c7e205df-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10855-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9c22c0b51b3202246463e986c7e205df-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9c22c0b51b3202246463e986c7e205df-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9c22c0b51b3202246463e986c7e205df-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9c22c0b51b3202246463e986c7e205df-Supplemental.pdf
We analyse the properties of an unbiased gradient estimator of the ELBO for variational inference, based on the score function method with leave-one-out control variates. We show that this gradient estimator can be obtained using a new loss, defined as the variance of the log-ratio between the exact posterior and the variational approximation, which we call the log-variance loss. Under certain conditions, the gradient of the log-variance loss equals the gradient of the (negative) ELBO. We show theoretically that this gradient estimator, which we call VarGrad due to its connection to the log-variance loss, exhibits lower variance than the score function method in certain settings, and that the leave-one-out control variate coefficients are close to the optimal ones. We empirically demonstrate that VarGrad offers a favourable variance versus computation trade-off compared to other state-of-the-art estimators on a discrete VAE.
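The toy sketch below illustrates the leave-one-out score-function estimator discussed above for a one-dimensional Gaussian variational family; the target density, step size, and sample count are illustrative assumptions, not the paper's experimental setup.

```python
# Leave-one-out score-function gradient of the ELBO for q(z) = N(theta, 1).
import numpy as np

def log_p(z):                               # unnormalized target density: N(2, 1)
    return -0.5 * (z - 2.0) ** 2

rng = np.random.default_rng(0)
theta, lr, num_samples = 0.0, 0.1, 64
for _ in range(200):
    z = theta + rng.normal(size=num_samples)              # z ~ q = N(theta, 1)
    f = log_p(z) - (-0.5 * (z - theta) ** 2)               # log-ratio log p(z) - log q(z), up to constants
    score = z - theta                                      # d/dtheta log q(z) for a unit-variance Gaussian
    loo_mean = (f.sum() - f) / (num_samples - 1)           # leave-one-out control variate
    grad = np.mean((f - loo_mean) * score)                 # score-function gradient estimate of the ELBO
    theta += lr * grad                                     # gradient ascent on the ELBO
print("theta after ascent (optimum is 2):", round(theta, 2))
```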
A Convolutional Auto-Encoder for Haplotype Assembly and Viral Quasispecies Reconstruction
https://papers.nips.cc/paper_files/paper/2020/hash/9c449771d0edc923c2713a7462cefa3b-Abstract.html
Ziqi Ke, Haris Vikalo
https://papers.nips.cc/paper_files/paper/2020/hash/9c449771d0edc923c2713a7462cefa3b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9c449771d0edc923c2713a7462cefa3b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10856-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9c449771d0edc923c2713a7462cefa3b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9c449771d0edc923c2713a7462cefa3b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9c449771d0edc923c2713a7462cefa3b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9c449771d0edc923c2713a7462cefa3b-Supplemental.zip
Haplotype assembly and viral quasispecies reconstruction are challenging tasks concerned with analysis of genomic mixtures using sequencing data. High-throughput sequencing technologies generate enormous amounts of short fragments (reads) which essentially oversample components of a mixture; the representation redundancy enables reconstruction of the components (haplotypes, viral strains). The reconstruction problem, known to be NP-hard, boils down to grouping together reads originating from the same component in a mixture. Existing methods struggle to solve this problem with the required level of accuracy and low runtimes; the problem becomes increasingly challenging as the number and length of the components increase. This paper proposes a read clustering method based on a convolutional auto-encoder designed to first project sequenced fragments to a low-dimensional space and then estimate the probability of the read origin using learned embedded features. The components are reconstructed by finding consensus sequences that agglomerate reads from the same origin. Mini-batch stochastic gradient descent and dimension reduction of reads allow the proposed method to efficiently deal with massive numbers of long reads. Experiments on simulated, semi-experimental and experimental data demonstrate the ability of the proposed method to accurately reconstruct haplotypes and viral quasispecies, often demonstrating superior performance compared to state-of-the-art methods. Source codes are available at https://github.com/WuLoli/CAECseq.
Promoting Stochasticity for Expressive Policies via a Simple and Efficient Regularization Method
https://papers.nips.cc/paper_files/paper/2020/hash/9cafd121ba982e6de30ffdf5ada9ce2e-Abstract.html
Qi Zhou, Yufei Kuang, Zherui Qiu, Houqiang Li, Jie Wang
https://papers.nips.cc/paper_files/paper/2020/hash/9cafd121ba982e6de30ffdf5ada9ce2e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9cafd121ba982e6de30ffdf5ada9ce2e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10857-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9cafd121ba982e6de30ffdf5ada9ce2e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9cafd121ba982e6de30ffdf5ada9ce2e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9cafd121ba982e6de30ffdf5ada9ce2e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9cafd121ba982e6de30ffdf5ada9ce2e-Supplemental.zip
Many recent reinforcement learning (RL) methods learn stochastic policies with entropy regularization for exploration and robustness. However, in continuous action spaces, integrating entropy regularization with expressive policies is challenging and usually requires complex inference procedures. To tackle this problem, we propose a novel regularization method that is compatible with a broad range of expressive policy architectures. An appealing feature is that the estimation of our regularization terms is simple and efficient even when the policy distributions are unknown. We show that our approach can effectively promote exploration in continuous action spaces. Based on our regularization, we propose an off-policy actor-critic algorithm. Experiments demonstrate that the proposed algorithm outperforms state-of-the-art regularized RL methods in continuous control tasks.
Adversarial Counterfactual Learning and Evaluation for Recommender System
https://papers.nips.cc/paper_files/paper/2020/hash/9cd013fe250ebffc853b386569ab18c0-Abstract.html
Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, Kannan Achan
https://papers.nips.cc/paper_files/paper/2020/hash/9cd013fe250ebffc853b386569ab18c0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9cd013fe250ebffc853b386569ab18c0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10858-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9cd013fe250ebffc853b386569ab18c0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9cd013fe250ebffc853b386569ab18c0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9cd013fe250ebffc853b386569ab18c0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9cd013fe250ebffc853b386569ab18c0-Supplemental.zip
The feedback data of recommender systems are often subject to what was exposed to the users; however, most learning and evaluation methods do not account for the underlying exposure mechanism. We first show in theory that applying supervised learning to detect user preferences may end up with inconsistent results in the absence of exposure information. The counterfactual propensity-weighting approach from causal inference can account for the exposure mechanism; nevertheless, the partial-observation nature of the feedback data can cause identifiability issues. We propose a principled solution by introducing a minimax empirical risk formulation. We show that the relaxation of the dual problem can be converted to an adversarial game between two recommendation models, where the opponent of the candidate model characterizes the underlying exposure mechanism. We provide learning bounds and conduct extensive simulation studies to illustrate and justify the proposed approach over a broad range of recommendation settings, which shed light on the various benefits of the proposed approach.
Memory-Efficient Learning of Stable Linear Dynamical Systems for Prediction and Control
https://papers.nips.cc/paper_files/paper/2020/hash/9cd78264cf2cd821ba651485c111a29a-Abstract.html
Giorgos ('Yorgos') Mamakoukas, Orest Xherija, Todd Murphey
https://papers.nips.cc/paper_files/paper/2020/hash/9cd78264cf2cd821ba651485c111a29a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9cd78264cf2cd821ba651485c111a29a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10859-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9cd78264cf2cd821ba651485c111a29a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9cd78264cf2cd821ba651485c111a29a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9cd78264cf2cd821ba651485c111a29a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9cd78264cf2cd821ba651485c111a29a-Supplemental.pdf
Learning a stable Linear Dynamical System (LDS) from data involves creating models that both minimize reconstruction error and enforce stability of the learned representation. We propose a novel algorithm for learning stable LDSs. Using a recent characterization of stable matrices, we present an optimization method that ensures stability at every step and iteratively improves the reconstruction error using gradient directions derived in this paper. When applied to LDSs with inputs, our approach---in contrast to current methods for learning stable LDSs---updates both the state and control matrices, expanding the solution space and allowing for models with lower reconstruction error. We apply our algorithm in simulations and experiments to a variety of problems, including learning dynamic textures from image sequences and controlling a robotic manipulator. Compared to existing approaches, our proposed method achieves an \textit{orders-of-magnitude} improvement in reconstruction error and superior results in terms of control performance. In addition, it is \textit{provably} more memory efficient, with an $\mathcal{O}(n^2)$ space complexity compared to $\mathcal{O}(n^4)$ of competing alternatives, thus scaling to higher-dimensional systems when the other methods fail. The code of the proposed algorithm and animations of the results can be found at https://github.com/giorgosmamakoukas/MemoryEfficientStableLDS.
Evolving Normalization-Activation Layers
https://papers.nips.cc/paper_files/paper/2020/hash/9d4c03631b8b0c85ae08bf05eda37d0f-Abstract.html
Hanxiao Liu, Andy Brock, Karen Simonyan, Quoc Le
https://papers.nips.cc/paper_files/paper/2020/hash/9d4c03631b8b0c85ae08bf05eda37d0f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d4c03631b8b0c85ae08bf05eda37d0f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10860-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d4c03631b8b0c85ae08bf05eda37d0f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d4c03631b8b0c85ae08bf05eda37d0f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d4c03631b8b0c85ae08bf05eda37d0f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d4c03631b8b0c85ae08bf05eda37d0f-Supplemental.pdf
Normalization layers and activation functions are fundamental components in deep networks and typically co-locate with each other. Here we propose to design them using an automated approach. Instead of designing them separately, we unify them into a single tensor-to-tensor computation graph, and evolve its structure starting from basic mathematical functions. Examples of such mathematical functions are addition, multiplication and statistical moments. The use of low-level mathematical functions, in contrast to the use of high-level modules in mainstream NAS, leads to a highly sparse and large search space which can be challenging for search methods. To address the challenge, we develop efficient rejection protocols to quickly filter out candidate layers that do not work well. We also use multi-objective evolution to optimize each layer's performance across many architectures to prevent overfitting. Our method leads to the discovery of EvoNorms, a set of new normalization-activation layers with novel, and sometimes surprising, structures that go beyond existing design patterns. For example, some EvoNorms do not assume that normalization and activation functions must be applied sequentially, nor need to center the feature maps, nor require explicit activation functions. Our experiments show that EvoNorms not only work well on image classification models including ResNets, MobileNets and EfficientNets, but also transfer well to Mask R-CNN with FPN/SpineNet for instance segmentation and to BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers in many cases.
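As a concrete example, the sketch below implements our reading of the sample-based EvoNorm-S0 variant (sigmoid gating divided by a grouped standard deviation, with no separate activation); the shapes, group count and parameter initialisation are illustrative assumptions.

```python
# EvoNorm-S0-style layer sketch in numpy, assuming NCHW input.
import numpy as np

def evonorm_s0(x, gamma, beta, v, groups=8, eps=1e-5):
    n, c, h, w = x.shape
    gated = x * (1.0 / (1.0 + np.exp(-v * x)))                      # x * sigmoid(v * x)
    xg = x.reshape(n, groups, c // groups, h, w)
    group_std = np.sqrt(xg.var(axis=(2, 3, 4), keepdims=True) + eps)
    group_std = np.broadcast_to(group_std, xg.shape).reshape(n, c, h, w)
    return gated / group_std * gamma + beta                         # affine, no extra activation

x = np.random.default_rng(0).normal(size=(2, 16, 8, 8))
gamma = np.ones((1, 16, 1, 1)); beta = np.zeros((1, 16, 1, 1)); v = np.ones((1, 16, 1, 1))
print(evonorm_s0(x, gamma, beta, v).shape)
```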
ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
https://papers.nips.cc/paper_files/paper/2020/hash/9d58963592071dbf38a0fa114269959c-Abstract.html
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, Xiao Sun, Naigang Wang, Swagath Venkataramani, Vijayalakshmi (Viji) Srinivasan, Wei Zhang, Kailash Gopalakrishnan
https://papers.nips.cc/paper_files/paper/2020/hash/9d58963592071dbf38a0fa114269959c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d58963592071dbf38a0fa114269959c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10861-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d58963592071dbf38a0fa114269959c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d58963592071dbf38a0fa114269959c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d58963592071dbf38a0fa114269959c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d58963592071dbf38a0fa114269959c-Supplemental.pdf
Large-scale distributed training of Deep Neural Networks (DNNs) on state-of-the-art platforms is expected to be severely communication constrained. To overcome this limitation, numerous gradient compression techniques have been proposed and have demonstrated high compression ratios. However, most existing compression methods do not scale well to large-scale distributed systems (due to gradient build-up) and/or lack evaluation on large datasets. To mitigate these issues, we propose a new compression technique, Scalable Sparsified Gradient Compression (ScaleCom), that (i) leverages similarity in the gradient distribution amongst learners to provide a commutative compressor and keep the communication cost constant with respect to the number of workers, and (ii) includes a low-pass filter in local gradient accumulation to mitigate the impact of large-batch training and significantly improve scalability. Using theoretical analysis, we show that ScaleCom provides favorable convergence guarantees and is compatible with gradient all-reduce techniques. Furthermore, we experimentally demonstrate that ScaleCom has small overheads, directly reduces gradient traffic, and provides high compression rates (70-150X) and excellent scalability (up to 64-80 learners and 10X larger batch sizes over normal training) across a wide range of applications (image, language, and speech) without significant accuracy loss.
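The sketch below is not the ScaleCom compressor itself; it only illustrates two generic ingredients mentioned above, namely top-k sparsification with a local residual accumulation (error feedback) and a low-pass filter applied to that accumulation. All constants are illustrative.

```python
# Generic sparsified-gradient sketch: low-pass-filtered accumulation + top-k selection.
import numpy as np

def compress_step(grad, acc, k, beta=0.1):
    """Return the sparse update to send and the filtered residual accumulation to keep."""
    acc = (1.0 - beta) * acc + beta * grad          # low-pass filter of local gradients
    idx = np.argpartition(np.abs(acc), -k)[-k:]     # top-k entries by magnitude
    sparse = np.zeros_like(acc)
    sparse[idx] = acc[idx]
    acc = acc - sparse                              # keep the un-sent residual locally
    return sparse, acc

rng = np.random.default_rng(0)
acc = np.zeros(10_000)
for step in range(5):
    grad = rng.normal(size=10_000)
    sparse, acc = compress_step(grad, acc, k=100)
    print(step, "sent entries:", np.count_nonzero(sparse),
          "residual norm:", round(np.linalg.norm(acc), 2))
```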
RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
https://papers.nips.cc/paper_files/paper/2020/hash/9d684c589d67031a627ad33d59db65e5-Abstract.html
Cheng Chi, Fangyun Wei, Han Hu
https://papers.nips.cc/paper_files/paper/2020/hash/9d684c589d67031a627ad33d59db65e5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d684c589d67031a627ad33d59db65e5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10862-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d684c589d67031a627ad33d59db65e5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d684c589d67031a627ad33d59db65e5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d684c589d67031a627ad33d59db65e5-Review.html
null
Existing object detection frameworks are usually built on a single format of object/part representation, i.e., anchor/proposal rectangle boxes in RetinaNet and Faster R-CNN, center points in FCOS and RepPoints, and corner points in CornerNet. While these different representations usually drive the frameworks to perform well in different aspects, e.g., better classification or finer localization, it is in general difficult to combine these representations in a single framework to make good use of each strength, due to the heterogeneous or non-grid feature extraction by different representations. This paper presents an attention-based decoder module similar to that in the Transformer~\cite{vaswani2017attention} to bridge other representations into a typical object detector built on a single representation format, in an end-to-end fashion. The other representations act as a set of \emph{key} instances to strengthen the main \emph{query} representation features in the vanilla detectors. Novel techniques are proposed towards efficient computation of the decoder module, including a \emph{key sampling} approach and a \emph{shared location embedding} approach. The proposed module is named \emph{bridging visual representations} (BVR). It can be applied in-place, and we demonstrate its broad effectiveness in bridging other representations into prevalent object detection frameworks, including RetinaNet, Faster R-CNN, FCOS and ATSS, where about $1.5\sim3.0$ AP improvements are achieved. In particular, we improve a state-of-the-art framework with a strong backbone by about $2.0$ AP, reaching $52.7$ AP on COCO test-dev. The resulting network is named RelationNet++. The code is available at \url{https://github.com/microsoft/RelationNet2}.
Efficient Learning of Discrete Graphical Models
https://papers.nips.cc/paper_files/paper/2020/hash/9d702ffd99ad9c70ac37e506facc8c38-Abstract.html
Marc Vuffray, Sidhant Misra, Andrey Lokhov
https://papers.nips.cc/paper_files/paper/2020/hash/9d702ffd99ad9c70ac37e506facc8c38-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d702ffd99ad9c70ac37e506facc8c38-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10863-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d702ffd99ad9c70ac37e506facc8c38-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d702ffd99ad9c70ac37e506facc8c38-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d702ffd99ad9c70ac37e506facc8c38-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d702ffd99ad9c70ac37e506facc8c38-Supplemental.pdf
Graphical models are useful tools for describing structured high-dimensional probability distributions. Development of efficient algorithms for learning graphical models with the least amount of data remains an active research topic. Reconstruction of graphical models that describe the statistics of discrete variables is a particularly challenging problem, for which the maximum likelihood approach is intractable. In this work, we provide the first sample-efficient method based on the Interaction Screening framework that allows one to provably learn fully general discrete factor models with node-specific discrete alphabets and multi-body interactions, specified in an arbitrary basis. We identify a single condition related to model parametrization that leads to rigorous guarantees on the recovery of model structure and parameters in any error norm, and is readily verifiable for a large class of models. Importantly, our bounds make an explicit distinction between parameters that are proper to the model and priors used as an input to the algorithm. Finally, we show that the Interaction Screening framework includes all models previously considered in the literature as special cases, and for which our analysis shows a systematic improvement in sample complexity.
Near-Optimal SQ Lower Bounds for Agnostically Learning Halfspaces and ReLUs under Gaussian Marginals
https://papers.nips.cc/paper_files/paper/2020/hash/9d7311ba459f9e45ed746755a32dcd11-Abstract.html
Ilias Diakonikolas, Daniel Kane, Nikos Zarifis
https://papers.nips.cc/paper_files/paper/2020/hash/9d7311ba459f9e45ed746755a32dcd11-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d7311ba459f9e45ed746755a32dcd11-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10864-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d7311ba459f9e45ed746755a32dcd11-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d7311ba459f9e45ed746755a32dcd11-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d7311ba459f9e45ed746755a32dcd11-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d7311ba459f9e45ed746755a32dcd11-Supplemental.pdf
We study the fundamental problems of agnostically learning halfspaces and ReLUs under Gaussian marginals. In the former problem, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^d \times \{ \pm 1\}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with 0-1 loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the 0-1 loss of the best-fitting halfspace. In the latter problem, given labeled examples $(\mathbf{x}, y)$ from an unknown distribution on $\mathbb{R}^d \times \mathbb{R}$, whose marginal distribution on $\mathbf{x}$ is the standard Gaussian and the labels $y$ can be arbitrary, the goal is to output a hypothesis with square loss $\mathrm{OPT}+\epsilon$, where $\mathrm{OPT}$ is the square loss of the best-fitting ReLU. We prove Statistical Query (SQ) lower bounds of $d^{\mathrm{poly}(1/\epsilon)}$ for both of these problems. Our SQ lower bounds provide strong evidence that current upper bounds for these tasks are essentially best possible.
Neurosymbolic Transformers for Multi-Agent Communication
https://papers.nips.cc/paper_files/paper/2020/hash/9d740bd0f36aaa312c8d504e28c42163-Abstract.html
Jeevana Priya Inala, Yichen Yang, James Paulos, Yewen Pu, Osbert Bastani, Vijay Kumar, Martin Rinard, Armando Solar-Lezama
https://papers.nips.cc/paper_files/paper/2020/hash/9d740bd0f36aaa312c8d504e28c42163-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d740bd0f36aaa312c8d504e28c42163-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10865-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d740bd0f36aaa312c8d504e28c42163-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d740bd0f36aaa312c8d504e28c42163-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d740bd0f36aaa312c8d504e28c42163-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d740bd0f36aaa312c8d504e28c42163-Supplemental.pdf
We study the problem of inferring communication structures that can solve cooperative multi-agent planning problems while minimizing the amount of communication. We quantify the amount of communication as the maximum degree of the communication graph; this metric captures settings where agents have limited bandwidth. Minimizing communication is challenging due to the combinatorial nature of both the decision space and the objective; for instance, we cannot solve this problem by training neural networks using gradient descent. We propose a novel algorithm that synthesizes a control policy that combines a programmatic communication policy used to generate the communication graph with a transformer policy network used to choose actions. Our algorithm first trains the transformer policy, which implicitly generates a "soft" communication graph; then, it synthesizes a programmatic communication policy that "hardens" this graph, forming a neurosymbolic transformer. Our experiments demonstrate how our approach can synthesize policies that generate low-degree communication graphs while maintaining near-optimal performance.
Fairness in Streaming Submodular Maximization: Algorithms and Hardness
https://papers.nips.cc/paper_files/paper/2020/hash/9d752cb08ef466fc480fba981cfa44a1-Abstract.html
Marwa El Halabi, Slobodan Mitrović, Ashkan Norouzi-Fard, Jakab Tardos, Jakub M. Tarnawski
https://papers.nips.cc/paper_files/paper/2020/hash/9d752cb08ef466fc480fba981cfa44a1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d752cb08ef466fc480fba981cfa44a1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10866-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d752cb08ef466fc480fba981cfa44a1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d752cb08ef466fc480fba981cfa44a1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d752cb08ef466fc480fba981cfa44a1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d752cb08ef466fc480fba981cfa44a1-Supplemental.pdf
Submodular maximization has become established as the method of choice for the task of selecting representative and diverse summaries of data. However, if datapoints have sensitive attributes such as gender or age, such machine learning algorithms, left unchecked, are known to exhibit bias: under- or over-representation of particular groups. This has made the design of fair machine learning algorithms increasingly important. In this work we address the question: Is it possible to create fair summaries for massive datasets? To this end, we develop the first streaming approximation algorithms for submodular maximization under fairness constraints, for both monotone and non-monotone functions. We validate our findings empirically on exemplar-based clustering, movie recommendation, DPP-based summarization, and maximum coverage in social networks, showing that fairness constraints do not significantly impact utility.
Smoothed Geometry for Robust Attribution
https://papers.nips.cc/paper_files/paper/2020/hash/9d94c8981a48d12adfeecfe1ae6e0ec1-Abstract.html
Zifan Wang, Haofan Wang, Shakul Ramkumar, Piotr Mardziel, Matt Fredrikson, Anupam Datta
https://papers.nips.cc/paper_files/paper/2020/hash/9d94c8981a48d12adfeecfe1ae6e0ec1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9d94c8981a48d12adfeecfe1ae6e0ec1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10867-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9d94c8981a48d12adfeecfe1ae6e0ec1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9d94c8981a48d12adfeecfe1ae6e0ec1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9d94c8981a48d12adfeecfe1ae6e0ec1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9d94c8981a48d12adfeecfe1ae6e0ec1-Supplemental.zip
Feature attributions are a popular tool for explaining the behavior of Deep Neural Networks (DNNs), but have recently been shown to be vulnerable to attacks that produce divergent explanations for nearby inputs. This lack of robustness is especially problematic in high-stakes applications where adversarially-manipulated explanations could impair safety and trustworthiness. Building on a geometric understanding of these attacks presented in recent work, we identify Lipschitz continuity conditions on a model's gradient that lead to robust gradient-based attributions, and observe that smoothness may also be related to the ability of an attack to transfer across multiple attribution methods. To mitigate these attacks in practice, we propose an inexpensive regularization method that promotes these conditions in DNNs, as well as a stochastic smoothing technique that does not require re-training. Our experiments on a range of image models demonstrate that both of these mitigations consistently improve attribution robustness, and confirm the role that smooth geometry plays in these attacks on real, large-scale models.
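A minimal sketch of the generic stochastic-smoothing idea referred to above (no retraining): average gradient attributions over Gaussian-perturbed copies of the input. The toy model and its analytic gradient stand in for a real DNN.

```python
# Stochastic smoothing of gradient attributions for a toy analytic model.
import numpy as np

def model_grad(x, w):
    # toy scalar model f(x) = tanh(w . x); gradient with respect to the input x
    return (1.0 - np.tanh(w @ x) ** 2) * w

def smoothed_attribution(x, w, sigma=0.1, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    grads = [model_grad(x + sigma * rng.normal(size=x.shape), w) for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 0.3, -0.7])
print("plain gradient   :", model_grad(x, w))
print("smoothed gradient:", smoothed_attribution(x, w))
```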
Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms
https://papers.nips.cc/paper_files/paper/2020/hash/9da187a7a191431db943a9a5a6fec6f4-Abstract.html
Sascha Saralajew, Lars Holdijk, Thomas Villmann
https://papers.nips.cc/paper_files/paper/2020/hash/9da187a7a191431db943a9a5a6fec6f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9da187a7a191431db943a9a5a6fec6f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10868-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9da187a7a191431db943a9a5a6fec6f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9da187a7a191431db943a9a5a6fec6f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9da187a7a191431db943a9a5a6fec6f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9da187a7a191431db943a9a5a6fec6f4-Supplemental.pdf
Methods for adversarial robustness certification aim to provide an upper bound on the test error of a classifier under adversarial manipulation of its input. Current certification methods are computationally expensive and limited to attacks that optimize the manipulation with respect to a norm. We overcome these limitations by investigating the robustness properties of Nearest Prototype Classifiers (NPCs) like learning vector quantization and large margin nearest neighbor. For this purpose, we study the hypothesis margin. We prove that if NPCs use a dissimilarity measure induced by a seminorm, the hypothesis margin is a tight lower bound on the size of adversarial attacks and can be calculated in constant time—this provides the first adversarial robustness certificate calculable in reasonable time. Finally, we show that each NPC trained by a triplet loss maximizes the hypothesis margin and is therefore optimized for adversarial robustness. In the presented evaluation, we demonstrate that NPCs optimized for adversarial robustness are competitive with state-of-the-art methods and set a new benchmark with respect to computational complexity for robustness certification.
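A minimal sketch of the hypothesis-margin certificate described above for a nearest prototype classifier: half the gap between the distance to the closest off-class prototype and the distance to the closest same-class prototype. The prototypes, labels and norm below are illustrative placeholders.

```python
# Hypothesis margin of a nearest prototype classifier at a labeled point.
import numpy as np

def hypothesis_margin(x, prototypes, proto_labels, y, ord=2):
    d = np.linalg.norm(prototypes - x, ord=ord, axis=1)
    d_same = d[proto_labels == y].min()      # closest prototype of the correct class
    d_other = d[proto_labels != y].min()     # closest prototype of any other class
    return 0.5 * (d_other - d_same)          # > 0 certifies a robustness radius for this norm

prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
proto_labels = np.array([0, 0, 1])
print(hypothesis_margin(np.array([0.2, 0.1]), prototypes, proto_labels, y=0))
```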
Multi-agent active perception with prediction rewards
https://papers.nips.cc/paper_files/paper/2020/hash/9db6faeef387dc789777227a8bed4d52-Abstract.html
Mikko Lauri, Frans Oliehoek
https://papers.nips.cc/paper_files/paper/2020/hash/9db6faeef387dc789777227a8bed4d52-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9db6faeef387dc789777227a8bed4d52-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10869-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9db6faeef387dc789777227a8bed4d52-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9db6faeef387dc789777227a8bed4d52-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9db6faeef387dc789777227a8bed4d52-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9db6faeef387dc789777227a8bed4d52-Supplemental.pdf
Multi-agent active perception is a task where a team of agents cooperatively gathers observations to compute a joint estimate of a hidden variable. The task is decentralized and the joint estimate can only be computed after the task ends by fusing observations of all agents. The objective is to maximize the accuracy of the estimate. The accuracy is quantified by a centralized prediction reward determined by a centralized decision-maker who perceives the observations gathered by all agents after the task ends. In this paper, we model multi-agent active perception as a decentralized partially observable Markov decision process (Dec-POMDP) with a convex centralized prediction reward. We prove that by introducing individual prediction actions for each agent, the problem is converted into a standard Dec-POMDP with a decentralized prediction reward. The loss due to decentralization is bounded, and we give a sufficient condition for when it is zero. Our results allow application of any Dec-POMDP solution algorithm to multi-agent active perception problems, and enable planning to reduce uncertainty without explicit computation of joint estimates. We demonstrate the empirical usefulness of our results by applying a standard Dec-POMDP algorithm to multi-agent active perception problems, showing increased scalability in the planning horizon.
A Local Temporal Difference Code for Distributional Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/9dd16e049becf4d5087c90a83fea403b-Abstract.html
Pablo Tano, Peter Dayan, Alexandre Pouget
https://papers.nips.cc/paper_files/paper/2020/hash/9dd16e049becf4d5087c90a83fea403b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9dd16e049becf4d5087c90a83fea403b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10870-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9dd16e049becf4d5087c90a83fea403b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9dd16e049becf4d5087c90a83fea403b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9dd16e049becf4d5087c90a83fea403b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9dd16e049becf4d5087c90a83fea403b-Supplemental.pdf
Recent theoretical and experimental results suggest that the dopamine system implements distributional temporal difference backups, allowing learning of the entire distributions of the long-run values of states rather than just their expected values. However, the distributional codes explored so far rely on a complex imputation step which crucially relies on spatial non-locality: in order to compute reward prediction errors, units must know not only their own state but also the states of the other units. It is far from clear how these steps could be implemented in realistic neural circuits. Here, we introduce the Laplace code: a local temporal difference code for distributional reinforcement learning that is representationally powerful and computationally straightforward. The code decomposes value distributions and prediction errors across three separated dimensions: reward magnitude (related to distributional quantiles), temporal discounting (related to the Laplace transform of future rewards) and time horizon (related to eligibility traces). Besides lending itself to a local learning rule, the decomposition recovers the temporal evolution of the immediate reward distribution, indicating all possible rewards at all future times. This increases representational capacity and allows for temporally-flexible computations that immediately adjust to changing horizons or discount factors.
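The sketch below is only an illustration (our assumption, not the paper's exact update rule) of a grid of local TD units, each indexed by a reward-magnitude threshold and a discount factor, so that every unit updates with its own purely local prediction error.

```python
# Grid of local TD units indexed by (reward threshold h, discount gamma); illustrative only.
import numpy as np

thresholds = np.array([0.0, 0.5, 1.0, 2.0])
gammas = np.array([0.5, 0.8, 0.95])
n_states = 5
V = np.zeros((len(thresholds), len(gammas), n_states))

def td_update(V, s, r, s_next, alpha=0.1):
    for i, h in enumerate(thresholds):
        for j, g in enumerate(gammas):
            target = float(r > h) + g * V[i, j, s_next]     # thresholded reward + discounted bootstrap
            V[i, j, s] += alpha * (target - V[i, j, s])     # purely local TD error
    return V

rng = np.random.default_rng(0)
s = 0
for _ in range(1000):
    s_next = int(rng.integers(n_states))
    r = rng.exponential(1.0)
    V = td_update(V, s, r, s_next)
    s = s_next
print(V.shape)
```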
Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions
https://papers.nips.cc/paper_files/paper/2020/hash/9ddb9dd5d8aee9a76bf217a2a3c54833-Abstract.html
Hayata Yamasaki, Sathyawageeswar Subramanian, Sho Sonoda, Masato Koashi
https://papers.nips.cc/paper_files/paper/2020/hash/9ddb9dd5d8aee9a76bf217a2a3c54833-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9ddb9dd5d8aee9a76bf217a2a3c54833-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10871-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9ddb9dd5d8aee9a76bf217a2a3c54833-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9ddb9dd5d8aee9a76bf217a2a3c54833-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9ddb9dd5d8aee9a76bf217a2a3c54833-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9ddb9dd5d8aee9a76bf217a2a3c54833-Supplemental.pdf
Kernel methods augmented with random features give scalable algorithms for learning from big data. But it has been computationally hard to sample random features according to a probability distribution that is optimized for the data, so as to minimize the number of features required to achieve learning to a desired accuracy. Here, we develop a quantum algorithm for sampling from this optimized distribution over features, in runtime O(D) that is linear in the dimension D of the input data. Our algorithm achieves an exponential speedup in D compared to any known classical algorithm for this sampling task. In contrast to existing quantum machine learning algorithms, our algorithm circumvents sparsity and low-rank assumptions and thus has wide applicability. We also show that the sampled features can be combined with regression by stochastic gradient descent to perform learning without canceling out our exponential speedup. Our algorithm based on sampling optimized random features leads to an accelerated framework for machine learning that takes advantage of quantum computers.
CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations
https://papers.nips.cc/paper_files/paper/2020/hash/9de6d14fff9806d4bcd1ef555be766cd-Abstract.html
Davis Rempe, Tolga Birdal, Yongheng Zhao, Zan Gojcic, Srinath Sridhar, Leonidas J. Guibas
https://papers.nips.cc/paper_files/paper/2020/hash/9de6d14fff9806d4bcd1ef555be766cd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9de6d14fff9806d4bcd1ef555be766cd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10872-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9de6d14fff9806d4bcd1ef555be766cd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9de6d14fff9806d4bcd1ef555be766cd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9de6d14fff9806d4bcd1ef555be766cd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9de6d14fff9806d4bcd1ef555be766cd-Supplemental.zip
We propose CaSPR, a method to learn object-centric Canonical Spatiotemporal Point Cloud Representations of dynamically moving or evolving objects. Our goal is to enable information aggregation over time and the interrogation of object state at any spatiotemporal neighborhood in the past, observed or not. Different from previous work, CaSPR learns representations that support spacetime continuity, are robust to variable and irregularly spacetime-sampled point clouds, and generalize to unseen object instances. Our approach divides the problem into two subtasks. First, we explicitly encode time by mapping an input point cloud sequence to a spatiotemporally-canonicalized object space. We then leverage this canonicalization to learn a spatiotemporal latent representation using neural ordinary differential equations and a generative model of dynamically evolving shapes using continuous normalizing flows. We demonstrate the effectiveness of our method on several applications including shape reconstruction, camera pose estimation, continuous spatiotemporal sequence reconstruction, and correspondence estimation from irregularly or intermittently sampled observations.
Deep Automodulators
https://papers.nips.cc/paper_files/paper/2020/hash/9df81829c4ebc9c427b9afe0438dce5a-Abstract.html
Ari Heljakka, Yuxin Hou, Juho Kannala, Arno Solin
https://papers.nips.cc/paper_files/paper/2020/hash/9df81829c4ebc9c427b9afe0438dce5a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9df81829c4ebc9c427b9afe0438dce5a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10873-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9df81829c4ebc9c427b9afe0438dce5a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9df81829c4ebc9c427b9afe0438dce5a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9df81829c4ebc9c427b9afe0438dce5a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9df81829c4ebc9c427b9afe0438dce5a-Supplemental.pdf
We introduce a new category of generative autoencoders called automodulators. These networks can faithfully reproduce individual real-world input images like regular autoencoders, but also generate a fused sample from an arbitrary combination of several such images, allowing instantaneous "style-mixing" and other new applications. An automodulator decouples the data flow of decoder operations from statistical properties thereof and uses the latent vector to modulate the former by the latter, with a principled approach for mutual disentanglement of decoder layers. Prior work has explored a similar decoder architecture with GANs, but its focus has been on random sampling. A corresponding autoencoder could operate on real input images. For the first time, we show how to train such a general-purpose model with sharp outputs in high resolution, using novel training techniques, demonstrated on four image data sets. Besides style-mixing, we show state-of-the-art results in autoencoder comparison, and visual image quality nearly indistinguishable from state-of-the-art GANs. We expect the automodulator variants to become a useful building block for image applications and other data domains.
Convolutional Tensor-Train LSTM for Spatio-Temporal Learning
https://papers.nips.cc/paper_files/paper/2020/hash/9e1a36515d6704d7eb7a30d783400e5d-Abstract.html
Jiahao Su, Wonmin Byeon, Jean Kossaifi, Furong Huang, Jan Kautz, Anima Anandkumar
https://papers.nips.cc/paper_files/paper/2020/hash/9e1a36515d6704d7eb7a30d783400e5d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9e1a36515d6704d7eb7a30d783400e5d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10874-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9e1a36515d6704d7eb7a30d783400e5d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9e1a36515d6704d7eb7a30d783400e5d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9e1a36515d6704d7eb7a30d783400e5d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9e1a36515d6704d7eb7a30d783400e5d-Supplemental.pdf
Learning from spatio-temporal data has numerous applications such as human-behavior analysis, object tracking, video compression, and physics simulation. However, existing methods still perform poorly on challenging video tasks such as long-term forecasting. This is because these kinds of challenging tasks require learning long-term spatio-temporal correlations in the video sequence. In this paper, we propose a higher-order convolutional LSTM model that can efficiently learn these correlations, along with a succinct representation of the history. This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time. To make this feasible in terms of computation and memory requirements, we propose a novel convolutional tensor-train decomposition of the higher-order model. This decomposition reduces the model complexity by jointly approximating a sequence of convolutional kernels as a low-rank tensor-train factorization. As a result, our model outperforms existing approaches while using only a fraction of the parameters of the baseline models. Our results achieve state-of-the-art performance in a wide range of applications and datasets, including multi-step video prediction on the Moving-MNIST-2 and KTH action datasets as well as early activity recognition on the Something-Something V2 dataset.
The Potts-Ising model for discrete multivariate data
https://papers.nips.cc/paper_files/paper/2020/hash/9e5f64cde99af96fdca0e02a3d24faec-Abstract.html
Zahra Razaee, Arash Amini
https://papers.nips.cc/paper_files/paper/2020/hash/9e5f64cde99af96fdca0e02a3d24faec-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9e5f64cde99af96fdca0e02a3d24faec-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10875-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9e5f64cde99af96fdca0e02a3d24faec-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9e5f64cde99af96fdca0e02a3d24faec-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9e5f64cde99af96fdca0e02a3d24faec-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9e5f64cde99af96fdca0e02a3d24faec-Supplemental.pdf
Modeling dependencies in multivariate discrete data is a challenging problem, especially in high dimensions. The Potts model is a versatile such model, suitable when each coordinate is a categorical variable. However, the full Potts model has too many parameters to be accurately fit when the number of categories is large. We introduce a variation on the Potts model that allows for general categorical marginals and Ising-type multivariate dependence. This reduces the number of parameters from $\Omega(d^2 K^2)$ in the full Potts model to $O(d^2 + Kd)$, where $K$ is the number of categories and $d$ is the dimension of the data. We show that the complexity of fitting this new Potts-Ising model is the same as that of an Ising model. In particular, adopting the neighborhood regression framework, the model can be fit by solving $d$ separate logistic regressions. We demonstrate the ability of the model to capture multivariate dependencies by comparing with existing approaches.
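A sketch of the neighborhood-regression recipe stated above: each coordinate is regressed on the others with a separate multiclass logistic regression. The one-hot design and the toy data are illustrative assumptions, and the exact Potts-Ising parameterization in the paper constrains these fits further.

```python
# Fit d node-conditional logistic regressions on categorical data (generic recipe).
import numpy as np
from sklearn.linear_model import LogisticRegression

def one_hot(cols, num_categories):
    n, d = cols.shape
    out = np.zeros((n, d * num_categories))
    for j in range(d):
        out[np.arange(n), j * num_categories + cols[:, j]] = 1.0
    return out

rng = np.random.default_rng(0)
K, d, n = 3, 6, 500
X = rng.integers(0, K, size=(n, d))            # toy data with no real dependence structure

node_models = []
for j in range(d):
    features = one_hot(np.delete(X, j, axis=1), K)     # one-hot encoding of the other coordinates
    node_models.append(LogisticRegression(max_iter=500).fit(features, X[:, j]))
print("fitted", len(node_models), "node-conditional logistic regressions")
```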
Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech
https://papers.nips.cc/paper_files/paper/2020/hash/9e9a30b74c49d07d8150c8c83b1ccf07-Abstract.html
Shailee Jain, Vy Vo, Shivangi Mahto, Amanda LeBel, Javier S. Turek, Alexander Huth
https://papers.nips.cc/paper_files/paper/2020/hash/9e9a30b74c49d07d8150c8c83b1ccf07-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9e9a30b74c49d07d8150c8c83b1ccf07-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10876-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9e9a30b74c49d07d8150c8c83b1ccf07-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9e9a30b74c49d07d8150c8c83b1ccf07-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9e9a30b74c49d07d8150c8c83b1ccf07-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9e9a30b74c49d07d8150c8c83b1ccf07-Supplemental.pdf
Natural language contains information at multiple timescales. To understand how the human brain represents this information, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs). However, these LM-derived representations do not explicitly separate information at different timescales, making it difficult to interpret the encoding models. In this work we construct interpretable multi-timescale representations by forcing individual units in an LSTM LM to integrate information over specific temporal scales. This allows us to explicitly and directly map the timescale of information encoded by each individual fMRI voxel. Further, the standard fMRI encoding procedure does not account for varying temporal properties in the encoding features. We modify the procedure so that it can capture both short- and long-timescale information. This approach outperforms other encoding models, particularly for voxels that represent long-timescale information. It also provides a finer-grained map of timescale information in the human language pathway. This serves as a framework for future work investigating temporal hierarchies across artificial and biological language systems.
Group-Fair Online Allocation in Continuous Time
https://papers.nips.cc/paper_files/paper/2020/hash/9ec0cfdc84044494e10582436e013e64-Abstract.html
Semih Cayci, Swati Gupta, Atilla Eryilmaz
https://papers.nips.cc/paper_files/paper/2020/hash/9ec0cfdc84044494e10582436e013e64-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9ec0cfdc84044494e10582436e013e64-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10877-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9ec0cfdc84044494e10582436e013e64-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9ec0cfdc84044494e10582436e013e64-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9ec0cfdc84044494e10582436e013e64-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9ec0cfdc84044494e10582436e013e64-Supplemental.pdf
The theory of discrete-time online learning has been successfully applied in many problems that involve sequential decision-making under uncertainty. However, in many applications including contractual hiring in online freelancing platforms and server allocation in cloud computing systems, the outcome of each action is observed only after a random and action-dependent time. Furthermore, as a consequence of certain ethical and economic concerns, the controller may impose deadlines on the completion of each task, and require fairness across different groups in the allocation of total time budget $B$. In order to address these applications, we consider a continuous-time online learning problem with fairness considerations, and present a novel framework based on continuous-time utility maximization. We show that this formulation recovers reward-maximizing, max-min fair and proportionally fair allocation rules across different groups as special cases. We characterize the optimal offline policy, which allocates the total time between different actions in an optimally fair way (as defined by the utility function), and imposes deadlines to maximize time-efficiency. In the absence of any statistical knowledge, we propose a novel online learning algorithm based on dual ascent optimization for time averages, and prove that it achieves an $\tilde{O}(B^{-1/2})$ regret bound.
Decentralized TD Tracking with Linear Function Approximation and its Finite-Time Analysis
https://papers.nips.cc/paper_files/paper/2020/hash/9ec51f6eb240fb631a35864e13737bca-Abstract.html
Gang Wang, Songtao Lu, Georgios Giannakis, Gerald Tesauro, Jian Sun
https://papers.nips.cc/paper_files/paper/2020/hash/9ec51f6eb240fb631a35864e13737bca-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9ec51f6eb240fb631a35864e13737bca-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10878-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9ec51f6eb240fb631a35864e13737bca-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9ec51f6eb240fb631a35864e13737bca-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9ec51f6eb240fb631a35864e13737bca-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9ec51f6eb240fb631a35864e13737bca-Supplemental.pdf
The present contribution deals with decentralized policy evaluation in multi-agent Markov decision processes using temporal-difference (TD) methods with linear function approximation for scalability. The agents cooperate to estimate the value function of such a process by observing continual state transitions of a shared environment over the graph of interconnected nodes (agents), along with locally private rewards. Different from existing consensus-type TD algorithms, the approach here develops a simple decentralized TD tracker by wedding TD learning with gradient tracking techniques. The non-asymptotic properties of the novel TD tracker are established for both independent and identically distributed (i.i.d.) as well as Markovian transitions through a unifying multistep Lyapunov analysis. In contrast to the prior art, the error bounds of the novel algorithm do not degrade with the number of agents, which endows it with performance comparable to that of centralized TD methods, the sharpest known to date.
Understanding Gradient Clipping in Private SGD: A Geometric Perspective
https://papers.nips.cc/paper_files/paper/2020/hash/9ecff5455677b38d19f49ce658ef0608-Abstract.html
Xiangyi Chen, Steven Z. Wu, Mingyi Hong
https://papers.nips.cc/paper_files/paper/2020/hash/9ecff5455677b38d19f49ce658ef0608-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9ecff5455677b38d19f49ce658ef0608-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10879-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9ecff5455677b38d19f49ce658ef0608-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9ecff5455677b38d19f49ce658ef0608-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9ecff5455677b38d19f49ce658ef0608-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9ecff5455677b38d19f49ce658ef0608-Supplemental.pdf
Deep learning models are increasingly popular in many machine learning applications where the training data may contain sensitive information. To provide formal and rigorous privacy guarantee, many learning systems now incorporate differential privacy by training their models with (differentially) private SGD. A key step in each private SGD update is gradient clipping that shrinks the gradient of an individual example whenever its l2 norm exceeds a certain threshold. We first demonstrate how gradient clipping can prevent SGD from converging to a stationary point. We then provide a theoretical analysis on private SGD with gradient clipping. Our analysis fully characterizes the clipping bias on the gradient norm, which can be upper bounded by the Wasserstein distance between the gradient distribution and a geometrically symmetric distribution. Our empirical evaluation further suggests that the gradient distributions along the trajectory of private SGD indeed exhibit such symmetric structure. Together, our results provide an explanation why private SGD with gradient clipping remains effective in practice despite its potential clipping bias. Finally, we develop a new perturbation-based technique that can provably correct the clipping bias even for instances with highly asymmetric gradient distributions.
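A minimal sketch of the per-example clipping step analysed above, as used in differentially private SGD: each example's gradient is rescaled to l2 norm at most C before averaging, and Gaussian noise calibrated to C is added. The toy gradients and constants are placeholders.

```python
# One private SGD update: per-example l2 clipping, averaging, and Gaussian noise.
import numpy as np

def private_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale                     # each row now has norm <= clip_norm
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=clipped.shape[1])
    return clipped.mean(axis=0) + noise / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10)) * 5.0      # batch of 32 per-example gradients
update = private_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
print(update.round(3))
```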
O(n) Connections are Expressive Enough: Universal Approximability of Sparse Transformers
https://papers.nips.cc/paper_files/paper/2020/hash/9ed27554c893b5bad850a422c3538c15-Abstract.html
Chulhee Yun, Yin-Wen Chang, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, Sanjiv Kumar
https://papers.nips.cc/paper_files/paper/2020/hash/9ed27554c893b5bad850a422c3538c15-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9ed27554c893b5bad850a422c3538c15-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10880-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9ed27554c893b5bad850a422c3538c15-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9ed27554c893b5bad850a422c3538c15-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9ed27554c893b5bad850a422c3538c15-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9ed27554c893b5bad850a422c3538c15-Supplemental.pdf
Recently, Transformer networks have redefined the state of the art in many NLP tasks. However, these models suffer from quadratic computational cost in the input sequence length $n$ to compute pairwise attention in each layer. This has prompted recent research into sparse Transformers that sparsify the connections in the attention layers. While empirically promising for long sequences, fundamental questions remain unanswered: Can sparse Transformers approximate any arbitrary sequence-to-sequence function, similar to their dense counterparts? How do the sparsity pattern and the sparsity level affect their performance? In this paper, we address these questions and provide a unifying framework that captures existing sparse attention models. We propose sufficient conditions under which we prove that a sparse attention model can universally approximate any sequence-to-sequence function. Surprisingly, our results show that sparse Transformers with only $O(n)$ connections per attention layer can approximate the same function class as the dense model with $n^2$ connections. Lastly, we present experiments comparing different patterns/levels of sparsity on standard NLP tasks.
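For a concrete picture of what "$O(n)$ connections per attention layer" can look like, here is a hedged sketch of one such sparsity pattern, a banded (sliding-window) attention mask; the window size and the mask representation are illustrative choices, not the specific patterns analyzed in the paper.

```python
import numpy as np

def banded_attention_mask(n, window):
    """Sliding-window sparsity pattern: query i may attend only to keys j with
    |i - j| <= window, so each row has at most 2 * window + 1 nonzeros,
    i.e. O(n) connections per layer for a fixed window."""
    idx = np.arange(n)
    return (np.abs(idx[:, None] - idx[None, :]) <= window).astype(np.float32)
```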
Identifying signal and noise structure in neural population activity with Gaussian process factor models
https://papers.nips.cc/paper_files/paper/2020/hash/9eed867b73ab1eab60583c9d4a789b1b-Abstract.html
Stephen Keeley, Mikio Aoi, Yiyi Yu, Spencer Smith, Jonathan W. Pillow
https://papers.nips.cc/paper_files/paper/2020/hash/9eed867b73ab1eab60583c9d4a789b1b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9eed867b73ab1eab60583c9d4a789b1b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10881-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9eed867b73ab1eab60583c9d4a789b1b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9eed867b73ab1eab60583c9d4a789b1b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9eed867b73ab1eab60583c9d4a789b1b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9eed867b73ab1eab60583c9d4a789b1b-Supplemental.pdf
Neural datasets often contain measurements of neural activity across multiple trials of a repeated stimulus or behavior. An important problem in the analysis of such datasets is to characterize systematic aspects of neural activity that carry information about the repeated stimulus or behavior of interest, which can be considered ``signal'', and to separate them from the trial-to-trial fluctuations in activity that are not time-locked to the stimulus, which for purposes of such analyses can be considered ``noise''. Gaussian Process factor models provide a powerful tool for identifying shared structure in high-dimensional neural data. However, they have not yet been adapted to the problem of characterizing signal and noise in multi-trial datasets. Here we address this shortcoming by proposing ``signal-noise'' Poisson-spiking Gaussian Process Factor Analysis (SNP-GPFA), a flexible latent variable model that resolves signal and noise latent structure in neural population spiking activity. To learn the parameters of our model, we introduce a Fourier-domain black box variational inference method that quickly identifies smooth latent structure. The resulting model reliably uncovers latent signal and trial-to-trial noise-related fluctuations in large-scale recordings. We use this model to show that in monkey V1, noise fluctuations perturb neural activity within a subspace orthogonal to signal activity, suggesting that trial-by-trial noise does not interfere with signal representations. Finally, we extend the model to capture statistical dependencies across brain regions in multi-region data. We show that in mouse visual cortex, models with shared noise across brain regions out-perform models with independent per-region noise.
Equivariant Networks for Hierarchical Structures
https://papers.nips.cc/paper_files/paper/2020/hash/9efb1a59d7b58e69996cf0e32cb71098-Abstract.html
Renhao Wang, Marjan Albooyeh, Siamak Ravanbakhsh
https://papers.nips.cc/paper_files/paper/2020/hash/9efb1a59d7b58e69996cf0e32cb71098-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9efb1a59d7b58e69996cf0e32cb71098-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10882-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9efb1a59d7b58e69996cf0e32cb71098-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9efb1a59d7b58e69996cf0e32cb71098-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9efb1a59d7b58e69996cf0e32cb71098-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9efb1a59d7b58e69996cf0e32cb71098-Supplemental.pdf
While invariant and equivariant maps make it possible to apply deep learning to a range of primitive data structures, a formalism for dealing with hierarchy is lacking. This is a significant issue because many practical structures are hierarchies of simple building blocks; some examples include sequences of sets, graphs of graphs, or multiresolution images. Observing that the symmetry of a hierarchical structure is the ``wreath product'' of symmetries of the building blocks, we express the equivariant map for the hierarchy using an intuitive combination of the equivariant linear layers of the building blocks. More generally, we show that any equivariant map for the hierarchy has this form. To demonstrate the effectiveness of this approach to model design, we consider its application in the semantic segmentation of point-cloud data. By voxelizing the point cloud, we impose a hierarchy of translation and permutation symmetries on the data and report state-of-the-art results on Semantic3D, S3DIS, and vKITTI, which include some of the largest real-world point-cloud benchmarks.
MinMax Methods for Optimal Transport and Beyond: Regularization, Approximation and Numerics
https://papers.nips.cc/paper_files/paper/2020/hash/9f067d8d6df2d4b8c64fb4c084d6c208-Abstract.html
Luca De Gennaro Aquino, Stephan Eckstein
https://papers.nips.cc/paper_files/paper/2020/hash/9f067d8d6df2d4b8c64fb4c084d6c208-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f067d8d6df2d4b8c64fb4c084d6c208-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10883-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f067d8d6df2d4b8c64fb4c084d6c208-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f067d8d6df2d4b8c64fb4c084d6c208-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f067d8d6df2d4b8c64fb4c084d6c208-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f067d8d6df2d4b8c64fb4c084d6c208-Supplemental.zip
We study MinMax solution methods for a general class of optimization problems related to (and including) optimal transport. Theoretically, the focus is on fitting a large class of problems into a single MinMax framework and generalizing regularization techniques known from classical optimal transport. We show that regularization techniques justify the utilization of neural networks to solve such problems by proving approximation theorems and illustrating fundamental issues if no regularization is used. We further study the relation to the literature on generative adversarial nets, and analyze which algorithmic techniques used therein are particularly suitable to the class of problems studied in this paper. Several numerical experiments showcase the generality of the setting and highlight which theoretical insights are most beneficial in practice.
A Discrete Variational Recurrent Topic Model without the Reparametrization Trick
https://papers.nips.cc/paper_files/paper/2020/hash/9f1d5659d5880fb427f6e04ae500fc25-Abstract.html
Mehdi Rezaee, Francis Ferraro
https://papers.nips.cc/paper_files/paper/2020/hash/9f1d5659d5880fb427f6e04ae500fc25-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f1d5659d5880fb427f6e04ae500fc25-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10884-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f1d5659d5880fb427f6e04ae500fc25-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f1d5659d5880fb427f6e04ae500fc25-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f1d5659d5880fb427f6e04ae500fc25-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f1d5659d5880fb427f6e04ae500fc25-Supplemental.zip
We show how to learn a neural topic model with discrete random variables---one that explicitly models each word's assigned topic---using neural variational inference that does not rely on stochastic backpropagation to handle the discrete variables. The model we utilize combines the expressive power of neural methods for representing sequences of text with the topic model's ability to capture global, thematic coherence. Using neural variational inference, we show improved perplexity and document understanding across multiple corpora. We examine the effect of prior parameters both on the model and variational parameters, and demonstrate how our approach can compete and surpass a popular topic model implementation on an automatic measure of topic quality.
Transferable Graph Optimizers for ML Compilers
https://papers.nips.cc/paper_files/paper/2020/hash/9f29450d2eb58feb555078bdefe28aa5-Abstract.html
Yanqi Zhou, Sudip Roy, Amirali Abdolrashidi, Daniel Wong, Peter Ma, Qiumin Xu, Hanxiao Liu, Phitchaya Phothilimtha, Shen Wang, Anna Goldie, Azalia Mirhoseini, James Laudon
https://papers.nips.cc/paper_files/paper/2020/hash/9f29450d2eb58feb555078bdefe28aa5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f29450d2eb58feb555078bdefe28aa5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10885-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f29450d2eb58feb555078bdefe28aa5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f29450d2eb58feb555078bdefe28aa5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f29450d2eb58feb555078bdefe28aa5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f29450d2eb58feb555078bdefe28aa5-Supplemental.pdf
Most compilers for machine learning (ML) frameworks need to solve many correlated optimization problems to generate efficient machine code. Current ML compilers rely on heuristics-based algorithms to solve these optimization problems one at a time. However, this approach is not only hard to maintain but often leads to sub-optimal solutions, especially for newer model architectures. Existing learning-based approaches in the literature are sample inefficient, tackle a single optimization problem, and do not generalize to unseen graphs, making them infeasible to deploy in practice. To address these limitations, we propose an end-to-end, transferable deep reinforcement learning method for computational graph optimization (GO), based on a scalable sequential attention mechanism over an inductive graph neural network. GO generates decisions on the entire graph rather than on each individual node autoregressively, drastically speeding up the search compared to prior methods. Moreover, we propose recurrent attention layers to jointly optimize dependent graph optimization tasks and demonstrate 33%-60% speedup on three graph optimization tasks compared to TensorFlow default optimization. On a diverse set of representative graphs consisting of up to 80,000 nodes, including Inception-v3, Transformer-XL, and WaveNet, GO achieves on average 21% improvement over human experts and 18% improvement over the prior state of the art with 15x faster convergence, on a device placement task evaluated in real systems.
Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces
https://papers.nips.cc/paper_files/paper/2020/hash/9f319422ca17b1082ea49820353f14ab-Abstract.html
Akash Saha, Balamurugan Palaniappan
https://papers.nips.cc/paper_files/paper/2020/hash/9f319422ca17b1082ea49820353f14ab-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f319422ca17b1082ea49820353f14ab-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10886-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f319422ca17b1082ea49820353f14ab-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f319422ca17b1082ea49820353f14ab-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f319422ca17b1082ea49820353f14ab-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f319422ca17b1082ea49820353f14ab-Supplemental.zip
Operator-valued kernels have shown promise in supervised learning problems with functional inputs and functional outputs. The crucial (and possibly restrictive) assumption of positive definiteness of operator-valued kernels has been instrumental in developing efficient algorithms. In this work, we consider operator-valued kernels which might not be necessarily positive definite. To tackle the indefiniteness of operator-valued kernels, we harness the machinery of Reproducing Kernel Krein Spaces (RKKS) of function-valued functions. A representer theorem is illustrated which yields a suitable loss stabilization problem for supervised learning with function-valued inputs and outputs. Analysis of generalization properties of the proposed framework is given. An iterative Operator based Minimum Residual (OpMINRES) algorithm is proposed for solving the loss stabilization problem. Experiments with indefinite operator-valued kernels on synthetic and real data sets demonstrate the utility of the proposed approach.
Learning Bounds for Risk-sensitive Learning
https://papers.nips.cc/paper_files/paper/2020/hash/9f60ab2b55468f104055b16df8f69e81-Abstract.html
Jaeho Lee, Sejun Park, Jinwoo Shin
https://papers.nips.cc/paper_files/paper/2020/hash/9f60ab2b55468f104055b16df8f69e81-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f60ab2b55468f104055b16df8f69e81-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10887-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f60ab2b55468f104055b16df8f69e81-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f60ab2b55468f104055b16df8f69e81-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f60ab2b55468f104055b16df8f69e81-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f60ab2b55468f104055b16df8f69e81-Supplemental.zip
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss, instead of the standard expected loss. In this paper, we propose to study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents (OCE): our general scheme can handle various known risks, e.g., the entropic risk, mean-variance, and conditional value-at-risk, as special cases. We provide two learning bounds on the performance of empirical OCE minimizer. The first result gives an OCE guarantee based on the Rademacher average of the hypothesis space, which generalizes and improves existing results on the expected loss and the conditional value-at-risk. The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE. Finally, we demonstrate the practical implications of the proposed bounds via exploratory experiments on neural networks.
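To make the OCE construction concrete, the sketch below evaluates the best-known OCE instance, conditional value-at-risk, via the Rockafellar-Uryasev formulation that the OCE family generalizes; the helper name and the plug-in choice of the minimizing lambda are assumptions for illustration only.

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Empirical CVaR_alpha in OCE form:
    CVaR_alpha(L) = min_lam { lam + E[(L - lam)_+] / (1 - alpha) }.
    The minimizer is the alpha-quantile (VaR), so we plug it in directly."""
    losses = np.asarray(losses, dtype=float)
    lam = np.quantile(losses, alpha)
    return lam + np.mean(np.maximum(losses - lam, 0.0)) / (1.0 - alpha)
```

Other risks in the OCE family (for example, the entropic risk) follow the same pattern with a different disutility function in place of the scaled positive part.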
Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints
https://papers.nips.cc/paper_files/paper/2020/hash/9f655cc8884fda7ad6d8a6fb15cc001e-Abstract.html
Marc Finzi, Ke Alexander Wang, Andrew G. Wilson
https://papers.nips.cc/paper_files/paper/2020/hash/9f655cc8884fda7ad6d8a6fb15cc001e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f655cc8884fda7ad6d8a6fb15cc001e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10888-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f655cc8884fda7ad6d8a6fb15cc001e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f655cc8884fda7ad6d8a6fb15cc001e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f655cc8884fda7ad6d8a6fb15cc001e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f655cc8884fda7ad6d8a6fb15cc001e-Supplemental.zip
Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics. Recent works improve generalization for predicting trajectories by learning the Hamiltonian or Lagrangian of a system rather than the differential equations directly. While these methods encode the constraints of the systems using generalized coordinates, we show that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multipliers dramatically simplifies the learning problem. We introduce a series of challenging chaotic and extended-body systems, including systems with $N$-pendulums, spring coupling, magnetic fields, rigid rotors, and gyroscopes, to push the limits of current approaches. Our experiments show that Cartesian coordinates with explicit constraints lead to a 100x improvement in accuracy and data efficiency.
Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency
https://papers.nips.cc/paper_files/paper/2020/hash/9f6992966d4c363ea0162a056cb45fe5-Abstract.html
Robert Geirhos, Kristof Meding, Felix A. Wichmann
https://papers.nips.cc/paper_files/paper/2020/hash/9f6992966d4c363ea0162a056cb45fe5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9f6992966d4c363ea0162a056cb45fe5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10889-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9f6992966d4c363ea0162a056cb45fe5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9f6992966d4c363ea0162a056cb45fe5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9f6992966d4c363ea0162a056cb45fe5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9f6992966d4c363ea0162a056cb45fe5-Supplemental.pdf
A central problem in cognitive science and behavioural neuroscience as well as in machine learning and artificial intelligence research is to ascertain whether two or more decision makers---be they brains or algorithms---use the same strategy. Accuracy alone cannot distinguish between strategies: two systems may achieve similar accuracy with very different strategies. The need to differentiate beyond accuracy is particularly pressing if two systems are at or near ceiling performance, like Convolutional Neural Networks (CNNs) and humans on visual object recognition. Here we introduce trial-by-trial error consistency, a quantitative analysis for measuring whether two decision making systems systematically make errors on the same inputs. Making consistent errors on a trial-by-trial basis is a necessary condition if we want to ascertain similar processing strategies between decision makers. Our analysis is applicable to compare algorithms with algorithms, humans with humans, and algorithms with humans. When applying error consistency to visual object recognition we obtain three main findings: (1.) Irrespective of architecture, CNNs are remarkably consistent with one another. (2.) The consistency between CNNs and human observers, however, is little above what can be expected by chance alone---indicating that humans and CNNs are likely implementing very different strategies. (3.) CORnet-S, a recurrent model termed the "current best model of the primate ventral visual stream", fails to capture essential characteristics of human behavioural data and behaves essentially like a standard purely feedforward ResNet-50 in our analysis; highlighting that certain behavioural failure cases are not limited to feedforward models. Taken together, error consistency analysis suggests that the strategies used by human and machine vision are still very different---but we envision our general-purpose error consistency analysis to serve as a fruitful tool for quantifying future progress.
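As a sketch of how a trial-by-trial error consistency score of this kind can be computed, the following function compares two observers' correctness vectors and normalizes the observed agreement in errors by the agreement expected from their accuracies alone; the function name and the exact chance-correction convention are assumptions based on the abstract's description rather than the authors' released code.

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style error consistency between two observers, given boolean
    arrays marking whether each observer was correct on each trial."""
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_obs = np.mean(correct_a == correct_b)            # both right or both wrong
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)          # expected under independence
    return (c_obs - c_exp) / (1 - c_exp)
```

A value of 1 means the two observers err on exactly the same trials, while a value near 0 means their errors overlap no more than expected from their accuracies alone.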
Provably Efficient Reinforcement Learning with Kernel and Neural Function Approximations
https://papers.nips.cc/paper_files/paper/2020/hash/9fa04f87c9138de23e92582b4ce549ec-Abstract.html
Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael Jordan
https://papers.nips.cc/paper_files/paper/2020/hash/9fa04f87c9138de23e92582b4ce549ec-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9fa04f87c9138de23e92582b4ce549ec-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10890-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9fa04f87c9138de23e92582b4ce549ec-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9fa04f87c9138de23e92582b4ce549ec-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9fa04f87c9138de23e92582b4ce549ec-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9fa04f87c9138de23e92582b4ce549ec-Supplemental.zip
Reinforcement learning (RL) algorithms combined with modern function approximators such as kernel functions and deep neural networks have achieved significant empirical successes in large-scale application problems with a massive number of states. From a theoretical perspective, however, RL with function approximation poses a fundamental challenge to developing algorithms with provable computational and statistical efficiency, due to the need to take into consideration both the exploration-exploitation tradeoff that is inherent in RL and the bias-variance tradeoff that is innate in statistical estimation. To address such a challenge, focusing on the episodic setting where the action-value functions are represented by a kernel function or over-parametrized neural network, we propose the first provable RL algorithm with both polynomial runtime and sample complexity, without additional assumptions on the data-generating model. In particular, for both the kernel and neural settings, we prove that an optimistic modification of the least-squares value iteration algorithm incurs an $\tilde{\mathcal{O}}(\delta_{\mathcal{F}} H^2 \sqrt{T})$ regret, where $\delta_{\mathcal{F}}$ characterizes the intrinsic complexity of the function class $\mathcal{F}$, $H$ is the length of each episode, and $T$ is the total number of episodes. Our regret bounds are independent of the number of states and therefore even allow it to diverge, which exhibits the benefit of function approximation.
Constant-Expansion Suffices for Compressed Sensing with Generative Priors
https://papers.nips.cc/paper_files/paper/2020/hash/9fa83fec3cf3810e5680ed45f7124dce-Abstract.html
Constantinos Daskalakis, Dhruv Rohatgi, Emmanouil Zampetakis
https://papers.nips.cc/paper_files/paper/2020/hash/9fa83fec3cf3810e5680ed45f7124dce-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9fa83fec3cf3810e5680ed45f7124dce-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10891-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9fa83fec3cf3810e5680ed45f7124dce-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9fa83fec3cf3810e5680ed45f7124dce-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9fa83fec3cf3810e5680ed45f7124dce-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9fa83fec3cf3810e5680ed45f7124dce-Supplemental.pdf
Generative neural networks have been empirically found to be very promising in providing effective structural priors for compressed sensing, since they can be trained to span low-dimensional data manifolds in high-dimensional signal spaces. Despite the non-convexity of the resulting optimization problem, it has also been shown theoretically that, for neural networks with random Gaussian weights, a signal in the range of the network can be efficiently, approximately recovered from a few noisy measurements. However, a major bottleneck of these theoretical guarantees is a network \emph{expansivity} condition: that each layer of the neural network must be larger than the previous by a logarithmic factor. Our main contribution is to break this strong expansivity assumption, showing that \emph{constant} expansivity suffices to get efficient recovery algorithms, and that it is moreover information-theoretically necessary. To overcome the theoretical bottleneck in existing approaches, we prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion which we call ``pseudo-Lipschitzness.'' Using this theorem we can show that a matrix concentration inequality known as the \emph{Weight Distribution Condition (WDC)}, which was previously only known to hold for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too. Since WDC is a fundamental matrix concentration inequality at the heart of all existing theoretical guarantees on this problem, our tighter bound immediately yields improvements in all known results in the literature on compressed sensing with deep generative priors, including one-bit recovery, phase retrieval, and more.
RANet: Region Attention Network for Semantic Segmentation
https://papers.nips.cc/paper_files/paper/2020/hash/9fe8593a8a330607d76796b35c64c600-Abstract.html
Dingguo Shen, Yuanfeng Ji, Ping Li, Yi Wang, Di Lin
https://papers.nips.cc/paper_files/paper/2020/hash/9fe8593a8a330607d76796b35c64c600-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/9fe8593a8a330607d76796b35c64c600-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10892-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/9fe8593a8a330607d76796b35c64c600-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/9fe8593a8a330607d76796b35c64c600-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/9fe8593a8a330607d76796b35c64c600-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/9fe8593a8a330607d76796b35c64c600-Supplemental.pdf
Recent semantic segmentation methods model the relationship between pixels to construct the contextual representations. In this paper, we introduce the \emph{Region Attention Network} (RANet), a novel attention network for modeling the relationship between object regions. RANet divides the image into object regions, where we select representative information. In contrast to the previous methods, RANet configures the information pathways between the pixels in different regions, enabling the region interaction to exchange the regional context for enhancing all of the pixels in the image. We train the construction of object regions, the selection of the representative regional contents, the configuration of information pathways and the context exchange between pixels, jointly, to improve the segmentation accuracy. We extensively evaluate our method on the challenging segmentation benchmarks, demonstrating that RANet effectively helps to achieve the state-of-the-art results.
A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent
https://papers.nips.cc/paper_files/paper/2020/hash/a03fa30821986dff10fc66647c84c9c3-Abstract.html
Zhenyu Liao, Romain Couillet, Michael W. Mahoney
https://papers.nips.cc/paper_files/paper/2020/hash/a03fa30821986dff10fc66647c84c9c3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a03fa30821986dff10fc66647c84c9c3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10893-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a03fa30821986dff10fc66647c84c9c3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a03fa30821986dff10fc66647c84c9c3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a03fa30821986dff10fc66647c84c9c3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a03fa30821986dff10fc66647c84c9c3-Supplemental.pdf
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the dimension of feature space $N$ are all large and comparable. In this regime, the random RFF Gram matrix no longer converges to the well-known limiting Gaussian kernel matrix (as it does when $N \to \infty$ alone), but it still has a tractable behavior that is captured by our analysis. This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$. Based on these estimates, a precise characterization of two qualitatively different phases of learning, including the phase transition between them, is provided; and the corresponding double descent test error curve is derived from this phase transition behavior. These results do not depend on strong assumptions on the data distribution, and they perfectly match empirical results on real-world data sets.
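For readers unfamiliar with the RFF regression setup being analyzed, here is a minimal sketch of random Fourier feature ridge regression approximating a Gaussian kernel; the bandwidth parameterization, ridge penalty, and function signature are illustrative assumptions, not the paper's experimental code.

```python
import numpy as np

def rff_ridge_predict(X, y, X_test, n_features, sigma, lam, rng):
    """Random Fourier feature ridge regression: features
    z(x) = sqrt(2/N) * cos(W x + b), with W ~ N(0, 1/sigma^2) and
    b ~ Uniform[0, 2*pi], approximate a Gaussian kernel of width sigma."""
    p = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(p, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    Z_test = np.sqrt(2.0 / n_features) * np.cos(X_test @ W + b)
    beta = np.linalg.solve(Z.T @ Z + lam * np.eye(n_features), Z.T @ y)
    return Z_test @ beta
```

The regime studied in the abstract corresponds to taking the number of samples, the input dimension p, and n_features to be large and comparable, rather than letting n_features grow alone.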
Learning sparse codes from compressed representations with biologically plausible local wiring constraints
https://papers.nips.cc/paper_files/paper/2020/hash/a03fec24df877cc65c037673397ad5c0-Abstract.html
Kion Fallah, Adam Willats, Ninghao Liu, Christopher Rozell
https://papers.nips.cc/paper_files/paper/2020/hash/a03fec24df877cc65c037673397ad5c0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a03fec24df877cc65c037673397ad5c0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10894-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a03fec24df877cc65c037673397ad5c0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a03fec24df877cc65c037673397ad5c0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a03fec24df877cc65c037673397ad5c0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a03fec24df877cc65c037673397ad5c0-Supplemental.pdf
Sparse coding is an important method for unsupervised learning of task-independent features in theoretical neuroscience models of neural coding. While a number of algorithms exist to learn these representations from the statistics of a dataset, they largely ignore the information bottlenecks present in fiber pathways connecting cortical areas. For example, the visual pathway has many fewer neurons transmitting visual information to cortex than the number of photoreceptors. Both empirical and analytic results have recently shown that sparse representations can be learned effectively after performing dimensionality reduction with randomized linear operators, producing latent coefficients that preserve information. Unfortunately, current proposals for sparse coding in the compressed space require a centralized compression process (i.e., dense random matrix) that is biologically unrealistic due to local wiring constraints observed in neural circuits. The main contribution of this paper is to leverage recent results on structured random matrices to propose a theoretical neuroscience model of randomized projections for communication between cortical areas that is consistent with the local wiring constraints observed in neuroanatomy. We show analytically and empirically that unsupervised learning of sparse representations can be performed in the compressed space despite significant local wiring constraints in compression matrices of varying forms (corresponding to different local wiring patterns). Our analysis verifies that even with significant local wiring constraints, the learned representations remain qualitatively similar, have similar quantitative performance in both training and generalization error, and are consistent across many measures with measured macaque V1 receptive fields.
Self-Imitation Learning via Generalized Lower Bound Q-learning
https://papers.nips.cc/paper_files/paper/2020/hash/a0443c8c8c3372d662e9173c18faaa2c-Abstract.html
Yunhao Tang
https://papers.nips.cc/paper_files/paper/2020/hash/a0443c8c8c3372d662e9173c18faaa2c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a0443c8c8c3372d662e9173c18faaa2c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10895-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a0443c8c8c3372d662e9173c18faaa2c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a0443c8c8c3372d662e9173c18faaa2c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a0443c8c8c3372d662e9173c18faaa2c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a0443c8c8c3372d662e9173c18faaa2c-Supplemental.pdf
Self-imitation learning motivated by lower-bound Q-learning is a novel and effective approach for off-policy learning. In this work, we propose an n-step lower bound that generalizes the original return-based lower-bound Q-learning, and introduce a new family of self-imitation learning algorithms. To provide a formal motivation for the potential performance gains provided by self-imitation learning, we show that n-step lower bound Q-learning achieves a trade-off between fixed point bias and contraction rate, drawing close connections to the popular uncorrected n-step Q-learning. We finally show that n-step lower bound Q-learning is a more robust alternative to return-based self-imitation learning and uncorrected n-step Q-learning, over a wide range of benchmark tasks.
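A hypothetical sketch of the "take the larger target" idea behind lower-bound Q-learning is given below: the bootstrap target for a transition is replaced by the maximum of the usual one-step target and an n-step return-based quantity. The exact operator in the paper differs in detail; the indexing convention and function name here are assumptions made for illustration.

```python
def lower_bound_target(rewards, q_boot, gamma, n):
    """Illustrative n-step lower-bound target for a transition at time t.
    rewards[i] is the reward at step t+i; q_boot[i] is a bootstrapped value
    estimate at state s_{t+i+1}. Returns the larger of the one-step target
    and the n-step bootstrapped return."""
    one_step = rewards[0] + gamma * q_boot[0]
    n_step = sum(gamma ** i * rewards[i] for i in range(n)) + gamma ** n * q_boot[n - 1]
    return max(one_step, n_step)
```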
Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity
https://papers.nips.cc/paper_files/paper/2020/hash/a08e32d2f9a8b78894d964ec7fd4172e-Abstract.html
Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia
https://papers.nips.cc/paper_files/paper/2020/hash/a08e32d2f9a8b78894d964ec7fd4172e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a08e32d2f9a8b78894d964ec7fd4172e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10896-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a08e32d2f9a8b78894d964ec7fd4172e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a08e32d2f9a8b78894d964ec7fd4172e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a08e32d2f9a8b78894d964ec7fd4172e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a08e32d2f9a8b78894d964ec7fd4172e-Supplemental.pdf
We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$, which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a $d^2$ factor. The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem: Given a feasible collection of $m$ linear constraints of the form $Ax\geq b$, the task is to {\em privately} identify a solution $x$ that satisfies {\em most} of the constraints. Our algorithm is iterative, where each iteration determines the next coordinate of the constructed solution $x$.
Directional Pruning of Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/a09e75c5c86a7bf6582d2b4d75aad615-Abstract.html
Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng
https://papers.nips.cc/paper_files/paper/2020/hash/a09e75c5c86a7bf6582d2b4d75aad615-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a09e75c5c86a7bf6582d2b4d75aad615-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10897-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a09e75c5c86a7bf6582d2b4d75aad615-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a09e75c5c86a7bf6582d2b4d75aad615-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a09e75c5c86a7bf6582d2b4d75aad615-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a09e75c5c86a7bf6582d2b4d75aad615-Supplemental.zip
In light of the fact that stochastic gradient descent (SGD) often finds a flat minimum valley in the training loss, we propose a novel directional pruning method which searches for a sparse minimizer in or close to that flat region. The proposed pruning method does not require retraining or expert knowledge of the sparsity level. To overcome the computational formidability of estimating the flat directions, we propose to use a carefully tuned $\ell_1$ proximal gradient algorithm that can provably achieve directional pruning with a small learning rate after sufficient training. The empirical results demonstrate the promise of our solution in the highly sparse regime (92% sparsity) among many existing pruning methods on ResNet50 with ImageNet, while using only a slightly higher wall time and memory footprint than SGD. Using VGG16 and the wide ResNet 28x10 on CIFAR-10 and CIFAR-100, we demonstrate that our solution reaches the same minima valley as SGD, and the minima found by our solution and SGD do not deviate in directions that impact the training loss. The code that reproduces the results of this paper is available at https://github.com/donlan2710/gRDA-Optimizer/tree/master/directional_pruning.
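The following is not the paper's gRDA optimizer, only a generic sketch of the kind of $\ell_1$ proximal gradient (soft-thresholding) step it builds on, showing how such an update drives small weights exactly to zero; the learning-rate and penalty handling are simplified assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding operator, the proximal map of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def l1_proximal_gradient_step(w, grad, lr, l1_penalty):
    """Generic l1 proximal gradient step: an SGD step followed by
    soft-thresholding, which zeroes out sufficiently small weights."""
    return soft_threshold(w - lr * grad, lr * l1_penalty)
```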
Smoothly Bounding User Contributions in Differential Privacy
https://papers.nips.cc/paper_files/paper/2020/hash/a0dc078ca0d99b5ebb465a9f1cad54ba-Abstract.html
Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren
https://papers.nips.cc/paper_files/paper/2020/hash/a0dc078ca0d99b5ebb465a9f1cad54ba-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a0dc078ca0d99b5ebb465a9f1cad54ba-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10898-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a0dc078ca0d99b5ebb465a9f1cad54ba-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a0dc078ca0d99b5ebb465a9f1cad54ba-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a0dc078ca0d99b5ebb465a9f1cad54ba-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a0dc078ca0d99b5ebb465a9f1cad54ba-Supplemental.pdf
For a better trade-off between utility and the privacy guarantee, we propose a method which smoothly bounds user contributions by setting appropriate weights on data points, and apply it to estimating the mean/quantiles, linear regression, and empirical risk minimization. We show that our algorithm provably outperforms the sample-limiting algorithm. We conclude with experimental evaluations which validate our theoretical results.
Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping
https://papers.nips.cc/paper_files/paper/2020/hash/a1140a3d0df1c81e24ae954d935e8926-Abstract.html
Minjia Zhang, Yuxiong He
https://papers.nips.cc/paper_files/paper/2020/hash/a1140a3d0df1c81e24ae954d935e8926-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10899-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a1140a3d0df1c81e24ae954d935e8926-Supplemental.pdf
In this work, we propose a method based on progressive layer dropping that speeds up the training of Transformer-based language models, not by expending excessive hardware resources but through efficiency gained from changes to the model architecture and training technique. Extensive experiments on BERT show that the proposed method achieves a 25% reduction of computation cost in FLOPS and a 24% reduction in the end-to-end wall-clock training time. Furthermore, we show that our pre-trained models are equipped with strong knowledge transferability, achieving similar or even higher accuracy than baseline models on downstream tasks.
Online Planning with Lookahead Policies
https://papers.nips.cc/paper_files/paper/2020/hash/a18aa23ee676d7f5ffb34cf16df3e08c-Abstract.html
Yonathan Efroni, Mohammad Ghavamzadeh, Shie Mannor
https://papers.nips.cc/paper_files/paper/2020/hash/a18aa23ee676d7f5ffb34cf16df3e08c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a18aa23ee676d7f5ffb34cf16df3e08c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10900-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a18aa23ee676d7f5ffb34cf16df3e08c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a18aa23ee676d7f5ffb34cf16df3e08c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a18aa23ee676d7f5ffb34cf16df3e08c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a18aa23ee676d7f5ffb34cf16df3e08c-Supplemental.pdf
Real Time Dynamic Programming (RTDP) is an online algorithm based on Dynamic Programming (DP) that acts by 1-step greedy planning. Unlike DP, RTDP does not require access to the entire state space, i.e., it explicitly handles the exploration. This fact makes RTDP particularly appealing when the state space is large and it is not possible to update all states simultaneously. In this work, we devise a multi-step greedy RTDP algorithm, which we call $h$-RTDP, that replaces the 1-step greedy policy with an $h$-step lookahead policy. We analyze $h$-RTDP in its exact form and establish that increasing the lookahead horizon, $h$, results in an improved sample complexity, at the cost of additional computation. This is the first work that proves improved sample complexity as a result of {\em increasing} the lookahead horizon in online planning. We then analyze the performance of $h$-RTDP in three approximate settings: approximate model, approximate value updates, and approximate state representation. For these cases, we prove that the asymptotic performance of $h$-RTDP remains the same as that of a corresponding approximate DP algorithm, the best one can hope for without further assumptions on the approximation errors.
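To illustrate what replacing the 1-step greedy policy with an $h$-step lookahead means computationally, here is a hedged sketch of an exact $h$-step lookahead backup over a known tabular model (the real-time, sampled-state aspect of $h$-RTDP is omitted); the transition-table format is an assumption made for the example.

```python
def lookahead_value(transitions, V, s, h, gamma):
    """Exact h-step lookahead backup over a known model.
    transitions[s][a] is assumed to be a list of (next_state, reward, prob)
    tuples; V is the current value estimate used to bootstrap at depth h."""
    if h == 0:
        return V[s]
    return max(
        sum(p * (r + gamma * lookahead_value(transitions, V, s2, h - 1, gamma))
            for (s2, r, p) in outcomes)
        for outcomes in transitions[s].values()
    )
```

The cost of one such backup grows exponentially in h, which is the computation-versus-sample-complexity trade-off the abstract refers to.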
Learning Deep Attribution Priors Based On Prior Knowledge
https://papers.nips.cc/paper_files/paper/2020/hash/a19883fca95d0e5ec7ee6c94c6c32028-Abstract.html
Ethan Weinberger, Joseph Janizek, Su-In Lee
https://papers.nips.cc/paper_files/paper/2020/hash/a19883fca95d0e5ec7ee6c94c6c32028-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a19883fca95d0e5ec7ee6c94c6c32028-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10901-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a19883fca95d0e5ec7ee6c94c6c32028-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a19883fca95d0e5ec7ee6c94c6c32028-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a19883fca95d0e5ec7ee6c94c6c32028-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a19883fca95d0e5ec7ee6c94c6c32028-Supplemental.pdf
Feature attribution methods, which explain an individual prediction made by a model as a sum of attributions for each input feature, are an essential tool for understanding the behavior of complex deep learning models. However, ensuring that models produce meaningful explanations, rather than ones that rely on noise, is not straightforward. Exacerbating this problem is the fact that attribution methods do not provide insight as to why features are assigned their attribution values, leading to explanations that are difficult to interpret. In real-world problems we often have sets of additional information for each feature that are predictive of that feature's importance to the task at hand. Here, we propose the deep attribution prior (DAPr) framework to exploit such information to overcome the limitations of attribution methods. Our framework jointly learns a relationship between prior information and feature importance, as well as biases models to have explanations that rely on features predicted to be important. We find that our framework both results in networks that generalize better to out of sample data and admits new methods for interpreting model behavior.
Using noise to probe recurrent neural network structure and prune synapses
https://papers.nips.cc/paper_files/paper/2020/hash/a1ada9947e0d683b4625f94c74104d73-Abstract.html
Eli Moore, Rishidev Chaudhuri
https://papers.nips.cc/paper_files/paper/2020/hash/a1ada9947e0d683b4625f94c74104d73-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a1ada9947e0d683b4625f94c74104d73-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10902-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a1ada9947e0d683b4625f94c74104d73-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a1ada9947e0d683b4625f94c74104d73-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a1ada9947e0d683b4625f94c74104d73-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a1ada9947e0d683b4625f94c74104d73-Supplemental.pdf
Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. Here we suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. We construct a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, we prove that this rule preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation.
NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity
https://papers.nips.cc/paper_files/paper/2020/hash/a1c3ae6c49a89d92aef2d423dadb477f-Abstract.html
Sang-gil Lee, Sungwon Kim, Sungroh Yoon
https://papers.nips.cc/paper_files/paper/2020/hash/a1c3ae6c49a89d92aef2d423dadb477f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a1c3ae6c49a89d92aef2d423dadb477f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10903-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a1c3ae6c49a89d92aef2d423dadb477f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a1c3ae6c49a89d92aef2d423dadb477f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a1c3ae6c49a89d92aef2d423dadb477f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a1c3ae6c49a89d92aef2d423dadb477f-Supplemental.zip
Normalizing flows (NFs) have become a prominent method for deep generative models that allow for an analytic probability density estimation and efficient synthesis. However, a flow-based network is considered inefficient in parameter complexity because the reduced expressiveness of bijective mappings renders such models infeasibly expensive in terms of parameters. We present an alternative parameterization scheme called NanoFlow, which uses a single neural density estimator to model multiple transformation stages. To this end, we propose an efficient parameter decomposition method and the concept of flow indication embedding, which are key missing components that enable density estimation from a single neural network. Experiments performed on audio and image models confirm that our method provides a new parameter-efficient solution for scalable NFs with significant sublinear parameter complexity.
Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge
https://papers.nips.cc/paper_files/paper/2020/hash/a1d4c20b182ad7137ab3606f0e3fc8a4-Abstract.html
Chaoyang He, Murali Annavaram, Salman Avestimehr
https://papers.nips.cc/paper_files/paper/2020/hash/a1d4c20b182ad7137ab3606f0e3fc8a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10904-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-Supplemental.pdf
Scaling up the convolutional neural network (CNN) size (e.g., width, depth, etc.) is known to effectively improve model accuracy. However, the large model size impedes training on resource-constrained edge devices. For instance, federated learning (FL) may place undue burden on the compute capability of edge nodes, even though there is a strong practical need for FL due to its privacy and confidentiality properties. To address the resource-constrained reality of edge devices, we reformulate FL as a group knowledge transfer training algorithm, called FedGKT. FedGKT designs a variant of the alternating minimization approach to train small CNNs on edge nodes and periodically transfer their knowledge by knowledge distillation to a large server-side CNN. FedGKT consolidates several advantages into a single framework: reduced demand for edge computation, lower communication bandwidth for large CNNs, and asynchronous training, all while maintaining model accuracy comparable to FedAvg. We train CNNs designed based on ResNet-56 and ResNet-110 using three distinct datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-IID variants. Our results show that FedGKT can obtain comparable or even slightly higher accuracy than FedAvg. More importantly, FedGKT makes edge training affordable. Compared to the edge training using FedAvg, FedGKT demands 9 to 17 times less computational power (FLOPs) on edge devices and requires 54 to 105 times fewer parameters in the edge CNN. Our source code is released at FedML (https://fedml.ai).
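The knowledge transfer between edge and server models relies on distillation-style losses; below is a generic knowledge-distillation loss sketch of the kind such an exchange could use, written in plain NumPy. The temperature, mixing weight, and exact loss composition are illustrative assumptions, not FedGKT's precise objective.

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, temperature=3.0, alpha=0.5):
    """Generic knowledge-distillation loss: cross-entropy on hard labels plus
    KL divergence to the other model's softened predictions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -np.mean(np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12))
    kl = np.mean(np.sum(p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * (temperature ** 2) * kl
```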
Neural FFTs for Universal Texture Image Synthesis
https://papers.nips.cc/paper_files/paper/2020/hash/a23156abfd4a114c35b930b836064e8b-Abstract.html
Morteza Mardani, Guilin Liu, Aysegul Dundar, Shiqiu Liu, Andrew Tao, Bryan Catanzaro
https://papers.nips.cc/paper_files/paper/2020/hash/a23156abfd4a114c35b930b836064e8b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a23156abfd4a114c35b930b836064e8b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10905-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a23156abfd4a114c35b930b836064e8b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a23156abfd4a114c35b930b836064e8b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a23156abfd4a114c35b930b836064e8b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a23156abfd4a114c35b930b836064e8b-Supplemental.pdf
Synthesizing larger texture images from a smaller exemplar is an important task in graphics and vision. Conventional CNNs, recently adopted for synthesis, require training and testing on the same set of images and fail to generalize to unseen images. This is mainly because those CNNs fully rely on convolutional and upsampling layers that operate locally and are not suitable for a task as global as texture synthesis. In this work, inspired by the repetitive nature of texture patterns, we find that texture synthesis can be viewed as (local) \textit{upsampling} in the Fast Fourier Transform (FFT) domain. However, the FFT of natural images exhibits a high dynamic range and lacks local correlations. Therefore, to train CNNs, we design a framework to perform FFT upsampling in feature space using deformable convolutions. Such a design allows our framework to generalize to unseen images, and synthesize textures in a single pass. Extensive evaluations confirm that our method achieves state-of-the-art performance both quantitatively and qualitatively.
Graph Cross Networks with Vertex Infomax Pooling
https://papers.nips.cc/paper_files/paper/2020/hash/a26398dca6f47b49876cbaffbc9954f9-Abstract.html
Maosen Li, Siheng Chen, Ya Zhang, Ivor Tsang
https://papers.nips.cc/paper_files/paper/2020/hash/a26398dca6f47b49876cbaffbc9954f9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a26398dca6f47b49876cbaffbc9954f9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10906-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a26398dca6f47b49876cbaffbc9954f9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a26398dca6f47b49876cbaffbc9954f9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a26398dca6f47b49876cbaffbc9954f9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a26398dca6f47b49876cbaffbc9954f9-Supplemental.pdf
We propose a novel graph cross network (GXN) to achieve comprehensive feature learning from multiple scales of a graph. Based on trainable hierarchical representations of a graph, GXN enables the interchange of intermediate features across scales to promote information flow. Two key ingredients of GXN include a novel vertex infomax pooling (VIPool), which creates multiscale graphs in a trainable manner, and a novel feature-crossing layer, enabling feature interchange across scales. The proposed VIPool selects the most informative subset of vertices based on the neural estimation of mutual information between vertex features and neighborhood features. The intuition behind this is that a vertex is informative when it can maximally reflect its neighboring information. The proposed feature-crossing layer fuses intermediate features between two scales for mutual enhancement by improving information flow and enriching multiscale features at hidden layers. The cross shape of the feature-crossing layer distinguishes GXN from many other multiscale architectures. Experimental results show that the proposed GXN improves the classification accuracy by 2.12% and 1.15% on average for graph classification and vertex classification, respectively. Based on the same network, the proposed VIPool consistently outperforms other graph-pooling methods.
Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms
https://papers.nips.cc/paper_files/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html
Hilal Asi, John C. Duchi
https://papers.nips.cc/paper_files/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a267f936e54d7c10a2bb70dbe6ad7a89-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10907-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a267f936e54d7c10a2bb70dbe6ad7a89-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a267f936e54d7c10a2bb70dbe6ad7a89-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a267f936e54d7c10a2bb70dbe6ad7a89-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a267f936e54d7c10a2bb70dbe6ad7a89-Supplemental.pdf
We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism. We provide two approximation frameworks, one which only requires knowledge of local sensitivities, and a gradient-based approximation for optimization problems, which are efficiently computable for a broad class of functions. We complement our analysis with instance-specific lower bounds for vector-valued functions, which demonstrate that our mechanisms are (nearly) instance-optimal under certain assumptions and that minimax lower bounds may not provide an accurate estimate of the hardness of a problem in general: our algorithms can significantly outperform minimax bounds for well-behaved instances. Finally, we use our approximation framework to develop private mechanisms for unbounded-range mean estimation, principal component analysis, and linear regression. For PCA, our mechanisms give an efficient (pure) differentially private algorithm with near-optimal rates.
Calibration of Shared Equilibria in General Sum Partially Observable Markov Games
https://papers.nips.cc/paper_files/paper/2020/hash/a2f04745390fd6897d09772b2cd1f581-Abstract.html
Nelson Vadori, Sumitra Ganesh, Prashant Reddy, Manuela Veloso
https://papers.nips.cc/paper_files/paper/2020/hash/a2f04745390fd6897d09772b2cd1f581-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a2f04745390fd6897d09772b2cd1f581-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10908-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a2f04745390fd6897d09772b2cd1f581-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a2f04745390fd6897d09772b2cd1f581-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a2f04745390fd6897d09772b2cd1f581-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a2f04745390fd6897d09772b2cd1f581-Supplemental.pdf
Training multi-agent systems (MAS) to achieve realistic equilibria gives us a useful tool to understand and model real-world systems. We consider a general-sum partially observable Markov game where agents of different types share a single policy network that is conditioned on agent-specific information. This paper aims at i) formally understanding the equilibria reached by such agents, and ii) matching emergent phenomena of such equilibria to real-world targets. Parameter sharing with decentralized execution has been introduced as an efficient way to train multiple agents using a single policy network. However, the nature of the resulting equilibria reached by such agents has not yet been studied: we introduce the novel concept of a Shared equilibrium as a symmetric pure Nash equilibrium of a certain Functional Form Game (FFG) and prove convergence to the latter for a certain class of games using self-play. In addition, it is important that such equilibria satisfy certain constraints so that MAS are calibrated to real-world data for practical use: we solve this problem by introducing a novel dual-reinforcement-learning-based approach that fits emergent behaviors of agents in a Shared equilibrium to externally specified targets, and apply our methods to an n-player market example. We do so by calibrating parameters governing distributions of agent types rather than individual agents, which allows both behavior differentiation among agents and coherent scaling of the shared policy network to multiple agents.
MOPO: Model-based Offline Policy Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/a322852ce0df73e204b7e67cbbef0d0a-Abstract.html
Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y. Zou, Sergey Levine, Chelsea Finn, Tengyu Ma
https://papers.nips.cc/paper_files/paper/2020/hash/a322852ce0df73e204b7e67cbbef0d0a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a322852ce0df73e204b7e67cbbef0d0a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10909-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a322852ce0df73e204b7e67cbbef0d0a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a322852ce0df73e204b7e67cbbef0d0a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a322852ce0df73e204b7e67cbbef0d0a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a322852ce0df73e204b7e67cbbef0d0a-Supplemental.pdf
Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a batch of previously collected data. This problem setting is compelling, because it offers the promise of utilizing large, diverse, previously collected datasets to acquire policies without any costly or dangerous active exploration, but it is also exceptionally difficult, due to the distributional shift between the offline training data and the learned policy. While there has been significant progress in model-free offline RL, the most successful prior methods constrain the policy to the support of the data, precluding generalization to new states. In this paper, we observe that an existing model-based RL algorithm on its own already produces significant gains in the offline setting, as compared to model-free approaches, despite not being designed for this setting. However, although many standard model-based RL methods already estimate the uncertainty of their model, they do not by themselves provide a mechanism to avoid the issues associated with distributional shift in the offline setting. We therefore propose to modify existing model-based RL methods to address these issues by casting offline model-based RL into a penalized MDP framework. We theoretically show that, by using this penalized MDP, we are maximizing a lower bound of the return in the true MDP. Based on our theoretical results, we propose a new model-based offline RL algorithm that applies the variance of a Lipschitz-regularized model as a penalty to the reward function. We find that this algorithm outperforms both standard model-based RL methods and existing state-of-the-art model-free offline RL approaches on existing offline RL benchmarks, as well as two challenging continuous control tasks that require generalizing from data collected for a different task.
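A minimal sketch of the reward-penalty idea, assuming a learned ensemble of dynamics models is available: the reward used for policy optimization is the model reward minus a scaled uncertainty estimate. Ensemble disagreement is used here as a simple stand-in for the penalty described in the paper, so this is an illustration rather than the authors' exact construction.

import numpy as np

def penalized_reward(reward, next_state_means, lam=1.0):
    """reward: scalar model-predicted reward; next_state_means: (E, d) array of
    next-state means from E ensemble members; lam: penalty coefficient."""
    u = np.linalg.norm(next_state_means.std(axis=0))   # ensemble-disagreement proxy
    return reward - lam * u                            # pessimistic reward of the penalized MDP

# toy usage: 5 ensemble members, 3-dimensional state
preds = np.random.default_rng(0).normal(size=(5, 3))
r_tilde = penalized_reward(reward=1.2, next_state_means=preds, lam=0.5)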
Building powerful and equivariant graph neural networks with structural message-passing
https://papers.nips.cc/paper_files/paper/2020/hash/a32d7eeaae19821fd9ce317f3ce952a7-Abstract.html
Clément Vignac, Andreas Loukas, Pascal Frossard
https://papers.nips.cc/paper_files/paper/2020/hash/a32d7eeaae19821fd9ce317f3ce952a7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a32d7eeaae19821fd9ce317f3ce952a7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10910-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a32d7eeaae19821fd9ce317f3ce952a7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a32d7eeaae19821fd9ce317f3ce952a7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a32d7eeaae19821fd9ce317f3ce952a7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a32d7eeaae19821fd9ce317f3ce952a7-Supplemental.pdf
Message-passing has proved to be an effective way to design graph neural networks, as it is able to leverage both permutation equivariance and an inductive bias towards learning local structures in order to achieve good generalization. However, current message-passing architectures have a limited representation power and fail to learn basic topological properties of graphs. We address this problem and propose a powerful and equivariant message-passing framework based on two ideas: first, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node. This matrix contains rich local information about both features and topology and can eventually be pooled to build node representations. Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance. Having a representation that is independent of the specific choice of the one-hot encoding permits inductive reasoning and leads to better generalization properties. Experimentally, our model can predict various graph topological properties on synthetic data more accurately than previous methods and achieves state-of-the-art results on molecular graph regression on the ZINC dataset.
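A toy numpy sketch of the first idea, propagating a one-hot node identity alongside features so that each node accumulates a local context matrix; the single shared linear mixing step below is a loose stand-in for the equivariant message and update functions of the paper, not their exact parametrization.

import numpy as np

def smp_step(U, A, W):
    """One structural message-passing step.
    U: (n, n, d) context tensors (U[v, u] describes node u as seen from node v),
    A: (n, n) adjacency, W: (d, d) shared channel-mixing weights."""
    # Aggregate neighbors' context matrices, then mix channels. The node axes are
    # never mixed, so the update is equivariant to permutations of node labels.
    M = np.einsum('vu,uwd->vwd', A, U)   # sum over u in N(v) of U[u]
    return np.tanh((U + M) @ W)

n, d = 5, 4
rng = np.random.default_rng(0)
A = (rng.random((n, n)) > 0.6).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
U0 = np.eye(n)[:, :, None] * np.ones(d)     # one-hot identities as initial context
W = rng.normal(scale=0.3, size=(d, d))
U1 = smp_step(U0, A, W)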
Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning
https://papers.nips.cc/paper_files/paper/2020/hash/a36b598abb934e4528412e5a2127b931-Abstract.html
Sebastian Curi, Felix Berkenkamp, Andreas Krause
https://papers.nips.cc/paper_files/paper/2020/hash/a36b598abb934e4528412e5a2127b931-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a36b598abb934e4528412e5a2127b931-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10911-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a36b598abb934e4528412e5a2127b931-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a36b598abb934e4528412e5a2127b931-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a36b598abb934e4528412e5a2127b931-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a36b598abb934e4528412e5a2127b931-Supplemental.pdf
Model-based reinforcement learning algorithms with probabilistic dynamical models are amongst the most data-efficient learning methods. This is often attributed to their ability to distinguish between epistemic and aleatoric uncertainty. However, while most algorithms distinguish these two uncertainties when learning the model, they ignore the distinction when optimizing the policy, which leads to greedy and insufficient exploration. At the same time, there are no practical solvers for optimistic exploration algorithms. In this paper, we propose a practical optimistic exploration algorithm (H-UCRL). H-UCRL reparameterizes the set of plausible models and hallucinates control directly over the epistemic uncertainty. By augmenting the input space with the hallucinated inputs, H-UCRL can be solved using standard greedy planners. Furthermore, we analyze H-UCRL and construct a general regret bound for well-calibrated models, which is provably sublinear in the case of Gaussian process models. Based on this theoretical foundation, we show how optimistic exploration can be easily combined with state-of-the-art reinforcement learning algorithms and different probabilistic models. Our experiments demonstrate that optimistic exploration significantly speeds up learning when there are penalties on actions, a setting that is notoriously difficult for existing model-based reinforcement learning algorithms.
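The hallucination trick can be sketched in a few lines: the planner optimizes over the real action a together with an auxiliary action eta in [-1, 1]^d, and the optimistic next state is the model mean shifted by beta times the epistemic standard deviation in the direction eta. The callables mu and sigma and the value of beta below are placeholders for the calibrated model and its confidence parameter, used purely for illustration.

import numpy as np

def hallucinated_step(state, action, eta, mu, sigma, beta=1.0):
    """Optimistic transition used by an H-UCRL-style greedy planner.
    mu, sigma: callables returning the model mean and epistemic std for (s, a);
    eta: auxiliary 'hallucinated' action in [-1, 1]^d chosen by the planner."""
    eta = np.clip(eta, -1.0, 1.0)
    return mu(state, action) + beta * sigma(state, action) * eta

# toy usage with a made-up 2-d linear model
mu = lambda s, a: 0.9 * s + 0.1 * a
sigma = lambda s, a: 0.05 * np.ones_like(s)
s_next = hallucinated_step(np.zeros(2), np.ones(2), np.array([1.0, -1.0]), mu, sigma)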
Practical Low-Rank Communication Compression in Decentralized Deep Learning
https://papers.nips.cc/paper_files/paper/2020/hash/a376802c0811f1b9088828288eb0d3f0-Abstract.html
Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi
https://papers.nips.cc/paper_files/paper/2020/hash/a376802c0811f1b9088828288eb0d3f0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a376802c0811f1b9088828288eb0d3f0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10912-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a376802c0811f1b9088828288eb0d3f0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a376802c0811f1b9088828288eb0d3f0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a376802c0811f1b9088828288eb0d3f0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a376802c0811f1b9088828288eb0d3f0-Supplemental.zip
Lossy gradient compression has become a practical tool for overcoming the communication bottleneck in centrally coordinated distributed training of machine learning models. However, algorithms for decentralized training with compressed communication over arbitrary connected networks have been more complicated, requiring additional memory and hyperparameters. We introduce a simple algorithm that directly compresses the model differences between neighboring workers using low-rank linear compressors. We prove that our method does not require any additional hyperparameters, converges faster than prior methods, and is asymptotically independent of both the network and the compression. Inspired by the PowerSGD algorithm for centralized deep learning, we execute power-iteration steps on model differences to maximize the information transferred per bit. Out of the box, these compressors perform on par with state-of-the-art tuned compression algorithms in a series of deep learning benchmarks.
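A minimal sketch of the compression step between two neighboring workers, assuming each layer's parameter difference is treated as a matrix: one power-iteration step yields a rank-one factorization that is communicated instead of the full difference. This ignores the reuse of factors across rounds and the gossip averaging itself, so it is an illustration of the low-rank compressor only.

import numpy as np

def rank1_compress(delta, q=None, rng=np.random.default_rng(0)):
    """Compress a parameter-difference matrix delta of shape (m, n) into rank-1 factors."""
    if q is None:                      # in practice q would be warm-started across rounds
        q = rng.normal(size=delta.shape[1])
    p = delta @ q
    p /= np.linalg.norm(p) + 1e-12     # one power-iteration step
    q = delta.T @ p
    return p, q                        # a worker sends p and q instead of delta

w_i = np.random.default_rng(1).normal(size=(32, 16))
w_j = np.random.default_rng(2).normal(size=(32, 16))
p, q = rank1_compress(w_i - w_j)
approx_delta = np.outer(p, q)          # receiver reconstructs a low-rank estimate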
Mutual exclusivity as a challenge for deep neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/a378383b89e6719e15cd1aa45478627c-Abstract.html
Kanishk Gandhi, Brenden M. Lake
https://papers.nips.cc/paper_files/paper/2020/hash/a378383b89e6719e15cd1aa45478627c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a378383b89e6719e15cd1aa45478627c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10913-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a378383b89e6719e15cd1aa45478627c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a378383b89e6719e15cd1aa45478627c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a378383b89e6719e15cd1aa45478627c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a378383b89e6719e15cd1aa45478627c-Supplemental.pdf
Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not vanilla neural architectures have an ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing task-general neural networks that learn through mutual exclusivity, which remains an open challenge.
3D Shape Reconstruction from Vision and Touch
https://papers.nips.cc/paper_files/paper/2020/hash/a3842ed7b3d0fe3ac263bcabd2999790-Abstract.html
Edward Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal
https://papers.nips.cc/paper_files/paper/2020/hash/a3842ed7b3d0fe3ac263bcabd2999790-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a3842ed7b3d0fe3ac263bcabd2999790-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10914-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a3842ed7b3d0fe3ac263bcabd2999790-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a3842ed7b3d0fe3ac263bcabd2999790-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a3842ed7b3d0fe3ac263bcabd2999790-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a3842ed7b3d0fe3ac263bcabd2999790-Supplemental.pdf
When a toddler is presented with a new toy, their instinctual behaviour is to pick it up and inspect it with their hand and eyes in tandem, clearly searching over its surface to properly understand what they are playing with. At any instant, touch provides high-fidelity localized information while vision provides complementary global context. However, in 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to multi-modal shape understanding which encourages a similar fusion of vision and touch information. To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves over single-modality baselines; (2) our approach outperforms alternative modality-fusion methods and strongly benefits from the proposed chart-based structure; (3) the reconstruction quality increases with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
GradAug: A New Regularization Method for Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/a3a3e8b30dd6eadfc78c77bb2b8e6b60-Abstract.html
Taojiannan Yang, Sijie Zhu, Chen Chen
https://papers.nips.cc/paper_files/paper/2020/hash/a3a3e8b30dd6eadfc78c77bb2b8e6b60-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a3a3e8b30dd6eadfc78c77bb2b8e6b60-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10915-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a3a3e8b30dd6eadfc78c77bb2b8e6b60-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a3a3e8b30dd6eadfc78c77bb2b8e6b60-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a3a3e8b30dd6eadfc78c77bb2b8e6b60-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a3a3e8b30dd6eadfc78c77bb2b8e6b60-Supplemental.pdf
We propose a new regularization method to alleviate over-fitting in deep neural networks. The key idea is to use randomly transformed training samples to regularize a set of sub-networks, obtained by sampling the width of the original network, during training. As such, the proposed method introduces self-guided disturbances to the raw gradients of the network and is therefore termed Gradient Augmentation (GradAug). We demonstrate that GradAug can help the network learn well-generalized and more diverse representations. Moreover, it is easy to implement and can be applied to various structures and applications. GradAug improves ResNet-50 to 78.79% on ImageNet classification, a new state-of-the-art accuracy. Combined with CutMix, it further boosts the performance to 79.67%, which outperforms an ensemble of advanced training tricks. Its generalization ability is evaluated on COCO object detection and instance segmentation, where GradAug significantly surpasses other state-of-the-art methods. GradAug is also robust to image distortions and FGSM adversarial attacks and is highly effective in low-data regimes. Code is available at \url{https://github.com/taoyang1122/GradAug}
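A hedged sketch of the loss construction: the full network sees the raw sample, randomly chosen sub-network widths see transformed versions of the same sample, and all losses are summed before the backward pass. The hooks set_width and transform are hypothetical stand-ins for a slimmable-style width switch and a stochastic input augmentation; they are not part of the authors' released API.

import torch
import torch.nn.functional as F

def gradaug_loss(model, set_width, transform, x, y, widths=(0.9, 0.8, 0.7)):
    """Combined GradAug-style loss for one mini-batch.
    model: a classifier; set_width(model, w): configures a sub-network of relative width w;
    transform(x): a random input transformation (e.g., random rescaling or flipping)."""
    set_width(model, 1.0)                        # full network on the raw input
    loss = F.cross_entropy(model(x), y)
    for w in widths:                             # sub-networks on transformed inputs
        set_width(model, w)
        loss = loss + F.cross_entropy(model(transform(x)), y)
    set_width(model, 1.0)                        # restore the full network before the step
    return loss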
An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay
https://papers.nips.cc/paper_files/paper/2020/hash/a3bf6e4db673b6449c2f7d13ee6ec9c0-Abstract.html
Scott Fujimoto, David Meger, Doina Precup
https://papers.nips.cc/paper_files/paper/2020/hash/a3bf6e4db673b6449c2f7d13ee6ec9c0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a3bf6e4db673b6449c2f7d13ee6ec9c0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10916-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a3bf6e4db673b6449c2f7d13ee6ec9c0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a3bf6e4db673b6449c2f7d13ee6ec9c0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a3bf6e4db673b6449c2f7d13ee6ec9c0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a3bf6e4db673b6449c2f7d13ee6ec9c0-Supplemental.pdf
Prioritized Experience Replay (PER) is a deep reinforcement learning technique in which agents learn from transitions sampled with non-uniform probability proportional to their temporal-difference error. We show that any loss function evaluated with non-uniformly sampled data can be transformed into another, uniformly sampled loss function with the same expected gradient. Surprisingly, we find that in some environments PER can be replaced entirely by this new loss function without any impact on empirical performance. Furthermore, this relationship suggests a new branch of improvements to PER that correct its uniformly sampled loss-function equivalent. We demonstrate the effectiveness of our proposed modifications to PER and the equivalent loss function in several MuJoCo and Atari environments.
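The core identity is easy to check numerically: sampling transitions with probabilities p_i and taking plain squared-error gradients has the same expected gradient as sampling uniformly and minimizing a loss reweighted by N * p_i, with p_i treated as a constant. The snippet below is a toy sanity check of that identity, not the paper's full construction.

import numpy as np

rng = np.random.default_rng(0)
N = 1000
delta = rng.normal(size=N)                 # TD errors
p = np.abs(delta) ** 0.6
p /= p.sum()                               # PER-style sampling probabilities

grad_per = np.sum(p * delta)               # E_{i~p}[ d/dq 0.5*delta_i^2 ] = sum_i p_i * delta_i
grad_uniform = np.mean(N * p * delta)      # E_{i~U}[ d/dq 0.5*N*p_i*delta_i^2 ], p_i held fixed
assert np.allclose(grad_per, grad_uniform)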
Learning Utilities and Equilibria in Non-Truthful Auctions
https://papers.nips.cc/paper_files/paper/2020/hash/a3c788c57e423fa9c177544a4d5d1239-Abstract.html
Hu Fu, Tao Lin
https://papers.nips.cc/paper_files/paper/2020/hash/a3c788c57e423fa9c177544a4d5d1239-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a3c788c57e423fa9c177544a4d5d1239-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10917-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a3c788c57e423fa9c177544a4d5d1239-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a3c788c57e423fa9c177544a4d5d1239-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a3c788c57e423fa9c177544a4d5d1239-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a3c788c57e423fa9c177544a4d5d1239-Supplemental.pdf
In non-truthful auctions, agents' utility for a strategy depends on the strategies of the opponents and also on the prior distribution over their private types; the set of Bayes Nash equilibria generally has an intricate dependence on the prior. Using the first-price auction as our main demonstrating example, we show that $\tilde O(n / \epsilon^2)$ samples from the prior with $n$ agents suffice for an algorithm to learn the interim utilities for all monotone bidding strategies. As a consequence, this number of samples suffices for learning all approximate equilibria. We give an almost matching (up to polylogarithmic factors) lower bound on the sample complexity for learning utilities. We also consider a setting where agents must pay a search cost to discover their own types. Drawing on a connection between this setting and the first-price auction, discovered recently by Kleinberg et al. (2016), we show that $\tilde O(n / \epsilon^2)$ samples suffice for utilities and equilibria to be estimated in a near welfare-optimal descending auction in this setting. En route, we improve the sample complexity bound, recently obtained by Guo et al. (2019), for the Pandora's Box problem, a classical model of sequential consumer search.
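To make the object being learned concrete, the interim utility of bidding b with value v in a first-price auction can be estimated from samples of the opponents' values under a candidate monotone bidding strategy. The estimator below is a plain Monte Carlo sketch for intuition only; it is not the paper's algorithm or its sample-complexity analysis.

import numpy as np

def interim_utility(v, b, opponent_values, beta):
    """v, b: own value and bid; opponent_values: (m, n-1) sampled opponent values;
    beta: opponents' (monotone) bidding strategy applied elementwise."""
    opp_bids = beta(opponent_values)
    win = b > opp_bids.max(axis=1)          # ties ignored in this toy version
    return np.mean(win) * (v - b)           # first price: pay own bid only when winning

rng = np.random.default_rng(0)
samples = rng.uniform(size=(10000, 3))      # 3 opponents with Uniform[0, 1] values
u_hat = interim_utility(v=0.8, b=0.5, opponent_values=samples, beta=lambda x: 0.75 * x)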
Rational neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/a3f390d88e4c41f2747bfa2f1b5f87db-Abstract.html
Nicolas Boulle, Yuji Nakatsukasa, Alex Townsend
https://papers.nips.cc/paper_files/paper/2020/hash/a3f390d88e4c41f2747bfa2f1b5f87db-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a3f390d88e4c41f2747bfa2f1b5f87db-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10918-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a3f390d88e4c41f2747bfa2f1b5f87db-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a3f390d88e4c41f2747bfa2f1b5f87db-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a3f390d88e4c41f2747bfa2f1b5f87db-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a3f390d88e4c41f2747bfa2f1b5f87db-Supplemental.pdf
We consider neural networks with rational activation functions. The choice of the nonlinear activation function in deep learning architectures is crucial and heavily impacts the performance of a neural network. We establish optimal bounds in terms of network complexity and prove that rational neural networks approximate smooth functions more efficiently than ReLU networks, with exponentially smaller depth. The flexibility and smoothness of rational activation functions make them an attractive alternative to ReLU, as we demonstrate with numerical experiments.
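For illustration, a low-degree rational activation is simply a ratio of two polynomials with trainable coefficients, applied elementwise; the degrees, the denominator safeguard, and the initialization below are illustrative choices rather than the exact configuration studied in the paper.

import numpy as np

def rational_activation(x, a=(0.03, 0.5, 1.0, 0.02), b=(0.3, 1.0)):
    """Elementwise rational function P(x)/Q(x) with deg P = 3 and deg Q = 2.
    Inside a network, a and b would be trainable per-layer parameters."""
    P = a[0] + a[1] * x + a[2] * x**2 + a[3] * x**3
    Q = 1.0 + np.abs(b[0] * x + b[1] * x**2)   # keeps the denominator away from zero
    return P / Q

x = np.linspace(-3, 3, 7)
y = rational_activation(x)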
DISK: Learning local features with policy gradient
https://papers.nips.cc/paper_files/paper/2020/hash/a42a596fc71e17828440030074d15e74-Abstract.html
Michał Tyszkiewicz, Pascal Fua, Eduard Trulls
https://papers.nips.cc/paper_files/paper/2020/hash/a42a596fc71e17828440030074d15e74-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a42a596fc71e17828440030074d15e74-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10919-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a42a596fc71e17828440030074d15e74-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a42a596fc71e17828440030074d15e74-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a42a596fc71e17828440030074d15e74-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a42a596fc71e17828440030074d15e74-Supplemental.pdf
Local feature frameworks are difficult to learn in an end-to-end fashion due to the discreteness inherent to the selection and matching of sparse keypoints. We introduce DISK (DIScrete Keypoints), a novel method that overcomes these obstacles by leveraging principles from Reinforcement Learning (RL), optimizing end-to-end for a high number of correct feature matches. Our simple yet expressive probabilistic model lets us keep the training and inference regimes close, while maintaining good enough convergence properties to reliably train from scratch. Our features can be extracted very densely while remaining discriminative, challenging commonly held assumptions about what constitutes a good keypoint, as showcased in Fig. 1, and deliver state-of-the-art results on three public benchmarks.
Transfer Learning via $\ell_1$ Regularization
https://papers.nips.cc/paper_files/paper/2020/hash/a4a83056b58ff983d12c72bb17996243-Abstract.html
Masaaki Takada, Hironori Fujisawa
https://papers.nips.cc/paper_files/paper/2020/hash/a4a83056b58ff983d12c72bb17996243-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a4a83056b58ff983d12c72bb17996243-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10920-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a4a83056b58ff983d12c72bb17996243-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a4a83056b58ff983d12c72bb17996243-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a4a83056b58ff983d12c72bb17996243-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a4a83056b58ff983d12c72bb17996243-Supplemental.pdf
Machine learning algorithms typically require abundant data under a stationary environment. However, environments are nonstationary in many real-world applications. Critical issues lie in how to effectively adapt models under an ever-changing environment. We propose a method for transferring knowledge from a source domain to a target domain via $\ell_1$ regularization in high dimension. We incorporate $\ell_1$ regularization of differences between source and target parameters in addition to an ordinary $\ell_1$ regularization. Hence, our method yields sparsity for both the estimates themselves and changes of the estimates. The proposed method has a tight estimation error bound under a stationary environment, and the estimate remains unchanged from the source estimate under small residuals. Moreover, the estimate is consistent with the underlying function, even when the source estimate is mistaken due to nonstationarity. Empirical results demonstrate that the proposed method effectively balances stability and plasticity.
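In its simplest linear-regression form, the transfer objective can be written down directly. The snippet below only evaluates the doubly penalized loss for a candidate beta (it does not implement the authors' optimizer), with beta_src denoting the estimate carried over from the source domain.

import numpy as np

def transfer_lasso_objective(beta, X, y, beta_src, lam1=0.1, lam2=0.1):
    """0.5/n * ||y - X beta||^2 + lam1 * ||beta||_1 + lam2 * ||beta - beta_src||_1."""
    n = X.shape[0]
    fit = 0.5 / n * np.sum((y - X @ beta) ** 2)
    return fit + lam1 * np.sum(np.abs(beta)) + lam2 * np.sum(np.abs(beta - beta_src))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
beta_src = np.zeros(10); beta_src[:3] = 1.0          # sparse source-domain estimate
y = X @ beta_src + 0.1 * rng.normal(size=50)
val = transfer_lasso_objective(beta_src, X, y, beta_src)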
GOCor: Bringing Globally Optimized Correspondence Volumes into Your Neural Network
https://papers.nips.cc/paper_files/paper/2020/hash/a4a8a31750a23de2da88ef6a491dfd5c-Abstract.html
Prune Truong, Martin Danelljan, Luc V. Gool, Radu Timofte
https://papers.nips.cc/paper_files/paper/2020/hash/a4a8a31750a23de2da88ef6a491dfd5c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a4a8a31750a23de2da88ef6a491dfd5c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10921-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a4a8a31750a23de2da88ef6a491dfd5c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a4a8a31750a23de2da88ef6a491dfd5c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a4a8a31750a23de2da88ef6a491dfd5c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a4a8a31750a23de2da88ef6a491dfd5c-Supplemental.pdf
The feature correlation layer serves as a key neural network module in numerous computer vision problems that involve dense correspondences between image pairs. It predicts a correspondence volume by evaluating dense scalar products between feature vectors extracted from pairs of locations in two images. However, this point-to-point feature comparison is insufficient when disambiguating multiple similar regions in an image, severely affecting the performance of the end task. We propose GOCor, a fully differentiable dense matching module, acting as a direct replacement to the feature correlation layer. The correspondence volume generated by our module is the result of an internal optimization procedure that explicitly accounts for similar regions in the scene. Moreover, our approach is capable of effectively learning spatial matching priors to resolve further matching ambiguities. We analyze our GOCor module in extensive ablative experiments. When integrated into state-of-the-art networks, our approach significantly outperforms the feature correlation layer for the tasks of geometric matching, optical flow, and dense semantic matching. The code and trained models will be made available at github.com/PruneTruong/GOCor.
Deep Inverse Q-learning with Constraints
https://papers.nips.cc/paper_files/paper/2020/hash/a4c42bfd5f5130ddf96e34a036c75e0a-Abstract.html
Gabriel Kalweit, Maria Huegle, Moritz Werling, Joschka Boedecker
https://papers.nips.cc/paper_files/paper/2020/hash/a4c42bfd5f5130ddf96e34a036c75e0a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a4c42bfd5f5130ddf96e34a036c75e0a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10922-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a4c42bfd5f5130ddf96e34a036c75e0a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a4c42bfd5f5130ddf96e34a036c75e0a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a4c42bfd5f5130ddf96e34a036c75e0a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a4c42bfd5f5130ddf96e34a036c75e0a-Supplemental.pdf
Popular Maximum Entropy Inverse Reinforcement Learning approaches require the computation of expected state visitation frequencies for the optimal policy under an estimate of the reward function. This usually requires intermediate value estimation in the inner loop of the algorithm, slowing down convergence considerably. In this work, we introduce a novel class of algorithms that only needs to solve the MDP underlying the demonstrated behavior once to recover the expert policy. This is possible through a formulation that exploits a probabilistic behavior assumption for the demonstrations within the structure of Q-learning. We propose Inverse Action-value Iteration, which is able to fully recover the underlying reward of an external agent analytically in closed form. We further provide an accompanying class of sampling-based variants that do not depend on a model of the environment. We show how to extend this class of algorithms to continuous state spaces via function approximation and how to estimate a corresponding action-value function, leading to a policy as close as possible to the policy of the external agent, while optionally satisfying a list of predefined hard constraints. We evaluate the resulting algorithms, called Inverse Action-value Iteration, Inverse Q-learning and Deep Inverse Q-learning, on the Objectworld benchmark, showing a speedup of up to several orders of magnitude compared to (Deep) Max-Entropy algorithms. We further apply Deep Constrained Inverse Q-learning to the task of learning autonomous lane changes in the open-source simulator SUMO, achieving competent driving after training on data corresponding to 30 minutes of demonstrations.
Optimistic Dual Extrapolation for Coherent Non-monotone Variational Inequalities
https://papers.nips.cc/paper_files/paper/2020/hash/a4df48d0b71376788fee0b92746fd7d5-Abstract.html
Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, Yi Ma
https://papers.nips.cc/paper_files/paper/2020/hash/a4df48d0b71376788fee0b92746fd7d5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a4df48d0b71376788fee0b92746fd7d5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10923-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a4df48d0b71376788fee0b92746fd7d5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a4df48d0b71376788fee0b92746fd7d5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a4df48d0b71376788fee0b92746fd7d5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a4df48d0b71376788fee0b92746fd7d5-Supplemental.pdf
The optimization problems associated with training generative adversarial neural networks can be largely reduced to certain {\em non-monotone} variational inequality problems (VIPs), whereas existing convergence results are mostly based on monotone or strongly monotone assumptions. In this paper, we propose {\em optimistic dual extrapolation (OptDE)}, a method that only performs {\em one} gradient evaluation per iteration. We show that OptDE is provably convergent to {\em a strong solution} under different coherent non-monotone assumptions. In particular, when a {\em weak solution} exists, the convergence rate of our method is $O(1/{\epsilon^{2}})$, which matches the best existing result of the methods with two gradient evaluations. Further, when a {\em $\sigma$-weak solution} exists, the convergence guarantee is improved to the linear rate $O(\log\frac{1}{\epsilon})$. Along the way--as a byproduct of our inquiries into non-monotone variational inequalities--we provide the near-optimal $O\big(\frac{1}{\epsilon}\log \frac{1}{\epsilon}\big)$ convergence guarantee in terms of restricted strong merit function for monotone variational inequalities. We also show how our results can be naturally generalized to the stochastic setting, and obtain corresponding new convergence results. Taken together, our results contribute to the broad landscape of variational inequality--both non-monotone and monotone alike--by providing a novel and more practical algorithm with the state-of-the-art convergence guarantees.
Prediction with Corrupted Expert Advice
https://papers.nips.cc/paper_files/paper/2020/hash/a512294422de868f8474d22344636f16-Abstract.html
Idan Amir, Idan Attias, Tomer Koren, Yishay Mansour, Roi Livni
https://papers.nips.cc/paper_files/paper/2020/hash/a512294422de868f8474d22344636f16-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/a512294422de868f8474d22344636f16-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10924-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/a512294422de868f8474d22344636f16-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/a512294422de868f8474d22344636f16-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/a512294422de868f8474d22344636f16-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/a512294422de868f8474d22344636f16-Supplemental.pdf
We revisit the fundamental problem of prediction with expert advice, in a setting where the environment is benign and generates losses stochastically, but the feedback observed by the learner is subject to a moderate adversarial corruption. We prove that a variant of the classical Multiplicative Weights algorithm with decreasing step sizes achieves constant regret in this setting and performs optimally in a wide range of environments, regardless of the magnitude of the injected corruption. Our results reveal a surprising disparity between the often comparable Follow the Regularized Leader (FTRL) and Online Mirror Descent (OMD) frameworks: we show that for experts in the corrupted stochastic regime, the regret performance of OMD is in fact strictly inferior to that of FTRL.
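The algorithmic message is compact enough to sketch: multiplicative weights over experts with a decreasing step size. The 1/t-style schedule and the constant below are placeholders for illustration, not the exact schedule analyzed in the paper.

import numpy as np

def mw_decreasing_eta(losses, c=1.0, seed=0):
    """Multiplicative weights with a decreasing step size.
    losses: (T, K) matrix of observed (possibly corrupted) expert losses; returns picks."""
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    L = np.zeros(K)                       # cumulative observed losses per expert
    choices = []
    for t in range(T):
        eta = c / (t + 1)                 # decreasing step size (illustrative schedule)
        w = np.exp(-eta * (L - L.min()))  # shift by the minimum for numerical stability
        p = w / w.sum()
        choices.append(rng.choice(K, p=p))
        L += losses[t]
    return np.array(choices)

loss_matrix = np.random.default_rng(1).random((100, 5))
picks = mw_decreasing_eta(loss_matrix)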