Dataset columns (one value per line in each record below): title, url, authors, detail_url, tags, AuthorFeedback, Bibtex, MetaReview, Paper, Review, Supplemental, abstract.
Small Nash Equilibrium Certificates in Very Large Games
https://papers.nips.cc/paper_files/paper/2020/hash/4fbe073f17f161810fdf3dab1307b30f-Abstract.html
Brian Zhang, Tuomas Sandholm
https://papers.nips.cc/paper_files/paper/2020/hash/4fbe073f17f161810fdf3dab1307b30f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4fbe073f17f161810fdf3dab1307b30f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10325-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4fbe073f17f161810fdf3dab1307b30f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4fbe073f17f161810fdf3dab1307b30f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4fbe073f17f161810fdf3dab1307b30f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4fbe073f17f161810fdf3dab1307b30f-Supplemental.pdf
In many game settings, the game is not explicitly given but is only accessible by playing it. While there have been impressive demonstrations in such settings, prior techniques have not offered safety guarantees, that is, guarantees on the game-theoretic exploitability of the computed strategies. In this paper we introduce an approach that shows that it is possible to provide exploitability guarantees in such settings without ever exploring the entire game. We introduce a notion of a certificate of an extensive-form approximate Nash equilibrium. For verifying a certificate, we give an algorithm that runs in time linear in the size of the certificate rather than the size of the whole game. In zero-sum games, we further show that an optimal certificate---given the exploration so far---can be computed with any standard game-solving algorithm (e.g., using a linear program or counterfactual regret minimization). However, unlike in the cases of normal-form or perfect-information games, we show that certain families of extensive-form games do not have small approximate certificates, even after making extremely nice assumptions on the structure of the game. Despite this difficulty, we find experimentally that very small certificates, even exact ones, often exist in large and even in infinite games. Overall, our approach enables one to try one's favorite exploration strategies while offering exploitability guarantees, thereby decoupling the exploration strategy from the equilibrium-finding process.
Training Linear Finite-State Machines
https://papers.nips.cc/paper_files/paper/2020/hash/4fc28b7093b135c21c7183ac07e928a6-Abstract.html
Arash Ardakani, Amir Ardakani, Warren Gross
https://papers.nips.cc/paper_files/paper/2020/hash/4fc28b7093b135c21c7183ac07e928a6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4fc28b7093b135c21c7183ac07e928a6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10326-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4fc28b7093b135c21c7183ac07e928a6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4fc28b7093b135c21c7183ac07e928a6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4fc28b7093b135c21c7183ac07e928a6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4fc28b7093b135c21c7183ac07e928a6-Supplemental.pdf
A finite-state machine (FSM) is a computation model to process binary strings in sequential circuits. Hence, a single-input linear FSM is conventionally used to implement complex single-input functions, such as tanh and exponentiation functions, in the stochastic computing (SC) domain where continuous values are represented by sequences of random bits. In this paper, we introduce a method that can train a multi-layer FSM-based network where FSMs are connected to every FSM in the previous and the next layer. We show that the proposed FSM-based network can synthesize multi-input complex functions such as 2D Gabor filters and can perform non-sequential tasks such as image classification on stochastic streams with no multiplication since FSMs are implemented by look-up tables only. Inspired by the capability of FSMs in processing binary streams, we then propose an FSM-based model that can process time series data when performing temporal tasks such as character-level language modeling. Unlike long short-term memories (LSTMs) that unroll the network for each input time step and perform back-propagation on the unrolled network, our FSM-based model only needs to backpropagate gradients for the current input time step while it is still capable of learning long-term dependencies. Therefore, our FSM-based model can learn extremely long-term dependencies as it requires 1/l memory storage during training compared to LSTMs, where l is the number of time steps. Moreover, our FSM-based model reduces the power consumption of training on a GPU by 33% compared to an LSTM model of the same size.
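To make the conventional single-input FSM concrete, the sketch below implements the classic saturating up/down counter used in stochastic computing to approximate tanh on a bipolar bitstream. The state count and stream length are illustrative assumptions, and this is the baseline construction the abstract alludes to, not the paper's trainable multi-layer FSM network.

```python
import numpy as np

def stanh_fsm(bitstream, n_states=8):
    """Saturating up/down counter FSM on a bipolar stochastic stream
    (1 -> +1, 0 -> -1); classically approximates tanh(n_states/2 * x)."""
    state = n_states // 2                      # start in the middle
    out = np.empty_like(bitstream)
    for i, b in enumerate(bitstream):
        # move up on a 1, down on a 0, saturating at both ends
        state = min(state + 1, n_states - 1) if b else max(state - 1, 0)
        out[i] = 1 if state >= n_states // 2 else 0
    return out

# encode x in [-1, 1] as a bipolar stream, run the FSM, decode the output
rng = np.random.default_rng(0)
x = 0.5
stream = (rng.random(20000) < (x + 1) / 2).astype(np.int8)
y = stanh_fsm(stream, n_states=8)
print(2 * y.mean() - 1, np.tanh(8 / 2 * x))    # FSM output vs. tanh(N/2 * x)
```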
Efficient active learning of sparse halfspaces with arbitrary bounded noise
https://papers.nips.cc/paper_files/paper/2020/hash/5034a5d62f91942d2a7aeaf527dfe111-Abstract.html
Chicheng Zhang, Jie Shen, Pranjal Awasthi
https://papers.nips.cc/paper_files/paper/2020/hash/5034a5d62f91942d2a7aeaf527dfe111-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5034a5d62f91942d2a7aeaf527dfe111-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10327-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5034a5d62f91942d2a7aeaf527dfe111-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5034a5d62f91942d2a7aeaf527dfe111-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5034a5d62f91942d2a7aeaf527dfe111-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5034a5d62f91942d2a7aeaf527dfe111-Supplemental.pdf
We study active learning of homogeneous $s$-sparse halfspaces in $\mathbb{R}^d$ under the setting where the unlabeled data distribution is isotropic log-concave and each label is flipped with probability at most $\eta$ for a parameter $\eta \in \big[0, \frac12\big)$, known as the bounded noise model. Even in the presence of mild label noise, i.e. when $\eta$ is a small constant, this is a challenging problem and only recently have label complexity bounds of the form $\tilde{O}(s \cdot \mathrm{polylog}(d, \frac{1}{\epsilon}))$ been established in [Zhang 2018] for computationally efficient algorithms. In contrast, under high levels of label noise, the label complexity bounds achieved by computationally efficient algorithms are much worse: the best known result [Awasthi et al. 2016] provides a computationally efficient algorithm with label complexity $\tilde{O}((s \ln d/\epsilon)^{\mathrm{poly}(1/(1-2\eta))})$, which is label-efficient only when the noise rate $\eta$ is a fixed constant. In this work, we substantially improve on it by designing a polynomial-time algorithm for active learning of $s$-sparse halfspaces, with a label complexity of $\tilde{O}\big(\frac{s}{(1-2\eta)^4} \mathrm{polylog}(d, \frac{1}{\epsilon}) \big)$. This is the first efficient algorithm with label complexity polynomial in $\frac{1}{1-2\eta}$ in this setting, which is label-efficient even for $\eta$ arbitrarily close to $\frac12$. Our active learning algorithm and its theoretical guarantees also immediately translate to new state-of-the-art label and sample complexity results for full-dimensional active and passive halfspace learning under arbitrary bounded noise.
Swapping Autoencoder for Deep Image Manipulation
https://papers.nips.cc/paper_files/paper/2020/hash/50905d7b2216bfeccb5b41016357176b-Abstract.html
Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei Efros, Richard Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/50905d7b2216bfeccb5b41016357176b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/50905d7b2216bfeccb5b41016357176b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10328-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/50905d7b2216bfeccb5b41016357176b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/50905d7b2216bfeccb5b41016357176b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/50905d7b2216bfeccb5b41016357176b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/50905d7b2216bfeccb5b41016357176b-Supplemental.pdf
Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging. We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation, rather than random sampling. The key idea is to encode an image into two independent components and enforce that any swapped combination maps to a realistic image. In particular, we encourage the components to represent structure and texture, by enforcing one component to encode co-occurrent patch statistics across different parts of the image. As our method is trained with an encoder, finding the latent codes for a new input image becomes trivial, rather than cumbersome. As a result, our method enables us to manipulate real input images in various ways, including texture swapping, local and global editing, and latent code vector arithmetic. Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
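A minimal sketch of the swapping idea, assuming toy stand-in modules rather than the authors' exact architecture (which uses convolutional encoders, a StyleGAN-like generator, and a patch co-occurrence discriminator): encode two images into structure and texture codes, decode a reconstruction and a swapped hybrid, and push both toward realism.

```python
import torch
import torch.nn as nn

# Hypothetical toy modules, purely for illustration of the data flow.
class ToyEncoder(nn.Module):
    def __init__(self, d=64, d_code=16):
        super().__init__()
        self.to_structure = nn.Linear(d, d_code)
        self.to_texture = nn.Linear(d, d_code)
    def forward(self, x):
        return self.to_structure(x), self.to_texture(x)

class ToyGenerator(nn.Module):
    def __init__(self, d=64, d_code=16):
        super().__init__()
        self.decode = nn.Linear(2 * d_code, d)
    def forward(self, s, t):
        return self.decode(torch.cat([s, t], dim=-1))

E, G = ToyEncoder(), ToyGenerator()
D = nn.Linear(64, 1)                       # stand-in "realism" critic

x_a, x_b = torch.randn(8, 64), torch.randn(8, 64)
s_a, t_a = E(x_a)                          # structure and texture of image a
s_b, t_b = E(x_b)
recon = G(s_a, t_a)                        # reconstruction of a
hybrid = G(s_a, t_b)                       # structure of a, texture of b
loss = (recon - x_a).abs().mean() - D(recon).mean() - D(hybrid).mean()
loss.backward()
```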
Self-Supervised Few-Shot Learning on Point Clouds
https://papers.nips.cc/paper_files/paper/2020/hash/50c1f44e426560f3f2cdcb3e19e39903-Abstract.html
Charu Sharma, Manohar Kaul
https://papers.nips.cc/paper_files/paper/2020/hash/50c1f44e426560f3f2cdcb3e19e39903-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/50c1f44e426560f3f2cdcb3e19e39903-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10329-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/50c1f44e426560f3f2cdcb3e19e39903-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/50c1f44e426560f3f2cdcb3e19e39903-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/50c1f44e426560f3f2cdcb3e19e39903-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/50c1f44e426560f3f2cdcb3e19e39903-Supplemental.pdf
The increased availability of massive point clouds coupled with their utility in a wide variety of applications such as robotics, shape synthesis, and self-driving cars has attracted increased attention from both industry and academia. Recently, deep neural networks operating on labeled point clouds have shown promising results on supervised learning tasks like classification and segmentation. However, supervised learning leads to the cumbersome task of annotating the point clouds. To combat this problem, we propose two novel self-supervised pre-training tasks that encode a hierarchical partitioning of the point clouds using a cover-tree, where point cloud subsets lie within balls of varying radii at each level of the cover-tree. Furthermore, our self-supervised learning network is restricted to pre-train on the support set (comprising scarce training examples) used to train the downstream network in a few-shot learning (FSL) setting. Finally, the fully-trained self-supervised network's point embeddings are input to the downstream task's network. We present a comprehensive empirical evaluation of our method on both downstream classification and segmentation tasks and show that supervised methods pre-trained with our self-supervised learning method significantly improve the accuracy of state-of-the-art methods. Additionally, our method also outperforms previous unsupervised methods in downstream classification tasks.
Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC
https://papers.nips.cc/paper_files/paper/2020/hash/50cf0fe63e0ff857e1c9d01d827267ca-Abstract.html
Arun Ganesh, Kunal Talwar
https://papers.nips.cc/paper_files/paper/2020/hash/50cf0fe63e0ff857e1c9d01d827267ca-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/50cf0fe63e0ff857e1c9d01d827267ca-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10330-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/50cf0fe63e0ff857e1c9d01d827267ca-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/50cf0fe63e0ff857e1c9d01d827267ca-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/50cf0fe63e0ff857e1c9d01d827267ca-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/50cf0fe63e0ff857e1c9d01d827267ca-Supplemental.pdf
Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution $\exp(-f)$ for a suitable function $f$. When the domain of the distribution is high-dimensional, this sampling can be challenging. Using heuristic sampling schemes such as Gibbs sampling does not necessarily lead to provable privacy. When $f$ is convex, techniques from log-concave sampling lead to polynomial-time algorithms, albeit with large polynomials. Langevin dynamics-based algorithms offer much faster alternatives under some distance measures such as statistical distance. In this work, we establish rapid convergence for these algorithms under distance measures more suitable for differential privacy. For smooth, strongly-convex $f$, we give the first results proving convergence in R\'enyi divergence. This gives us fast differentially private algorithms for such $f$. Our techniques are simple and generic, and they also apply to underdamped Langevin dynamics.
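For orientation, the sketch below is a plain unadjusted (overdamped) Langevin iteration for sampling from $\exp(-f)$, the kind of discretized sampler whose Rényi-divergence convergence the paper studies; the step size and step count are illustrative, and this is not the paper's specific algorithm.

```python
import numpy as np

def langevin_sampler(grad_f, x0, eta=1e-3, n_steps=5000, rng=None):
    """Unadjusted overdamped Langevin dynamics targeting exp(-f):
    x_{k+1} = x_k - eta * grad f(x_k) + sqrt(2 eta) * N(0, I)."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x) + np.sqrt(2.0 * eta) * noise
    return x

# example: sample from a standard Gaussian, where f(x) = ||x||^2 / 2
sample = langevin_sampler(lambda x: x, x0=np.zeros(3))
```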
Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE
https://papers.nips.cc/paper_files/paper/2020/hash/510f2318f324cf07fce24c3a4b89c771-Abstract.html
Ding Zhou, Xue-Xin Wei
https://papers.nips.cc/paper_files/paper/2020/hash/510f2318f324cf07fce24c3a4b89c771-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/510f2318f324cf07fce24c3a4b89c771-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10331-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/510f2318f324cf07fce24c3a4b89c771-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/510f2318f324cf07fce24c3a4b89c771-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/510f2318f324cf07fce24c3a4b89c771-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/510f2318f324cf07fce24c3a4b89c771-Supplemental.pdf
The ability to record activities from hundreds of neurons simultaneously in the brain has placed an increasing demand for developing appropriate statistical techniques to analyze such data. Recently, deep generative models have been proposed to fit neural population responses. While these methods are flexible and expressive, the downside is that they can be difficult to interpret and identify. To address this problem, we propose a method that integrates key ingredients from latent models and traditional neural encoding models. Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoders, which we adapt to make them appropriate for neuroscience applications. Specifically, we propose to construct latent variable models of neural activity while simultaneously modeling the relation between the latent and task variables (non-neural variables, e.g. sensory, motor, and other externally observable states). The incorporation of task variables results in models that are not only more constrained, but also show qualitative improvements in interpretability and identifiability. We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex. We demonstrate that pi-VAE not only fits the data better, but also provides unexpected novel insights into the structure of the neural codes.
RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/51200d29d1fc15f5a71c1dab4bb54f7c-Abstract.html
Caglar Gulcehre, Ziyu Wang, Alexander Novikov, Thomas Paine, Sergio Gómez, Konrad Zolna, Rishabh Agarwal, Josh S. Merel, Daniel J. Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matthew Hoffman, Nicolas Heess, Nando de Freitas
https://papers.nips.cc/paper_files/paper/2020/hash/51200d29d1fc15f5a71c1dab4bb54f7c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/51200d29d1fc15f5a71c1dab4bb54f7c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10332-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/51200d29d1fc15f5a71c1dab4bb54f7c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/51200d29d1fc15f5a71c1dab4bb54f7c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/51200d29d1fc15f5a71c1dab4bb54f7c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/51200d29d1fc15f5a71c1dab4bb54f7c-Supplemental.pdf
Offline methods for reinforcement learning have the potential to help bridge the gap between reinforcement learning research and real-world applications. They make it possible to learn policies from offline datasets, thus overcoming concerns associated with online data collection in the real world, including cost, safety, and ethical concerns. In this paper, we propose a benchmark called RL Unplugged to evaluate and compare offline RL methods. RL Unplugged includes data from a diverse range of domains, including games (e.g., the Atari benchmark) and simulated motor control problems (e.g., the DM Control Suite). The datasets include domains that are partially or fully observable, use continuous or discrete actions, and have stochastic vs. deterministic dynamics. We propose detailed evaluation protocols for each domain in RL Unplugged and provide an extensive analysis of supervised learning and offline RL methods using these protocols. We will release data for all our tasks and open-source all algorithms presented in this paper. We hope that our suite of benchmarks will increase the reproducibility of experiments and make it possible to study challenging tasks with a limited computational budget, thus making RL research both more systematic and more accessible across the community. Moving forward, we view RL Unplugged as a living benchmark suite that will evolve and grow with datasets contributed by the research community and ourselves. Our project page is available on GitHub.
Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning
https://papers.nips.cc/paper_files/paper/2020/hash/512c5cad6c37edb98ae91c8a76c3a291-Abstract.html
Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2020/hash/512c5cad6c37edb98ae91c8a76c3a291-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/512c5cad6c37edb98ae91c8a76c3a291-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10333-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/512c5cad6c37edb98ae91c8a76c3a291-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/512c5cad6c37edb98ae91c8a76c3a291-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/512c5cad6c37edb98ae91c8a76c3a291-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/512c5cad6c37edb98ae91c8a76c3a291-Supplemental.pdf
The transition matrix, denoting the transition relationship from clean labels to noisy labels, is essential to build statistically consistent classifiers in label-noise learning. Existing methods for estimating the transition matrix rely heavily on estimating the noisy class posterior. However, the estimation error for the noisy class posterior could be large because of the randomness of label noise, which in turn causes the transition matrix to be poorly estimated. Therefore, in this paper, we aim to solve this problem by exploiting the divide-and-conquer paradigm. Specifically, we introduce an intermediate class to avoid directly estimating the noisy class posterior. Through this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices. We term the proposed method the dual $T$-estimator. Both theoretical analyses and empirical results illustrate the effectiveness of the dual $T$-estimator for estimating transition matrices, leading to better classification performance.
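The factorization can be illustrated with a toy sketch: under a row-stochastic convention, the clean-to-noisy transition matrix is the product of a clean-to-intermediate matrix and an intermediate-to-noisy matrix. The matrices below are made up for illustration, not estimated with the paper's procedure.

```python
import numpy as np

def compose_transitions(T_spade, T_club):
    """Dual-T idea in matrix form (row-stochastic convention,
    T[i, j] = P(to=j | from=i)):
      T_spade : clean label  -> intermediate class
      T_club  : intermediate -> noisy label
    so the clean->noisy transition factorizes as their product."""
    T_hat = T_spade @ T_club
    assert np.allclose(T_hat.sum(axis=1), 1.0)   # still row-stochastic
    return T_hat

# toy 3-class example with mild symmetric noise in each factor
eye, unif = np.eye(3), np.full((3, 3), 1 / 3)
T_spade = 0.9 * eye + 0.1 * unif
T_club = 0.8 * eye + 0.2 * unif
print(compose_transitions(T_spade, T_club))
```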
Interior Point Solving for LP-based prediction+optimisation
https://papers.nips.cc/paper_files/paper/2020/hash/51311013e51adebc3c34d2cc591fefee-Abstract.html
Jayanta Mandi, Tias Guns
https://papers.nips.cc/paper_files/paper/2020/hash/51311013e51adebc3c34d2cc591fefee-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/51311013e51adebc3c34d2cc591fefee-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10334-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/51311013e51adebc3c34d2cc591fefee-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/51311013e51adebc3c34d2cc591fefee-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/51311013e51adebc3c34d2cc591fefee-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/51311013e51adebc3c34d2cc591fefee-Supplemental.pdf
Solving optimization problems is key to decision making in many real-life analytics applications. However, the coefficients of the optimization problems are often uncertain and dependent on external factors, such as future demand or energy and stock prices. Machine learning (ML) models, especially neural networks, are increasingly being used to estimate these coefficients in a data-driven way. Hence, end-to-end predict-and-optimize approaches, which consider how effective the predicted values are for solving the optimization problem, have received increasing attention. In the case of integer linear programming problems, a popular approach to overcome their non-differentiability is to add a quadratic penalty term to the continuous relaxation, such that results from differentiating over quadratic programs can be used. Instead, we investigate the use of the more principled logarithmic barrier term, as widely used in interior point solvers for linear programming. Rather than differentiating the KKT conditions, we consider the homogeneous self-dual formulation of the LP, and we show the relation between the interior point step direction and the corresponding gradients needed for learning. Finally, our empirical experiments demonstrate that our approach performs as well as, if not better than, the state-of-the-art QPTL (Quadratic Programming task loss) formulation of Wilder et al. and the SPO approach of Elmachtoub and Grigas.
A simple normative network approximates local non-Hebbian learning in the cortex
https://papers.nips.cc/paper_files/paper/2020/hash/5133aa1d673894d5a05b9d83809b9dbe-Abstract.html
Siavash Golkar, David Lipshutz, Yanis Bahroun, Anirvan Sengupta, Dmitri Chklovskii
https://papers.nips.cc/paper_files/paper/2020/hash/5133aa1d673894d5a05b9d83809b9dbe-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5133aa1d673894d5a05b9d83809b9dbe-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10335-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5133aa1d673894d5a05b9d83809b9dbe-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5133aa1d673894d5a05b9d83809b9dbe-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5133aa1d673894d5a05b9d83809b9dbe-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5133aa1d673894d5a05b9d83809b9dbe-Supplemental.pdf
To guide behavior, the brain extracts relevant features from high-dimensional data streamed by sensory organs. Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals which provide context and task-relevant information. Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data. Mathematically, we start with a family of Reduced-Rank Regression (RRR) objective functions which include Reduced Rank (minimum) Mean Square Error (RRMSE) and Canonical Correlation Analysis (CCA), and derive novel offline and online optimization algorithms, which we call Bio-RRR. The online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex. We detail how, in our model, the calcium plateau potential can be interpreted as a backpropagating error signal. We demonstrate that, despite relying exclusively on biologically plausible local learning rules, our algorithms perform competitively with existing implementations of RRMSE and CCA.
Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks
https://papers.nips.cc/paper_files/paper/2020/hash/517f24c02e620d5a4dac1db388664a63-Abstract.html
Roman Pogodin, Peter Latham
https://papers.nips.cc/paper_files/paper/2020/hash/517f24c02e620d5a4dac1db388664a63-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/517f24c02e620d5a4dac1db388664a63-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10336-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/517f24c02e620d5a4dac1db388664a63-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/517f24c02e620d5a4dac1db388664a63-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/517f24c02e620d5a4dac1db388664a63-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/517f24c02e620d5a4dac1db388664a63-Supplemental.pdf
The state-of-the-art machine learning approach to training deep neural networks, backpropagation, is implausible for real neural networks: neurons need to know their outgoing weights; training alternates between a bottom-up forward pass (computation) and a top-down backward pass (learning); and the algorithm often needs precise labels of many data points. Biologically plausible approximations to backpropagation, such as feedback alignment, solve the weight transport problem, but not the other two. Thus, fully biologically plausible learning rules have so far remained elusive. Here we present a family of learning rules that does not suffer from any of these problems. It is motivated by the information bottleneck principle (extended with kernel methods), in which networks learn to compress the input as much as possible without sacrificing prediction of the output. The resulting rules have a 3-factor Hebbian structure: they require pre- and post-synaptic firing rates and an error signal - the third factor - consisting of a global teaching signal and a layer-specific term, both available without a top-down pass. They do not require precise labels; instead, they rely on the similarity between pairs of desired outputs. Moreover, to obtain good performance on hard problems and retain biological plausibility, our rules need divisive normalization - a known feature of biological networks. Finally, simulations show that our rules perform nearly as well as backpropagation on image classification tasks.
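A generic 3-factor Hebbian update of the shape described above can be sketched in a few lines: the weight change is the outer product of post- and pre-synaptic rates, gated by a third factor. The learning rate and the scalar third factor are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def three_factor_update(W, pre, post, third_factor, lr=1e-2):
    """Generic 3-factor Hebbian weight update: pre- and post-synaptic
    rates gated by an error-like third factor (here a scalar)."""
    return W + lr * third_factor * np.outer(post, pre)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6)) * 0.1      # 6 presynaptic, 4 postsynaptic units
pre, post = rng.random(6), rng.random(4)   # firing rates
W = three_factor_update(W, pre, post, third_factor=0.3)
```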
Understanding the Role of Training Regimes in Continual Learning
https://papers.nips.cc/paper_files/paper/2020/hash/518a38cc9a0173d0b2dc088166981cf8-Abstract.html
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Razvan Pascanu, Hassan Ghasemzadeh
https://papers.nips.cc/paper_files/paper/2020/hash/518a38cc9a0173d0b2dc088166981cf8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/518a38cc9a0173d0b2dc088166981cf8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10337-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/518a38cc9a0173d0b2dc088166981cf8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/518a38cc9a0173d0b2dc088166981cf8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/518a38cc9a0173d0b2dc088166981cf8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/518a38cc9a0173d0b2dc088166981cf8-Supplemental.pdf
Catastrophic forgetting affects the training of neural networks, limiting their ability to learn multiple tasks sequentially. From the perspective of the well-established plasticity-stability dilemma, neural networks tend to be overly plastic, lacking the stability necessary to prevent the forgetting of previous knowledge, which means that as learning progresses, networks tend to forget previously seen tasks. This phenomenon, coined catastrophic forgetting in the continual learning literature, has attracted much attention lately, and several families of approaches have been proposed with different degrees of success. However, there has been limited prior work extensively analyzing the impact that different training regimes -- learning rate, batch size, regularization method -- can have on forgetting. In this work, we depart from the typical approach of altering the learning algorithm to improve stability. Instead, we hypothesize that the geometrical properties of the local minima found for each task play an important role in the overall degree of forgetting. In particular, we study the effect of dropout, learning rate decay, and batch size on forming training regimes that widen the tasks' local minima and, consequently, on helping the network not to forget catastrophically. Our study provides practical insights to improve stability via simple yet effective techniques that outperform alternative baselines.
Fair regression with Wasserstein barycenters
https://papers.nips.cc/paper_files/paper/2020/hash/51cdbd2611e844ece5d80878eb770436-Abstract.html
Evgenii Chzhen, Christophe Denis, Mohamed Hebiri, Luca Oneto, Massimiliano Pontil
https://papers.nips.cc/paper_files/paper/2020/hash/51cdbd2611e844ece5d80878eb770436-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/51cdbd2611e844ece5d80878eb770436-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10338-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/51cdbd2611e844ece5d80878eb770436-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/51cdbd2611e844ece5d80878eb770436-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/51cdbd2611e844ece5d80878eb770436-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/51cdbd2611e844ece5d80878eb770436-Supplemental.pdf
We study the problem of learning a real-valued function that satisfies the Demographic Parity constraint, which demands that the distribution of the predicted output be independent of the sensitive attribute. We consider the case where the sensitive attribute is available for prediction. We establish a connection between fair regression and optimal transport theory, based on which we derive a closed-form expression for the optimal fair predictor. Specifically, we show that the distribution of this optimum is the Wasserstein barycenter of the distributions induced by the standard regression function on the sensitive groups. This result offers an intuitive interpretation of the optimal fair prediction and suggests a simple post-processing algorithm to achieve fairness. We establish risk and distribution-free fairness guarantees for this procedure. Numerical experiments indicate that our method is very effective in learning fair models, with a relative increase in error rate that is smaller than the relative gain in fairness.
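In one dimension the barycenter construction reduces to quantile averaging, which suggests the following hedged post-processing sketch: map a prediction to its quantile level within its own group's prediction distribution, then average the group quantile functions weighted by the group proportions. The empirical estimator, variable names, and toy data below are assumptions for illustration, not a verbatim transcription of the paper's procedure.

```python
import numpy as np

def fair_postprocess(f_val, s, preds_by_group, p):
    """Post-process a prediction f_val made for group s so that the output
    distribution no longer depends on s (1D Wasserstein-barycenter idea)."""
    groups = sorted(preds_by_group)
    # quantile level of f_val within its own group's prediction distribution
    q = (preds_by_group[s] <= f_val).mean()
    # barycentric prediction: proportion-weighted average of group quantiles
    return sum(p[g] * np.quantile(preds_by_group[g], q) for g in groups)

rng = np.random.default_rng(0)
preds_by_group = {0: rng.normal(0.0, 1.0, 1000), 1: rng.normal(1.0, 2.0, 1000)}
p = {0: 0.5, 1: 0.5}                      # group proportions
print(fair_postprocess(0.7, s=0, preds_by_group=preds_by_group, p=p))
```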
Training Stronger Baselines for Learning to Optimize
https://papers.nips.cc/paper_files/paper/2020/hash/51f4efbfb3e18f4ea053c4d3d282c4e2-Abstract.html
Tianlong Chen, Weiyi Zhang, Zhou Jingyang, Shiyu Chang, Sijia Liu, Lisa Amini, Zhangyang Wang
https://papers.nips.cc/paper_files/paper/2020/hash/51f4efbfb3e18f4ea053c4d3d282c4e2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/51f4efbfb3e18f4ea053c4d3d282c4e2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10339-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/51f4efbfb3e18f4ea053c4d3d282c4e2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/51f4efbfb3e18f4ea053c4d3d282c4e2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/51f4efbfb3e18f4ea053c4d3d282c4e2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/51f4efbfb3e18f4ea053c4d3d282c4e2-Supplemental.pdf
Learning to optimize (L2O) is gaining increased attention because classical optimizers require laborious, problem-specific design and hyperparameter tuning. However, there are significant performance and practicality gaps between manually designed optimizers and existing L2O models. Specifically, learned optimizers are applicable to only a limited class of problems, often exhibit instability, and generalize poorly. As research efforts focus on increasingly sophisticated L2O models, we argue for an orthogonal, under-explored theme: improved training techniques for L2O models. We first present a progressive, curriculum-based training scheme, which gradually increases the optimizer unroll length to mitigate the well-known L2O dilemma of truncation bias (shorter unrolling) versus gradient explosion (longer unrolling). Secondly, we present an off-policy imitation learning based approach to guide the L2O learning, by learning from the behavior of analytical optimizers. We evaluate our improved training techniques with a variety of state-of-the-art L2O models and immediately boost their performance, without making any change to their model structures. We demonstrate that, using our improved training techniques, one of the earliest and simplest L2O models can be trained to outperform even the latest and most complex L2O models on a number of tasks. Our results demonstrate a greater potential of L2O yet to be unleashed, and prompt a reconsideration of recent L2O model progress. Our codes are publicly available at: https://github.com/VITA-Group/L2O-Training-Techniques.
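The curriculum idea can be sketched as a schedule that lengthens the optimizer unroll as meta-training progresses, trading truncation bias for gradient stability; all constants below are illustrative, not the paper's settings.

```python
def curriculum_unroll_schedule(total_meta_steps, start=5, max_len=50, grow_every=200):
    """Yield (meta_step, unroll_length): start with short unrolls and
    progressively lengthen them as meta-training of the L2O model proceeds."""
    for step in range(total_meta_steps):
        unroll = min(max_len, start + (step // grow_every) * 5)
        yield step, unroll

for step, unroll in curriculum_unroll_schedule(1000):
    if step % 200 == 0:
        print(f"meta-step {step}: unroll length {unroll}")
```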
Exactly Computing the Local Lipschitz Constant of ReLU Networks
https://papers.nips.cc/paper_files/paper/2020/hash/5227fa9a19dce7ba113f50a405dcaf09-Abstract.html
Matt Jordan, Alexandros G. Dimakis
https://papers.nips.cc/paper_files/paper/2020/hash/5227fa9a19dce7ba113f50a405dcaf09-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5227fa9a19dce7ba113f50a405dcaf09-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10340-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5227fa9a19dce7ba113f50a405dcaf09-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5227fa9a19dce7ba113f50a405dcaf09-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5227fa9a19dce7ba113f50a405dcaf09-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5227fa9a19dce7ba113f50a405dcaf09-Supplemental.zip
The local Lipschitz constant of a neural network is a useful metric with applications in robustness, generalization, and fairness evaluation. We provide novel analytic results relating the local Lipschitz constant of nonsmooth vector-valued functions to a maximization over the norm of the generalized Jacobian. We present a sufficient condition for which backpropagation always returns an element of the generalized Jacobian, and reframe the problem over this broad class of functions. We show strong inapproximability results for estimating Lipschitz constants of ReLU networks, and then formulate an algorithm to compute these quantities exactly. We leverage this algorithm to evaluate the tightness of competing Lipschitz estimators and the effects of regularized training on the Lipschitz constant.
Strictly Batch Imitation Learning by Energy-based Distribution Matching
https://papers.nips.cc/paper_files/paper/2020/hash/524f141e189d2a00968c3d48cadd4159-Abstract.html
Daniel Jarrett, Ioana Bica, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2020/hash/524f141e189d2a00968c3d48cadd4159-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/524f141e189d2a00968c3d48cadd4159-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10341-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/524f141e189d2a00968c3d48cadd4159-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/524f141e189d2a00968c3d48cadd4159-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/524f141e189d2a00968c3d48cadd4159-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/524f141e189d2a00968c3d48cadd4159-Supplemental.pdf
Consider learning a policy purely on the basis of demonstrated behavior---that is, with no access to reinforcement signals, no knowledge of transition dynamics, and no further interaction with the environment. This strictly batch imitation learning problem arises wherever live experimentation is costly, such as in healthcare. One solution is simply to retrofit existing algorithms for apprenticeship learning to work in the offline setting. But such an approach leans heavily on off-policy evaluation or offline model estimation, and can be indirect and inefficient. We argue that a good solution should be able to explicitly parameterize a policy (i.e. respecting action conditionals), implicitly learn from rollout dynamics (i.e. leveraging state marginals), and---crucially---operate in an entirely offline fashion. To address this challenge, we propose a novel technique by energy-based distribution matching (EDM): By identifying parameterizations of the (discriminative) model of a policy with the (generative) energy function for state distributions, EDM yields a simple but effective solution that equivalently minimizes a divergence between the occupancy measure for the demonstrator and a model thereof for the imitator. Through experiments with application to control and healthcare settings, we illustrate consistent performance gains over existing algorithms for strictly batch imitation learning.
On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint Sampling Method
https://papers.nips.cc/paper_files/paper/2020/hash/5265d33c184af566aeb7ef8afd0b9b03-Abstract.html
Ye He, Krishnakumar Balasubramanian, Murat A. Erdogdu
https://papers.nips.cc/paper_files/paper/2020/hash/5265d33c184af566aeb7ef8afd0b9b03-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5265d33c184af566aeb7ef8afd0b9b03-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10342-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5265d33c184af566aeb7ef8afd0b9b03-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5265d33c184af566aeb7ef8afd0b9b03-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5265d33c184af566aeb7ef8afd0b9b03-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5265d33c184af566aeb7ef8afd0b9b03-Supplemental.pdf
The randomized midpoint method, proposed by Shen and Lee (2019), has emerged as an optimal discretization procedure for simulating the continuous-time underdamped Langevin diffusion. In this paper, we analyze several probabilistic properties of the randomized midpoint discretization method, considering both overdamped and underdamped Langevin dynamics. We first characterize the stationary distribution of the discrete chain obtained with constant step-size discretization and show that it is biased away from the target distribution. Notably, the step-size needs to go to zero to obtain asymptotic unbiasedness. Next, we establish the asymptotic normality of numerical integration using the randomized midpoint method and highlight the relative advantages and disadvantages over other discretizations. Our results collectively provide several insights into the behavior of the randomized midpoint discretization method, including obtaining confidence intervals for numerical integrations.
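For concreteness, here is a hedged sketch of one randomized midpoint step for overdamped Langevin dynamics: draw a uniform midpoint time, take a partial Euler step to it with the matching Brownian increment, then complete the step using the gradient evaluated at the midpoint. The exact variant analyzed in the paper may differ in details.

```python
import numpy as np

def randomized_midpoint_step(x, grad_f, h, rng):
    """One randomized midpoint step for dX = -grad f(X) dt + sqrt(2) dB,
    with correlated Brownian increments for the partial and full step."""
    alpha = rng.uniform()
    z1 = rng.standard_normal(x.shape) * np.sqrt(alpha * h)        # B_{t+ah} - B_t
    z2 = rng.standard_normal(x.shape) * np.sqrt((1 - alpha) * h)  # remainder
    x_mid = x - alpha * h * grad_f(x) + np.sqrt(2.0) * z1
    return x - h * grad_f(x_mid) + np.sqrt(2.0) * (z1 + z2)

rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(2000):                       # target: standard Gaussian, f(x)=||x||^2/2
    x = randomized_midpoint_step(x, lambda v: v, h=0.05, rng=rng)
```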
A Single-Loop Smoothed Gradient Descent-Ascent Algorithm for Nonconvex-Concave Min-Max Problems
https://papers.nips.cc/paper_files/paper/2020/hash/52aaa62e71f829d41d74892a18a11d59-Abstract.html
Jiawei Zhang, Peijun Xiao, Ruoyu Sun, Zhiquan Luo
https://papers.nips.cc/paper_files/paper/2020/hash/52aaa62e71f829d41d74892a18a11d59-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/52aaa62e71f829d41d74892a18a11d59-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10343-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/52aaa62e71f829d41d74892a18a11d59-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/52aaa62e71f829d41d74892a18a11d59-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/52aaa62e71f829d41d74892a18a11d59-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/52aaa62e71f829d41d74892a18a11d59-Supplemental.zip
Nonconvex-concave min-max problems arise in many machine learning applications, including minimizing a pointwise maximum of a set of nonconvex functions and robust adversarial training of neural networks. A popular approach to solving this problem is the gradient descent-ascent (GDA) algorithm, which unfortunately can exhibit oscillation in the nonconvex case. In this paper, we introduce a ``smoothing'' scheme which can be combined with GDA to stabilize the oscillation and ensure convergence to a stationary solution. We prove that the stabilized GDA algorithm can achieve an $O(1/\epsilon^2)$ iteration complexity for minimizing the pointwise maximum of a finite collection of nonconvex functions. Moreover, the smoothed GDA algorithm achieves an $O(1/\epsilon^4)$ iteration complexity for general nonconvex-concave problems. Extensions of this stabilized GDA algorithm to multi-block cases are presented. To the best of our knowledge, this is the first algorithm to achieve $O(1/\epsilon^2)$ for a class of nonconvex-concave problems. We illustrate the practical efficiency of the stabilized GDA algorithm on robust training.
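A sketch of the smoothing idea, under assumed step sizes and smoothing parameters: descend on x for the objective augmented with a proximal term anchored at an auxiliary point z, ascend on y, and let z track x with an exponential-averaging update.

```python
import numpy as np

def smoothed_gda(grad_x, grad_y, x, y, p=5.0, beta=0.1, lr=1e-2, n_steps=1000):
    """Smoothed GDA sketch: GDA applied to f(x, y) + (p/2)||x - z||^2
    while the anchor z is slowly dragged toward x."""
    z = x.copy()
    for _ in range(n_steps):
        x = x - lr * (grad_x(x, y) + p * (x - z))   # descent on smoothed objective
        y = y + lr * grad_y(x, y)                   # ascent on y
        z = z + beta * (x - z)                      # exponential-averaging anchor
    return x, y

# toy example f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 (convex in x, concave in y)
gx = lambda x, y: x + y
gy = lambda x, y: x - y
print(smoothed_gda(gx, gy, np.array([1.0]), np.array([1.0])))
```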
Generating Correct Answers for Progressive Matrices Intelligence Tests
https://papers.nips.cc/paper_files/paper/2020/hash/52cf49fea5ff66588408852f65cf8272-Abstract.html
Niv Pekar, Yaniv Benny, Lior Wolf
https://papers.nips.cc/paper_files/paper/2020/hash/52cf49fea5ff66588408852f65cf8272-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/52cf49fea5ff66588408852f65cf8272-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10344-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/52cf49fea5ff66588408852f65cf8272-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/52cf49fea5ff66588408852f65cf8272-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/52cf49fea5ff66588408852f65cf8272-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/52cf49fea5ff66588408852f65cf8272-Supplemental.pdf
Raven’s Progressive Matrices are multiple-choice intelligence tests, where one tries to complete the missing location in a 3x3 grid of abstract images. Previous attempts to address this test have focused solely on selecting the right answer out of the multiple choices. In this work, we focus, instead, on generating a correct answer given the grid, which is a harder task, by definition. The proposed neural model combines multiple advances in generative models, including employing multiple pathways through the same network, using the reparameterization trick along two pathways to make their encoding compatible, a selective application of variational losses, and a complex perceptual loss that is coupled with a selective backpropagation procedure. Our algorithm is able not only to generate a set of plausible answers but also to be competitive with state-of-the-art methods in multiple-choice tests.
HyNet: Learning Local Descriptor with Hybrid Similarity Measure and Triplet Loss
https://papers.nips.cc/paper_files/paper/2020/hash/52d2752b150f9c35ccb6869cbf074e48-Abstract.html
Yurun Tian, Axel Barroso Laguna, Tony Ng, Vassileios Balntas, Krystian Mikolajczyk
https://papers.nips.cc/paper_files/paper/2020/hash/52d2752b150f9c35ccb6869cbf074e48-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/52d2752b150f9c35ccb6869cbf074e48-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10345-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/52d2752b150f9c35ccb6869cbf074e48-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/52d2752b150f9c35ccb6869cbf074e48-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/52d2752b150f9c35ccb6869cbf074e48-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/52d2752b150f9c35ccb6869cbf074e48-Supplemental.zip
In this paper, we investigate how L2 normalisation affects the back-propagated descriptor gradients during training. Based on our observations, we propose HyNet, a new local descriptor that leads to state-of-the-art results in matching. HyNet introduces a hybrid similarity measure for triplet margin loss, a regularisation term constraining the descriptor norm, and a new network architecture that performs L2 normalisation of all intermediate feature maps and the output descriptors. HyNet surpasses previous methods by a significant margin on standard benchmarks that include patch matching, verification, and retrieval, as well as outperforming full end-to-end methods on 3D reconstruction tasks.
Preference learning along multiple criteria: A game-theoretic perspective
https://papers.nips.cc/paper_files/paper/2020/hash/52f4691a4de70b3c441bca6c546979d9-Abstract.html
Kush Bhatia, Ashwin Pananjady, Peter Bartlett, Anca Dragan, Martin J. Wainwright
https://papers.nips.cc/paper_files/paper/2020/hash/52f4691a4de70b3c441bca6c546979d9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/52f4691a4de70b3c441bca6c546979d9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10346-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/52f4691a4de70b3c441bca6c546979d9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/52f4691a4de70b3c441bca6c546979d9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/52f4691a4de70b3c441bca6c546979d9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/52f4691a4de70b3c441bca6c546979d9-Supplemental.pdf
From a theoretical standpoint, we show that the Blackwell winner of a multi-criteria problem instance can be computed as the solution to a convex optimization problem. Furthermore, given random samples of pairwise comparisons, we show that a simple, "plug-in" estimator achieves (near-)optimal minimax sample complexity. Finally, we showcase the practical utility of our framework in a user study on autonomous driving, where we find that the Blackwell winner outperforms the von Neumann winner for the overall preferences.
Multi-Plane Program Induction with 3D Box Priors
https://papers.nips.cc/paper_files/paper/2020/hash/5301c4d888f5204274439e6dcf5fdb54-Abstract.html
Yikai Li, Jiayuan Mao, Xiuming Zhang, Bill Freeman, Josh Tenenbaum, Noah Snavely, Jiajun Wu
https://papers.nips.cc/paper_files/paper/2020/hash/5301c4d888f5204274439e6dcf5fdb54-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5301c4d888f5204274439e6dcf5fdb54-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10347-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5301c4d888f5204274439e6dcf5fdb54-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5301c4d888f5204274439e6dcf5fdb54-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5301c4d888f5204274439e6dcf5fdb54-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5301c4d888f5204274439e6dcf5fdb54-Supplemental.zip
We consider two important aspects in understanding and editing images: modeling regular, program-like texture or patterns in 2D planes, and 3D posing of these planes in the scene. Unlike prior work on image-based program synthesis, which assumes the image contains a single visible 2D plane, we present Box Program Induction (BPI), which infers a program-like scene representation that simultaneously models repeated structure on multiple 2D planes, the 3D position and orientation of the planes, and camera parameters, all from a single image. Our model assumes a box prior, i.e., that the image captures either an inner view or an outer view of a box in 3D. It uses neural networks to infer visual cues such as vanishing points and wireframe lines, which guide a search-based algorithm to find the program that best explains the image. Such a holistic, structured scene representation enables 3D-aware interactive image editing operations such as inpainting missing pixels, changing camera parameters, and extrapolating the image contents.
Online Neural Connectivity Estimation with Noisy Group Testing
https://papers.nips.cc/paper_files/paper/2020/hash/531d29a813ef9471aad0a5558d449a73-Abstract.html
Anne Draelos, John Pearson
https://papers.nips.cc/paper_files/paper/2020/hash/531d29a813ef9471aad0a5558d449a73-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/531d29a813ef9471aad0a5558d449a73-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10348-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/531d29a813ef9471aad0a5558d449a73-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/531d29a813ef9471aad0a5558d449a73-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/531d29a813ef9471aad0a5558d449a73-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/531d29a813ef9471aad0a5558d449a73-Supplemental.zip
One of the primary goals of systems neuroscience is to relate the structure of neural circuits to their function, yet patterns of connectivity are difficult to establish when recording from large populations in behaving organisms. Many previous approaches have attempted to estimate functional connectivity between neurons using statistical modeling of observational data, but these approaches rely heavily on parametric assumptions and are purely correlational. Recently, however, holographic photostimulation techniques have made it possible to precisely target selected ensembles of neurons, offering the possibility of establishing direct causal links. A naive method for inferring functional connections is to stimulate each individual neuron multiple times and observe the responses of cells in the local network, but this approach scales poorly with the number of neurons. Here, we propose a method based on noisy group testing that drastically increases the efficiency of this process in sparse networks. By stimulating small ensembles of neurons, we show that it is possible to recover binarized network connectivity with a number of tests that grows only logarithmically with population size under minimal statistical assumptions. Moreover, we prove that our approach, which reduces to an efficiently solvable convex optimization problem, can be related to Variational Bayesian inference on the binary connection weights, and we derive rigorous bounds on the posterior marginals. This allows us to extend our method to the streaming setting, where continuously updated posteriors allow for optional stopping, and we demonstrate the feasibility of inferring connectivity for networks of up to tens of thousands of neurons online.
Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
https://papers.nips.cc/paper_files/paper/2020/hash/537d9b6c927223c796cac288cced29df-Abstract.html
Haotao Wang, Tianlong Chen, Shupeng Gui, TingKuei Hu, Ji Liu, Zhangyang Wang
https://papers.nips.cc/paper_files/paper/2020/hash/537d9b6c927223c796cac288cced29df-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/537d9b6c927223c796cac288cced29df-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10349-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/537d9b6c927223c796cac288cced29df-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/537d9b6c927223c796cac288cced29df-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/537d9b6c927223c796cac288cced29df-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/537d9b6c927223c796cac288cced29df-Supplemental.pdf
Adversarial training and its many variants substantially improve deep network robustness, yet at the cost of compromising standard accuracy. Moreover, the training process is heavy and hence it becomes impractical to thoroughly explore the trade-off between accuracy and robustness. This paper asks this new question: how to quickly calibrate a trained model in-situ, to examine the achievable trade-offs between its standard and robust accuracies, without (re-)training it many times? Our proposed framework, Once-for-all Adversarial Training (OAT), is built on an innovative model-conditional training framework, with a controlling hyper-parameter as the input. The trained model could be adjusted among different standard and robust accuracies “for free” at testing time. As an important knob, we exploit dual batch normalization to separate standard and adversarial feature statistics, so that they can be learned in one model without degrading performance. We further extend OAT to a Once-for-all Adversarial Training and Slimming (OATS) framework, that allows for the joint trade-off among accuracy, robustness and runtime efficiency. Experiments show that, without any re-training nor ensembling, OAT/OATS achieve similar or even superior performance compared to dedicatedly trained models at various configurations. Our codes and pretrained models are available at: https://github.com/VITA-Group/Once-for-All-Adversarial-Training.
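The dual batch-normalization knob can be sketched as a block that keeps separate BN statistics for clean and adversarial inputs and selects one branch with a flag; the model-conditional input lambda and the slimming extension are omitted, so this only illustrates the BN split.

```python
import torch
import torch.nn as nn

class DualBNBlock(nn.Module):
    """Conv block with separate BatchNorm statistics for clean and
    adversarial inputs, selected by a flag at call time."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn_clean = nn.BatchNorm2d(channels)
        self.bn_adv = nn.BatchNorm2d(channels)

    def forward(self, x, adversarial: bool):
        h = self.conv(x)
        h = self.bn_adv(h) if adversarial else self.bn_clean(h)
        return torch.relu(h)

block = DualBNBlock(16)
out = block(torch.randn(4, 16, 8, 8), adversarial=True)
```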
Implicit Neural Representations with Periodic Activation Functions
https://papers.nips.cc/paper_files/paper/2020/hash/53c04118df112c13a8c34b38343b9c10-Abstract.html
Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, Gordon Wetzstein
https://papers.nips.cc/paper_files/paper/2020/hash/53c04118df112c13a8c34b38343b9c10-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/53c04118df112c13a8c34b38343b9c10-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10350-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/53c04118df112c13a8c34b38343b9c10-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/53c04118df112c13a8c34b38343b9c10-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/53c04118df112c13a8c34b38343b9c10-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/53c04118df112c13a8c34b38343b9c10-Supplemental.zip
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions.
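A minimal SIREN-style layer, following the commonly used choice omega_0 = 30 and the uniform initialization described for first and hidden layers; the network sizes and the toy coordinate input are assumptions for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear map followed by sin(omega_0 * .), with a wider uniform init
    for the first layer and a 1/omega_0-scaled range for hidden layers."""
    def __init__(self, in_f, out_f, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)
        bound = 1.0 / in_f if is_first else np.sqrt(6.0 / in_f) / omega_0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

siren = nn.Sequential(SineLayer(2, 64, is_first=True), SineLayer(64, 64),
                      nn.Linear(64, 1))
value = siren(torch.rand(128, 2))   # e.g., map 2D coordinates to an image value
```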
Rotated Binary Neural Network
https://papers.nips.cc/paper_files/paper/2020/hash/53c5b2affa12eed84dfec9bfd83550b1-Abstract.html
Mingbao Lin, Rongrong Ji, Zihan Xu, Baochang Zhang, Yan Wang, Yongjian Wu, Feiyue Huang, Chia-Wen Lin
https://papers.nips.cc/paper_files/paper/2020/hash/53c5b2affa12eed84dfec9bfd83550b1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/53c5b2affa12eed84dfec9bfd83550b1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10351-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/53c5b2affa12eed84dfec9bfd83550b1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/53c5b2affa12eed84dfec9bfd83550b1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/53c5b2affa12eed84dfec9bfd83550b1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/53c5b2affa12eed84dfec9bfd83550b1-Supplemental.zip
Binary Neural Networks (BNNs) are highly effective at reducing the complexity of deep neural networks. However, they suffer severe performance degradation. One of the major impediments is the large quantization error between the full-precision weight vector and its binary vector. Previous works focus on compensating for the norm gap while leaving the angular bias hardly touched. In this paper, for the first time, we explore the influence of angular bias on the quantization error and then introduce a Rotated Binary Neural Network (RBNN), which considers the angle alignment between the full-precision weight vector and its binarized version. At the beginning of each training epoch, we propose to rotate the full-precision weight vector to its binary vector to reduce the angular bias. To avoid the high complexity of learning a large rotation matrix, we further introduce a bi-rotation formulation that learns two smaller rotation matrices. In the training stage, we devise an adjustable rotated weight vector for binarization to escape the potential local optimum. Our rotation leads to around 50% weight flips, which maximizes the information gain. Finally, we propose a training-aware approximation of the sign function for the gradient backward pass. Experiments on CIFAR-10 and ImageNet demonstrate the superiority of RBNN over many state-of-the-art methods. Our source code, experimental settings, training logs and binary models are available at https://github.com/lmbxmu/RBNN.
Community detection in sparse time-evolving graphs with a dynamical Bethe-Hessian
https://papers.nips.cc/paper_files/paper/2020/hash/54391c872fe1c8b4f98095c5d6ec7ec7-Abstract.html
Lorenzo Dall'Amico, Romain Couillet, Nicolas Tremblay
https://papers.nips.cc/paper_files/paper/2020/hash/54391c872fe1c8b4f98095c5d6ec7ec7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/54391c872fe1c8b4f98095c5d6ec7ec7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10352-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/54391c872fe1c8b4f98095c5d6ec7ec7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/54391c872fe1c8b4f98095c5d6ec7ec7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/54391c872fe1c8b4f98095c5d6ec7ec7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/54391c872fe1c8b4f98095c5d6ec7ec7-Supplemental.zip
This article considers the problem of community detection in sparse dynamical graphs in which the community structure evolves over time. A fast spectral algorithm based on an extension of the Bethe-Hessian matrix is proposed, which benefits from the positive correlation in the class labels and in their temporal evolution and is designed to be applicable to any dynamical graph with a community structure. Under the dynamical degree-corrected stochastic block model, in the case of two classes of equal size, we demonstrate and support with extensive simulations that our proposed algorithm is capable of making non-trivial community reconstruction as soon as theoretically possible, thereby reaching the optimal detectability threshold and provably outperforming competing spectral methods.
Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
https://papers.nips.cc/paper_files/paper/2020/hash/543e83748234f7cbab21aa0ade66565f-Abstract.html
Jeremiah Liu, Zi Lin, Shreyas Padhy, Dustin Tran, Tania Bedrax Weiss, Balaji Lakshminarayanan
https://papers.nips.cc/paper_files/paper/2020/hash/543e83748234f7cbab21aa0ade66565f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/543e83748234f7cbab21aa0ade66565f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10353-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/543e83748234f7cbab21aa0ade66565f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/543e83748234f7cbab21aa0ade66565f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/543e83748234f7cbab21aa0ade66565f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/543e83748234f7cbab21aa0ade66565f-Supplemental.pdf
Bayesian neural networks (BNNs) and deep ensembles are principled approaches to estimate the predictive uncertainty of a deep learning model. However, their practicality in real-time, industrial-scale applications is limited due to their heavy memory and inference cost. This motivates us to study principled approaches to high-quality uncertainty estimation that require only a single deep neural network (DNN). By formalizing the uncertainty quantification as a minimax learning problem, we first identify input distance awareness, i.e., the model’s ability to quantify the distance of a testing example from the training data in the input space, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs, by adding a weight normalization step during training and replacing the output layer. On a suite of vision and language understanding tasks and on modern architectures (Wide-ResNet and BERT), SNGP is competitive with deep ensembles in prediction, calibration and out-of-domain detection, and outperforms the other single-model approaches.
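A hedged sketch of the two architectural changes the abstract mentions: spectral normalization on the hidden layers and a random-feature Gaussian-process output layer. The layer sizes, the kernel bandwidth, and the omission of the Laplace-approximated posterior covariance (which the full method uses for uncertainty estimates) are simplifications for illustration.

```python
import math
import torch
import torch.nn as nn

# Hidden layers wrapped with spectral normalization (the distance-preserving part).
hidden = nn.Sequential(
    nn.utils.spectral_norm(nn.Linear(32, 128)), nn.ReLU(),
    nn.utils.spectral_norm(nn.Linear(128, 128)), nn.ReLU(),
)

class RFFOutput(nn.Module):
    """Random-Fourier-feature approximation of an RBF Gaussian process output layer."""
    def __init__(self, in_dim, num_features=256, num_classes=10):
        super().__init__()
        self.register_buffer("W", torch.randn(num_features, in_dim))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.beta = nn.Linear(num_features, num_classes, bias=False)
        self.scale = (2.0 / num_features) ** 0.5

    def forward(self, h):
        phi = self.scale * torch.cos(h @ self.W.t() + self.b)
        return self.beta(phi)

model = nn.Sequential(hidden, RFFOutput(128))
print(model(torch.randn(4, 32)).shape)   # torch.Size([4, 10])
```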
Adaptive Learning of Rank-One Models for Efficient Pairwise Sequence Alignment
https://papers.nips.cc/paper_files/paper/2020/hash/54e0e46b6647aa736c13ef9d09eab432-Abstract.html
Govinda Kamath, Tavor Baharav, Ilan Shomorony
https://papers.nips.cc/paper_files/paper/2020/hash/54e0e46b6647aa736c13ef9d09eab432-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/54e0e46b6647aa736c13ef9d09eab432-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10354-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/54e0e46b6647aa736c13ef9d09eab432-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/54e0e46b6647aa736c13ef9d09eab432-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/54e0e46b6647aa736c13ef9d09eab432-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/54e0e46b6647aa736c13ef9d09eab432-Supplemental.pdf
Pairwise alignment of DNA sequencing data is a ubiquitous task in bioinformatics and typically represents a heavy computational burden. State-of-the-art approaches to speed up this task use hashing to identify short segments (k-mers) that are shared by pairs of reads, which can then be used to estimate alignment scores. However, when the number of reads is large, accurately estimating alignment scores for all pairs is still very costly. Moreover, in practice, one is only interested in identifying pairs of reads with large alignment scores. In this work, we propose a new approach to pairwise alignment estimation based on two key new ingredients. The first ingredient is to cast the problem of pairwise alignment estimation under a general framework of rank-one crowdsourcing models, where the workers' responses correspond to k-mer hash collisions. These models can be accurately solved via a spectral decomposition of the response matrix. The second ingredient is to utilise a multi-armed bandit algorithm to adaptively refine this spectral estimator only for read pairs that are likely to have large alignments. The resulting algorithm iteratively performs a spectral decomposition of the response matrix for adaptively chosen subsets of the read pairs.
Hierarchical nucleation in deep neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/54f3bc04830d762a3b56a789b6ff62df-Abstract.html
Diego Doimo, Aldo Glielmo, Alessio Ansuini, Alessandro Laio
https://papers.nips.cc/paper_files/paper/2020/hash/54f3bc04830d762a3b56a789b6ff62df-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/54f3bc04830d762a3b56a789b6ff62df-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10355-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/54f3bc04830d762a3b56a789b6ff62df-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/54f3bc04830d762a3b56a789b6ff62df-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/54f3bc04830d762a3b56a789b6ff62df-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/54f3bc04830d762a3b56a789b6ff62df-Supplemental.zip
Deep convolutional networks (DCNs) learn meaningful representations where data that share the same abstract characteristics are positioned closer and closer. Understanding these representations and how they are generated is of unquestioned practical and theoretical interest. In this work we study the evolution of the probability density of the ImageNet dataset across the hidden layers in some state-of-the-art DCNs. We find that the initial layers generate a unimodal probability density getting rid of any structure irrelevant for classification. In subsequent layers density peaks arise in a hierarchical fashion that mirrors the semantic hierarchy of the concepts. Density peaks corresponding to single categories appear only close to the output and via a very sharp transition which resembles the nucleation process of a heterogeneous liquid. This process leaves a footprint in the probability density of the output layer where the topography of the peaks allows reconstructing the semantic relationships of the categories.
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
https://papers.nips.cc/paper_files/paper/2020/hash/55053683268957697aa39fba6f231c68-Abstract.html
Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, Ren Ng
https://papers.nips.cc/paper_files/paper/2020/hash/55053683268957697aa39fba6f231c68-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10356-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/55053683268957697aa39fba6f231c68-Supplemental.zip
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. Using tools from the neural tangent kernel (NTK) literature, we show that a standard MLP has impractically slow convergence to high frequency signal components. To overcome this spectral bias, we use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.
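The Fourier feature mapping itself is essentially one line: gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)] with B a random Gaussian matrix whose scale acts as the tunable bandwidth. A minimal NumPy sketch, with the dimensions and the bandwidth sigma chosen arbitrarily for the example:

```python
import numpy as np

def fourier_features(v, B):
    """gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], applied row-wise to inputs v."""
    proj = 2 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
d, m, sigma = 2, 256, 10.0                 # input dim, number of features, bandwidth
B = sigma * rng.standard_normal((m, d))    # random Gaussian frequency matrix
coords = rng.random((5, d))                # e.g. 2D pixel coordinates in [0, 1]^2
print(fourier_features(coords, B).shape)   # (5, 512): feed these to a standard MLP
```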
Graph Geometry Interaction Learning
https://papers.nips.cc/paper_files/paper/2020/hash/551fdbb810aff145c114b93867dd8bfd-Abstract.html
Shichao Zhu, Shirui Pan, Chuan Zhou, Jia Wu, Yanan Cao, Bin Wang
https://papers.nips.cc/paper_files/paper/2020/hash/551fdbb810aff145c114b93867dd8bfd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/551fdbb810aff145c114b93867dd8bfd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10357-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/551fdbb810aff145c114b93867dd8bfd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/551fdbb810aff145c114b93867dd8bfd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/551fdbb810aff145c114b93867dd8bfd-Review.html
null
While numerous approaches have been developed to embed graphs into either Euclidean or hyperbolic spaces, they do not fully utilize the information available in graphs, or lack the flexibility to model intrinsic complex graph geometry. To utilize the strength of both Euclidean and hyperbolic geometries, we develop a novel Geometry Interaction Learning (GIL) method for graphs, a well-suited and efficient alternative for learning abundant geometric properties in graphs. GIL captures more informative internal structural features with low dimensions while maintaining conformal invariance of each space. Furthermore, our method endows each node with the freedom to determine the importance of each geometry space via a flexible dual feature interaction learning and probability assembling mechanism. Promising experimental results are presented for five benchmark datasets on node classification and link prediction tasks.
Differentiable Augmentation for Data-Efficient GAN Training
https://papers.nips.cc/paper_files/paper/2020/hash/55479c55ebd1efd3ff125f1337100388-Abstract.html
Shengyu Zhao, Zhijian Liu, Ji Lin, Jun-Yan Zhu, Song Han
https://papers.nips.cc/paper_files/paper/2020/hash/55479c55ebd1efd3ff125f1337100388-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/55479c55ebd1efd3ff125f1337100388-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10358-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/55479c55ebd1efd3ff125f1337100388-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/55479c55ebd1efd3ff125f1337100388-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/55479c55ebd1efd3ff125f1337100388-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/55479c55ebd1efd3ff125f1337100388-Supplemental.pdf
The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples, effectively stabilizes training, and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128×128 and 2-4× reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.
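The key point of the abstract is where the augmentation is applied: a differentiable transform sits in front of the discriminator for both real and generated images, and the generator is trained through it. The sketch below uses a toy brightness-plus-translation augmentation and a non-saturating GAN loss as stand-ins; the actual DiffAugment policies (color, translation, cutout) and losses may differ.

```python
import torch
import torch.nn.functional as F

def diff_augment(x):
    """Toy differentiable augmentation: random brightness shift plus a random
    circular translation via torch.roll (both keep gradients flowing)."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)   # brightness
    shift = int(torch.randint(-2, 3, (1,)))
    return torch.roll(x, shifts=shift, dims=3)                        # translation

def d_loss(D, real, fake):
    # The discriminator sees augmented versions of BOTH real and generated images.
    return F.softplus(-D(diff_augment(real))).mean() + F.softplus(D(diff_augment(fake))).mean()

def g_loss(D, fake):
    # The generator is trained through the same augmentation.
    return F.softplus(-D(diff_augment(fake))).mean()

# Toy usage with a stand-in discriminator.
D = lambda imgs: imgs.mean(dim=[1, 2, 3])
real, fake = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)
print(d_loss(D, real, fake).item(), g_loss(D, fake).item())
```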
Heuristic Domain Adaptation
https://papers.nips.cc/paper_files/paper/2020/hash/555d6702c950ecb729a966504af0a635-Abstract.html
Shuhao Cui, Xuan Jin, Shuhui Wang, Yuan He, Qingming Huang
https://papers.nips.cc/paper_files/paper/2020/hash/555d6702c950ecb729a966504af0a635-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/555d6702c950ecb729a966504af0a635-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10359-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/555d6702c950ecb729a966504af0a635-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/555d6702c950ecb729a966504af0a635-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/555d6702c950ecb729a966504af0a635-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/555d6702c950ecb729a966504af0a635-Supplemental.pdf
In visual domain adaptation (DA), separating the domain-specific characteristics from the domain-invariant representations is an ill-posed problem. Existing methods apply different kinds of priors or directly minimize the domain discrepancy to address this problem, which lack flexibility in handling real-world situations. Another research pipeline expresses the domain-specific information as a gradual transferring process, which tends to be suboptimal in accurately removing the domain-specific properties. In this paper, we address the modeling of domain-invariant and domain-specific information from the heuristic search perspective. We identify the characteristics in the existing representations that lead to larger domain discrepancy as the heuristic representations. With the guidance of heuristic representations, we formulate a principled framework of Heuristic Domain Adaptation (HDA) with well-founded theoretical guarantees. To perform HDA, the cosine similarity scores and independence measurements between domain-invariant and domain-specific representations are cast into the constraints at the initial and final states during the learning procedure. Similar to the final condition of heuristic search, we further derive a constraint enforcing the final range of heuristic network output to be small. Accordingly, we propose Heuristic Domain Adaptation Network (HDAN), which explicitly learns the domain-invariant and domain-specific representations with the above mentioned constraints. Extensive experiments show that HDAN has exceeded state-of-the-art on unsupervised DA, multi-source DA and semi-supervised DA. The code is available at https://github.com/cuishuhao/HDA.
Learning Certified Individually Fair Representations
https://papers.nips.cc/paper_files/paper/2020/hash/55d491cf951b1b920900684d71419282-Abstract.html
Anian Ruoss, Mislav Balunovic, Marc Fischer, Martin Vechev
https://papers.nips.cc/paper_files/paper/2020/hash/55d491cf951b1b920900684d71419282-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/55d491cf951b1b920900684d71419282-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10360-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/55d491cf951b1b920900684d71419282-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/55d491cf951b1b920900684d71419282-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/55d491cf951b1b920900684d71419282-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/55d491cf951b1b920900684d71419282-Supplemental.pdf
Fair representation learning provides an effective way of enforcing fairness constraints without compromising utility for downstream users. A desirable family of such fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness. In this work, we introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points. The key idea is to map similar individuals to close latent representations and leverage this latent proximity to certify individual fairness. That is, our method enables the data producer to learn and certify a representation where for a data point all similar individuals are at l-infinity distance at most epsilon, thus allowing data consumers to certify individual fairness by proving epsilon-robustness of their classifier. Our experimental evaluation on five real-world datasets and several fairness constraints demonstrates the expressivity and scalability of our approach.
Part-dependent Label Noise: Towards Instance-dependent Label Noise
https://papers.nips.cc/paper_files/paper/2020/hash/5607fe8879e4fd269e88387e8cb30b7e-Abstract.html
Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, Dacheng Tao, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2020/hash/5607fe8879e4fd269e88387e8cb30b7e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5607fe8879e4fd269e88387e8cb30b7e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10361-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5607fe8879e4fd269e88387e8cb30b7e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5607fe8879e4fd269e88387e8cb30b7e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5607fe8879e4fd269e88387e8cb30b7e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5607fe8879e4fd269e88387e8cb30b7e-Supplemental.pdf
Learning with the \textit{instance-dependent} label noise is challenging, because it is hard to model such real-world noise. Note that there is psychological and physiological evidence showing that we humans perceive instances by decomposing them into parts. Annotators are therefore more likely to annotate instances based on the parts rather than the whole instances, where a wrong mapping from parts to classes may cause the instance-dependent label noise. Motivated by this human cognition, in this paper, we approximate the instance-dependent label noise by exploiting \textit{part-dependent} label noise. Specifically, since instances can be approximately reconstructed by a combination of parts, we approximate the instance-dependent \textit{transition matrix} for an instance by a combination of the transition matrices for the parts of the instance. The transition matrices for parts can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely). Empirical evaluations on synthetic and real-world datasets demonstrate our method is superior to the state-of-the-art approaches for learning from the instance-dependent label noise.
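The core approximation can be written in one line: the instance-dependent transition matrix is a convex combination of per-part transition matrices, weighted by how strongly each part is present in the instance. A toy NumPy sketch with made-up part weights and part matrices (how these are actually learned from anchor points is the substance of the paper and is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
num_parts, num_classes = 4, 3

# Hypothetical per-part transition matrices T_r (each row sums to 1).
part_T = rng.random((num_parts, num_classes, num_classes))
part_T /= part_T.sum(axis=2, keepdims=True)

# Hypothetical part weights of one instance (how strongly each part is present).
w = rng.random(num_parts)
w /= w.sum()

# Instance-dependent transition matrix as a convex combination of part matrices.
T_x = np.einsum("r,rij->ij", w, part_T)
print(T_x.sum(axis=1))   # each row still sums to 1
```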
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/564127c03caab942e503ee6f810f54fd-Abstract.html
Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor
https://papers.nips.cc/paper_files/paper/2020/hash/564127c03caab942e503ee6f810f54fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/564127c03caab942e503ee6f810f54fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10362-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/564127c03caab942e503ee6f810f54fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/564127c03caab942e503ee6f810f54fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/564127c03caab942e503ee6f810f54fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/564127c03caab942e503ee6f810f54fd-Supplemental.pdf
In federated learning, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
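A hedged sketch of normalized averaging under the simplest assumption (vanilla local SGD, so the normalization factor is just each client's local step count): every client's cumulative update is first divided by its own number of local steps, and only then averaged with data-size weights and rescaled by an effective step count. Variable names and the sign convention for the client deltas are choices made for this example.

```python
import numpy as np

def fednova_aggregate(global_w, client_deltas, local_steps, data_sizes):
    """Normalized averaging: divide each client's cumulative update by its own
    number of local steps, then apply a data-weighted average rescaled by an
    effective step count."""
    p = np.asarray(data_sizes, dtype=float)
    p /= p.sum()
    tau = np.asarray(local_steps, dtype=float)
    normalized = [delta / t for delta, t in zip(client_deltas, tau)]
    tau_eff = float(p @ tau)
    avg_direction = sum(pi * di for pi, di in zip(p, normalized))
    return global_w + tau_eff * avg_direction

# Toy round: the second client ran 10x more local steps; naive averaging would
# let it dominate, normalized averaging does not.
w = np.zeros(3)
deltas = [np.array([0.3, 0.3, 0.3]), np.array([-3.0, 1.0, 2.0])]
print(fednova_aggregate(w, deltas, local_steps=[3, 30], data_sizes=[10, 10]))
```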
An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods
https://papers.nips.cc/paper_files/paper/2020/hash/56577889b3c1cd083b6d7b32d32f99d5-Abstract.html
Yanli Liu, Kaiqing Zhang, Tamer Basar, Wotao Yin
https://papers.nips.cc/paper_files/paper/2020/hash/56577889b3c1cd083b6d7b32d32f99d5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/56577889b3c1cd083b6d7b32d32f99d5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10363-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/56577889b3c1cd083b6d7b32d32f99d5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/56577889b3c1cd083b6d7b32d32f99d5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/56577889b3c1cd083b6d7b32d32f99d5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/56577889b3c1cd083b6d7b32d32f99d5-Supplemental.pdf
In this paper, we revisit and improve the convergence of policy gradient (PG), natural PG (NPG) methods, and their variance-reduced variants, under general smooth policy parametrizations. More specifically, with the Fisher information matrix of the policy being positive definite: i) we show that a state-of-the-art variance-reduced PG method, which has only been shown to converge to stationary points, converges to the globally optimal value up to some inherent function approximation error due to policy parametrization; ii) we show that NPG enjoys a lower sample complexity; iii) we propose SRVR-NPG, which incorporates variance-reduction into the NPG update. Our improvements follow from an observation that the convergence of (variance-reduced) PG and NPG methods can improve each other: the stationary convergence analysis of PG can be applied on NPG as well, and the global convergence analysis of NPG can help to establish the global convergence of (variance-reduced) PG methods. Our analysis carefully integrates the advantages of these two lines of works. Thanks to this improvement, we have also made variance-reduction for NPG possible for the first time, with both global convergence and an efficient finite-sample complexity.
Geometric Exploration for Online Control
https://papers.nips.cc/paper_files/paper/2020/hash/565e8a413d0562de9ee4378402d2b481-Abstract.html
Orestis Plevrakis, Elad Hazan
https://papers.nips.cc/paper_files/paper/2020/hash/565e8a413d0562de9ee4378402d2b481-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/565e8a413d0562de9ee4378402d2b481-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10364-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/565e8a413d0562de9ee4378402d2b481-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/565e8a413d0562de9ee4378402d2b481-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/565e8a413d0562de9ee4378402d2b481-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/565e8a413d0562de9ee4378402d2b481-Supplemental.pdf
We study the control of an \emph{unknown} linear dynamical system under general convex costs. The objective is minimizing regret vs the class of strongly-stable linear policies. In this work, we first consider the case of known cost functions, for which we design the first polynomial-time algorithm with $n^3\sqrt{T}$-regret, where $n$ is the dimension of the state plus the dimension of control input. The $\sqrt{T}$-horizon dependence is optimal, and improves upon the previous best known bound of $T^{2/3}$. The main component of our algorithm is a novel geometric exploration strategy: we adaptively construct a sequence of barycentric spanners in an over-parameterized policy space. Second, we consider the case of bandit feedback, for which we give the first polynomial-time algorithm with $poly(n)\sqrt{T}$-regret, building on Stochastic Bandit Convex Optimization.
Automatic Curriculum Learning through Value Disagreement
https://papers.nips.cc/paper_files/paper/2020/hash/566f0ea4f6c2e947f36795c8f58ba901-Abstract.html
Yunzhi Zhang, Pieter Abbeel, Lerrel Pinto
https://papers.nips.cc/paper_files/paper/2020/hash/566f0ea4f6c2e947f36795c8f58ba901-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/566f0ea4f6c2e947f36795c8f58ba901-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10365-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/566f0ea4f6c2e947f36795c8f58ba901-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/566f0ea4f6c2e947f36795c8f58ba901-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/566f0ea4f6c2e947f36795c8f58ba901-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/566f0ea4f6c2e947f36795c8f58ba901-Supplemental.zip
Continually solving new, unsolved tasks is the key to learning diverse behaviors. Through reinforcement learning (RL), we have made massive strides towards solving tasks that have a single goal. However, in the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency. When biological agents learn, there is often an organized and meaningful order to which learning happens. Inspired by this, we propose setting up an automatic curriculum for goals that the agent needs to solve. Our key insight is that if we can sample goals at the frontier of the set of goals that an agent is able to reach, it will provide a significantly stronger learning signal compared to randomly sampled goals. To operationalize this idea, we introduce a goal proposal module that prioritizes goals that maximize the epistemic uncertainty of the Q-function of the policy. This simple technique samples goals that are neither too hard nor too easy for the agent to solve, hence enabling continual improvement. We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
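The goal-proposal rule described in the abstract can be sketched directly: score candidate goals by the disagreement (standard deviation) of an ensemble of Q-functions, then sample goals with probability increasing in that score. The softmax sampling, the temperature, and the toy linear Q-ensemble below are assumptions for the example, not the paper's exact procedure.

```python
import numpy as np

def sample_goals(candidate_goals, q_ensemble, state, num_samples=4, temperature=1.0):
    """Score each candidate goal by the disagreement (std) of an ensemble of
    Q-functions, then sample goals with probability proportional to that score."""
    scores = np.array([
        np.std([q(state, g) for q in q_ensemble]) for g in candidate_goals
    ])
    probs = np.exp(scores / temperature)
    probs /= probs.sum()
    idx = np.random.choice(len(candidate_goals), size=num_samples, p=probs)
    return [candidate_goals[i] for i in idx]

# Toy ensemble: each "Q-function" is a random linear function of (state, goal).
rng = np.random.default_rng(0)
q_ensemble = [
    (lambda w: (lambda s, g: float(w @ np.concatenate([s, g]))))(rng.standard_normal(4))
    for _ in range(5)
]
goals = [rng.standard_normal(2) for _ in range(20)]
print(sample_goals(goals, q_ensemble, state=np.zeros(2)))
```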
MRI Banding Removal via Adversarial Training
https://papers.nips.cc/paper_files/paper/2020/hash/567b8f5f423af15818a068235807edc0-Abstract.html
Aaron Defazio, Tullie Murrell, Michael Recht
https://papers.nips.cc/paper_files/paper/2020/hash/567b8f5f423af15818a068235807edc0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/567b8f5f423af15818a068235807edc0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10366-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/567b8f5f423af15818a068235807edc0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/567b8f5f423af15818a068235807edc0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/567b8f5f423af15818a068235807edc0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/567b8f5f423af15818a068235807edc0-Supplemental.zip
MR images reconstructed from sub-sampled Cartesian data using deep learning techniques show a characteristic banding (sometimes described as streaking), which is particularly strong in low signal-to-noise regions of the reconstructed image. In this work, we propose the use of an adversarial loss that penalizes banding structures without requiring any human annotation. Our technique greatly reduces the appearance of banding, without requiring any additional computation or post-processing at reconstruction time. We report the results of a blind comparison against a strong baseline by a group of expert evaluators (board-certified radiologists), where our approach is ranked superior at banding removal with no statistically significant loss of detail. A reference implementation of our method is available in the supplementary material.
The NetHack Learning Environment
https://papers.nips.cc/paper_files/paper/2020/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html
Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, Tim Rocktäschel
https://papers.nips.cc/paper_files/paper/2020/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/569ff987c643b4bedf504efda8f786c2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10367-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/569ff987c643b4bedf504efda8f786c2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/569ff987c643b4bedf504efda8f786c2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/569ff987c643b4bedf504efda8f786c2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/569ff987c643b4bedf504efda8f786c2-Supplemental.pdf
Progress in Reinforcement Learning (RL) algorithms goes hand-in-hand with the development of challenging environments that test the limits of current methods. While existing RL environments are either sufficiently complex or based on fast simulation, they are rarely both. Here, we present the NetHack Learning Environment (NLE), a scalable, procedurally generated, stochastic, rich, and challenging environment for RL research based on the popular single-player terminal-based roguelike game, NetHack. We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL, while dramatically reducing the computational resources required to gather a large amount of experience. We compare NLE and its task suite to existing alternatives, and discuss why it is an ideal medium for testing the robustness and systematic generalization of RL agents. We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration, alongside qualitative analysis of various agents trained in the environment. NLE is open source and available at https://github.com/facebookresearch/nle.
Language and Visual Entity Relationship Graph for Agent Navigation
https://papers.nips.cc/paper_files/paper/2020/hash/56dc0997d871e9177069bb472574eb29-Abstract.html
Yicong Hong, Cristian Rodriguez, Yuankai Qi, Qi Wu, Stephen Gould
https://papers.nips.cc/paper_files/paper/2020/hash/56dc0997d871e9177069bb472574eb29-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/56dc0997d871e9177069bb472574eb29-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10368-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/56dc0997d871e9177069bb472574eb29-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/56dc0997d871e9177069bb472574eb29-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/56dc0997d871e9177069bb472574eb29-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/56dc0997d871e9177069bb472574eb29-Supplemental.pdf
Vision-and-Language Navigation (VLN) requires an agent to navigate in a real-world environment following natural language instructions. From both the textual and visual perspectives, we find that the relationships among the scene, its objects, and directional cues are essential for the agent to interpret complex instructions and correctly perceive the environment. To capture and utilize the relationships, we propose a novel Language and Visual Entity Relationship Graph for modelling the inter-modal relationships between text and vision, and the intra-modal relationships among visual entities. We propose a message passing algorithm for propagating information between language elements and visual entities in the graph, which we then combine to determine the next action to take. Experiments show that by taking advantage of the relationships we are able to improve over state-of-the-art. On the Room-to-Room (R2R) benchmark, our method achieves the new best performance on the test unseen split with success rate weighted by path length of 52%. On the Room-for-Room (R4R) dataset, our method significantly improves the previous best from 13% to 34% on the success weighted by normalized dynamic time warping.
ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping
https://papers.nips.cc/paper_files/paper/2020/hash/56f9f88906aebf4ad985aaec7fa01313-Abstract.html
Cher Bass, Mariana da Silva, Carole Sudre, Petru-Daniel Tudosiu, Stephen Smith, Emma Robinson
https://papers.nips.cc/paper_files/paper/2020/hash/56f9f88906aebf4ad985aaec7fa01313-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/56f9f88906aebf4ad985aaec7fa01313-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10369-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/56f9f88906aebf4ad985aaec7fa01313-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/56f9f88906aebf4ad985aaec7fa01313-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/56f9f88906aebf4ad985aaec7fa01313-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/56f9f88906aebf4ad985aaec7fa01313-Supplemental.pdf
Feature attribution (FA), or the assignment of class-relevance to different locations in an image, is important for many classification problems but is particularly crucial within the neuroscience domain, where accurate mechanistic models of behaviours, or disease, require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging as phenotypes are typically heterogeneous, and changes occur against a background of significant natural variation. Here, we present a novel framework for creating class specific FA maps through image-to-image translation. We propose the use of a VAE-GAN to explicitly disentangle class relevance from background features for improved interpretability properties, which results in meaningful FA maps. We validate our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing (UK Biobank), and (simulated) lesion detection. We show that FA maps generated by our method outperform baseline FA methods when validated against ground truth. More significantly, our approach is the first to use latent space sampling to support exploration of phenotype variation.
Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/572201a4497b0b9f02d4f279b09ec30d-Abstract.html
Zhou Fan, Zhichao Wang
https://papers.nips.cc/paper_files/paper/2020/hash/572201a4497b0b9f02d4f279b09ec30d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/572201a4497b0b9f02d4f279b09ec30d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10370-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/572201a4497b0b9f02d4f279b09ec30d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/572201a4497b0b9f02d4f279b09ec30d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/572201a4497b0b9f02d4f279b09ec30d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/572201a4497b0b9f02d4f279b09ec30d-Supplemental.zip
We study the eigenvalue distributions of the Conjugate Kernel and Neural Tangent Kernel associated to multi-layer feedforward neural networks. In an asymptotic regime where network width is increasing linearly in sample size, under random initialization of the weights, and for input samples satisfying a notion of approximate pairwise orthogonality, we show that the eigenvalue distributions of the CK and NTK converge to deterministic limits. The limit for the CK is described by iterating the Marcenko-Pastur map across the hidden layers. The limit for the NTK is equivalent to that of a linear combination of the CK matrices across layers, and may be described by recursive fixed-point equations that extend this Marcenko-Pastur map. We demonstrate the agreement of these asymptotic predictions with the observed spectra for both synthetic and CIFAR-10 training data, and we perform a small simulation to investigate the evolutions of these spectra over training.
No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium
https://papers.nips.cc/paper_files/paper/2020/hash/5763abe87ed1938799203fb6e8650025-Abstract.html
Andrea Celli, Alberto Marchesi, Gabriele Farina, Nicola Gatti
https://papers.nips.cc/paper_files/paper/2020/hash/5763abe87ed1938799203fb6e8650025-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5763abe87ed1938799203fb6e8650025-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10371-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5763abe87ed1938799203fb6e8650025-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5763abe87ed1938799203fb6e8650025-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5763abe87ed1938799203fb6e8650025-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5763abe87ed1938799203fb6e8650025-Supplemental.pdf
The existence of simple, uncoupled no-regret dynamics that converge to correlated equilibria in normal-form games is a celebrated result in the theory of multi-agent systems. Specifically, it has been known for more than 20 years that when all players seek to minimize their internal regret in a repeated normal-form game, the empirical frequency of play converges to a normal-form correlated equilibrium. Extensive-form (that is, tree-form) games generalize normal-form games by modeling both sequential and simultaneous moves, as well as private information. Because of the sequential nature and presence of partial information in the game, extensive-form correlation has significantly different properties than the normal-form counterpart, many of which are still open research directions. Extensive-form correlated equilibrium (EFCE) has been proposed as the natural extensive-form counterpart to normal-form correlated equilibrium. However, it was previously unknown whether EFCE emerges as the result of uncoupled agent dynamics. In this paper, we give the first uncoupled no-regret dynamics that converge to the set of EFCEs in n-player general-sum extensive-form games with perfect recall. First, we introduce a notion of trigger regret in extensive-form games, which extends that of internal regret in normal-form games. When each player has low trigger regret, the empirical frequency of play is close to an EFCE. Then, we give an efficient no-trigger-regret algorithm. Our algorithm decomposes trigger regret into local subproblems at each decision point for the player, and constructs a global strategy of the player from the local solutions at each decision point.
Estimating weighted areas under the ROC curve
https://papers.nips.cc/paper_files/paper/2020/hash/5781a2637b476d781eb3134581b32044-Abstract.html
Andreas Maurer, Massimiliano Pontil
https://papers.nips.cc/paper_files/paper/2020/hash/5781a2637b476d781eb3134581b32044-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5781a2637b476d781eb3134581b32044-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10372-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5781a2637b476d781eb3134581b32044-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5781a2637b476d781eb3134581b32044-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5781a2637b476d781eb3134581b32044-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5781a2637b476d781eb3134581b32044-Supplemental.pdf
Exponential bounds on the estimation error are given for the plug-in estimator of weighted areas under the ROC curve. The bounds hold for single score functions and uniformly over classes of functions, whose complexity can be controlled by Gaussian or Rademacher averages. The results justify learning algorithms which select score functions to maximize the empirical partial area under the curve (pAUC). They also illustrate the use of some recent advances in the theory of nonlinear empirical processes.
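For concreteness, the plug-in estimator in the special case of the partial AUC over a false-positive-rate window can be sketched as follows; the normalization by the window width and the tie-handling term are choices made for this example, not a definitive implementation of the paper's estimator.

```python
import numpy as np

def partial_auc(scores_pos, scores_neg, fpr_range=(0.0, 0.2)):
    """Plug-in estimate of the area under the ROC curve restricted to a
    false-positive-rate window, normalized by the window width."""
    scores_neg = np.sort(scores_neg)[::-1]          # most "positive-looking" negatives first
    n = len(scores_neg)
    lo, hi = int(np.floor(fpr_range[0] * n)), int(np.ceil(fpr_range[1] * n))
    area = 0.0
    for x in scores_neg[lo:hi]:                     # only negatives inside the FPR window
        area += np.mean(scores_pos > x) + 0.5 * np.mean(scores_pos == x)
    return area / max(hi - lo, 1)

rng = np.random.default_rng(0)
pos, neg = rng.normal(1.0, 1.0, 500), rng.normal(0.0, 1.0, 500)
print(partial_auc(pos, neg))
```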
Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study
https://papers.nips.cc/paper_files/paper/2020/hash/57cd30d9088b0185cf0ebca1a472ff1d-Abstract.html
Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni
https://papers.nips.cc/paper_files/paper/2020/hash/57cd30d9088b0185cf0ebca1a472ff1d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/57cd30d9088b0185cf0ebca1a472ff1d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10373-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/57cd30d9088b0185cf0ebca1a472ff1d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/57cd30d9088b0185cf0ebca1a472ff1d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/57cd30d9088b0185cf0ebca1a472ff1d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/57cd30d9088b0185cf0ebca1a472ff1d-Supplemental.pdf
We revisit this paradigm in arguably the simplest non-trivial setup, and study the implicit bias of Stochastic Gradient Descent (SGD) in the context of Stochastic Convex Optimization. As a first step, we provide a simple construction that rules out the existence of a \emph{distribution-independent} implicit regularizer that governs the generalization ability of SGD. We then demonstrate a learning problem that rules out a very general class of \emph{distribution-dependent} implicit regularizers from explaining generalization, which includes strongly convex regularizers as well as non-degenerate norm-based regularizations. Certain aspects of our constructions point out to significant difficulties in providing a comprehensive explanation of an algorithm's generalization performance by solely arguing about its implicit regularization properties.
Generalized Hindsight for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/57e5cb96e22546001f1d6520ff11d9ba-Abstract.html
Alexander Li, Lerrel Pinto, Pieter Abbeel
https://papers.nips.cc/paper_files/paper/2020/hash/57e5cb96e22546001f1d6520ff11d9ba-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/57e5cb96e22546001f1d6520ff11d9ba-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10374-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/57e5cb96e22546001f1d6520ff11d9ba-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/57e5cb96e22546001f1d6520ff11d9ba-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/57e5cb96e22546001f1d6520ff11d9ba-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/57e5cb96e22546001f1d6520ff11d9ba-Supplemental.zip
One of the key reasons for the high sample complexity in reinforcement learning (RL) is the inability to transfer knowledge from one task to another. In standard multi-task RL settings, low-reward data collected while trying to solve one task provides little to no signal for solving that particular task and is hence effectively wasted. However, we argue that this data, which is uninformative for one task, is likely a rich source of information for other tasks. To leverage this insight and efficiently reuse data, we present Generalized Hindsight: an approximate inverse reinforcement learning technique for relabeling behaviors with the right tasks. Intuitively, given a behavior generated under one task, Generalized Hindsight returns a different task that the behavior is better suited for. Then, the behavior is relabeled with this new task before being used by an off-policy RL optimizer. Compared to standard relabeling techniques, Generalized Hindsight provides a substantially more efficient re-use of samples, which we empirically demonstrate on a suite of multi-task navigation and manipulation tasks.
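A minimal sketch of the relabeling idea, using the simplest possible strategy: score a finished trajectory under every candidate task's reward and relabel it with the task under which it performs best. The paper's approximate-IRL relabeling is more sophisticated; the goal-reaching reward and the candidate-task set below are purely illustrative.

```python
import numpy as np

def relabel(trajectory, candidate_tasks, reward_fn):
    """Relabel a trajectory with the candidate task under which it earns the
    highest return, then recompute its rewards for the off-policy buffer."""
    returns = [sum(reward_fn(s, a, task) for s, a in trajectory) for task in candidate_tasks]
    best = candidate_tasks[int(np.argmax(returns))]
    return best, [(s, a, reward_fn(s, a, best)) for s, a in trajectory]

# Toy example: tasks are 2D goal locations, reward is negative distance to the goal.
reward_fn = lambda s, a, goal: -float(np.linalg.norm(s - goal))
traj = [(np.array([0.1, 0.2]), 0), (np.array([0.4, 0.5]), 1)]
tasks = [np.array([1.0, 1.0]), np.array([0.4, 0.5])]
print(relabel(traj, tasks, reward_fn)[0])   # the second task fits this behavior better
```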
Critic Regularized Regression
https://papers.nips.cc/paper_files/paper/2020/hash/588cb956d6bbe67078f29f8de420a13d-Abstract.html
Ziyu Wang, Alexander Novikov, Konrad Zolna, Josh S. Merel, Jost Tobias Springenberg, Scott E. Reed, Bobak Shahriari, Noah Siegel, Caglar Gulcehre, Nicolas Heess, Nando de Freitas
https://papers.nips.cc/paper_files/paper/2020/hash/588cb956d6bbe67078f29f8de420a13d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/588cb956d6bbe67078f29f8de420a13d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10375-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/588cb956d6bbe67078f29f8de420a13d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/588cb956d6bbe67078f29f8de420a13d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/588cb956d6bbe67078f29f8de420a13d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/588cb956d6bbe67078f29f8de420a13d-Supplemental.pdf
Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
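The form of critic-regularized regression described here admits a compact sketch: behavior cloning on dataset actions, weighted by a function of the estimated advantage. Both an exponential and a binary weighting variant are shown; the clipping constant, the stand-in critic and value functions, and the Gaussian policy are assumptions for the example.

```python
import torch

class GaussianPolicy(torch.nn.Module):
    def __init__(self, s_dim, a_dim):
        super().__init__()
        self.mu = torch.nn.Linear(s_dim, a_dim)
        self.log_std = torch.nn.Parameter(torch.zeros(a_dim))

    def log_prob(self, s, a):
        dist = torch.distributions.Normal(self.mu(s), self.log_std.exp())
        return dist.log_prob(a).sum(-1)

def crr_policy_loss(policy, critic, value, states, actions, beta=1.0, mode="exp"):
    """Behavior cloning on dataset actions, weighted by a function of the
    estimated advantage Q(s, a) - V(s)."""
    adv = critic(states, actions) - value(states)
    if mode == "exp":
        w = torch.clamp(torch.exp(adv / beta), max=20.0)   # exp-weighted variant
    else:
        w = (adv > 0).float()                              # binary (indicator) variant
    return -(w.detach() * policy.log_prob(states, actions)).mean()

# Toy usage with stand-in critic and value functions.
critic = lambda s, a: (s * a).sum(-1)
value = lambda s: torch.zeros(s.shape[0])
policy = GaussianPolicy(3, 3)
s, a = torch.randn(8, 3), torch.randn(8, 3)
print(crr_policy_loss(policy, critic, value, s, a).item())
```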
Boosting Adversarial Training with Hypersphere Embedding
https://papers.nips.cc/paper_files/paper/2020/hash/5898d8095428ee310bf7fa3da1864ff7-Abstract.html
Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Jun Zhu, Hang Su
https://papers.nips.cc/paper_files/paper/2020/hash/5898d8095428ee310bf7fa3da1864ff7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5898d8095428ee310bf7fa3da1864ff7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10376-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5898d8095428ee310bf7fa3da1864ff7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5898d8095428ee310bf7fa3da1864ff7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5898d8095428ee310bf7fa3da1864ff7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5898d8095428ee310bf7fa3da1864ff7-Supplemental.pdf
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models. In this work, we advocate incorporating the hypersphere embedding (HE) mechanism into the AT procedure by regularizing the features onto compact manifolds, which constitutes a lightweight yet effective module to blend in the strength of representation learning. Our extensive analyses reveal that AT and HE are well coupled to benefit the robustness of the adversarially trained models from several aspects. We validate the effectiveness and adaptability of HE by embedding it into the popular AT frameworks including PGD-AT, ALP, and TRADES, as well as the FreeAT and FastAT strategies. In the experiments, we evaluate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets, which verifies that integrating HE can consistently enhance the model robustness for each AT framework with little extra computation.
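A minimal sketch of the hypersphere-embedding idea as it is commonly instantiated: L2-normalize both features and classifier weights so the logits become scaled cosine similarities. The scale s=15.0 and the absence of an angular margin are simplifications; the paper's full module and its coupling with specific AT frameworks are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereLogits(nn.Module):
    """Cosine logits: features and class weights are L2-normalized, so the
    classifier operates on a compact hypersphere; s is a scale factor."""
    def __init__(self, feat_dim, num_classes, s=15.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s = s

    def forward(self, features):
        f = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.s * f @ w.t()

head = HypersphereLogits(128, 10)
print(head(torch.randn(4, 128)).shape)   # torch.Size([4, 10])
```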
Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs
https://papers.nips.cc/paper_files/paper/2020/hash/58ae23d878a47004366189884c2f8440-Abstract.html
Jiong Zhu, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, Danai Koutra
https://papers.nips.cc/paper_files/paper/2020/hash/58ae23d878a47004366189884c2f8440-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/58ae23d878a47004366189884c2f8440-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10377-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/58ae23d878a47004366189884c2f8440-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/58ae23d878a47004366189884c2f8440-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/58ae23d878a47004366189884c2f8440-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/58ae23d878a47004366189884c2f8440-Supplemental.pdf
We investigate the representation power of graph neural networks in the semi-supervised node classification task under heterophily or low homophily, i.e., in networks where connected nodes may have different class labels and dissimilar features. Many popular GNNs fail to generalize to this setting, and are even outperformed by models that ignore the graph structure (e.g., multilayer perceptrons). Motivated by this limitation, we identify a set of key designs—ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations—that boost learning from the graph structure under heterophily. We combine them into a graph neural network, H2GCN, which we use as the base method to empirically evaluate the effectiveness of the identified designs. Going beyond the traditional benchmarks with strong homophily, our empirical analysis shows that the identified designs increase the accuracy of GNNs by up to 40% and 27% over models without them on synthetic and real networks with heterophily, respectively, and yield competitive performance under homophily.
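The three identified designs can be sketched without any learned parameters: keep ego and neighbor embeddings separate (here by excluding self-loops and recovering the ego signal through the final combination), aggregate over both 1-hop and strict 2-hop neighborhoods, and concatenate the intermediate representations of all rounds. This NumPy toy omits the feature transformations and nonlinearities of the actual H2GCN.

```python
import numpy as np

def h2gcn_like_embedding(A, X, rounds=2):
    """Toy aggregation combining the three designs: ego/neighbor separation,
    1-hop and strict 2-hop neighborhoods, and combination of intermediate
    representations across rounds."""
    A = A.copy()
    np.fill_diagonal(A, 0)               # neighbor aggregation excludes the ego node
    A2 = np.clip(A @ A, 0, 1)
    np.fill_diagonal(A2, 0)              # strict 2-hop neighborhood
    norm = lambda M: M / np.maximum(M.sum(1, keepdims=True), 1)
    reps, h = [X], X
    for _ in range(rounds):
        h = np.concatenate([norm(A) @ h, norm(A2) @ h], axis=1)
        reps.append(h)
    return np.concatenate(reps, axis=1)  # combine intermediate representations

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
print(h2gcn_like_embedding(A, X).shape)  # (3, 21)
```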
Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows
https://papers.nips.cc/paper_files/paper/2020/hash/58c54802a9fb9526cd0923353a34a7ae-Abstract.html
Ruizhi Deng, Bo Chang, Marcus A. Brubaker, Greg Mori, Andreas Lehrmann
https://papers.nips.cc/paper_files/paper/2020/hash/58c54802a9fb9526cd0923353a34a7ae-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/58c54802a9fb9526cd0923353a34a7ae-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10378-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/58c54802a9fb9526cd0923353a34a7ae-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/58c54802a9fb9526cd0923353a34a7ae-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/58c54802a9fb9526cd0923353a34a7ae-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/58c54802a9fb9526cd0923353a34a7ae-Supplemental.pdf
Normalizing flows transform a simple base distribution into a complex target distribution and have proved to be powerful models for data generation and density estimation. In this work, we propose a novel type of normalizing flow driven by a differential deformation of the continuous-time Wiener process. As a result, we obtain a rich time series model whose observable process inherits many of the appealing properties of its base process, such as efficient computation of likelihoods and marginals. Furthermore, our continuous treatment provides a natural framework for irregular time series with an independent arrival process, including straightforward interpolation. We illustrate the desirable properties of the proposed model on popular stochastic processes and demonstrate its superior flexibility to variational RNN and latent ODE baselines in a series of experiments on synthetic and real-world data.
Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent
https://papers.nips.cc/paper_files/paper/2020/hash/5938b4d054136e5d59ada6ec9c295d7a-Abstract.html
Dimitris Fotakis, Thanasis Lianeas, Georgios Piliouras, Stratis Skoulakis
https://papers.nips.cc/paper_files/paper/2020/hash/5938b4d054136e5d59ada6ec9c295d7a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5938b4d054136e5d59ada6ec9c295d7a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10379-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5938b4d054136e5d59ada6ec9c295d7a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5938b4d054136e5d59ada6ec9c295d7a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5938b4d054136e5d59ada6ec9c295d7a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5938b4d054136e5d59ada6ec9c295d7a-Supplemental.pdf
The widely studied Generalized Min-Sum-Set-Cover (GMSSC) problem serves as a formal model for the setting above. GMSSC is NP-hard and the standard application of no-regret online learning algorithms is computationally inefficient, because they operate in the space of rankings. In this work, we show how to achieve low regret for GMSSC in polynomial-time. We employ dimensionality reduction from rankings to the space of doubly stochastic matrices, where we apply Online Gradient Descent. A key step is to show how subgradients can be computed efficiently, by solving the dual of a configuration LP. Using deterministic and randomized rounding schemes, we map doubly stochastic matrices back to rankings with a small loss in the GMSSC objective.
Training Normalizing Flows with the Information Bottleneck for Competitive Generative Classification
https://papers.nips.cc/paper_files/paper/2020/hash/593906af0d138e69f49d251d3e7cbed0-Abstract.html
Lynton Ardizzone, Radek Mackowiak, Carsten Rother, Ullrich Köthe
https://papers.nips.cc/paper_files/paper/2020/hash/593906af0d138e69f49d251d3e7cbed0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/593906af0d138e69f49d251d3e7cbed0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10380-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/593906af0d138e69f49d251d3e7cbed0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/593906af0d138e69f49d251d3e7cbed0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/593906af0d138e69f49d251d3e7cbed0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/593906af0d138e69f49d251d3e7cbed0-Supplemental.pdf
The Information Bottleneck (IB) objective uses information theory to formulate a task-performance versus robustness trade-off. It has been successfully applied in the standard discriminative classification setting. We pose the question whether the IB can also be used to train generative likelihood models such as normalizing flows. Since normalizing flows use invertible network architectures (INNs), they are information-preserving by construction. This seems contradictory to the idea of a bottleneck. In this work, firstly, we develop the theory and methodology of IB-INNs, a class of conditional normalizing flows where INNs are trained using the IB objective: Introducing a small amount of controlled information loss allows for an asymptotically exact formulation of the IB, while keeping the INN's generative capabilities intact. Secondly, we investigate the properties of these models experimentally, specifically used as generative classifiers. This model class offers advantages such as improved uncertainty quantification and out-of-distribution detection, but traditional generative classifier solutions suffer considerably in classification accuracy. We find the trade-off parameter in the IB controls a mix of generative capabilities and accuracy close to standard classifiers. Empirically, our uncertainty estimates in this mixed regime compare favourably to conventional generative and discriminative classifiers. Code is provided in the supplement.
Detecting Hands and Recognizing Physical Contact in the Wild
https://papers.nips.cc/paper_files/paper/2020/hash/595373f017b659cb7743291e920a8857-Abstract.html
Supreeth Narasimhaswamy, Trung Nguyen, Minh Hoai Nguyen
https://papers.nips.cc/paper_files/paper/2020/hash/595373f017b659cb7743291e920a8857-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/595373f017b659cb7743291e920a8857-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10381-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/595373f017b659cb7743291e920a8857-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/595373f017b659cb7743291e920a8857-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/595373f017b659cb7743291e920a8857-Review.html
null
We investigate a new problem of detecting hands and recognizing their physical contact state in unconstrained conditions. This is a challenging inference task given the need to reason beyond the local appearance of hands. The lack of training annotations indicating which object or parts of an object the hand is in contact with further complicates the task. To address this problem, we propose a novel convolutional network based on Mask-RCNN that can jointly learn to localize hands and predict their physical contact. The network uses outputs from another object detector to obtain locations of objects present in the scene. It uses these outputs and hand locations to recognize the hand's contact state using two attention mechanisms. The first attention mechanism is based on the affinity between the hand and a region enclosing the hand and the object, and densely pools features from this region to the hand region. The second attention module adaptively selects salient features from this plausible region of contact. To develop and evaluate our method's performance, we introduce a large-scale dataset called ContactHands, containing unconstrained images annotated with hand locations and contact states. The proposed network, including the parameters of attention modules, is end-to-end trainable. This network achieves approximately 7% relative improvement over a baseline network that was built on the vanilla Mask-RCNN architecture and trained for recognizing hand contact states.
On the Theory of Transfer Learning: The Importance of Task Diversity
https://papers.nips.cc/paper_files/paper/2020/hash/59587bffec1c7846f3e34230141556ae-Abstract.html
Nilesh Tripuraneni, Michael Jordan, Chi Jin
https://papers.nips.cc/paper_files/paper/2020/hash/59587bffec1c7846f3e34230141556ae-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/59587bffec1c7846f3e34230141556ae-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10382-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/59587bffec1c7846f3e34230141556ae-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/59587bffec1c7846f3e34230141556ae-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/59587bffec1c7846f3e34230141556ae-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/59587bffec1c7846f3e34230141556ae-Supplemental.pdf
We provide new statistical guarantees for transfer learning via representation learning--when transfer is achieved by learning a feature representation shared across different tasks. This enables learning on new tasks using far less data than is required to learn them in isolation. Formally, we consider $t+1$ tasks parameterized by functions of the form $f_j \circ h$ in a general function class $F \circ H$, where each $f_j$ is a task-specific function in $F$ and $h$ is the shared representation in $H$. Letting $C(\cdot)$ denote the complexity measure of the function class, we show that for diverse training tasks (1) the sample complexity needed to learn the shared representation across the first $t$ training tasks scales as $C(H) + t C(F)$, despite no explicit access to a signal from the feature representation and (2) with an accurate estimate of the representation, the sample complexity needed to learn a new task scales only with $C(F)$. Our results depend upon a new general notion of task diversity--applicable to models with general tasks, features, and losses--as well as a novel chain rule for Gaussian complexities. Finally, we exhibit the utility of our general framework in several models of importance in the literature.
Finite-Time Analysis of Round-Robin Kullback-Leibler Upper Confidence Bounds for Optimal Adaptive Allocation with Multiple Plays and Markovian Rewards
https://papers.nips.cc/paper_files/paper/2020/hash/597c7b407a02cc0a92167e7a371eca25-Abstract.html
Vrettos Moulos
https://papers.nips.cc/paper_files/paper/2020/hash/597c7b407a02cc0a92167e7a371eca25-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/597c7b407a02cc0a92167e7a371eca25-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10383-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/597c7b407a02cc0a92167e7a371eca25-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/597c7b407a02cc0a92167e7a371eca25-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/597c7b407a02cc0a92167e7a371eca25-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/597c7b407a02cc0a92167e7a371eca25-Supplemental.pdf
We study an extension of the classic stochastic multi-armed bandit problem which involves multiple plays and Markovian rewards in the rested bandits setting. In order to tackle this problem, we consider an adaptive allocation rule which at each stage combines the information from the sample means of all the arms with the Kullback-Leibler upper confidence bound of a single arm which is selected in a round-robin way. For rewards generated from a one-parameter exponential family of Markov chains, we provide a finite-time upper bound for the regret incurred from this adaptive allocation rule, which reveals the logarithmic dependence of the regret on the time horizon, and which is asymptotically optimal. For our analysis we devise several concentration results for Markov chains, including a maximal inequality for Markov chains, that may be of interest in their own right. As a byproduct of our analysis we also establish asymptotically optimal, finite-time guarantees for the case of multiple plays, and i.i.d. rewards drawn from a one-parameter exponential family of probability densities. Additionally, we provide simulation results that illustrate that calculating Kullback-Leibler upper confidence bounds in a round-robin way is significantly more efficient than calculating them for every arm at each round, and that the expected regrets of those two approaches behave similarly.
Neural Star Domain as Primitive Representation
https://papers.nips.cc/paper_files/paper/2020/hash/59a3adea76fadcb6dd9e54c96fc155d1-Abstract.html
Yuki Kawana, Yusuke Mukuta, Tatsuya Harada
https://papers.nips.cc/paper_files/paper/2020/hash/59a3adea76fadcb6dd9e54c96fc155d1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/59a3adea76fadcb6dd9e54c96fc155d1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10384-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/59a3adea76fadcb6dd9e54c96fc155d1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/59a3adea76fadcb6dd9e54c96fc155d1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/59a3adea76fadcb6dd9e54c96fc155d1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/59a3adea76fadcb6dd9e54c96fc155d1-Supplemental.pdf
Reconstructing 3D objects from 2D images is a fundamental task in computer vision. Accurate structured reconstruction by parsimonious and semantic primitive representation further broadens its application. When reconstructing a target shape with multiple primitives, it is preferable that one can instantly access the union of basic properties of the shape such as collective volume and surface, treating the primitives as if they are a single shape. This becomes possible by primitive representation with unified implicit and explicit representations. However, primitive representations in current approaches do not satisfy all of the above requirements at the same time. To solve this problem, we propose a novel primitive representation named neural star domain (NSD) that learns primitive shapes in the star domain. We show that NSD is a universal approximator of the star domain and is not only parsimonious and semantic but also an implicit and explicit shape representation. We demonstrate that our approach outperforms existing methods in image reconstruction tasks, semantic capabilities, and speed and quality of sampling high-resolution meshes.
Off-Policy Interval Estimation with Lipschitz Value Iteration
https://papers.nips.cc/paper_files/paper/2020/hash/59accb9fe696ce55e28b7d23a009e2d1-Abstract.html
Ziyang Tang, Yihao Feng, Na Zhang, Jian Peng, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/59accb9fe696ce55e28b7d23a009e2d1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/59accb9fe696ce55e28b7d23a009e2d1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10385-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/59accb9fe696ce55e28b7d23a009e2d1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/59accb9fe696ce55e28b7d23a009e2d1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/59accb9fe696ce55e28b7d23a009e2d1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/59accb9fe696ce55e28b7d23a009e2d1-Supplemental.zip
Off-policy evaluation provides an essential tool for evaluating the effects of different policies or treatments using only observed data. When applied to high-stakes scenarios such as medical diagnosis or financial decision-making, it is essential to provide provably correct upper and lower bounds of the expected reward, not just a classical single point estimate, to the end-users, as executing a poor policy can be very costly. In this work, we propose a provably correct method for obtaining interval bounds for off-policy evaluation in a general continuous setting. The idea is to search for the maximum and minimum values of the expected reward among all the Lipschitz Q-functions that are consistent with the observations, which amounts to solving a constrained optimization problem on a Lipschitz function space. We go on to introduce a Lipschitz value iteration method to monotonically tighten the interval, which is simple yet efficient and provably convergent. We demonstrate the practical efficiency of our method on a range of benchmarks.
Inverse Rational Control with Partially Observable Continuous Nonlinear Dynamics
https://papers.nips.cc/paper_files/paper/2020/hash/5a01f0597ac4bdf35c24846734ee9a76-Abstract.html
Minhae Kwon, Saurabh Daptardar, Paul R. Schrater, Xaq Pitkow
https://papers.nips.cc/paper_files/paper/2020/hash/5a01f0597ac4bdf35c24846734ee9a76-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a01f0597ac4bdf35c24846734ee9a76-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10386-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a01f0597ac4bdf35c24846734ee9a76-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a01f0597ac4bdf35c24846734ee9a76-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a01f0597ac4bdf35c24846734ee9a76-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a01f0597ac4bdf35c24846734ee9a76-Supplemental.pdf
A fundamental question in neuroscience is how the brain creates an internal model of the world to guide actions using sequences of ambiguous sensory information. This is naturally formulated as a reinforcement learning problem under partial observations, where an agent must estimate relevant latent variables in the world from its evidence, anticipate possible future states, and choose actions that optimize total expected reward. This problem can be solved by control theory, which allows us to find the optimal actions for a given system dynamics and objective function. However, animals often appear to behave suboptimally. Why? We hypothesize that animals have their own flawed internal model of the world, and choose actions with the highest expected subjective reward according to that flawed model. We describe this behavior as {\it rational} but not optimal. The problem of Inverse Rational Control (IRC) aims to identify which internal model would best explain an agent's actions. Our contribution here generalizes past work on Inverse Rational Control which solved this problem for discrete control in partially observable Markov decision processes. Here we accommodate continuous nonlinear dynamics and continuous actions, and impute sensory observations corrupted by unknown noise that is private to the animal. We first build an optimal Bayesian agent that learns an optimal policy generalized over the entire model space of dynamics and subjective rewards using deep reinforcement learning. Crucially, this allows us to compute a likelihood over models for experimentally observable action trajectories acquired from a suboptimal agent. We then find the model parameters that maximize the likelihood using gradient ascent. Our method successfully recovers the true model of rational agents. This approach provides a foundation for interpreting the behavioral and neural dynamics of animal brains during complex tasks.
Deep Statistical Solvers
https://papers.nips.cc/paper_files/paper/2020/hash/5a16bce575f3ddce9c819de125ba0029-Abstract.html
Balthazar Donon, Zhengying Liu, Wenzhuo LIU, Isabelle Guyon, Antoine Marot, Marc Schoenauer
https://papers.nips.cc/paper_files/paper/2020/hash/5a16bce575f3ddce9c819de125ba0029-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a16bce575f3ddce9c819de125ba0029-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10387-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a16bce575f3ddce9c819de125ba0029-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a16bce575f3ddce9c819de125ba0029-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a16bce575f3ddce9c819de125ba0029-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a16bce575f3ddce9c819de125ba0029-Supplemental.zip
This paper introduces Deep Statistical Solvers (DSS), a new class of trainable solvers for optimization problems, arising, e.g., from system simulations. The key idea is to learn a solver that generalizes to a given distribution of problem instances. This is achieved by directly using as loss the objective function of the problem, as opposed to most previous Machine Learning based approaches, which mimic the solutions attained by an existing solver. Though both types of approaches outperform classical solvers with respect to speed for a given accuracy, a distinctive advantage of DSS is that they can be trained without a training set of sample solutions. Focusing on use cases of systems of interacting and interchangeable entities (e.g. molecular dynamics, power systems, discretized PDEs), the proposed approach is instantiated within a class of Graph Neural Networks. Under sufficient conditions, we prove that the corresponding set of functions contains approximations to any arbitrary precision of the actual solution of the optimization problem. The proposed approach is experimentally validated on large linear problems, demonstrating super-generalisation properties, and on AC power grid simulations, on which the predictions of the trained model have a correlation higher than 99.99% with the outputs of the classical Newton-Raphson method (known for its accuracy), while being 2 to 3 orders of magnitude faster.
Distributionally Robust Parametric Maximum Likelihood Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/5a29503a4909fcade36b1823e7cebcf5-Abstract.html
Viet Anh Nguyen, Xuhui Zhang, Jose Blanchet, Angelos Georghiou
https://papers.nips.cc/paper_files/paper/2020/hash/5a29503a4909fcade36b1823e7cebcf5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a29503a4909fcade36b1823e7cebcf5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10388-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a29503a4909fcade36b1823e7cebcf5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a29503a4909fcade36b1823e7cebcf5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a29503a4909fcade36b1823e7cebcf5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a29503a4909fcade36b1823e7cebcf5-Supplemental.zip
We consider the parameter estimation problem of a probabilistic generative model prescribed using a natural exponential family of distributions. For this problem, the typical maximum likelihood estimator usually overfits under limited training sample size, is sensitive to noise and may perform poorly on downstream predictive tasks. To mitigate these issues, we propose a distributionally robust maximum likelihood estimator that minimizes the worst-case expected log-loss uniformly over a parametric Kullback-Leibler ball around a parametric nominal distribution. Leveraging the analytical expression of the Kullback-Leibler divergence between two distributions in the same natural exponential family, we show that the min-max estimation problem is tractable in a broad setting, including the robust training of generalized linear models. Our novel robust estimator also enjoys statistical consistency and delivers promising empirical results in both regression and classification tasks.
Secretary and Online Matching Problems with Machine Learned Advice
https://papers.nips.cc/paper_files/paper/2020/hash/5a378f8490c8d6af8647a753812f6e31-Abstract.html
Antonios Antoniadis, Themis Gouleakis, Pieter Kleer, Pavel Kolev
https://papers.nips.cc/paper_files/paper/2020/hash/5a378f8490c8d6af8647a753812f6e31-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a378f8490c8d6af8647a753812f6e31-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10389-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a378f8490c8d6af8647a753812f6e31-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a378f8490c8d6af8647a753812f6e31-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a378f8490c8d6af8647a753812f6e31-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a378f8490c8d6af8647a753812f6e31-Supplemental.pdf
The classical analysis of online algorithms, due to its worst-case nature, can be quite pessimistic when the input instance at hand is far from worst-case. Often this is not an issue with machine learning approaches, which shine in exploiting patterns in past inputs in order to predict the future. However, such predictions, although usually accurate, can be arbitrarily poor. Inspired by a recent line of work, we augment three well-known online settings with machine learned predictions about the future, and develop algorithms that take them into account. In particular, we study the following online selection problems: (i) the classical secretary problem, (ii) online bipartite matching and (iii) the graphic matroid secretary problem. Our algorithms still come with a worst-case performance guarantee in the case that predictions are subpar while obtaining an improved competitive ratio (over the best-known classical online algorithm for each problem) when the predictions are sufficiently accurate. For each algorithm, we establish a trade-off between the competitive ratios obtained in the two respective cases.
Deep Transformation-Invariant Clustering
https://papers.nips.cc/paper_files/paper/2020/hash/5a5eab21ca2a8fef4af5e35709ecca15-Abstract.html
Tom Monnier, Thibault Groueix, Mathieu Aubry
https://papers.nips.cc/paper_files/paper/2020/hash/5a5eab21ca2a8fef4af5e35709ecca15-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a5eab21ca2a8fef4af5e35709ecca15-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10390-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a5eab21ca2a8fef4af5e35709ecca15-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a5eab21ca2a8fef4af5e35709ecca15-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a5eab21ca2a8fef4af5e35709ecca15-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a5eab21ca2a8fef4af5e35709ecca15-Supplemental.pdf
Recent advances in image clustering typically focus on learning better deep representations. In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict transformations and performs clustering directly in image space. This learning process naturally fits in the gradient-based training of K-means and Gaussian mixture models, without requiring any additional loss or hyper-parameters. It leads us to two new deep transformation-invariant clustering frameworks, which jointly learn prototypes and transformations. More specifically, we use deep learning modules that enable us to resolve invariance to spatial, color and morphological transformations. Our approach is conceptually simple and comes with several advantages, including the possibility to easily adapt the desired invariance to the task and a strong interpretability of both cluster centers and assignments to clusters. We demonstrate that our novel approach yields competitive and highly promising results on standard image clustering benchmarks. Finally, we showcase its robustness and the advantages of its improved interpretability by visualizing clustering results over real photograph collections.
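To make the "cluster in image space under transformations" idea concrete, here is a tiny, hedged sketch assuming numpy: the prototype-to-image distance is minimized over a small set of discrete translations, a crude stand-in for the learned spatial/color/morphological transformation modules in the paper. Function names and the shift range are illustrative only.

```python
import numpy as np

def shift_invariant_distance(x, prototype, max_shift=2):
    """Distance between an image and a prototype, minimized over small integer
    translations of the prototype (a discrete stand-in for learned transformations)."""
    best = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(prototype, shift=(dy, dx), axis=(0, 1))
            best = min(best, np.linalg.norm(x - shifted))
    return best

def assign_clusters(images, prototypes):
    """Transformation-invariant K-means assignment step."""
    return np.array([
        np.argmin([shift_invariant_distance(x, p) for p in prototypes])
        for x in images
    ])

rng = np.random.default_rng(0)
prototypes = rng.random((3, 28, 28))
# images are shifted copies of the prototypes, so plain K-means would struggle
images = np.stack([np.roll(prototypes[i % 3], i % 3, axis=1) for i in range(9)])
print(assign_clusters(images, prototypes))   # images from prototype i land in cluster i
```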
Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree
https://papers.nips.cc/paper_files/paper/2020/hash/5a66b9200f29ac3fa0ae244cc2a51b39-Abstract.html
Peizhong Ju, Xiaojun Lin, Jia Liu
https://papers.nips.cc/paper_files/paper/2020/hash/5a66b9200f29ac3fa0ae244cc2a51b39-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a66b9200f29ac3fa0ae244cc2a51b39-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10391-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a66b9200f29ac3fa0ae244cc2a51b39-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a66b9200f29ac3fa0ae244cc2a51b39-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a66b9200f29ac3fa0ae244cc2a51b39-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a66b9200f29ac3fa0ae244cc2a51b39-Supplemental.pdf
Recently, there has been significant interest in studying the so-called "double-descent" of the generalization error of linear regression models under the overparameterized and overfitting regime, with the hope that such analysis may provide the first step towards understanding why overparameterized deep neural networks (DNNs) still generalize well. However, to date most of these studies have focused on the min L2-norm solution that overfits the data. In contrast, in this paper we study the overfitting solution that minimizes the L1-norm, which is known as Basis Pursuit (BP) in the compressed sensing literature. Under a sparse true linear regression model with p i.i.d. Gaussian features, we show that for a large range of p up to a limit that grows exponentially with the number of samples n, with high probability the model error of BP is upper bounded by a value that decreases with p. To the best of our knowledge, this is the first analytical result in the literature establishing the double-descent of overfitting BP for finite n and p. Further, our results reveal significant differences between the double-descent of BP and min L2-norm solutions. Specifically, the double-descent upper-bound of BP is independent of the signal strength, and for high SNR and sparse models the descent-floor of BP can be much lower and wider than that of min L2-norm solutions.
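A small sketch of the Basis Pursuit interpolator studied above, assuming numpy and scipy are available: the minimum-L1-norm solution that exactly fits the data is computed via the standard linear-programming reformulation. This only illustrates the estimator itself, not the paper's double-descent analysis; the toy problem sizes are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, y):
    """Return the minimum-L1-norm w with X @ w = y (Basis Pursuit).
    Split w = u - v with u, v >= 0 and solve the equivalent LP."""
    n, p = X.shape
    c = np.ones(2 * p)                       # minimize sum(u) + sum(v) = ||w||_1
    A_eq = np.hstack([X, -X])                # X (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * p), method="highs")
    u, v = res.x[:p], res.x[p:]
    return u - v

# overparameterized toy example: p >> n, sparse ground truth, noiseless labels
rng = np.random.default_rng(1)
n, p, s = 30, 300, 3
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:s] = rng.normal(size=s)
y = X @ w_true
w_bp = basis_pursuit(X, y)
print("interpolates the data:", np.allclose(X @ w_bp, y, atol=1e-6))
print("model error ||w_bp - w_true||:", np.linalg.norm(w_bp - w_true))
```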
Improving Generalization in Reinforcement Learning with Mixture Regularization
https://papers.nips.cc/paper_files/paper/2020/hash/5a751d6a0b6ef05cfe51b86e5d1458e6-Abstract.html
KAIXIN WANG, Bingyi Kang, Jie Shao, Jiashi Feng
https://papers.nips.cc/paper_files/paper/2020/hash/5a751d6a0b6ef05cfe51b86e5d1458e6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a751d6a0b6ef05cfe51b86e5d1458e6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10392-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a751d6a0b6ef05cfe51b86e5d1458e6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a751d6a0b6ef05cfe51b86e5d1458e6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a751d6a0b6ef05cfe51b86e5d1458e6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a751d6a0b6ef05cfe51b86e5d1458e6-Supplemental.pdf
Deep reinforcement learning (RL) agents trained in a limited set of environments tend to suffer from overfitting and fail to generalize to unseen testing environments. To improve their generalizability, data augmentation approaches (e.g. cutout and random convolution) have previously been explored to increase the data diversity. However, we find these approaches only locally perturb the observations regardless of the training environments, showing limited effectiveness in enhancing the data diversity and the generalization performance. In this work, we introduce a simple approach, named mixreg, which trains agents on a mixture of observations from different training environments and imposes linearity constraints on the observation interpolations and the supervision (e.g. associated reward) interpolations. Mixreg increases the data diversity more effectively and helps learn smoother policies. We verify its effectiveness in improving generalization by conducting extensive experiments on the large-scale Procgen benchmark. Results show mixreg outperforms the well-established baselines on unseen testing environments by a large margin. Mixreg is simple, effective and general. It can be applied to both policy-based and value-based RL algorithms. Code is available at https://github.com/kaixin96/mixreg.
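The core data operation behind mixreg is mixup-style interpolation of observations and their supervision signals. Below is a minimal, hedged sketch assuming numpy; in the actual method the mixed pairs come from different training environments and the same coefficient is applied inside the RL loss (value or policy targets), which is not shown here. The function name and Beta parameter are illustrative.

```python
import numpy as np

def mixreg_batch(obs, rewards, alpha=0.2, rng=None):
    """Mix random pairs of observations and their supervision signals with a
    Beta-distributed coefficient (mixup-style interpolation)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha, size=obs.shape[0])
    perm = rng.permutation(obs.shape[0])
    lam_obs = lam.reshape(-1, *([1] * (obs.ndim - 1)))       # broadcast over pixels
    mixed_obs = lam_obs * obs + (1 - lam_obs) * obs[perm]
    mixed_rew = lam * rewards + (1 - lam) * rewards[perm]    # linearity constraint
    return mixed_obs, mixed_rew

# toy usage on a batch of image-like observations with scalar supervision
rng = np.random.default_rng(0)
obs = rng.random((8, 64, 64, 3))
rew = rng.random(8)
mixed_obs, mixed_rew = mixreg_batch(obs, rew, rng=rng)
print(mixed_obs.shape, mixed_rew.shape)
```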
Pontryagin Differentiable Programming: An End-to-End Learning and Control Framework
https://papers.nips.cc/paper_files/paper/2020/hash/5a7b238ba0f6502e5d6be14424b20ded-Abstract.html
Wanxin Jin, Zhaoran Wang, Zhuoran Yang, Shaoshuai Mou
https://papers.nips.cc/paper_files/paper/2020/hash/5a7b238ba0f6502e5d6be14424b20ded-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5a7b238ba0f6502e5d6be14424b20ded-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10393-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5a7b238ba0f6502e5d6be14424b20ded-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5a7b238ba0f6502e5d6be14424b20ded-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5a7b238ba0f6502e5d6be14424b20ded-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5a7b238ba0f6502e5d6be14424b20ded-Supplemental.pdf
This paper develops a Pontryagin differentiable programming (PDP) methodology, which establishes a unified framework to solve a broad class of learning and control tasks. The PDP is distinguished from existing methods by two novel techniques: first, we differentiate through Pontryagin's Maximum Principle, and this allows us to obtain the analytical derivative of a trajectory with respect to tunable parameters within an optimal control system, enabling end-to-end learning of dynamics, policies, and/or control objective functions; and second, we propose an auxiliary control system in the backward pass of the PDP framework, and the output of this auxiliary control system is the analytical derivative of the original system's trajectory with respect to the parameters, which can be iteratively solved using standard control tools. We investigate three learning modes of the PDP: inverse reinforcement learning, system identification, and control/planning. We demonstrate the capability of the PDP in each learning mode on different high-dimensional systems, including a multi-link robot arm, a 6-DoF maneuvering UAV, and 6-DoF rocket-powered landing.
Learning from Aggregate Observations
https://papers.nips.cc/paper_files/paper/2020/hash/5b0fa0e4c041548bb6289e15d865a696-Abstract.html
Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2020/hash/5b0fa0e4c041548bb6289e15d865a696-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5b0fa0e4c041548bb6289e15d865a696-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10394-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5b0fa0e4c041548bb6289e15d865a696-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5b0fa0e4c041548bb6289e15d865a696-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5b0fa0e4c041548bb6289e15d865a696-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5b0fa0e4c041548bb6289e15d865a696-Supplemental.pdf
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances instead of individual instances, while the goal is still to predict labels of unseen individuals. A well-known example is multiple instance learning (MIL). In this paper, we extend MIL beyond binary classification to other problems such as multiclass classification and regression. We present a general probabilistic framework that accommodates a variety of aggregate observations, e.g., pairwise similarity/triplet comparison for classification and mean/difference/rank observation for regression. Simple maximum likelihood solutions can be applied to various differentiable models such as deep neural networks and gradient boosting machines. Moreover, we develop the concept of consistency up to an equivalence relation to characterize our estimator and show that it has nice convergence properties under mild assumptions. Experiments on three problem settings (classification via triplet comparison and regression via mean/rank observation) indicate the effectiveness of the proposed method.
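One of the simplest instances of the framework above is regression with mean observations: only the average label of each bag of instances is observed, and the model is fit so that the mean of its per-instance predictions matches it. A minimal sketch assuming numpy and a linear model (the paper's maximum-likelihood treatment covers many aggregate observation types and deep models; names here are illustrative):

```python
import numpy as np

def mean_observation_loss(w, bags, bag_means):
    """Squared error between each bag's aggregate (mean) label and the mean of
    the linear model's per-instance predictions over that bag."""
    loss = 0.0
    for X_bag, m in zip(bags, bag_means):
        loss += ((X_bag @ w).mean() - m) ** 2
    return loss / len(bags)

# toy data: instances grouped into bags, only per-bag mean labels observed
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
bags, bag_means = [], []
for _ in range(200):
    X_bag = rng.normal(size=(4, 5))
    bags.append(X_bag)
    bag_means.append((X_bag @ w_true).mean())

# plain gradient descent on the aggregate loss still recovers w_true here
w = np.zeros(5)
for _ in range(500):
    grad = np.zeros(5)
    for X_bag, m in zip(bags, bag_means):
        grad += 2 * ((X_bag @ w).mean() - m) * X_bag.mean(axis=0)
    w -= 0.1 * grad / len(bags)
print("recovered w close to w_true:", np.allclose(w, w_true, atol=1e-2))
```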
The Devil is in the Detail: A Framework for Macroscopic Prediction via Microscopic Models
https://papers.nips.cc/paper_files/paper/2020/hash/5b8e9841e87fb8fc590434f5d933c92c-Abstract.html
Yingxiang Yang, Negar Kiyavash, Le Song, Niao He
https://papers.nips.cc/paper_files/paper/2020/hash/5b8e9841e87fb8fc590434f5d933c92c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5b8e9841e87fb8fc590434f5d933c92c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10395-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5b8e9841e87fb8fc590434f5d933c92c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5b8e9841e87fb8fc590434f5d933c92c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5b8e9841e87fb8fc590434f5d933c92c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5b8e9841e87fb8fc590434f5d933c92c-Supplemental.pdf
Macroscopic data aggregated from microscopic events are pervasive in machine learning, such as country-level COVID-19 infection statistics based on city-level data. Yet, many existing approaches for predicting macroscopic behavior only use aggregated data, leaving a large amount of fine-grained microscopic information unused. In this paper, we propose a principled optimization framework for macroscopic prediction by fitting microscopic models based on conditional stochastic optimization. The framework leverages both macroscopic and microscopic information, and adapts to individual microscopic models involved in the aggregation. In addition, we propose efficient learning algorithms with convergence guarantees. In our experiments, we show that the proposed learning framework clearly outperforms other plug-in supervised learning approaches in real-world applications, including the prediction of daily infections of COVID-19 and medicare claims.
Subgraph Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/5bca8566db79f3788be9efd96c9ed70d-Abstract.html
Emily Alsentzer, Samuel Finlayson, Michelle Li, Marinka Zitnik
https://papers.nips.cc/paper_files/paper/2020/hash/5bca8566db79f3788be9efd96c9ed70d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5bca8566db79f3788be9efd96c9ed70d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10396-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5bca8566db79f3788be9efd96c9ed70d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5bca8566db79f3788be9efd96c9ed70d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5bca8566db79f3788be9efd96c9ed70d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5bca8566db79f3788be9efd96c9ed70d-Supplemental.pdf
Deep learning methods for graphs achieve remarkable performance on many node-level and graph-level prediction tasks. However, despite the proliferation of the methods and their success, prevailing Graph Neural Networks (GNNs) neglect subgraphs, rendering subgraph prediction tasks challenging to tackle in many impactful applications. Further, subgraph prediction tasks present several unique challenges: subgraphs can have non-trivial internal topology, but also carry a notion of position and external connectivity information relative to the underlying graph in which they exist. Here, we introduce SubGNN, a subgraph neural network to learn disentangled subgraph representations. We propose a novel subgraph routing mechanism that propagates neural messages between the subgraph’s components and randomly sampled anchor patches from the underlying graph, yielding highly accurate subgraph representations. SubGNN specifies three channels, each designed to capture a distinct aspect of subgraph topology, and we provide empirical evidence that the channels encode their intended properties. We design a series of new synthetic and real-world subgraph datasets. Empirical results for subgraph classification on eight datasets show that SubGNN achieves considerable performance gains, outperforming strong baseline methods, including node-level and graph-level GNNs, by 19.8% over the strongest baseline. SubGNN performs exceptionally well on challenging biomedical datasets, where subgraphs have complex topology and even comprise multiple disconnected components.
Demystifying Orthogonal Monte Carlo and Beyond
https://papers.nips.cc/paper_files/paper/2020/hash/5bce843dd76db8c939d5323dd3e54ec9-Abstract.html
Han Lin, Haoxian Chen, Krzysztof M. Choromanski, Tianyi Zhang, Clement Laroche
https://papers.nips.cc/paper_files/paper/2020/hash/5bce843dd76db8c939d5323dd3e54ec9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5bce843dd76db8c939d5323dd3e54ec9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10397-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5bce843dd76db8c939d5323dd3e54ec9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5bce843dd76db8c939d5323dd3e54ec9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5bce843dd76db8c939d5323dd3e54ec9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5bce843dd76db8c939d5323dd3e54ec9-Supplemental.pdf
Orthogonal Monte Carlo (OMC) is a very effective sampling algorithm imposing structural geometric conditions (orthogonality) on samples for variance reduction. Due to its simplicity and superior performance as compared to its Quasi Monte Carlo counterparts, OMC is used in a wide spectrum of challenging machine learning applications ranging from scalable kernel methods to predictive recurrent neural networks, generative models and reinforcement learning. However, theoretical understanding of the method remains very limited. In this paper we shed new light on the theoretical principles behind OMC, applying the theory of negatively dependent random variables to obtain several new concentration results. As a corollary, we obtain the first uniform convergence results for OMC and, consequently, substantially strengthen the best known downstream guarantees for kernel ridge regression via OMC. We also propose novel extensions of the method leveraging the theory of algebraic varieties over finite fields and particle algorithms, called Near-Orthogonal Monte Carlo (NOMC). We show that NOMC is the first algorithm consistently outperforming OMC in applications ranging from kernel methods to approximating distances in probabilistic metric spaces.
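For reference, a standard way to build an OMC block of Gaussian samples, sketched with numpy: orthonormalize a Gaussian matrix (with the usual sign fix) and give each direction an independent chi-distributed length, so the samples are exactly orthogonal while each is marginally Gaussian. This is the basic construction the paper analyzes, not its NOMC extensions; the helper name is illustrative.

```python
import numpy as np

def orthogonal_gaussian_samples(d, rng=None):
    """Draw d mutually orthogonal samples whose marginal law is N(0, I_d)."""
    rng = rng or np.random.default_rng()
    Q, R = np.linalg.qr(rng.normal(size=(d, d)))
    Q = Q * np.sign(np.diag(R))                        # sign fix: Haar-distributed rotation
    lengths = np.linalg.norm(rng.normal(size=(d, d)), axis=1)  # chi(d)-distributed norms
    return Q * lengths[:, None]                        # rows are the samples

# sanity check: pairwise inner products vanish up to numerical error
rng = np.random.default_rng(0)
S = orthogonal_gaussian_samples(64, rng)
G = S @ S.T
print("max |<s_i, s_j>| for i != j:", np.abs(G - np.diag(np.diag(G))).max())
```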
Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms
https://papers.nips.cc/paper_files/paper/2020/hash/5bd844f11fa520d54fa5edec06ea2507-Abstract.html
Alexander Wei, Fred Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/5bd844f11fa520d54fa5edec06ea2507-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5bd844f11fa520d54fa5edec06ea2507-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10398-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5bd844f11fa520d54fa5edec06ea2507-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5bd844f11fa520d54fa5edec06ea2507-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5bd844f11fa520d54fa5edec06ea2507-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5bd844f11fa520d54fa5edec06ea2507-Supplemental.pdf
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions. The goal is to design algorithms that are both consistent and robust, meaning that the algorithm performs well when predictions are accurate and maintains worst-case guarantees. Such algorithms have been studied in a recent line of works due to Lykouris and Vassilvitskii (ICML '18) and Purohit et al. (NeurIPS '18). They provide robustness-consistency trade-offs for a variety of online problems. However, they leave open the question of whether these trade-offs are tight, i.e., to what extent such trade-offs are necessary. In this paper, we provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions. We focus on the classic problems of ski-rental and non-clairvoyant scheduling and provide optimal trade-offs in various settings.
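For context, a sketch (in Python, assuming nothing beyond the standard library) of the deterministic learning-augmented ski-rental rule in the style of Purohit et al. (NeurIPS '18), whose consistency/robustness trade-off is the kind of trade-off the paper's lower bounds address. This is prior work shown for illustration, not the paper's contribution; the parameter names are mine.

```python
import math

def ski_rental_with_prediction(x, y, b, lam=0.5):
    """Deterministic learning-augmented ski rental: renting costs 1/day, buying
    costs b, y is the predicted number of ski days, x the true number.
    Roughly (1 + lam)-consistent and (1 + 1/lam)-robust."""
    if y >= b:
        buy_day = math.ceil(lam * b)       # trust the prediction: buy early
    else:
        buy_day = math.ceil(b / lam)       # distrust it: buy late
    if x >= buy_day:
        return (buy_day - 1) + b           # rented buy_day - 1 days, then bought
    return x                                # rented every day

# accurate vs. adversarial predictions, compared against OPT = min(x, b)
b = 100
for x, y in [(150, 160), (150, 5), (10, 160)]:
    alg, opt = ski_rental_with_prediction(x, y, b), min(x, b)
    print(f"x={x:3d} y={y:3d}  ALG={alg:3d}  OPT={opt:3d}  ratio={alg / opt:.2f}")
```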
A Scalable Approach for Privacy-Preserving Collaborative Machine Learning
https://papers.nips.cc/paper_files/paper/2020/hash/5bf8aaef51c6e0d363cbe554acaf3f20-Abstract.html
Jinhyun So, Basak Guler, Salman Avestimehr
https://papers.nips.cc/paper_files/paper/2020/hash/5bf8aaef51c6e0d363cbe554acaf3f20-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5bf8aaef51c6e0d363cbe554acaf3f20-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10399-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5bf8aaef51c6e0d363cbe554acaf3f20-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5bf8aaef51c6e0d363cbe554acaf3f20-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5bf8aaef51c6e0d363cbe554acaf3f20-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5bf8aaef51c6e0d363cbe554acaf3f20-Supplemental.pdf
We consider a collaborative learning scenario in which multiple data-owners wish to jointly train a logistic regression model, while keeping their individual datasets private from the other parties. We propose COPML, a fully-decentralized training framework that achieves scalability and privacy-protection simultaneously. The key idea of COPML is to securely encode the individual datasets to distribute the computation load effectively across many parties and to perform the training computations as well as the model updates in a distributed manner on the securely encoded data. We provide the privacy analysis of COPML and prove its convergence. Furthermore, we experimentally demonstrate that COPML can achieve significant speedup in training over the benchmark protocols. Our protocol provides strong statistical privacy guarantees against colluding parties (adversaries) with unbounded computational power, while achieving up to $16\times$ speedup in the training time against the benchmark protocols.
Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search
https://papers.nips.cc/paper_files/paper/2020/hash/5c3b99e8f92532e5ad1556e53ceea00c-Abstract.html
Jaehyeon Kim, Sungwon Kim, Jungil Kong, Sungroh Yoon
https://papers.nips.cc/paper_files/paper/2020/hash/5c3b99e8f92532e5ad1556e53ceea00c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5c3b99e8f92532e5ad1556e53ceea00c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10400-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5c3b99e8f92532e5ad1556e53ceea00c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5c3b99e8f92532e5ad1556e53ceea00c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5c3b99e8f92532e5ad1556e53ceea00c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5c3b99e8f92532e5ad1556e53ceea00c-Supplemental.pdf
Recently, text-to-speech (TTS) models such as FastSpeech and ParaNet have been proposed to generate mel-spectrograms from text in parallel. Despite the advantage, the parallel TTS models cannot be trained without guidance from autoregressive TTS models as their external aligners. In this work, we propose Glow-TTS, a flow-based generative model for parallel TTS that does not require any external aligner. By combining the properties of flows and dynamic programming, the proposed model searches for the most probable monotonic alignment between text and the latent representation of speech on its own. We demonstrate that enforcing hard monotonic alignments enables robust TTS, which generalizes to long utterances, and employing generative flows enables fast, diverse, and controllable speech synthesis. Glow-TTS obtains an order-of-magnitude speed-up over the autoregressive model, Tacotron 2, at synthesis with comparable speech quality. We further show that our model can be easily extended to a multi-speaker setting.
Towards Learning Convolutions from Scratch
https://papers.nips.cc/paper_files/paper/2020/hash/5c528e25e1fdeaf9d8160dc24dbf4d60-Abstract.html
Behnam Neyshabur
https://papers.nips.cc/paper_files/paper/2020/hash/5c528e25e1fdeaf9d8160dc24dbf4d60-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5c528e25e1fdeaf9d8160dc24dbf4d60-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10401-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5c528e25e1fdeaf9d8160dc24dbf4d60-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5c528e25e1fdeaf9d8160dc24dbf4d60-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5c528e25e1fdeaf9d8160dc24dbf4d60-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5c528e25e1fdeaf9d8160dc24dbf4d60-Supplemental.pdf
Convolution is one of the most essential components of modern architectures used in computer vision. As machine learning moves towards reducing the expert bias and learning it from data, a natural next step seems to be learning convolution-like structures from scratch. This, however, has proven elusive. For example, current state-of-the-art architecture search algorithms use convolution as one of the existing modules rather than learning it from data. In an attempt to understand the inductive bias that gives rise to convolutions, we investigate minimum description length as a guiding principle and show that in some settings, it can indeed be indicative of the performance of architectures. To find architectures with small description length, we propose beta-LASSO, a simple variant of LASSO algorithm that, when applied on fully-connected networks for image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected networks on CIFAR-10 (84.50%), CIFAR-100 (57.76%) and SVHN (93.84%) bridging the gap between fully-connected and convolutional networks.
Cycle-Contrast for Self-Supervised Video Representation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/5c9452254bccd24b8ad0bb1ab4408ad1-Abstract.html
Quan Kong, Wenpeng Wei, Ziwei Deng, Tomoaki Yoshinaga, Tomokazu Murakami
https://papers.nips.cc/paper_files/paper/2020/hash/5c9452254bccd24b8ad0bb1ab4408ad1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5c9452254bccd24b8ad0bb1ab4408ad1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10402-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5c9452254bccd24b8ad0bb1ab4408ad1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5c9452254bccd24b8ad0bb1ab4408ad1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5c9452254bccd24b8ad0bb1ab4408ad1-Review.html
null
We present Cycle-Contrastive Learning (CCL), a novel self-supervised method for learning video representations. Motivated by the natural belonging and inclusion relation between a video and its frames, CCL is designed to find correspondences across frames and videos, considering the contrastive representation in their respective domains. It is different from recent approaches that merely learn correspondences across frames or clips. In our method, the frame and video representations are learned from a single network based on an R3D network, with a shared non-linear transformation for embedding both frame and video features before the cycle-contrastive loss. We demonstrate that the video representation learned by CCL can be transferred well to downstream tasks of video understanding, outperforming previous methods in nearest neighbour retrieval and action recognition tasks on UCF101, HMDB51 and MMAct.
Posterior Re-calibration for Imbalanced Datasets
https://papers.nips.cc/paper_files/paper/2020/hash/5ca359ab1e9e3b9c478459944a2d9ca5-Abstract.html
Junjiao Tian, Yen-Cheng Liu, Nathaniel Glaser, Yen-Chang Hsu, Zsolt Kira
https://papers.nips.cc/paper_files/paper/2020/hash/5ca359ab1e9e3b9c478459944a2d9ca5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5ca359ab1e9e3b9c478459944a2d9ca5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10403-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5ca359ab1e9e3b9c478459944a2d9ca5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5ca359ab1e9e3b9c478459944a2d9ca5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5ca359ab1e9e3b9c478459944a2d9ca5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5ca359ab1e9e3b9c478459944a2d9ca5-Supplemental.pdf
Neural Networks can perform poorly when the training label distribution is heavily imbalanced, as well as when the testing data differs from the training distribution. In order to deal with the shift in the testing label distribution that imbalance causes, we motivate the problem from the perspective of an optimal Bayes classifier and derive a prior rebalancing technique that can be solved through a KL-divergence based optimization. This method allows a flexible post-training hyper-parameter to be efficiently tuned on a validation set and effectively modify the classifier margin to deal with this imbalance. We further combine this method with existing likelihood shift methods, re-interpreting them from the same Bayesian perspective, and demonstrating that our method can deal with both problems in a unified way. The resulting algorithm can be conveniently used on probabilistic classification problems agnostic to underlying architectures. Our results on six different datasets and five different architectures show state-of-the-art accuracy, including on large-scale imbalanced datasets such as iNaturalist for classification and Synthia for semantic segmentation. Please see https://github.com/GT-RIPL/UNO-IC.git for implementation.
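To illustrate the general idea of post-hoc prior rebalancing, here is a hedged numpy sketch of a generic logit adjustment that shifts a trained classifier's outputs from the imbalanced training prior towards a target test prior, with a tunable post-training hyper-parameter. This is a simplified stand-in in the same spirit as the method above, not the paper's exact KL-divergence-derived rule; the function name and tau are illustrative.

```python
import numpy as np

def rebalance_posterior(logits, train_prior, test_prior=None, tau=1.0):
    """Shift logits from the (imbalanced) training prior towards a target test
    prior; with a uniform test prior this boosts rare classes. tau is a
    post-training hyper-parameter to be tuned on validation data."""
    train_prior = np.asarray(train_prior, dtype=float)
    if test_prior is None:
        test_prior = np.full_like(train_prior, 1.0 / len(train_prior))
    adjusted = logits + tau * (np.log(test_prior) - np.log(train_prior))
    adjusted -= adjusted.max(axis=-1, keepdims=True)      # numerically stable softmax
    probs = np.exp(adjusted)
    return probs / probs.sum(axis=-1, keepdims=True)

# toy example: a 3-class model trained on a 90/9/1 class split
logits = np.array([[2.0, 1.8, 1.5]])
print(rebalance_posterior(logits, train_prior=[0.90, 0.09, 0.01], tau=1.0))
```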
Novelty Search in Representational Space for Sample Efficient Exploration
https://papers.nips.cc/paper_files/paper/2020/hash/5ca41a86596a5ed567d15af0be224952-Abstract.html
Ruo Yu Tao, Vincent Francois-Lavet, Joelle Pineau
https://papers.nips.cc/paper_files/paper/2020/hash/5ca41a86596a5ed567d15af0be224952-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5ca41a86596a5ed567d15af0be224952-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10404-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5ca41a86596a5ed567d15af0be224952-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5ca41a86596a5ed567d15af0be224952-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5ca41a86596a5ed567d15af0be224952-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5ca41a86596a5ed567d15af0be224952-Supplemental.zip
We present a new approach for efficient exploration which leverages a low-dimensional encoding of the environment learned with a combination of model-based and model-free objectives. Our approach uses intrinsic rewards that are based on the distance to nearest neighbors in the low-dimensional representational space to gauge novelty. We then leverage these intrinsic rewards for sample-efficient exploration with planning routines in representational space for hard exploration tasks with sparse rewards. One key element of our approach is the use of information theoretic principles to shape our representations so that our novelty reward goes beyond pixel similarity. We test our approach on a number of maze tasks, as well as a control problem, and show that our exploration approach is more sample-efficient compared to strong baselines.
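The intrinsic reward described above can be sketched in a few lines, assuming numpy: novelty is the mean distance from a new latent state to its k nearest neighbours among previously visited latent states. The learned encoder and the planning routines are not included; the buffer size, k, and the default reward for an empty memory are illustrative choices.

```python
import numpy as np

def knn_novelty_reward(z, memory, k=10):
    """Intrinsic reward for a latent state z: mean Euclidean distance to its
    k nearest neighbours among previously visited latent states."""
    if len(memory) == 0:
        return 1.0                                   # everything is novel at the start
    dists = np.linalg.norm(np.asarray(memory) - z, axis=1)
    k = min(k, len(dists))
    return np.sort(dists)[:k].mean()

# toy rollout: states near already-visited regions get a low novelty reward
rng = np.random.default_rng(0)
memory = [rng.normal(size=8) for _ in range(500)]
z_seen = memory[0] + 0.01 * rng.normal(size=8)
z_novel = 10.0 + rng.normal(size=8)
print(knn_novelty_reward(z_seen, memory), knn_novelty_reward(z_novel, memory))
```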
Robust Reinforcement Learning via Adversarial training with Langevin Dynamics
https://papers.nips.cc/paper_files/paper/2020/hash/5cb0e249689cd6d8369c4885435a56c2-Abstract.html
Parameswaran Kamalaruban, Yu-Ting Huang, Ya-Ping Hsieh, Paul Rolland, Cheng Shi, Volkan Cevher
https://papers.nips.cc/paper_files/paper/2020/hash/5cb0e249689cd6d8369c4885435a56c2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5cb0e249689cd6d8369c4885435a56c2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10405-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5cb0e249689cd6d8369c4885435a56c2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5cb0e249689cd6d8369c4885435a56c2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5cb0e249689cd6d8369c4885435a56c2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5cb0e249689cd6d8369c4885435a56c2-Supplemental.zip
We introduce a \emph{sampling} perspective to tackle the challenging task of training robust Reinforcement Learning (RL) agents. Leveraging the powerful Stochastic Gradient Langevin Dynamics, we present a novel, scalable two-player RL algorithm, which is a sampling variant of the two-player policy gradient method. Our algorithm consistently outperforms existing baselines, in terms of generalization across different training and testing conditions, on several MuJoCo environments. Our experiments also show that, even for objective functions that entirely ignore potential environmental shifts, our sampling approach remains highly robust in comparison to standard RL algorithms.
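As a toy illustration of the sampling perspective, the following numpy sketch runs Stochastic Gradient Langevin Dynamics on a scalar two-player min-max game: both players take noisy gradient steps, so the iterates explore a Gibbs-like distribution around the saddle point rather than converging to a single point. This is only a schematic of the idea on a bilinear-plus-quadratic game, not the paper's two-player policy gradient algorithm for MuJoCo; all names and constants are assumptions.

```python
import numpy as np

def sgld_two_player(grad_x, grad_y, steps=2000, lr=1e-2, beta=10.0, rng=None):
    """SGLD for a min-max game f(x, y): x minimizes, y maximizes, both with
    injected Gaussian noise of scale sqrt(2 * lr / beta)."""
    rng = rng or np.random.default_rng(0)
    x, y = rng.normal(), rng.normal()
    noise = np.sqrt(2 * lr / beta)
    xs = []
    for _ in range(steps):
        x = x - lr * grad_x(x, y) + noise * rng.normal()   # minimizing player
        y = y + lr * grad_y(x, y) + noise * rng.normal()   # maximizing adversary
        xs.append(x)
    return np.array(xs)

# f(x, y) = x*y + 0.5*x**2 - 0.5*y**2 has its saddle point at (0, 0)
xs = sgld_two_player(grad_x=lambda x, y: y + x, grad_y=lambda x, y: x - y)
print("mean protagonist parameter over samples:", xs[500:].mean())
```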
Adversarial Blocking Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/5cc3749a6e56ef6d656735dff9176074-Abstract.html
Nicholas Bishop, Hau Chan, Debmalya Mandal, Long Tran-Thanh
https://papers.nips.cc/paper_files/paper/2020/hash/5cc3749a6e56ef6d656735dff9176074-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5cc3749a6e56ef6d656735dff9176074-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10406-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5cc3749a6e56ef6d656735dff9176074-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5cc3749a6e56ef6d656735dff9176074-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5cc3749a6e56ef6d656735dff9176074-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5cc3749a6e56ef6d656735dff9176074-Supplemental.pdf
We consider a general adversarial multi-armed blocking bandit setting where each played arm can be blocked (unavailable) for some time periods and the reward per arm is given at each time period adversarially without obeying any distribution. The setting models scenarios of allocating scarce limited supplies (e.g., arms) where the supplies replenish and can be reused only after certain time periods. We first show that, in the optimization setting, when the blocking durations and rewards are known in advance, finding an optimal policy (e.g., determining which arm to play per round) that maximises the cumulative reward is strongly NP-hard, eliminating the possibility of a fully polynomial-time approximation scheme (FPTAS) for the problem unless P = NP. To complement our result, we show that a greedy algorithm that plays the best available arm at each round provides an approximation guarantee that depends on the blocking durations and the path variance of the rewards. In the bandit setting, when the blocking durations and rewards are not known, we design two algorithms, RGA and RGA-META, for the case of bounded duration and path variation. In particular, when the variation budget B_T is known in advance, RGA can achieve O(\sqrt{T(2\tilde{D}+K)B_T}) dynamic approximate regret. On the other hand, when B_T is not known, we show that the dynamic approximate regret of RGA-META is at most O((K+\tilde{D})^{1/4}\tilde{B}^{1/2}T^{3/4}) where \tilde{B} is the maximal path variation budget within each batch of RGA-META (which is provably of order o(\sqrt{T})). We also prove that if either the variation budget or the maximal blocking duration is unbounded, the approximate regret will be at least \Theta(T). We also show that the regret upper bound of RGA is tight if the blocking durations are bounded above by O(1).
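The greedy baseline analyzed above (play the highest-reward available arm, full information) is easy to simulate. A minimal numpy sketch follows; it covers only this oracle greedy policy, not the bandit algorithms RGA and RGA-META, and the reward/duration generation is an arbitrary toy choice.

```python
import numpy as np

def greedy_blocking(rewards, durations):
    """Oracle greedy policy for blocking bandits: at every round play the
    highest-reward arm that is currently available; playing arm k at round t
    makes it unavailable until round t + durations[t, k]."""
    T, K = rewards.shape
    available_at = np.zeros(K, dtype=int)   # first round each arm is free again
    total = 0.0
    for t in range(T):
        free = np.where(available_at <= t)[0]
        if free.size == 0:
            continue                         # every arm is blocked this round
        k = free[np.argmax(rewards[t, free])]
        total += rewards[t, k]
        available_at[k] = t + durations[t, k]
    return total

rng = np.random.default_rng(0)
T, K = 1000, 5
rewards = rng.random((T, K))                 # adversarial in general; random here
durations = rng.integers(1, 6, size=(T, K))
print("greedy cumulative reward:", greedy_blocking(rewards, durations))
```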
Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice
https://papers.nips.cc/paper_files/paper/2020/hash/5cc4bb753030a3d804351b2dfec0d8b5-Abstract.html
Shufan Wang, Jian Li, Shiqiang Wang
https://papers.nips.cc/paper_files/paper/2020/hash/5cc4bb753030a3d804351b2dfec0d8b5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5cc4bb753030a3d804351b2dfec0d8b5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10407-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5cc4bb753030a3d804351b2dfec0d8b5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5cc4bb753030a3d804351b2dfec0d8b5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5cc4bb753030a3d804351b2dfec0d8b5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5cc4bb753030a3d804351b2dfec0d8b5-Supplemental.pdf
We study the problem of augmenting online algorithms with machine learned (ML) advice. In particular, we consider the \emph{multi-shop ski rental} (MSSR) problem, which is a generalization of the classical ski rental problem. In MSSR, each shop has different prices for buying and renting a pair of skis, and a skier has to make decisions on when and where to buy. We obtain both deterministic and randomized online algorithms with provably improved performance when either a single or multiple ML predictions are used to make decisions. These online algorithms have no knowledge about the quality or the prediction error type of the ML prediction. The performance of these online algorithms is robust to the poor performance of the predictors, but improves with better predictions. Extensive experiments using both synthetic and real world data traces verify our theoretical observations and show better performance against algorithms that purely rely on online decision making.
Multi-label Contrastive Predictive Coding
https://papers.nips.cc/paper_files/paper/2020/hash/5cd5058bca53951ffa7801bcdf421651-Abstract.html
Jiaming Song, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2020/hash/5cd5058bca53951ffa7801bcdf421651-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5cd5058bca53951ffa7801bcdf421651-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10408-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5cd5058bca53951ffa7801bcdf421651-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5cd5058bca53951ffa7801bcdf421651-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5cd5058bca53951ffa7801bcdf421651-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5cd5058bca53951ffa7801bcdf421651-Supplemental.pdf
Variational mutual information (MI) estimators are widely used in unsupervised representation learning methods such as contrastive predictive coding (CPC). A lower bound on MI can be obtained from a multi-class classification problem, where a critic attempts to distinguish a positive sample drawn from the underlying joint distribution from (m-1) negative samples drawn from a suitable proposal distribution. Using this approach, MI estimates are bounded above by \log m, and could thus severely underestimate the true MI unless m is very large. To overcome this limitation, we introduce a novel estimator based on a multi-label classification problem, where the critic needs to jointly identify \emph{multiple} positive samples at the same time. We show that using the same number of negative samples, multi-label CPC is able to exceed the \log m bound, while still being a valid lower bound of mutual information. We demonstrate that the proposed approach leads to better mutual information estimation, gains empirical improvements in unsupervised representation learning, and beats the current state-of-the-art in knowledge distillation over 10 out of 13 tasks.
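To make the \log m limitation concrete, here is a hedged numpy sketch of the standard single-positive contrastive (InfoNCE-style) bound computed from critic scores: whatever the scores, the estimate cannot exceed \log m. The multi-label estimator proposed in the paper, which jointly identifies multiple positives to exceed this cap, is not implemented here; the toy critic scores are arbitrary.

```python
import numpy as np

def infonce_bound(scores, pos_index=0):
    """Standard single-positive contrastive lower bound on MI from one batch of
    critic scores: log(m) plus the mean log-softmax probability of the positive.
    The result can never exceed log(m), with m the number of candidates."""
    m = scores.shape[-1]
    log_probs = scores - np.log(np.exp(scores).sum(axis=-1, keepdims=True))
    return np.log(m) + log_probs[..., pos_index].mean()

# toy critic: the positive sample (column 0) scores higher than m-1 negatives
rng = np.random.default_rng(0)
scores = rng.normal(size=(4096, 128))
scores[:, 0] += 5.0
estimate = infonce_bound(scores)
print(f"estimate = {estimate:.3f}, hard cap log(m) = {np.log(128):.3f}")
```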
Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud
https://papers.nips.cc/paper_files/paper/2020/hash/5d0cb12f8c9ad6845110317afc6e2183-Abstract.html
SEOHYUN KIM, JaeYoo Park, Bohyung Han
https://papers.nips.cc/paper_files/paper/2020/hash/5d0cb12f8c9ad6845110317afc6e2183-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d0cb12f8c9ad6845110317afc6e2183-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10409-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d0cb12f8c9ad6845110317afc6e2183-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d0cb12f8c9ad6845110317afc6e2183-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d0cb12f8c9ad6845110317afc6e2183-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d0cb12f8c9ad6845110317afc6e2183-Supplemental.pdf
We propose a local-to-global representation learning algorithm for 3D point cloud data, which handles various geometric transformations, especially rotation, without explicit data augmentation with respect to those transformations. Our model takes advantage of multi-level abstraction based on graph convolutional neural networks, constructing a descriptor hierarchy that encodes rotation-invariant shape information of an input object in a bottom-up manner. The descriptors at each level are obtained from a graph-based neural network via stochastic sampling of 3D points, which is effective in making the learned representations robust to variations of the input data. The proposed algorithm achieves state-of-the-art performance on rotation-augmented 3D object recognition and segmentation benchmarks, and we further analyze its characteristics through comprehensive ablation experiments.
Learning Invariants through Soft Unification
https://papers.nips.cc/paper_files/paper/2020/hash/5d0d5594d24f0f955548f0fc0ff83d10-Abstract.html
Nuri Cingillioglu, Alessandra Russo
https://papers.nips.cc/paper_files/paper/2020/hash/5d0d5594d24f0f955548f0fc0ff83d10-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d0d5594d24f0f955548f0fc0ff83d10-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10410-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d0d5594d24f0f955548f0fc0ff83d10-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d0d5594d24f0f955548f0fc0ff83d10-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d0d5594d24f0f955548f0fc0ff83d10-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d0d5594d24f0f955548f0fc0ff83d10-Supplemental.pdf
Human reasoning involves recognising common underlying principles across many examples. The by-products of such reasoning are invariants that capture patterns such as "if someone went somewhere then they are there", expressed using variables "someone" and "somewhere" instead of mentioning specific people or places. Humans learn what variables are and how to use them at a young age. This paper explores whether machines can also learn and use variables solely from examples without requiring human pre-engineering. We propose Unification Networks, an end-to-end differentiable neural network approach capable of lifting examples into invariants and using those invariants to solve a given task. The core characteristic of our architecture is soft unification between examples that enables the network to generalise parts of the input into variables, thereby learning invariants. We evaluate our approach on five datasets to demonstrate that learning invariants captures patterns in the data and can improve performance over baselines.
One Solution is Not All You Need: Few-Shot Extrapolation via Structured MaxEnt RL
https://papers.nips.cc/paper_files/paper/2020/hash/5d151d1059a6281335a10732fc49620e-Abstract.html
Saurabh Kumar, Aviral Kumar, Sergey Levine, Chelsea Finn
https://papers.nips.cc/paper_files/paper/2020/hash/5d151d1059a6281335a10732fc49620e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d151d1059a6281335a10732fc49620e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10411-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d151d1059a6281335a10732fc49620e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d151d1059a6281335a10732fc49620e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d151d1059a6281335a10732fc49620e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d151d1059a6281335a10732fc49620e-Supplemental.pdf
While reinforcement learning algorithms can learn effective policies for complex tasks, these policies are often brittle to even minor task variations, especially when variations are not explicitly provided during training. One natural approach to this problem is to train agents with manually specified variation in the training task or environment. However, this may be infeasible in practical situations, either because making perturbations is not possible, or because it is unclear how to choose suitable perturbation strategies without sacrificing performance. The key insight of this work is that learning diverse behaviors for accomplishing a task can directly lead to behavior that generalizes to varying environments, without needing to perform explicit perturbations during training. By identifying multiple solutions for the task in a single environment during training, our approach can generalize to new situations by abandoning solutions that are no longer effective and adopting those that are. We theoretically characterize a robustness set of environments that arises from our algorithm and empirically find that our diversity-driven approach can extrapolate to various changes in the environment and task.
Variational Bayesian Monte Carlo with Noisy Likelihoods
https://papers.nips.cc/paper_files/paper/2020/hash/5d40954183d62a82257835477ccad3d2-Abstract.html
Luigi Acerbi
https://papers.nips.cc/paper_files/paper/2020/hash/5d40954183d62a82257835477ccad3d2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d40954183d62a82257835477ccad3d2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10412-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d40954183d62a82257835477ccad3d2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d40954183d62a82257835477ccad3d2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d40954183d62a82257835477ccad3d2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d40954183d62a82257835477ccad3d2-Supplemental.pdf
Variational Bayesian Monte Carlo (VBMC) is a recently introduced framework that uses Gaussian process surrogates to perform approximate Bayesian inference in models with black-box, expensive likelihoods. In this work, we extend VBMC to deal with noisy log-likelihood evaluations, such as those arising from simulation-based models. We introduce new `global' acquisition functions, such as expected information gain (EIG) and variational interquantile range (VIQR), which are robust to noise and can be efficiently evaluated within the VBMC setting. In a novel, challenging, noisy-inference benchmark comprising a variety of models with real datasets from computational and cognitive neuroscience, VBMC+VIQR achieves state-of-the-art performance in recovering the ground-truth posteriors and model evidence. In particular, our method vastly outperforms `local' acquisition functions and other surrogate-based inference methods while keeping a small algorithmic cost. Our benchmark corroborates VBMC as a general-purpose technique for sample-efficient black-box Bayesian inference, also with noisy models.
Finite-Sample Analysis of Contractive Stochastic Approximation Using Smooth Convex Envelopes
https://papers.nips.cc/paper_files/paper/2020/hash/5d44ee6f2c3f71b73125876103c8f6c4-Abstract.html
Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, Karthikeyan Shanmugam
https://papers.nips.cc/paper_files/paper/2020/hash/5d44ee6f2c3f71b73125876103c8f6c4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d44ee6f2c3f71b73125876103c8f6c4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10413-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d44ee6f2c3f71b73125876103c8f6c4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d44ee6f2c3f71b73125876103c8f6c4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d44ee6f2c3f71b73125876103c8f6c4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d44ee6f2c3f71b73125876103c8f6c4-Supplemental.pdf
Stochastic Approximation (SA) is a popular approach for solving fixed-point equations when the available information is corrupted by noise. In this paper, we consider an SA scheme involving a contraction mapping with respect to an arbitrary norm, and establish its finite-sample error bounds under different stepsizes. The idea is to construct a smooth Lyapunov function using the generalized Moreau envelope and show that the SA iterates have negative drift with respect to that Lyapunov function. Our result is applicable in Reinforcement Learning (RL). In particular, we use it to establish the first known convergence rate of the V-trace algorithm for off-policy TD-learning [18]. Importantly, our construction yields only a logarithmic dependence of the convergence bound on the size of the state space.
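For readers unfamiliar with the setting, the following is a minimal sketch of a contractive SA recursion of the form x_{k+1} = x_k + alpha_k (F(x_k) - x_k + w_k), where F is a sup-norm contraction (a toy Bellman-style operator) and w_k is zero-mean noise. The operator, stepsizes, and noise model are illustrative assumptions, not the V-trace setting analyzed in the paper.

```python
# Toy contractive stochastic approximation: the iterate drifts toward the
# unique fixed point of F despite noisy evaluations.
import numpy as np

rng = np.random.default_rng(0)
d, gamma = 5, 0.9
P = rng.dirichlet(np.ones(d), size=d)     # row-stochastic matrix
r = rng.normal(size=d)

def F(x):
    # gamma-contraction w.r.t. the sup-norm: ||F(x) - F(y)||_inf <= gamma ||x - y||_inf
    return r + gamma * P @ x

x_star = np.linalg.solve(np.eye(d) - gamma * P, r)   # unique fixed point

x = np.zeros(d)
for k in range(1, 20001):
    alpha = 1.0 / (k ** 0.75)              # diminishing stepsize
    noise = 0.5 * rng.normal(size=d)       # zero-mean additive noise
    x = x + alpha * (F(x) - x + noise)

print("sup-norm error after 20000 iterations:", np.max(np.abs(x - x_star)))
```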
Self-Supervised Generative Adversarial Compression
https://papers.nips.cc/paper_files/paper/2020/hash/5d79099fcdf499f12b79770834c0164a-Abstract.html
Chong Yu, Jeff Pool
https://papers.nips.cc/paper_files/paper/2020/hash/5d79099fcdf499f12b79770834c0164a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d79099fcdf499f12b79770834c0164a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10414-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d79099fcdf499f12b79770834c0164a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d79099fcdf499f12b79770834c0164a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d79099fcdf499f12b79770834c0164a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d79099fcdf499f12b79770834c0164a-Supplemental.pdf
Deep learning’s success has led to larger and larger models to handle more and more complex tasks; trained models often contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them under latency, throughput, and storage constraints. Some model compression methods have been successfully applied to image classification, detection, or language models, but there has been very little work on compressing generative adversarial networks (GANs) that perform complex tasks. In this paper, we show that standard model compression techniques, weight pruning and knowledge distillation, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique that uses the trained discriminator to supervise the training of a compressed generator. We show that this framework achieves compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
An efficient nonconvex reformulation of stagewise convex optimization problems
https://papers.nips.cc/paper_files/paper/2020/hash/5d97f4dd7c44b2905c799db681b80ce0-Abstract.html
Rudy R. Bunel, Oliver Hinder, Srinadh Bhojanapalli, Krishnamurthy Dvijotham
https://papers.nips.cc/paper_files/paper/2020/hash/5d97f4dd7c44b2905c799db681b80ce0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5d97f4dd7c44b2905c799db681b80ce0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10415-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5d97f4dd7c44b2905c799db681b80ce0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5d97f4dd7c44b2905c799db681b80ce0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5d97f4dd7c44b2905c799db681b80ce0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5d97f4dd7c44b2905c799db681b80ce0-Supplemental.pdf
Convex optimization problems with staged structure appear in several contexts, including optimal control, verification of deep neural networks, and isotonic regression. Off-the-shelf solvers can solve these problems but may scale poorly. We develop a nonconvex reformulation designed to exploit this staged structure. Our reformulation has only simple bound constraints, enabling solution via projected gradient methods and their accelerated variants. The method automatically generates a sequence of primal and dual feasible solutions to the original convex problem, making optimality certification easy. We establish theoretical properties of the nonconvex formulation, showing that it is (almost) free of spurious local minima and has the same global optimum as the convex problem. We modify projected gradient descent to avoid spurious local minimizers so it always converges to the global minimizer. For neural network verification, our approach obtains small duality gaps in only a few gradient steps. Consequently, it can provide tight duality gaps for many large-scale verification problems where both off-the-shelf and specialized solvers struggle.
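Since the reformulation above reduces the problem to simple bound constraints, the workhorse solver is projected gradient descent, where the projection step is just a clip. The sketch below shows this on a generic box-constrained quadratic; the objective and the box [0, 1]^d are illustrative assumptions, not the paper's staged reformulation itself.

```python
# Projected gradient descent with simple bound (box) constraints.
import numpy as np

rng = np.random.default_rng(0)
d = 20
A = rng.normal(size=(d, d))
Q = A.T @ A + np.eye(d)              # positive-definite quadratic
b = rng.normal(size=d)

def objective(x):
    return 0.5 * x @ Q @ x - b @ x

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto bound constraints is a coordinate-wise clip.
    return np.clip(x, lo, hi)

L = np.linalg.eigvalsh(Q).max()      # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(500):
    grad = Q @ x - b
    x = project_box(x - grad / L)    # gradient step, then project

print("objective at the projected-gradient solution:", objective(x))
```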
From Finite to Countable-Armed Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/5dbc8390f17e019d300d5a162c3ce3bc-Abstract.html
Anand Kalvit, Assaf Zeevi
https://papers.nips.cc/paper_files/paper/2020/hash/5dbc8390f17e019d300d5a162c3ce3bc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5dbc8390f17e019d300d5a162c3ce3bc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10416-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5dbc8390f17e019d300d5a162c3ce3bc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5dbc8390f17e019d300d5a162c3ce3bc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5dbc8390f17e019d300d5a162c3ce3bc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5dbc8390f17e019d300d5a162c3ce3bc-Supplemental.pdf
We consider a stochastic bandit problem with countably many arms that belong to a finite set of types, each characterized by a unique mean reward. In addition, there is a fixed distribution over types which sets the proportion of each type in the population of arms. The decision maker is oblivious to the type of any arm and to the aforementioned distribution over types, but perfectly knows the total number of types occurring in the population of arms. We propose a fully adaptive online learning algorithm that achieves O(log n) distribution-dependent expected cumulative regret after any number of plays n, and show that this order of regret is best possible. The analysis of our algorithm relies on newly discovered concentration and convergence properties of optimism-based policies like UCB in finite-armed bandit problems with zero gap, which may be of independent interest.
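The optimism-based building block whose concentration behavior the paper analyzes is the standard UCB policy. The sketch below runs plain UCB1 on a finite-armed Bernoulli bandit with one arm per "type"; it is not the paper's adaptive countable-armed algorithm, and the arm means and horizon are illustrative assumptions.

```python
# Standard UCB1 on a finite-armed Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])     # one mean reward per arm "type"
n_arms, horizon = len(means), 5000

counts = np.zeros(n_arms)
sums = np.zeros(n_arms)
regret = 0.0

for t in range(1, horizon + 1):
    if t <= n_arms:
        arm = t - 1                   # play each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))     # optimism in the face of uncertainty
    reward = float(rng.random() < means[arm])   # Bernoulli reward
    counts[arm] += 1
    sums[arm] += reward
    regret += means.max() - means[arm]

print(f"expected cumulative regret after {horizon} plays: {regret:.1f}")
```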
Adversarial Distributional Training for Robust Deep Learning
https://papers.nips.cc/paper_files/paper/2020/hash/5de8a36008b04a6167761fa19b61aa6c-Abstract.html
Yinpeng Dong, Zhijie Deng, Tianyu Pang, Jun Zhu, Hang Su
https://papers.nips.cc/paper_files/paper/2020/hash/5de8a36008b04a6167761fa19b61aa6c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5de8a36008b04a6167761fa19b61aa6c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10417-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5de8a36008b04a6167761fa19b61aa6c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5de8a36008b04a6167761fa19b61aa6c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5de8a36008b04a6167761fa19b61aa6c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5de8a36008b04a6167761fa19b61aa6c-Supplemental.pdf
Adversarial training (AT) is among the most effective techniques for improving model robustness by augmenting training data with adversarial examples. However, most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other, unseen attacks. Moreover, a single attack algorithm may be insufficient to explore the space of perturbations. In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models. ADT is formulated as a minimax optimization problem, where the inner maximization aims to learn an adversarial distribution that characterizes the potential adversarial examples around a natural example under an entropic regularizer, and the outer minimization aims to train robust models by minimizing the expected loss over the worst-case adversarial distributions. Through a theoretical analysis, we develop a general algorithm for solving ADT, and present three approaches for parameterizing the adversarial distributions, ranging from typical Gaussian distributions to flexible implicit ones. Empirical results on several benchmarks validate the effectiveness of ADT compared with state-of-the-art AT methods.
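For context, the "specific attack" that standard AT relies on, and that ADT generalizes by learning a distribution over perturbations, is typically PGD. Below is a minimal PyTorch sketch of crafting L_inf-bounded adversarial examples with PGD; the epsilon, step size, number of steps, and toy model are illustrative assumptions, not the ADT algorithm.

```python
# Minimal PGD attack: the single fixed attack used by standard adversarial
# training, shown here only to contrast with ADT's distributional view.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return adversarial examples within an L_inf ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascent step
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)     # project to eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)              # valid pixel range
    return x_adv.detach()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    x_adv = pgd_attack(model, x, y)
    print("max perturbation:", (x_adv - x).abs().max().item())
```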
Meta-Learning Stationary Stochastic Process Prediction with Convolutional Neural Processes
https://papers.nips.cc/paper_files/paper/2020/hash/5df0385cba256a135be596dbe28fa7aa-Abstract.html
Andrew Foong, Wessel Bruinsma, Jonathan Gordon, Yann Dubois, James Requeima, Richard Turner
https://papers.nips.cc/paper_files/paper/2020/hash/5df0385cba256a135be596dbe28fa7aa-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5df0385cba256a135be596dbe28fa7aa-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10418-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5df0385cba256a135be596dbe28fa7aa-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5df0385cba256a135be596dbe28fa7aa-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5df0385cba256a135be596dbe28fa7aa-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5df0385cba256a135be596dbe28fa7aa-Supplemental.pdf
Stationary stochastic processes (SPs) are a key component of many probabilistic models, such as those for off-the-grid spatio-temporal data. They enable the statistical symmetry of underlying physical phenomena to be leveraged, thereby aiding generalization. Prediction in such models can be viewed as a translation equivariant map from observed data sets to predictive SPs, emphasizing the intimate relationship between stationarity and equivariance. Building on this, we propose the Convolutional Neural Process (ConvNP), which endows Neural Processes (NPs) with translation equivariance and extends convolutional conditional NPs to allow for dependencies in the predictive distribution. The latter enables ConvNPs to be deployed in settings which require coherent samples, such as Thompson sampling or conditional image completion. Moreover, we propose a new maximum-likelihood objective to replace the standard ELBO objective in NPs, which conceptually simplifies the framework and empirically improves performance. We demonstrate the strong performance and generalization capabilities of ConvNPs on 1D regression, image completion, and various tasks with real-world spatio-temporal data.
Theory-Inspired Path-Regularized Differential Network Architecture Search
https://papers.nips.cc/paper_files/paper/2020/hash/5e1b18c4c6a6d31695acbae3fd70ecc6-Abstract.html
Pan Zhou, Caiming Xiong, Richard Socher, Steven Chu Hong Hoi
https://papers.nips.cc/paper_files/paper/2020/hash/5e1b18c4c6a6d31695acbae3fd70ecc6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5e1b18c4c6a6d31695acbae3fd70ecc6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10419-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5e1b18c4c6a6d31695acbae3fd70ecc6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5e1b18c4c6a6d31695acbae3fd70ecc6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5e1b18c4c6a6d31695acbae3fd70ecc6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5e1b18c4c6a6d31695acbae3fd70ecc6-Supplemental.pdf
Despite its high search efficiency, differential architecture search (DARTS) often selects network architectures dominated by skip connections, which leads to performance degradation. However, theoretical understanding of this issue remains absent, hindering the development of more advanced methods in a principled way. In this work, we address this problem by theoretically analyzing the effects of various types of operations, e.g. convolution, skip connection, and the zero operation, on network optimization. We prove that architectures with more skip connections converge faster than the other candidates and are thus selected by DARTS. This result, for the first time, theoretically and explicitly reveals the impact of skip connections on fast network optimization and their competitive advantage over other types of operations in DARTS. We then propose a theory-inspired path-regularized DARTS that consists of two key modules: (i) a differential group-structured sparse binary gate introduced for each operation to avoid unfair competition among operations, and (ii) a path-depth-wise regularization used to encourage search exploration of deep architectures, which often converge more slowly than shallow ones, as shown by our theory, and are not well explored during search. Experimental results on image classification tasks validate its advantages. Codes and models will be released.
Conic Descent and its Application to Memory-efficient Optimization over Positive Semidefinite Matrices
https://papers.nips.cc/paper_files/paper/2020/hash/5e5dd00d770ef3e9154a4257edcb80b8-Abstract.html
John C. Duchi, Oliver Hinder, Andrew Naber, Yinyu Ye
https://papers.nips.cc/paper_files/paper/2020/hash/5e5dd00d770ef3e9154a4257edcb80b8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5e5dd00d770ef3e9154a4257edcb80b8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10420-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5e5dd00d770ef3e9154a4257edcb80b8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5e5dd00d770ef3e9154a4257edcb80b8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5e5dd00d770ef3e9154a4257edcb80b8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5e5dd00d770ef3e9154a4257edcb80b8-Supplemental.pdf
We present an extension of the conditional gradient method to problems whose feasible sets are convex cones. We provide a convergence analysis for the method and for variants with nonconvex objectives, and we extend the analysis to practical cases with effective line search strategies. For the specific case of the positive semidefinite cone, we present a memory-efficient version based on randomized matrix sketches and advocate a heuristic greedy step that greatly improves its practical performance. Numerical results on phase retrieval and matrix completion problems indicate that our method can offer substantial advantages over traditional conditional gradient and Burer-Monteiro approaches.
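As a point of reference for the conic extension, the sketch below shows the classical conditional gradient (Frank-Wolfe) method on the unit-trace PSD spectrahedron, where the linear minimization oracle is a single extreme eigenvector computation. The quadratic objective is an illustrative assumption; this is the baseline being generalized, not the paper's conic descent method or its sketched memory-efficient variant.

```python
# Classical Frank-Wolfe over {X : X >= 0, tr(X) = 1} with a rank-one LMO.
import numpy as np

rng = np.random.default_rng(0)
n = 30
M = rng.normal(size=(n, n))
T = (M + M.T) / 2                      # symmetric target matrix

def objective(X):
    return 0.5 * np.linalg.norm(X - T, "fro") ** 2

X = np.eye(n) / n                      # feasible start: tr(X) = 1
for k in range(200):
    grad = X - T
    # LMO over the spectrahedron: v v^T for the eigenvector of grad with the
    # most negative eigenvalue (eigh returns eigenvalues in ascending order).
    _, V = np.linalg.eigh(grad)
    v = V[:, 0]
    S = np.outer(v, v)
    gamma = 2.0 / (k + 2)              # standard Frank-Wolfe stepsize
    X = (1 - gamma) * X + gamma * S

print("objective after 200 Frank-Wolfe iterations:", objective(X))
```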
Learning the Geometry of Wave-Based Imaging
https://papers.nips.cc/paper_files/paper/2020/hash/5e98d23afe19a774d1b2dcbefd5103eb-Abstract.html
Konik Kothari, Maarten de Hoop, Ivan Dokmanić
https://papers.nips.cc/paper_files/paper/2020/hash/5e98d23afe19a774d1b2dcbefd5103eb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5e98d23afe19a774d1b2dcbefd5103eb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10421-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5e98d23afe19a774d1b2dcbefd5103eb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5e98d23afe19a774d1b2dcbefd5103eb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5e98d23afe19a774d1b2dcbefd5103eb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5e98d23afe19a774d1b2dcbefd5103eb-Supplemental.pdf
We propose a general physics-based deep learning architecture for wave-based imaging problems. A key difficulty in imaging problems with a varying background wave speed is that the medium ``bends'' the waves differently depending on their position and direction. This space-bending geometry makes the equivariance to translations of convolutional networks an undesired inductive bias. We build an interpretable neural architecture inspired by Fourier integral operators (FIOs) which approximate the wave physics. FIOs model a wide range of imaging modalities, from seismology and radar to Doppler and ultrasound. We focus on learning the geometry of wave propagation captured by FIOs, which is implicit in the data, via a loss based on optimal transport. The proposed FIONet performs significantly better than the usual baselines on a number of imaging inverse problems, especially in out-of-distribution tests.
Greedy inference with structure-exploiting lazy maps
https://papers.nips.cc/paper_files/paper/2020/hash/5ef20b89bab8fed38253e98a12f26316-Abstract.html
Michael Brennan, Daniele Bigoni, Olivier Zahm, Alessio Spantini, Youssef Marzouk
https://papers.nips.cc/paper_files/paper/2020/hash/5ef20b89bab8fed38253e98a12f26316-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5ef20b89bab8fed38253e98a12f26316-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10422-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5ef20b89bab8fed38253e98a12f26316-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5ef20b89bab8fed38253e98a12f26316-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5ef20b89bab8fed38253e98a12f26316-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5ef20b89bab8fed38253e98a12f26316-Supplemental.pdf
We propose a framework for solving high-dimensional Bayesian inference problems using \emph{structure-exploiting} low-dimensional transport maps or flows. These maps are confined to a low-dimensional subspace (hence, lazy), and the subspace is identified by minimizing an upper bound on the Kullback--Leibler divergence (hence, structured). Our framework provides a principled way of identifying and exploiting low-dimensional structure in an inference problem. It focuses the expressiveness of a transport map along the directions of most significant discrepancy from the posterior, and can be used to build deep compositions of lazy maps, where low-dimensional projections of the parameters are iteratively transformed to match the posterior. We prove weak convergence of the generated sequence of distributions to the posterior, and we demonstrate the benefits of the framework on challenging inference problems in machine learning and differential equations, using inverse autoregressive flows and polynomial maps as examples of the underlying density estimators.
Nimble: Lightweight and Parallel GPU Task Scheduling for Deep Learning
https://papers.nips.cc/paper_files/paper/2020/hash/5f0ad4db43d8723d18169b2e4817a160-Abstract.html
Woosuk Kwon, Gyeong-In Yu, Eunji Jeong, Byung-Gon Chun
https://papers.nips.cc/paper_files/paper/2020/hash/5f0ad4db43d8723d18169b2e4817a160-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5f0ad4db43d8723d18169b2e4817a160-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10423-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5f0ad4db43d8723d18169b2e4817a160-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5f0ad4db43d8723d18169b2e4817a160-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5f0ad4db43d8723d18169b2e4817a160-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5f0ad4db43d8723d18169b2e4817a160-Supplemental.pdf
Deep learning (DL) frameworks take advantage of GPUs to improve the speed of DL inference and training. Ideally, DL frameworks should be able to fully utilize the computation power of GPUs such that the running time depends on the amount of computation assigned to GPUs. Yet, we observe that in scheduling GPU tasks, existing DL frameworks suffer from inefficiencies such as large scheduling overhead and unnecessary serial execution. To this end, we propose Nimble, a DL execution engine that runs GPU tasks in parallel with minimal scheduling overhead. Nimble introduces a novel technique called ahead-of-time (AoT) scheduling. Here, the scheduling procedure finishes before executing the GPU kernel, thereby removing most of the scheduling overhead during run time. Furthermore, Nimble automatically parallelizes the execution of GPU tasks by exploiting multiple GPU streams in a single GPU. Evaluation on a variety of neural networks shows that compared to PyTorch, Nimble speeds up inference and training by up to 22.34× and 3.61×, respectively. Moreover, Nimble outperforms state-of-the-art inference systems, TensorRT and TVM, by up to 2.81× and 1.70×, respectively.
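The intra-GPU parallelism Nimble automates can be previewed with plain PyTorch by enqueueing independent kernels on separate CUDA streams, as in the minimal sketch below. This only illustrates the multi-stream idea; it is not the Nimble engine or its ahead-of-time scheduler, and it assumes a CUDA GPU is available.

```python
# Two independent GPU tasks issued on separate CUDA streams so they can overlap.
import torch

assert torch.cuda.is_available(), "this sketch requires a CUDA GPU"
device = torch.device("cuda")

a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

stream1 = torch.cuda.Stream()
stream2 = torch.cuda.Stream()

torch.cuda.synchronize()
with torch.cuda.stream(stream1):
    out1 = a @ a        # independent task 1, enqueued on stream 1
with torch.cuda.stream(stream2):
    out2 = b @ b        # independent task 2, enqueued on stream 2

# Wait for both streams before using the results on the default stream.
torch.cuda.synchronize()
print(out1.shape, out2.shape)
```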
Finding the Homology of Decision Boundaries with Active Learning
https://papers.nips.cc/paper_files/paper/2020/hash/5f14615696649541a025d3d0f8e0447f-Abstract.html
Weizhi Li, Gautam Dasarathy, Karthikeyan Natesan Ramamurthy, Visar Berisha
https://papers.nips.cc/paper_files/paper/2020/hash/5f14615696649541a025d3d0f8e0447f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/5f14615696649541a025d3d0f8e0447f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10424-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/5f14615696649541a025d3d0f8e0447f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/5f14615696649541a025d3d0f8e0447f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/5f14615696649541a025d3d0f8e0447f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/5f14615696649541a025d3d0f8e0447f-Supplemental.pdf
Accurately and efficiently characterizing the decision boundary of classifiers is important for problems related to model selection and meta-learning. Inspired by topological data analysis, characterizing decision boundaries via their homology has recently emerged as a general and powerful tool. In this paper, we propose an active learning algorithm to recover the homology of decision boundaries. Our algorithm sequentially and adaptively selects which samples to label. We theoretically analyze the proposed framework and show that the query complexity of our active learning algorithm depends naturally on the intrinsic complexity of the underlying manifold. We demonstrate the effectiveness of our framework in selecting the best-performing machine learning models for datasets using only their homological summaries. Experiments on several standard datasets show the improvement in sample complexity for recovering the homology and demonstrate the practical utility of the framework for model selection.