title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract |
---|---|---|---|---|---|---|---|---|---|---|---|
Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control | https://papers.nips.cc/paper_files/paper/2020/hash/79f56e5e3e0e999b3c139f225838d41f-Abstract.html | Yaofeng Desmond Zhong, Naomi Leonard | https://papers.nips.cc/paper_files/paper/2020/hash/79f56e5e3e0e999b3c139f225838d41f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/79f56e5e3e0e999b3c139f225838d41f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10625-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/79f56e5e3e0e999b3c139f225838d41f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/79f56e5e3e0e999b3c139f225838d41f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/79f56e5e3e0e999b3c139f225838d41f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/79f56e5e3e0e999b3c139f225838d41f-Supplemental.pdf | Recent approaches for modelling dynamics of physical systems with neural networks enforce Lagrangian or Hamiltonian structure to improve prediction and generalization. However, when coordinates are embedded in high-dimensional data such as images, these approaches either lose interpretability or can only be applied to one particular example. We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control. The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder (VAE). The VAE is designed to account for the geometry of physical systems composed of multiple rigid bodies in the plane. By inferring interpretable Lagrangian dynamics, the model learns physical system properties, such as kinetic and potential energy, which enables long-term prediction of dynamics in the image space and synthesis of energy-based controllers. |
High-Dimensional Sparse Linear Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/7a006957be65e608e863301eb98e1808-Abstract.html | Botao Hao, Tor Lattimore, Mengdi Wang | https://papers.nips.cc/paper_files/paper/2020/hash/7a006957be65e608e863301eb98e1808-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a006957be65e608e863301eb98e1808-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10626-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a006957be65e608e863301eb98e1808-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a006957be65e608e863301eb98e1808-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a006957be65e608e863301eb98e1808-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a006957be65e608e863301eb98e1808-Supplemental.pdf | Stochastic linear bandits with high-dimensional sparse features are a practical model for a variety of domains, such as personalized medicine and online advertising. We derive a novel O(n^{2/3}) dimension-free minimax regret lower bound for sparse linear bandits in the data-poor regime where the horizon is larger than the ambient dimension and where the feature vectors admit a well-conditioned exploration distribution. This is complemented by a nearly matching upper bound for an explore-then-commit algorithm showing that O(n^{2/3}) is the optimal rate in the data-poor regime. The results complement existing bounds for the data-rich regime and also provide another example where carefully balancing the trade-off between information and regret is necessary. Finally, we prove a dimension-free O(\sqrt{n}) regret upper bound under an additional assumption on the magnitude of the signal for relevant features. |
Non-Stochastic Control with Bandit Feedback | https://papers.nips.cc/paper_files/paper/2020/hash/7a1d9028a78f418cb8f01909a348d9b2-Abstract.html | Paula Gradu, John Hallman, Elad Hazan | https://papers.nips.cc/paper_files/paper/2020/hash/7a1d9028a78f418cb8f01909a348d9b2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a1d9028a78f418cb8f01909a348d9b2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10627-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a1d9028a78f418cb8f01909a348d9b2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a1d9028a78f418cb8f01909a348d9b2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a1d9028a78f418cb8f01909a348d9b2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a1d9028a78f418cb8f01909a348d9b2-Supplemental.pdf | We study the problem of controlling a linear dynamical system with adversarial perturbations where the only feedback available to the controller is the scalar loss, and the loss function itself is unknown. For this problem, with either a known or unknown system, we give an efficient sublinear regret algorithm. The main algorithmic difficulty is the dependence of the loss on past controls. To overcome this issue, we propose an efficient algorithm for the general setting of bandit convex optimization for loss functions with memory, which may be of independent interest. |
Generalized Leverage Score Sampling for Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/7a22c0c0a4515485e31f95fd372050c9-Abstract.html | Jason D. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, zheng Yu | https://papers.nips.cc/paper_files/paper/2020/hash/7a22c0c0a4515485e31f95fd372050c9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a22c0c0a4515485e31f95fd372050c9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10628-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a22c0c0a4515485e31f95fd372050c9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a22c0c0a4515485e31f95fd372050c9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a22c0c0a4515485e31f95fd372050c9-Review.html | null | Leverage score sampling is a powerful technique that originates from theoretical computer science, which can be used to speed up a large number of fundamental questions, e.g. linear regression, linear programming, semi-definite programming, cutting plane method, graph sparsification, maximum matching and max-flow. Recently, it has been shown that leverage score sampling helps to accelerate kernel methods [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17]. In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels. We further bring the leverage score sampling into the field of deep learning theory.
1. We show the connection between the initialization for neural network training and approximating the neural tangent kernel with random features.
2. We prove the equivalence between regularized neural networks and neural tangent kernel ridge regression under both classical random Gaussian initialization and leverage score sampling initialization. |
An Optimal Elimination Algorithm for Learning a Best Arm | https://papers.nips.cc/paper_files/paper/2020/hash/7a43ed4e82d06a1e6b2e88518fb8c2b0-Abstract.html | Avinatan Hassidim, Ron Kupfer, Yaron Singer | https://papers.nips.cc/paper_files/paper/2020/hash/7a43ed4e82d06a1e6b2e88518fb8c2b0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a43ed4e82d06a1e6b2e88518fb8c2b0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10629-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a43ed4e82d06a1e6b2e88518fb8c2b0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a43ed4e82d06a1e6b2e88518fb8c2b0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a43ed4e82d06a1e6b2e88518fb8c2b0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a43ed4e82d06a1e6b2e88518fb8c2b0-Supplemental.zip | We consider the classic problem of $(\epsilon,\delta)$-\texttt{PAC} learning a best arm where the goal is to identify with confidence $1-\delta$ an arm whose mean is an $\epsilon$-approximation to that of the highest mean arm in a multi-armed bandit setting.
This problem is one of the most fundamental problems in statistics and learning theory, yet somewhat surprisingly its worst case sample complexity is not well understood. In this paper we propose a new approach for $(\epsilon,\delta)$-\texttt{PAC} learning a best arm. This approach leads to an algorithm whose sample complexity converges to \emph{exactly} the optimal sample complexity of $(\epsilon,\delta)$-learning the mean of $n$ arms separately and we complement this result with a conditional matching lower bound. More specifically:
\begin{itemize}
\item The algorithm's sample complexity converges to \emph{exactly} $\frac{n}{2\epsilon^2}\log \frac{1}{\delta}$ as $n$ grows and $\delta \geq \frac{1}{n}$;
\item We prove that no elimination algorithm obtains a sample complexity arbitrarily lower than $\frac{n}{2\epsilon^2}\log \frac{1}{\delta}$. Elimination algorithms are a broad class of $(\epsilon,\delta)$-\texttt{PAC} best arm learning algorithms that includes many algorithms in the literature.
\end{itemize}
When $n$ is independent of $\delta$, our approach yields an algorithm whose sample complexity converges to $\frac{2n}{\epsilon^2} \log \frac{1}{\delta}$ as $n$ grows. In comparison with the best known algorithm for this problem, our approach improves the sample complexity by a factor of over 1500, and of over 6000 when $\delta\geq \frac{1}{n}$. |
Efficient Projection-free Algorithms for Saddle Point Problems | https://papers.nips.cc/paper_files/paper/2020/hash/7a53928fa4dd31e82c6ef826f341daec-Abstract.html | Cheng Chen, Luo Luo, Weinan Zhang, Yong Yu | https://papers.nips.cc/paper_files/paper/2020/hash/7a53928fa4dd31e82c6ef826f341daec-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a53928fa4dd31e82c6ef826f341daec-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10630-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a53928fa4dd31e82c6ef826f341daec-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a53928fa4dd31e82c6ef826f341daec-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a53928fa4dd31e82c6ef826f341daec-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a53928fa4dd31e82c6ef826f341daec-Supplemental.pdf | The Frank-Wolfe algorithm is a classic method for constrained optimization problems. It has recently been popular in many machine learning applications because its projection-free property leads to more efficient iterations. In this paper, we study projection-free algorithms for convex-strongly-concave saddle point problems with complicated constraints. Our method combines Conditional Gradient Sliding with Mirror-Prox, and we show that it requires only $\tilde{\mathcal{O}}(1/\sqrt{\epsilon})$ gradient evaluations and $\tilde{\mathcal{O}}(1/\epsilon^2)$ linear optimizations in the batch setting. We also extend our method to the stochastic setting and propose the first stochastic projection-free algorithms for saddle point problems. Experimental results demonstrate the effectiveness of our algorithms and verify our theoretical guarantees. |
A mathematical model for automatic differentiation in machine learning | https://papers.nips.cc/paper_files/paper/2020/hash/7a674153c63cff1ad7f0e261c369ab2c-Abstract.html | Jérôme Bolte, Edouard Pauwels | https://papers.nips.cc/paper_files/paper/2020/hash/7a674153c63cff1ad7f0e261c369ab2c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a674153c63cff1ad7f0e261c369ab2c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10631-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a674153c63cff1ad7f0e261c369ab2c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a674153c63cff1ad7f0e261c369ab2c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a674153c63cff1ad7f0e261c369ab2c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a674153c63cff1ad7f0e261c369ab2c-Supplemental.pdf | Automatic differentiation, as implemented today, does not have a simple mathematical model adapted to the needs of modern machine learning. In this work we articulate the relationships between differentiation of programs as implemented in practice, and differentiation of nonsmooth functions. To this end we provide a simple class of functions, a nonsmooth calculus, and show how they apply to stochastic approximation methods. We also evidence the issue of artificial critical points created by algorithmic differentiation and show how usual methods avoid these points with probability one. |
Unsupervised Text Generation by Learning from Search | https://papers.nips.cc/paper_files/paper/2020/hash/7a677bb4477ae2dd371add568dd19e23-Abstract.html | Jingjing Li, Zichao Li, Lili Mou, Xin Jiang, Michael Lyu, Irwin King | https://papers.nips.cc/paper_files/paper/2020/hash/7a677bb4477ae2dd371add568dd19e23-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a677bb4477ae2dd371add568dd19e23-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10632-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a677bb4477ae2dd371add568dd19e23-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a677bb4477ae2dd371add568dd19e23-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a677bb4477ae2dd371add568dd19e23-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a677bb4477ae2dd371add568dd19e23-Supplemental.pdf | In this work, we propose TGLS, a novel framework for unsupervised Text Generation by Learning from Search. We start by applying a strong search algorithm (in particular, simulated annealing) towards a heuristically defined objective that (roughly) estimates the quality of sentences. Then, a conditional generative model learns from the search results and meanwhile smooths out the noise of the search. The alternation between search and learning can be repeated for performance bootstrapping. We demonstrate the effectiveness of TGLS on two real-world natural language generation tasks, unsupervised paraphrasing and text formalization. Our model significantly outperforms unsupervised baseline methods in both tasks. In particular, it achieves performance comparable to strong supervised methods for paraphrase generation. |
Learning Compositional Rules via Neural Program Synthesis | https://papers.nips.cc/paper_files/paper/2020/hash/7a685d9edd95508471a9d3d6fcace432-Abstract.html | Maxwell Nye, Armando Solar-Lezama, Josh Tenenbaum, Brenden M. Lake | https://papers.nips.cc/paper_files/paper/2020/hash/7a685d9edd95508471a9d3d6fcace432-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a685d9edd95508471a9d3d6fcace432-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10633-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a685d9edd95508471a9d3d6fcace432-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a685d9edd95508471a9d3d6fcace432-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a685d9edd95508471a9d3d6fcace432-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a685d9edd95508471a9d3d6fcace432-Supplemental.pdf | Many aspects of human reasoning, including language, require learning rules from very little data. Humans can do this, often learning systematic rules from very few examples, and combining these rules to form compositional rule-based systems. Current neural architectures, on the other hand, often fail to generalize in a compositional manner, especially when evaluated in ways that vary systematically from training. In this work, we present a neuro-symbolic model which learns entire rule systems from a small set of examples. Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples, drawing upon techniques from the neural program synthesis literature. Our rule-synthesis approach outperforms neural meta-learning techniques in three domains: an artificial instruction-learning domain used to evaluate human learning, the SCAN challenge datasets, and learning rule-based translations of number words into integers for a wide range of human languages. |
Incorporating BERT into Parallel Sequence Decoding with Adapters | https://papers.nips.cc/paper_files/paper/2020/hash/7a6a74cbe87bc60030a4bd041dd47b78-Abstract.html | Junliang Guo, Zhirui Zhang, Linli Xu, Hao-Ran Wei, Boxing Chen, Enhong Chen | https://papers.nips.cc/paper_files/paper/2020/hash/7a6a74cbe87bc60030a4bd041dd47b78-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a6a74cbe87bc60030a4bd041dd47b78-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10634-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a6a74cbe87bc60030a4bd041dd47b78-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a6a74cbe87bc60030a4bd041dd47b78-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a6a74cbe87bc60030a4bd041dd47b78-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a6a74cbe87bc60030a4bd041dd47b78-Supplemental.pdf | While large scale pre-trained language models such as BERT have achieved great success on various natural language understanding tasks, how to efficiently and effectively incorporate them into sequence-to-sequence models and the corresponding text generation tasks remains a non-trivial problem. In this paper, we propose to address this problem by taking two different BERT models as the encoder and decoder respectively, and fine-tuning them by introducing simple and lightweight adapter modules, which are inserted between BERT layers and tuned on the task-specific dataset. In this way, we obtain a flexible and efficient model which is able to jointly leverage the information contained in the source-side and target-side BERT models, while bypassing the catastrophic forgetting problem. Each component in the framework can be considered as a plug-in unit, making the framework flexible and task agnostic.
Our framework is based on a parallel sequence decoding algorithm named Mask-Predict considering the bi-directional and conditional independent nature of BERT, and can be adapted to traditional autoregressive decoding easily.
We conduct extensive experiments on neural machine translation tasks where the proposed method consistently outperforms autoregressive baselines while reducing the inference latency by half, and achieves $36.49$/$33.57$ BLEU scores on IWSLT14 German-English/WMT14 German-English translation. When adapted to autoregressive decoding, the proposed method achieves $30.60$/$43.56$ BLEU scores on WMT14 English-German/English-French translation, on par with the state-of-the-art baseline models. |
Estimating Fluctuations in Neural Representations of Uncertain Environments | https://papers.nips.cc/paper_files/paper/2020/hash/7a8b8402b2f0fc78cf726ee484a0a2b7-Abstract.html | Sahand Farhoodi, Mark Plitt, Lisa Giocomo, Uri Eden | https://papers.nips.cc/paper_files/paper/2020/hash/7a8b8402b2f0fc78cf726ee484a0a2b7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a8b8402b2f0fc78cf726ee484a0a2b7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10635-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a8b8402b2f0fc78cf726ee484a0a2b7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a8b8402b2f0fc78cf726ee484a0a2b7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a8b8402b2f0fc78cf726ee484a0a2b7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a8b8402b2f0fc78cf726ee484a0a2b7-Supplemental.pdf | Neural Coding analyses often reflect an assumption that neural populations respond uniquely and consistently to particular stimuli. For example, analyses of spatial remapping in hippocampal populations often assume that each environment has one unique representation and that remapping occurs over long time scales as an animal traverses between distinct environments. However, as neuroscience experiments begin to explore more naturalistic tasks and stimuli, and reflect more ambiguity in neural representations, methods for analyzing population neural codes must adapt to reflect these features. In this paper, we develop a new state-space modeling framework to address two important issues related to remapping. First, neurons may exhibit significant trial-to-trial or moment-to-moment variability in the firing patterns used to represent a particular environment or stimulus. Second, in ambiguous environments and tasks that involve cognitive uncertainty, neural populations may rapidly fluctuate between multiple representations. The state-space model addresses these two issues by integrating an observation model, which allows for multiple representations of the same stimulus or environment, with a state model, which characterizes the moment-by-moment probability of a shift in the neural representation. These models allow us to compute instantaneous estimates of the stimulus or environment currently represented by the population. We demonstrate the application of this approach to the analysis of population activity in the CA1 region of hippocampus of a mouse moving through ambiguous virtual environments. Our analyses demonstrate that many hippocampal cells express significant trial-to-trial variability in their representations and that the population representation can fluctuate rapidly between environments within a single trial when spatial cues are most ambiguous. |
Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for Semantic Segmentation | https://papers.nips.cc/paper_files/paper/2020/hash/7a9a322cbe0d06a98667fdc5160dc6f8-Abstract.html | KwanYong Park, Sanghyun Woo, Inkyu Shin, In So Kweon | https://papers.nips.cc/paper_files/paper/2020/hash/7a9a322cbe0d06a98667fdc5160dc6f8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7a9a322cbe0d06a98667fdc5160dc6f8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10636-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7a9a322cbe0d06a98667fdc5160dc6f8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7a9a322cbe0d06a98667fdc5160dc6f8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7a9a322cbe0d06a98667fdc5160dc6f8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7a9a322cbe0d06a98667fdc5160dc6f8-Supplemental.pdf | Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently, as it could be beneficial for various label-scarce real-world scenarios (e.g., robot control, autonomous driving, medical imaging, etc.). Despite the significant progress in this field, current works mainly focus on a single-source single-target setting, which cannot handle more practical settings of multiple targets or even unseen targets.
In this paper, we investigate open compound domain adaptation (OCDA), which deals with mixed and novel situations at the same time, for semantic segmentation.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt. The scheme first clusters compound target data based on style, discovering multiple latent domains (discover). Then, it hallucinates multiple latent target domains in the source by using image translation (hallucinate). This step ensures that the latent domains in the source and the target are paired. Finally, target-to-source alignment is learned separately between domains (adapt). At a high level, our solution replaces a hard OCDA problem with multiple, much easier UDA problems.
We evaluate our solution on the standard GTA to C-driving benchmark and achieve new state-of-the-art results. |
SURF: A Simple, Universal, Robust, Fast Distribution Learning Algorithm | https://papers.nips.cc/paper_files/paper/2020/hash/7ac52e3f2729d1b3f6d2b7e8f6467226-Abstract.html | Yi Hao, Ayush Jain, Alon Orlitsky, Vaishakh Ravindrakumar | https://papers.nips.cc/paper_files/paper/2020/hash/7ac52e3f2729d1b3f6d2b7e8f6467226-Abstract.html | NIPS 2020 | null | https://papers.nips.cc/paper_files/paper/10637-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7ac52e3f2729d1b3f6d2b7e8f6467226-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7ac52e3f2729d1b3f6d2b7e8f6467226-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7ac52e3f2729d1b3f6d2b7e8f6467226-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7ac52e3f2729d1b3f6d2b7e8f6467226-Supplemental.zip | Sample- and computationally-efficient distribution estimation is a fundamental tenet in statistics and machine learning. We present SURF, an algorithm for approximating distributions by piecewise polynomials. SURF is:
simple, replacing prior complex optimization techniques by straight-forward empirical probability approximation of each potential polynomial piece through simple empirical-probability interpolation, and using plain divide-and-conquer to merge the pieces; universal, as well-known polynomial-approximation results imply that it accurately approximates a large class of common distributions;
robust to distribution mis-specification as for any degree $d \le 8$, it estimates any distribution to an $\ell_1$ distance $< 3$ times that of the nearest degree-$d$ piecewise polynomial, improving known factor upper bounds of 3 for single polynomials and 15 for polynomials with arbitrarily many pieces;
fast, using optimal sample complexity, running in near sample-linear time, and if given sorted samples it may be parallelized to run in sub-linear time.
In experiments, SURF outperforms state-of-the-art algorithms. |
Understanding Approximate Fisher Information for Fast Convergence of Natural Gradient Descent in Wide Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/7b41bfa5085806dfa24b8c9de0ce567f-Abstract.html | Ryo Karakida, Kazuki Osawa | https://papers.nips.cc/paper_files/paper/2020/hash/7b41bfa5085806dfa24b8c9de0ce567f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7b41bfa5085806dfa24b8c9de0ce567f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10638-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7b41bfa5085806dfa24b8c9de0ce567f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7b41bfa5085806dfa24b8c9de0ce567f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7b41bfa5085806dfa24b8c9de0ce567f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7b41bfa5085806dfa24b8c9de0ce567f-Supplemental.pdf | Natural Gradient Descent (NGD) helps to accelerate the convergence of gradient descent dynamics, but it requires approximations in large-scale deep neural networks because of its high computational cost. Empirical studies have confirmed that some NGD methods with approximate Fisher information converge sufficiently fast in practice. Nevertheless, it remains unclear from the theoretical perspective why and under what conditions such heuristic approximations work well. In this work, we reveal that, under specific conditions, NGD with approximate Fisher information achieves the same fast convergence to global minima as exact NGD. We consider deep neural networks in the infinite-width limit, and analyze the asymptotic training dynamics of NGD in function space via the neural tangent kernel. In the function space, the training dynamics with the approximate Fisher information are identical to those with the exact Fisher information, and they converge quickly. The fast convergence holds in layer-wise approximations; for instance, in block diagonal approximation where each block corresponds to a layer as well as in block tri-diagonal and K-FAC approximations. We also find that a unit-wise approximation achieves the same fast convergence under some assumptions. All of these different approximations have an isotropic gradient in the function space, and this plays a fundamental role in achieving the same convergence properties in training. Thus, the current study gives a novel and unified theoretical foundation with which to understand NGD methods in deep learning. |
General Transportability of Soft Interventions: Completeness Results | https://papers.nips.cc/paper_files/paper/2020/hash/7b497aa1b2a83ec63d1777a88676b0c2-Abstract.html | Juan Correa, Elias Bareinboim | https://papers.nips.cc/paper_files/paper/2020/hash/7b497aa1b2a83ec63d1777a88676b0c2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7b497aa1b2a83ec63d1777a88676b0c2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10639-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7b497aa1b2a83ec63d1777a88676b0c2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7b497aa1b2a83ec63d1777a88676b0c2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7b497aa1b2a83ec63d1777a88676b0c2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7b497aa1b2a83ec63d1777a88676b0c2-Supplemental.pdf | The challenge of generalizing causal knowledge across different environments is pervasive in scientific explorations, including in AI, ML, and Data Science. Experiments are usually performed in one environment (e.g., in a lab, on Earth) with the intent, almost invariably, of being used elsewhere (e.g., outside the lab, on Mars), where the conditions are likely to be different. In the causal inference literature, this generalization task has been formalized under the rubric of transportability (Pearl and Bareinboim, 2011), where a number of criteria and algorithms have been developed for various settings. Despite the generality of such results, transportability theory has been confined to atomic, do()-interventions. In practice, many real-world applications require more complex, stochastic interventions; for instance, in reinforcement learning, agents need to continuously adapt to the changing conditions of an uncertain and unknown environment.
In this paper, we extend transportability theory to encompass these more complex types of interventions, which are known as "soft," relative to both the input and the target distribution of the analysis. Specifically, we first develop a graphical condition that is both necessary and sufficient for deciding soft-transportability. Second, we develop an algorithm to determine whether a non-atomic intervention is computable from a combination of the distributions available across domains. As a corollary, we show that the $\sigma$-calculus is complete for the task of soft-transportability. |
GAIT-prop: A biologically plausible learning rule derived from backpropagation of error | https://papers.nips.cc/paper_files/paper/2020/hash/7ba0691b7777b6581397456412a41390-Abstract.html | Nasir Ahmad, Marcel A. J. van Gerven, Luca Ambrogioni | https://papers.nips.cc/paper_files/paper/2020/hash/7ba0691b7777b6581397456412a41390-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7ba0691b7777b6581397456412a41390-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10640-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7ba0691b7777b6581397456412a41390-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7ba0691b7777b6581397456412a41390-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7ba0691b7777b6581397456412a41390-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7ba0691b7777b6581397456412a41390-Supplemental.pdf | Traditional backpropagation of error, though a highly successful algorithm for learning in artificial neural network models, includes features which are biologically implausible for learning in real neural circuits. An alternative called target propagation proposes to solve this implausibility by using a top-down model of neural activity to convert an error at the output of a neural network into layer-wise and plausible ‘targets’ for every unit. These targets can then be used to produce weight updates for network training. However, thus far, target propagation has been heuristically proposed without demonstrable equivalence to backpropagation. Here, we derive an exact correspondence between backpropagation and a modified form of target propagation (GAIT-prop) where the target is a small perturbation of the forward pass. Specifically, backpropagation and GAIT-prop give identical updates when synaptic weight matrices are orthogonal. In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop with a soft orthogonality-inducing regularizer. |
Lipschitz Bounds and Provably Robust Training by Laplacian Smoothing | https://papers.nips.cc/paper_files/paper/2020/hash/7bab7650be60b0738e22c3b8745f937d-Abstract.html | Vishaal Krishnan, Abed AlRahman Al Makdah, Fabio Pasqualetti | https://papers.nips.cc/paper_files/paper/2020/hash/7bab7650be60b0738e22c3b8745f937d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7bab7650be60b0738e22c3b8745f937d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10641-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7bab7650be60b0738e22c3b8745f937d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7bab7650be60b0738e22c3b8745f937d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7bab7650be60b0738e22c3b8745f937d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7bab7650be60b0738e22c3b8745f937d-Supplemental.pdf | In this work we propose a graph-based learning framework to train
models with provable robustness to adversarial perturbations. In contrast to regularization-based approaches, we formulate the adversarially robust learning problem as one of loss minimization with a Lipschitz constraint, and show that the saddle point of the associated Lagrangian is characterized by a Poisson equation with weighted Laplace operator. Further, the weighting for the Laplace operator is given by the Lagrange multiplier for the Lipschitz constraint, which modulates the sensitivity of the minimizer to perturbations. We then design a provably robust training scheme using graph-based discretization of the input space and a primal-dual algorithm to converge to the Lagrangian's saddle point. Our analysis establishes a novel connection between elliptic operators with constraint-enforced weighting and adversarial learning. We also study the complementary problem of improving the robustness of minimizers with a margin on their loss, formulated as a loss-constrained minimization problem of the Lipschitz constant. We propose a technique to obtain robustified minimizers, and evaluate fundamental Lipschitz lower bounds by approaching Lipschitz constant minimization via a sequence of gradient $p$-norm minimization problems. Ultimately, our results show that, for a desired nominal performance, there exists a fundamental lower bound on the sensitivity to adversarial perturbations that depends only on the loss function and the data distribution, and that improvements in robustness beyond this bound can only be made at the expense of nominal performance. Our training schemes provably achieve these bounds both under constraints on performance and robustness. |
SCOP: Scientific Control for Reliable Neural Network Pruning | https://papers.nips.cc/paper_files/paper/2020/hash/7bcdf75ad237b8e02e301f4091fb6bc8-Abstract.html | Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing XU, Chao Xu, Chang Xu | https://papers.nips.cc/paper_files/paper/2020/hash/7bcdf75ad237b8e02e301f4091fb6bc8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7bcdf75ad237b8e02e301f4091fb6bc8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10642-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7bcdf75ad237b8e02e301f4091fb6bc8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7bcdf75ad237b8e02e301f4091fb6bc8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7bcdf75ad237b8e02e301f4091fb6bc8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7bcdf75ad237b8e02e301f4091fb6bc8-Supplemental.pdf | This paper proposes a reliable neural network pruning algorithm by setting up a scientific control. Existing pruning methods have developed various hypotheses to approximate the importance of filters to the network and then execute filter pruning accordingly. To increase the reliability of the results, we prefer to have a more rigorous research design by including a scientific control group as an essential part to minimize the effect of all factors except the association between the filter and expected network output. Acting as a control group, knockoff feature is generated to mimic the feature map produced by the network filter, but they are conditionally independent of the example label given the real feature map. We theoretically suggest that the knockoff condition can be approximately preserved given the information propagation of network layers. Besides the real feature map on an intermediate layer, the corresponding knockoff feature is brought in as another auxiliary input signal for the subsequent layers.
Redundant filters can be discovered in the adversarial process of different features. Through experiments, we demonstrate the superiority of the proposed algorithm over state-of-the-art methods. For example, our method can reduce the parameters of ResNet-101 by 57.8% and its FLOPs by 60.2%, with only a 0.01% top-1 accuracy loss on ImageNet. |
Provably Consistent Partial-Label Learning | https://papers.nips.cc/paper_files/paper/2020/hash/7bd28f15a49d5e5848d6ec70e584e625-Abstract.html | Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama | https://papers.nips.cc/paper_files/paper/2020/hash/7bd28f15a49d5e5848d6ec70e584e625-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7bd28f15a49d5e5848d6ec70e584e625-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10643-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7bd28f15a49d5e5848d6ec70e584e625-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7bd28f15a49d5e5848d6ec70e584e625-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7bd28f15a49d5e5848d6ec70e584e625-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7bd28f15a49d5e5848d6ec70e584e625-Supplemental.pdf | Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels. Even though many practical PLL methods have been proposed in the last two decades, there lacks a theoretical understanding of the consistency of those methods - none of the PLL methods hitherto possesses a generation process of candidate label sets, and then it is still unclear why such a method works on a specific dataset and when it may fail given a different dataset. In this paper, we propose the first generation model of candidate label sets, and develop two PLL methods that are guaranteed to be provably consistent, i.e., one is risk-consistent and the other is classifier-consistent. Our methods are advantageous, since they are compatible with any deep network or stochastic optimizer. Furthermore, thanks to the generation model, we would be able to answer the two questions above by testing if the generation model matches given candidate label sets. Experiments on benchmark and real-world datasets validate the effectiveness of the proposed generation model and two PLL methods. |
Robust, Accurate Stochastic Optimization for Variational Inference | https://papers.nips.cc/paper_files/paper/2020/hash/7cac11e2f46ed46c339ec3d569853759-Abstract.html | Akash Kumar Dhaka, Alejandro Catalina, Michael R. Andersen, Måns Magnusson, Jonathan Huggins, Aki Vehtari | https://papers.nips.cc/paper_files/paper/2020/hash/7cac11e2f46ed46c339ec3d569853759-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7cac11e2f46ed46c339ec3d569853759-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10644-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7cac11e2f46ed46c339ec3d569853759-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7cac11e2f46ed46c339ec3d569853759-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7cac11e2f46ed46c339ec3d569853759-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7cac11e2f46ed46c339ec3d569853759-Supplemental.pdf | We examine the accuracy of black box variational posterior approximations for parametric models in a probabilistic programming context. The performance of these approximations depends on (1) how well the variational family approximates the true posterior distribution, (2) the choice of divergence, and (3) the optimization of the variational objective. We show that even when the true variational family is used, high-dimensional posteriors can be very poorly approximated using common stochastic gradient descent (SGD) optimizers. Motivated by recent theory, we propose a simple and parallel way to improve SGD estimates for variational inference. The approach is theoretically motivated and comes with a diagnostic for convergence and a novel stopping rule, which is robust to noisy objective
function evaluations. We show empirically that the new workflow works well on a diverse set of models and datasets, or warns if the stochastic optimization fails or if the variational distribution used is not good. |
Discovering conflicting groups in signed networks | https://papers.nips.cc/paper_files/paper/2020/hash/7cc538b1337957dae283c30ad46def38-Abstract.html | Ruo-Chun Tzeng, Bruno Ordozgoiti, Aristides Gionis | https://papers.nips.cc/paper_files/paper/2020/hash/7cc538b1337957dae283c30ad46def38-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7cc538b1337957dae283c30ad46def38-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10645-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7cc538b1337957dae283c30ad46def38-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7cc538b1337957dae283c30ad46def38-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7cc538b1337957dae283c30ad46def38-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7cc538b1337957dae283c30ad46def38-Supplemental.pdf | Signed networks are graphs where edges are annotated with a positive or negative sign, indicating whether an edge interaction is friendly or antagonistic. Signed networks can be used to study a variety of social phenomena, such as mining polarized discussions in social media, or modeling relations of trust and distrust in online review platforms.
In this paper we study the problem of detecting $k$ conflicting groups in a signed network. Our premise is that each group is positively connected internally and negatively connected with the other $k-1$ groups.
An important aspect of our formulation is that we are not searching for a complete partition of the signed network; instead, we allow other nodes to be neutral with respect to the conflict structure we are searching for. As a result, the problem we tackle differs from previously studied problems, such as correlation clustering and $k$-way partitioning.
To solve the conflicting-group discovery problem, we derive a novel formulation in which each conflicting group is naturally characterized by the solution to the maximum discrete Rayleigh's quotient (MAX-DRQ) problem.
We present two spectral methods for finding approximate solutions to the MAX-DRQ problem, which we analyze theoretically. Our experimental evaluation shows that, compared to state-of-the-art baselines, our methods find solutions of higher quality, are faster, and recover ground truth conflicting groups with higher accuracy. |
Learning Some Popular Gaussian Graphical Models without Condition Number Bounds | https://papers.nips.cc/paper_files/paper/2020/hash/7cc980b0f894bd0cf05c37c246f215f3-Abstract.html | Jonathan Kelner, Frederic Koehler, Raghu Meka, Ankur Moitra | https://papers.nips.cc/paper_files/paper/2020/hash/7cc980b0f894bd0cf05c37c246f215f3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7cc980b0f894bd0cf05c37c246f215f3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10646-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7cc980b0f894bd0cf05c37c246f215f3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7cc980b0f894bd0cf05c37c246f215f3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7cc980b0f894bd0cf05c37c246f215f3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7cc980b0f894bd0cf05c37c246f215f3-Supplemental.zip | Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples without any such assumptions. In particular, our algorithms can tolerate strong dependencies among the variables. Our result for structure recovery in walk-summable GGMs is derived from a more general result for efficient sparse linear regression in walk-summable models without any norm dependencies.
We complement our results with experiments showing that many existing algorithms fail even in some simple settings where there are long dependency chains. Our algorithms do not. |
Sense and Sensitivity Analysis: Simple Post-Hoc Analysis of Bias Due to Unobserved Confounding | https://papers.nips.cc/paper_files/paper/2020/hash/7d265aa7147bd3913fb84c7963a209d1-Abstract.html | Victor Veitch, Anisha Zaveri | https://papers.nips.cc/paper_files/paper/2020/hash/7d265aa7147bd3913fb84c7963a209d1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7d265aa7147bd3913fb84c7963a209d1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10647-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7d265aa7147bd3913fb84c7963a209d1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7d265aa7147bd3913fb84c7963a209d1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7d265aa7147bd3913fb84c7963a209d1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7d265aa7147bd3913fb84c7963a209d1-Supplemental.zip | It is a truth universally acknowledged that an observed association without known mechanism must be in want of a causal estimate. Causal estimates from observational data will be biased in the presence of ‘unobserved confounding’. However, we might hope that the influence of unobserved confounders is weak relative to a ‘large’ estimated effect. The purpose of this paper is to develop Austen plots, a sensitivity analysis tool to aid such judgments by making it easier to reason about potential bias induced by unobserved confounding. We formalize confounding strength in terms of how strongly the unobserved confounding influences treatment assignment and outcome. For a target level of bias, an Austen plot shows the minimum values of treatment and outcome influence required to induce that level of bias. Austen plots generalize the classic sensitivity analysis approach of Imbens [Imb03]. Critically, Austen plots allow any approach for modeling the observed data. We illustrate the tool by assessing biases for several real causal inference problems, using a variety of machine learning approaches for the initial data analysis. Code, demo data, and a tutorial are available at github.com/anishazaveri/austen_plots. |
Mix and Match: An Optimistic Tree-Search Approach for Learning Models from Mixture Distributions | https://papers.nips.cc/paper_files/paper/2020/hash/7d3d5bcad324d3edc08e40738e663554-Abstract.html | Matthew Faw, Rajat Sen, Karthikeyan Shanmugam, Constantine Caramanis, Sanjay Shakkottai | https://papers.nips.cc/paper_files/paper/2020/hash/7d3d5bcad324d3edc08e40738e663554-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7d3d5bcad324d3edc08e40738e663554-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10648-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7d3d5bcad324d3edc08e40738e663554-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7d3d5bcad324d3edc08e40738e663554-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7d3d5bcad324d3edc08e40738e663554-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7d3d5bcad324d3edc08e40738e663554-Supplemental.pdf | We consider a covariate shift problem where one has access to several different
training datasets for the same learning problem and a small validation set which possibly differs from all the individual training distributions. The distribution shift is due, in part, to \emph{unobserved} features in the datasets. The objective, then, is to find the best mixture distribution over the training datasets (with only observed features) such that training a learning algorithm using this mixture has the best validation performance. Our proposed algorithm, \textsf{Mix\&Match}, combines stochastic gradient descent (SGD) with optimistic tree search and model re-use (evolving partially trained models with samples from different mixture distributions) over the space of mixtures, for this task. We prove a novel high probability bound on the final SGD iterate without relying on a global gradient norm bound, and use it to show the advantages of model re-use. Additionally, we provide simple regret guarantees for our algorithm with respect to recovering the optimal mixture, given a total budget of SGD evaluations. Finally, we validate our algorithm on two real-world datasets. |
Understanding Double Descent Requires A Fine-Grained Bias-Variance Decomposition | https://papers.nips.cc/paper_files/paper/2020/hash/7d420e2b2939762031eed0447a9be19f-Abstract.html | Ben Adlam, Jeffrey Pennington | https://papers.nips.cc/paper_files/paper/2020/hash/7d420e2b2939762031eed0447a9be19f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7d420e2b2939762031eed0447a9be19f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10649-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7d420e2b2939762031eed0447a9be19f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7d420e2b2939762031eed0447a9be19f-Supplemental.pdf | Classical learning theory suggests that the optimal generalization performance of a machine learning model should occur at an intermediate model complexity, with simpler models exhibiting high bias and more complex models exhibiting high variance of the predictive function. However, such a simple trade-off does not adequately describe deep learning models that simultaneously attain low bias and variance in the heavily overparameterized regime. A primary obstacle in explaining this behavior is that deep learning algorithms typically involve multiple sources of randomness whose individual contributions are not visible in the total variance. To enable fine-grained analysis, we describe an interpretable, symmetric decomposition of the variance into terms associated with the randomness from sampling, initialization, and the labels. Moreover, we compute the high-dimensional asymptotic behavior of this decomposition for random feature kernel regression, and analyze the strikingly rich phenomenology that arises. We find that the bias decreases monotonically with the network width, but the variance terms exhibit non-monotonic behavior and can diverge at the interpolation boundary, even in the absence of label noise. The divergence is caused by the interaction between sampling and initialization and can therefore be eliminated by marginalizing over samples (i.e. bagging) or over the initial parameters (i.e. ensemble learning). |
VIME: Extending the Success of Self- and Semi-supervised Learning to Tabular Domain | https://papers.nips.cc/paper_files/paper/2020/hash/7d97667a3e056acab9aaf653807b4a03-Abstract.html | Jinsung Yoon, Yao Zhang, James Jordon, Mihaela van der Schaar | https://papers.nips.cc/paper_files/paper/2020/hash/7d97667a3e056acab9aaf653807b4a03-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10650-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7d97667a3e056acab9aaf653807b4a03-Supplemental.pdf | Self- and semi-supervised learning frameworks have made significant progress in training machine learning models with limited labeled data in image and language domains. These methods heavily rely on the unique structure in the domain datasets (such as spatial relationships in images or semantic relationships in language). They are not adaptable to general tabular data which does not have the same explicit structure as image and language data. In this paper, we fill this gap by proposing novel self- and semi-supervised learning frameworks for tabular data, which we refer to collectively as VIME (Value Imputation and Mask Estimation). We create a novel pretext task of estimating mask vectors from corrupted tabular data in addition to the reconstruction pretext task for self-supervised learning. We also introduce a novel tabular data augmentation method for self- and semi-supervised learning frameworks. In experiments, we evaluate the proposed framework in multiple tabular datasets from various application domains, such as genomics and clinical data. VIME exceeds state-of-the-art performance in comparison to the existing baseline methods. |
The Smoothed Possibility of Social Choice | https://papers.nips.cc/paper_files/paper/2020/hash/7e05d6f828574fbc975a896b25bb011e-Abstract.html | Lirong Xia | https://papers.nips.cc/paper_files/paper/2020/hash/7e05d6f828574fbc975a896b25bb011e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7e05d6f828574fbc975a896b25bb011e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10651-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7e05d6f828574fbc975a896b25bb011e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7e05d6f828574fbc975a896b25bb011e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7e05d6f828574fbc975a896b25bb011e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7e05d6f828574fbc975a896b25bb011e-Supplemental.pdf | We develop a framework that leverages the smoothed complexity analysis by Spielman and Teng to circumvent paradoxes and impossibility theorems in social choice, motivated by modern applications of social choice powered by AI and ML. For Condorcet's paradox, we prove that the smoothed likelihood of the paradox either vanishes at an exponential rate as the number of agents increases, or does not vanish at all. For the ANR impossibility on the non-existence of voting rules that simultaneously satisfy anonymity, neutrality, and resolvability, we characterize the rate for the impossibility to vanish, to be either polynomially fast or exponentially fast. We also propose a novel easy-to-compute tie-breaking mechanism that optimally preserves anonymity and neutrality for an even number of alternatives in natural settings. Our results illustrate the smoothed possibility of social choice—even though the paradox and the impossibility theorem hold in the worst case, they may not be a big concern in practice. |
A Decentralized Parallel Algorithm for Training Generative Adversarial Nets | https://papers.nips.cc/paper_files/paper/2020/hash/7e0a0209b929d097bd3e8ef30567a5c1-Abstract.html | Mingrui Liu, Wei Zhang, Youssef Mroueh, Xiaodong Cui, Jarret Ross, Tianbao Yang, Payel Das | https://papers.nips.cc/paper_files/paper/2020/hash/7e0a0209b929d097bd3e8ef30567a5c1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7e0a0209b929d097bd3e8ef30567a5c1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10652-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7e0a0209b929d097bd3e8ef30567a5c1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7e0a0209b929d097bd3e8ef30567a5c1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7e0a0209b929d097bd3e8ef30567a5c1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7e0a0209b929d097bd3e8ef30567a5c1-Supplemental.pdf | Generative Adversarial Networks (GANs) are a powerful class of generative models in the deep learning community. Current practice on large-scale GAN training utilizes large models and distributed large-batch training strategies, and is implemented on deep learning frameworks (e.g., TensorFlow, PyTorch, etc.) designed in a centralized manner. In the centralized network topology, every worker needs to either directly communicate with the central node or indirectly communicate with all other workers in every iteration. However, when the network bandwidth is low or network latency is high, the performance would be significantly degraded. Despite recent progress on decentralized algorithms for training deep neural networks, it remains unclear whether it is possible to train GANs in a decentralized manner. The main difficulty lies at handling the nonconvex-nonconcave min-max optimization and the decentralized communication simultaneously. In this paper, we address this difficulty by designing the \textbf{first gradient-based decentralized parallel algorithm} which allows workers to have multiple rounds of communications in one iteration and to update the discriminator and generator simultaneously, and this design makes it amenable for the convergence analysis of the proposed decentralized algorithm. Theoretically, our proposed decentralized algorithm is able to solve a class of non-convex non-concave min-max problems with provable non-asymptotic convergence to first-order stationary point. Experimental results on GANs demonstrate the effectiveness of the proposed algorithm. |
Phase retrieval in high dimensions: Statistical and computational phase transitions | https://papers.nips.cc/paper_files/paper/2020/hash/7ec0dbeee45813422897e04ad8424a5e-Abstract.html | Antoine Maillard, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová | https://papers.nips.cc/paper_files/paper/2020/hash/7ec0dbeee45813422897e04ad8424a5e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7ec0dbeee45813422897e04ad8424a5e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10653-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7ec0dbeee45813422897e04ad8424a5e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7ec0dbeee45813422897e04ad8424a5e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7ec0dbeee45813422897e04ad8424a5e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7ec0dbeee45813422897e04ad8424a5e-Supplemental.pdf | We consider the phase retrieval problem of reconstructing a $n$-dimensional real or complex signal $\mathbf{X}^\star$ from $m$ (possibly noisy) observations $Y_\mu = | \sum_{i=1}^n \Phi_{\mu i} X^{\star}_i/\sqrt{n}|$, for a large class of correlated real and complex random sensing matrices $\mathbf{\Phi}$, in a high-dimensional setting where $m,n\to\infty$ while $\alpha = m/n=\Theta(1)$. First, we derive sharp asymptotics for the lowest possible estimation error achievable statistically and we unveil the existence of sharp phase transitions for the weak- and full-recovery thresholds as a function of the singular values of the matrix $\mathbf{\Phi}$. This is achieved by providing a rigorous proof of a result first obtained by the replica method from statistical mechanics. In particular, the information-theoretic transition to perfect recovery for full-rank matrices appears at $\alpha=1$ (real case) and $\alpha=2$ (complex case). Secondly, we analyze the performance of the best-known polynomial time algorithm for this problem --- approximate message-passing--- establishing the existence of statistical-to-algorithmic gap depending, again, on the spectral properties of $\mathbf{\Phi}$. Our work provides an extensive classification of the statistical and algorithmic thresholds in high-dimensional phase retrieval for a broad class of random matrices. |
Fair Performance Metric Elicitation | https://papers.nips.cc/paper_files/paper/2020/hash/7ec2442aa04c157590b2fa1a7d093a33-Abstract.html | Gaurush Hiranandani, Harikrishna Narasimhan, Sanmi Koyejo | https://papers.nips.cc/paper_files/paper/2020/hash/7ec2442aa04c157590b2fa1a7d093a33-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7ec2442aa04c157590b2fa1a7d093a33-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10654-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7ec2442aa04c157590b2fa1a7d093a33-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7ec2442aa04c157590b2fa1a7d093a33-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7ec2442aa04c157590b2fa1a7d093a33-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7ec2442aa04c157590b2fa1a7d093a33-Supplemental.pdf | What is a fair performance metric? We consider the choice of fairness metrics through the lens of metric elicitation -- a principled framework for selecting performance metrics that best reflect implicit preferences. The use of metric elicitation enables a practitioner to tune the performance and fairness metrics to the task, context, and population at hand. Specifically, we propose a novel strategy to elicit group-fair performance metrics for multiclass classification problems with multiple sensitive groups that also includes selecting the trade-off between predictive performance and fairness violation. The proposed elicitation strategy requires only relative preference feedback and is robust to both finite sample and feedback noise. |
Hybrid Variance-Reduced SGD Algorithms For Minimax Problems with Nonconvex-Linear Function | https://papers.nips.cc/paper_files/paper/2020/hash/7f141cf8e7136ce8701dc6636c2a6fe4-Abstract.html | Quoc Tran Dinh, Deyi Liu, Lam Nguyen | https://papers.nips.cc/paper_files/paper/2020/hash/7f141cf8e7136ce8701dc6636c2a6fe4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7f141cf8e7136ce8701dc6636c2a6fe4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10655-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7f141cf8e7136ce8701dc6636c2a6fe4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7f141cf8e7136ce8701dc6636c2a6fe4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7f141cf8e7136ce8701dc6636c2a6fe4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7f141cf8e7136ce8701dc6636c2a6fe4-Supplemental.pdf | We develop a novel and single-loop variance-reduced algorithm to solve a class of stochastic nonconvex-convex minimax problems involving a nonconvex-linear objective function, which has various applications in different fields such as machine learning and robust optimization. This problem class has several computational challenges due to its nonsmoothness, nonconvexity, nonlinearity, and non-separability of the objective functions. Our approach relies on a new combination of recent ideas, including smoothing and hybrid biased variance-reduced techniques. Our algorithm and its variants can achieve $\mathcal{O}(T^{-2/3})$-convergence rate and the best-known oracle complexity under standard assumptions, where T is the iteration counter. They have several computational advantages compared to existing methods, such as being simple to implement and requiring less parameter tuning. They can also work with either single-sample or mini-batch derivative estimators, and with constant or diminishing step-sizes. We demonstrate the benefits of our algorithms over existing methods through two numerical examples, including a nonsmooth and nonconvex-non-strongly concave minimax model. |
Belief-Dependent Macro-Action Discovery in POMDPs using the Value of Information | https://papers.nips.cc/paper_files/paper/2020/hash/7f2be1b45d278ac18804b79207a24c53-Abstract.html | Genevieve Flaspohler, Nicholas A. Roy, John W. Fisher III | https://papers.nips.cc/paper_files/paper/2020/hash/7f2be1b45d278ac18804b79207a24c53-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7f2be1b45d278ac18804b79207a24c53-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10656-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7f2be1b45d278ac18804b79207a24c53-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7f2be1b45d278ac18804b79207a24c53-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7f2be1b45d278ac18804b79207a24c53-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7f2be1b45d278ac18804b79207a24c53-Supplemental.pdf | This work introduces macro-action discovery using value-of-information (VoI) for robust and efficient planning in partially observable Markov decision processes (POMDPs). POMDPs are a powerful framework for planning under uncertainty. Previous approaches have used high-level macro-actions within POMDP policies to reduce planning complexity. However, macro-action design is often heuristic and rarely comes with performance guarantees. Here, we present a method for extracting belief-dependent, variable-length macro-actions directly from a low-level POMDP model. We construct macro-actions by chaining sequences of open-loop actions together when the task-specific value of information (VoI) --- the change in expected task performance caused by observations in the current planning iteration --- is low. Importantly, we provide performance guarantees on the resulting VoI macro-action policies in the form of bounded regret relative to the optimal policy. In simulated tracking experiments, we achieve higher reward than both closed-loop and hand-coded macro-action baselines, selectively using VoI macro-actions to reduce planning complexity while maintaining near-optimal task performance. |
Soft Contrastive Learning for Visual Localization | https://papers.nips.cc/paper_files/paper/2020/hash/7f2cba89a7116c7c6b0a769572d5fad9-Abstract.html | Janine Thoma, Danda Pani Paudel, Luc V. Gool | https://papers.nips.cc/paper_files/paper/2020/hash/7f2cba89a7116c7c6b0a769572d5fad9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7f2cba89a7116c7c6b0a769572d5fad9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10657-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7f2cba89a7116c7c6b0a769572d5fad9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7f2cba89a7116c7c6b0a769572d5fad9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7f2cba89a7116c7c6b0a769572d5fad9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7f2cba89a7116c7c6b0a769572d5fad9-Supplemental.zip | Localization by image retrieval is inexpensive and scalable due to simple mapping and matching techniques. Such localization, however, depends upon the quality of image features often obtained using Contrastive learning frameworks. Most contrastive learning strategies opt for features to distinguish different classes. In the context of localization, however, there is no natural definition of classes. Therefore, images are usually artificially separated into positive and negative classes, with respect to the chosen anchor images, based on some geometric proximity measure. In this paper, we show why such divisions are problematic for learning localization features. We argue that any artificial division based on some proximity measure is undesirable, due to the inherently ambiguous supervision for images near proximity threshold. To this end, we propose a novel technique that uses soft positive/negative assignments of images for contrastive learning, avoiding the aforementioned problem. Our soft assignment makes a gradual distinction between close and far images in both geometric and feature spaces. Experiments on four large-scale benchmark datasets demonstrate the superiority of the proposed soft contrastive learning over the state-of-the-art method for retrieval-based visual localization. |
Fine-Grained Dynamic Head for Object Detection | https://papers.nips.cc/paper_files/paper/2020/hash/7f6caf1f0ba788cd7953d817724c2b6e-Abstract.html | Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Hongbin Sun, Jian Sun, Nanning Zheng | https://papers.nips.cc/paper_files/paper/2020/hash/7f6caf1f0ba788cd7953d817724c2b6e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7f6caf1f0ba788cd7953d817724c2b6e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10658-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7f6caf1f0ba788cd7953d817724c2b6e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7f6caf1f0ba788cd7953d817724c2b6e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7f6caf1f0ba788cd7953d817724c2b6e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7f6caf1f0ba788cd7953d817724c2b6e-Supplemental.pdf | The Feature Pyramid Network (FPN) presents a remarkable approach to alleviate the scale variance in object representation by performing instance-level assignments. Nevertheless, this strategy ignores the distinct characteristics of different sub-regions in an instance. To this end, we propose a fine-grained dynamic head to conditionally select a pixel-level combination of FPN features from different scales for each instance, which further releases the ability of multi-scale feature representation. Moreover, we design a spatial gate with the new activation function to reduce computational complexity dramatically through spatially sparse convolutions. Extensive experiments demonstrate the effectiveness and efficiency of the proposed method on several state-of-the-art detection benchmarks. Code is available at https://github.com/StevenGrove/DynamicHead. |
LoCo: Local Contrastive Representation Learning | https://papers.nips.cc/paper_files/paper/2020/hash/7fa215c9efebb3811a7ef58409907899-Abstract.html | Yuwen Xiong, Mengye Ren, Raquel Urtasun | https://papers.nips.cc/paper_files/paper/2020/hash/7fa215c9efebb3811a7ef58409907899-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7fa215c9efebb3811a7ef58409907899-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10659-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7fa215c9efebb3811a7ef58409907899-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7fa215c9efebb3811a7ef58409907899-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7fa215c9efebb3811a7ef58409907899-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7fa215c9efebb3811a7ef58409907899-Supplemental.pdf | Deep neural nets typically perform end-to-end backpropagation to learn the weights, a procedure that creates synchronization constraints in the weight update step across layers and is not biologically plausible. Recent advances in unsupervised contrastive representation learning invite the question of whether a learning algorithm can also be made local, that is, the updates of lower layers do not directly depend on the computation of upper layers. While Greedy InfoMax separately learns each block with a local objective, we found that it consistently hurts readout accuracy in state-of-the-art unsupervised contrastive learning algorithms, possibly due to the greedy objective as well as gradient isolation. In this work, we discover that by overlapping local blocks stacking on top of each other, we effectively increase the decoder depth and allow upper blocks to implicitly send feedback to lower blocks. This simple design closes the performance gap between local learning and end-to-end contrastive learning algorithms for the first time. Aside from standard ImageNet experiments, we also show results on complex downstream tasks such as object detection and instance segmentation directly using readout features. |
Modeling and Optimization Trade-off in Meta-learning | https://papers.nips.cc/paper_files/paper/2020/hash/7fc63ff01769c4fa7d9279e97e307829-Abstract.html | Katelyn Gao, Ozan Sener | https://papers.nips.cc/paper_files/paper/2020/hash/7fc63ff01769c4fa7d9279e97e307829-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7fc63ff01769c4fa7d9279e97e307829-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10660-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7fc63ff01769c4fa7d9279e97e307829-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7fc63ff01769c4fa7d9279e97e307829-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7fc63ff01769c4fa7d9279e97e307829-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7fc63ff01769c4fa7d9279e97e307829-Supplemental.pdf | By searching for shared inductive biases across tasks, meta-learning promises to accelerate learning on novel tasks, but with the cost of solving a complex bilevel optimization problem. We introduce and rigorously define the trade-off between accurate modeling and optimization ease in meta-learning.
At one end, classic meta-learning algorithms account for the structure of meta-learning but solve a complex optimization problem, while at the other end domain randomized search (otherwise known as joint training) ignores the structure of meta-learning and solves a single level optimization problem.
Taking MAML as the representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as linear regression, for which we are able to provide explicit bounds on the errors associated with modeling and optimization. We also empirically study this trade-off for meta-reinforcement learning benchmarks. |
SnapBoost: A Heterogeneous Boosting Machine | https://papers.nips.cc/paper_files/paper/2020/hash/7fd3b80fb1884e2927df46a7139bb8bf-Abstract.html | Thomas Parnell, Andreea Anghel, Małgorzata Łazuka, Nikolas Ioannou, Sebastian Kurella, Peshal Agarwal, Nikolaos Papandreou, Haralampos Pozidis | https://papers.nips.cc/paper_files/paper/2020/hash/7fd3b80fb1884e2927df46a7139bb8bf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/7fd3b80fb1884e2927df46a7139bb8bf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10661-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/7fd3b80fb1884e2927df46a7139bb8bf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/7fd3b80fb1884e2927df46a7139bb8bf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/7fd3b80fb1884e2927df46a7139bb8bf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/7fd3b80fb1884e2927df46a7139bb8bf-Supplemental.pdf | Modern gradient boosting software frameworks, such as XGBoost and LightGBM, implement Newton descent in a functional space. At each boosting iteration, their goal is to find the base hypothesis, selected from some base hypothesis class, that is closest to the Newton descent direction in a Euclidean sense. Typically, the base hypothesis class is fixed to be all binary decision trees up to a given depth. In this work, we study a Heterogeneous Newton Boosting Machine (HNBM) in which the base hypothesis class may vary across boosting iterations. Specifically, at each boosting iteration, the base hypothesis class is chosen, from a fixed set of subclasses, by sampling from a probability distribution. We derive a global linear convergence rate for the HNBM under certain assumptions, and show that it agrees with existing rates for Newton's method when the Newton direction can be perfectly fitted by the base hypothesis at each boosting iteration. We then describe a particular realization of a HNBM, SnapBoost, that, at each boosting iteration, randomly selects between either a decision tree of variable depth or a linear regressor with random Fourier features. We describe how SnapBoost is implemented, with a focus on the training complexity. Finally, we present experimental results, using OpenML and Kaggle datasets, that show that SnapBoost is able to achieve better generalization loss than competing boosting frameworks, without taking significantly longer to tune. |
On Adaptive Distance Estimation | https://papers.nips.cc/paper_files/paper/2020/hash/803ef56843860e4a48fc4cdb3065e8ce-Abstract.html | Yeshwanth Cherapanamjeri, Jelani Nelson | https://papers.nips.cc/paper_files/paper/2020/hash/803ef56843860e4a48fc4cdb3065e8ce-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/803ef56843860e4a48fc4cdb3065e8ce-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10662-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/803ef56843860e4a48fc4cdb3065e8ce-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/803ef56843860e4a48fc4cdb3065e8ce-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/803ef56843860e4a48fc4cdb3065e8ce-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/803ef56843860e4a48fc4cdb3065e8ce-Supplemental.pdf | We provide a static data structure for distance estimation which supports {\it adaptive} queries. Concretely, given a dataset $X = \{x_i\}_{i = 1}^n$ of $n$ points in $\mathbb{R}^d$ and $0 < p \leq 2$, we construct a randomized data structure with low memory consumption and query time which, when later given any query point $q \in \mathbb{R}^d$, outputs a $(1+\varepsilon)$-approximation of $\|q - x_i\|_p$ with high probability for all $i\in[n]$. The main novelty is our data structure's correctness guarantee holds even when the sequence of queries can be chosen adaptively: an adversary is allowed to choose the $j$th query point $q_j$ in a way that depends on the answers reported by the data structure for $q_1,\ldots,q_{j-1}$. Previous randomized Monte Carlo methods do not provide error guarantees in the setting of adaptively chosen queries. Our memory consumption is $\tilde O(nd/\varepsilon^2)$, slightly more than the $O(nd)$ required to store $X$ in memory explicitly, but with the benefit that our time to answer queries is only $\tilde O(\varepsilon^{-2}(n + d))$, much faster than the naive $\Theta(nd)$ time obtained from a linear scan in the case of $n$ and $d$ very large. Here $\tilde O$ hides $\log(nd/\varepsilon)$ factors. We discuss applications to nearest neighbor search and nonparametric estimation.
Our method is simple and likely applicable to other domains: we describe a generic approach for transforming randomized Monte Carlo data structures which do not support adaptive queries into ones that do, and show that for the problem at hand it can be applied to standard nonadaptive solutions to $\ell_p$ norm estimation with negligible overhead in query time and a factor $d$ overhead in memory. |
Stage-wise Conservative Linear Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/804741413d7fe0e515b19a7ffc7b3027-Abstract.html | Ahmadreza Moradipari, Christos Thrampoulidis, Mahnoosh Alizadeh | https://papers.nips.cc/paper_files/paper/2020/hash/804741413d7fe0e515b19a7ffc7b3027-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/804741413d7fe0e515b19a7ffc7b3027-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10663-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/804741413d7fe0e515b19a7ffc7b3027-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/804741413d7fe0e515b19a7ffc7b3027-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/804741413d7fe0e515b19a7ffc7b3027-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/804741413d7fe0e515b19a7ffc7b3027-Supplemental.pdf | We study stage-wise conservative linear stochastic bandits: an instance of bandit optimization, which accounts for (unknown) safety constraints that appear in applications such as online advertising and medical trials. At each stage, the learner must choose actions that not only maximize cumulative reward across the entire time horizon, but further satisfy a linear baseline constraint that takes the form of a lower bound on the instantaneous reward. For this problem, we present two novel algorithms, stage-wise conservative linear Thompson Sampling (SCLTS) and stage-wise conservative linear UCB (SCLUCB), that respect the baseline constraints and enjoy probabilistic regret bounds of order $\mathcal{O}(\sqrt{T} \log^{3/2}T)$ and $\mathcal{O}(\sqrt{T} \log T)$, respectively. Notably, the proposed algorithms can be adjusted with only minor modifications to tackle different problem variations, such as, constraints with bandit-feedback, or an unknown sequence of baseline rewards. We discuss these and other improvements over the state-of-the art. For instance, compared to existing solutions, we show that SCLTS plays the (non-optimal) baseline action at most $\mathcal{O}(\log{T})$ times (compared to $\mathcal{O}(\sqrt{T})$). Finally, we make connections to another studied form of safety-constraints that takes the form of an upper bound on the instantaneous reward. While this incurs additional complexity to the learning process as the optimal action is not guaranteed to belong to the safe-set at each round, we show that SCLUCB can properly adjust in this setting via a simple modification. |
RELATE: Physically Plausible Multi-Object Scene Synthesis Using Structured Latent Spaces | https://papers.nips.cc/paper_files/paper/2020/hash/806beafe154032a5b818e97b4420ad98-Abstract.html | Sebastien Ehrhardt, Oliver Groth, Aron Monszpart, Martin Engelcke, Ingmar Posner, Niloy Mitra, Andrea Vedaldi | https://papers.nips.cc/paper_files/paper/2020/hash/806beafe154032a5b818e97b4420ad98-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/806beafe154032a5b818e97b4420ad98-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10664-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/806beafe154032a5b818e97b4420ad98-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/806beafe154032a5b818e97b4420ad98-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/806beafe154032a5b818e97b4420ad98-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/806beafe154032a5b818e97b4420ad98-Supplemental.zip | We present RELATE, a model that learns to generate physically plausible scenes and videos of multiple interacting objects.
Similar to other generative approaches, RELATE is trained end-to-end on raw, unlabeled data.
RELATE combines an object-centric GAN formulation with a model that explicitly accounts for correlations between individual objects.
This allows the model to generate realistic scenes and videos from a physically-interpretable parameterization.
Furthermore, we show that modeling the object correlation is necessary to learn to disentangle object positions and identity.
We find that RELATE is also amenable to physically realistic scene editing and that it significantly outperforms prior art in object-centric scene generation in both synthetic (CLEVR, ShapeStacks) and real-world data (cars).
In addition, in contrast to state-of-the-art methods in object-centric generative modeling, RELATE also extends naturally to dynamic scenes and generates videos of high visual fidelity. Source code, datasets and more results are available at http://geometry.cs.ucl.ac.uk/projects/2020/relate/. |
Metric-Free Individual Fairness in Online Learning | https://papers.nips.cc/paper_files/paper/2020/hash/80b618ebcac7aa97a6dac2ba65cb7e36-Abstract.html | Yahav Bechavod, Christopher Jung, Steven Z. Wu | https://papers.nips.cc/paper_files/paper/2020/hash/80b618ebcac7aa97a6dac2ba65cb7e36-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/80b618ebcac7aa97a6dac2ba65cb7e36-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10665-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/80b618ebcac7aa97a6dac2ba65cb7e36-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/80b618ebcac7aa97a6dac2ba65cb7e36-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/80b618ebcac7aa97a6dac2ba65cb7e36-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/80b618ebcac7aa97a6dac2ba65cb7e36-Supplemental.pdf | We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals are treated similarly. Unlike prior work on individual fairness, we do not assume the similarity measure among individuals is known, nor do we assume that such measure takes a certain parametric form. Instead, we leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure. In each round, the auditor examines the learner's decisions and attempts to identify a pair of individuals that are treated unfairly by the learner. We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sub-linear regret and number of fairness violations. Surprisingly, in the stochastic setting where the data are drawn independently from a distribution, we are also able to establish PAC-style fairness and accuracy generalization guarantees (Rothblum and Yona (2018)), despite only having access to a very restricted form of fairness feedback. Our fairness generalization bound qualitatively matches the uniform convergence bound of Rothblum and Yona (2018), while also providing a meaningful accuracy generalization guarantee. Our results resolve an open question by Gillen et al. (2018) by showing that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure. |
GreedyFool: Distortion-Aware Sparse Adversarial Attack | https://papers.nips.cc/paper_files/paper/2020/hash/8169e05e2a0debcb15458f2cc1eff0ea-Abstract.html | Xiaoyi Dong, Dongdong Chen, Jianmin Bao, Chuan Qin, Lu Yuan, Weiming Zhang, Nenghai Yu, Dong Chen | https://papers.nips.cc/paper_files/paper/2020/hash/8169e05e2a0debcb15458f2cc1eff0ea-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8169e05e2a0debcb15458f2cc1eff0ea-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10666-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8169e05e2a0debcb15458f2cc1eff0ea-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8169e05e2a0debcb15458f2cc1eff0ea-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8169e05e2a0debcb15458f2cc1eff0ea-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8169e05e2a0debcb15458f2cc1eff0ea-Supplemental.pdf | Modern deep neural networks(DNNs) are vulnerable to adversarial samples. Sparse adversarial samples are a special branch of adversarial samples that can fool the target model by only perturbing a few pixels. The existence of the sparse adversarial attack points out that DNNs are much more vulnerable than people believed, which is also a new aspect for analyzing DNNs. However, current sparse adversarial attack methods still have some shortcomings on both sparsity and invisibility. In this paper, we propose a novel two-stage distortion-aware greedy-based method dubbed as ''GreedyFool". Specifically, it first selects the most effective candidate positions to modify by considering both the gradient(for adversary) and the distortion map(for invisibility), then drops some less important points in the reduce stage.
Experiments demonstrate that compared with the state-of-the-art method, we only need to modify 3 times fewer pixels under the same sparse perturbation setting. For target attack, the success rate of our method is 9.96% higher than the state-of-the-art method under the same pixel budget. |
VAEM: a Deep Generative Model for Heterogeneous Mixed Type Data | https://papers.nips.cc/paper_files/paper/2020/hash/8171ac2c5544a5cb54ac0f38bf477af4-Abstract.html | Chao Ma, Sebastian Tschiatschek, Richard Turner, José Miguel Hernández-Lobato, Cheng Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/8171ac2c5544a5cb54ac0f38bf477af4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8171ac2c5544a5cb54ac0f38bf477af4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10667-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8171ac2c5544a5cb54ac0f38bf477af4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8171ac2c5544a5cb54ac0f38bf477af4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8171ac2c5544a5cb54ac0f38bf477af4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8171ac2c5544a5cb54ac0f38bf477af4-Supplemental.zip | Deep generative models often perform poorly in real-world applications due to the heterogeneity of natural data sets. Heterogeneity arises from data containing different types of features (categorical, ordinal, continuous, etc.) and features of the same type having different marginal distributions. We propose an extension of variational autoencoders (VAEs) called VAEM to handle such heterogeneous data.
VAEM is a deep generative model that is trained in a two-stage manner, such that the first stage provides a more uniform representation of the data to the second stage, thereby sidestepping the problems caused by heterogeneous data.
We provide extensions of VAEM to handle partially observed data, and demonstrate its performance in data generation, missing data prediction and sequential feature selection tasks. Our results show that VAEM broadens the range of real-world applications where deep generative models can be successfully deployed. |
RetroXpert: Decompose Retrosynthesis Prediction Like A Chemist | https://papers.nips.cc/paper_files/paper/2020/hash/819f46e52c25763a55cc642422644317-Abstract.html | Chaochao Yan, Qianggang Ding, Peilin Zhao, Shuangjia Zheng, JINYU YANG, Yang Yu, Junzhou Huang | https://papers.nips.cc/paper_files/paper/2020/hash/819f46e52c25763a55cc642422644317-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/819f46e52c25763a55cc642422644317-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10668-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/819f46e52c25763a55cc642422644317-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/819f46e52c25763a55cc642422644317-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/819f46e52c25763a55cc642422644317-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/819f46e52c25763a55cc642422644317-Supplemental.pdf | Retrosynthesis is the process of recursively decomposing target molecules into available building blocks. It plays an important role in solving problems in organic synthesis planning. To automate or assist in the retrosynthesis analysis, various retrosynthesis prediction algorithms have been proposed. However, most of them are cumbersome and lack interpretability about their predictions. In this paper, we devise a novel template-free algorithm for automatic retrosynthetic expansion inspired by how chemists approach retrosynthesis prediction. Our method disassembles retrosynthesis into two steps: i) identify the potential reaction center of the target molecule through a novel graph neural network and generate intermediate synthons, and ii) generate the reactants associated with synthons via a robust reactant generation model. While outperforming the state-of-the-art baselines by a significant margin, our model also provides chemically reasonable interpretation. |
Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining | https://papers.nips.cc/paper_files/paper/2020/hash/81e3225c6ad49623167a4309eb4b2e75-Abstract.html | Austin Tripp, Erik Daxberger, José Miguel Hernández-Lobato | https://papers.nips.cc/paper_files/paper/2020/hash/81e3225c6ad49623167a4309eb4b2e75-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/81e3225c6ad49623167a4309eb4b2e75-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10669-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/81e3225c6ad49623167a4309eb4b2e75-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/81e3225c6ad49623167a4309eb4b2e75-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/81e3225c6ad49623167a4309eb4b2e75-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/81e3225c6ad49623167a4309eb4b2e75-Supplemental.pdf | Many important problems in science and engineering, such as drug design, involve optimizing an expensive black-box objective function over a complex, high-dimensional, and structured input space. Although machine learning techniques have shown promise in solving such problems, existing approaches substantially lack sample efficiency. We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model. In contrast to previous approaches, we actively steer the generative model to maintain a latent manifold that is highly useful for efficiently optimizing the objective. We achieve this by periodically retraining the generative model on the data points queried along the optimization trajectory, as well as weighting those data points according to their objective function value. This weighted retraining can be easily implemented on top of existing methods, and is empirically shown to significantly improve their efficiency and performance on synthetic and real-world optimization problems. |
Improved Sample Complexity for Incremental Autonomous Exploration in MDPs | https://papers.nips.cc/paper_files/paper/2020/hash/81e793dc8317a3dbc3534ed3f242c418-Abstract.html | Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric | https://papers.nips.cc/paper_files/paper/2020/hash/81e793dc8317a3dbc3534ed3f242c418-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/81e793dc8317a3dbc3534ed3f242c418-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10670-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/81e793dc8317a3dbc3534ed3f242c418-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/81e793dc8317a3dbc3534ed3f242c418-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/81e793dc8317a3dbc3534ed3f242c418-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/81e793dc8317a3dbc3534ed3f242c418-Supplemental.pdf | We study the problem of exploring an unknown environment when no reward function is provided to the agent. Building on the incremental exploration setting introduced by Lim and Auer (2012), we define the objective of learning the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps (in expectation) from a reference state $s_0$. In this paper, we introduce a novel model-based approach that interleaves discovering new states from $s_0$ and improving the accuracy of a model estimate that is used to compute goal-conditioned policies. The resulting algorithm, DisCo, achieves a sample complexity scaling as $\widetilde{O}_{\epsilon}(L^5 S_{L+\epsilon} \Gamma_{L+\epsilon} A \epsilon^{-2})$, where $A$ is the number of actions, $S_{L+\epsilon}$ is the number of states that are incrementally reachable from $s_0$ in $L+\epsilon$ steps, and $\Gamma_{L+\epsilon}$ is the branching factor of the dynamics over such states. This improves over the algorithm proposed in (Lim and Auer, 2012) in both $\epsilon$ and $L$ at the cost of an extra $\Gamma_{L+\epsilon}$ factor, which is small in most environments of interest. Furthermore, DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem defined on the $L$-reachable states with minimum cost $c_{\min}$. Finally, we report preliminary empirical results confirming our theoretical findings. |
TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning | https://papers.nips.cc/paper_files/paper/2020/hash/81f7acabd411274fcf65ce2070ed568a-Abstract.html | Han Cai, Chuang Gan, Ligeng Zhu, Song Han | https://papers.nips.cc/paper_files/paper/2020/hash/81f7acabd411274fcf65ce2070ed568a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/81f7acabd411274fcf65ce2070ed568a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10671-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/81f7acabd411274fcf65ce2070ed568a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/81f7acabd411274fcf65ce2070ed568a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/81f7acabd411274fcf65ce2070ed568a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/81f7acabd411274fcf65ce2070ed568a-Supplemental.pdf | Efficient on-device learning requires a small memory footprint at training time to fit the tight memory constraint. Existing work solves this problem by reducing the number of trainable parameters. However, this doesn't directly translate to memory saving since the major bottleneck is the activations, not parameters.
In this work, we present Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning. TinyTL freezes the weights and learns only the memory-efficient bias modules, so there is no need to store the intermediate activations. To maintain the adaptation capacity, we introduce a new memory-efficient bias module, the lite residual module, to refine the feature extractor by learning small residual feature maps, adding only 3.8% memory overhead. Extensive experiments show that TinyTL significantly saves memory (up to 6.5x) with little accuracy loss compared to fine-tuning the full network. Compared to fine-tuning the last layer, TinyTL provides significant accuracy improvements (up to 33.8%) with little memory overhead. Furthermore, combined with feature extractor adaptation, TinyTL provides 7.5-12.9x memory saving without sacrificing accuracy compared to fine-tuning the full Inception-V3. Code is released at https://github.com/mit-han-lab/tinyML/tree/master/tinyTL. |
RD$^2$: Reward Decomposition with Representation Decomposition | https://papers.nips.cc/paper_files/paper/2020/hash/82039d16dce0aab3913b6a7ac73deff7-Abstract.html | Zichuan Lin, Derek Yang, Li Zhao, Tao Qin, Guangwen Yang, Tie-Yan Liu | https://papers.nips.cc/paper_files/paper/2020/hash/82039d16dce0aab3913b6a7ac73deff7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/82039d16dce0aab3913b6a7ac73deff7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10672-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/82039d16dce0aab3913b6a7ac73deff7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/82039d16dce0aab3913b6a7ac73deff7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/82039d16dce0aab3913b6a7ac73deff7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/82039d16dce0aab3913b6a7ac73deff7-Supplemental.pdf | Reward decomposition, which aims to decompose the full reward into multiple sub-rewards, has been proven beneficial for improving sample efficiency in reinforcement learning. Existing works on discovering reward decomposition are mostly policy dependent, which constrains diverse or disentangled behavior between different policies induced by different sub-rewards. In this work, we propose a set of novel reward decomposition principles by constraining uniqueness and compactness of different state features/representations relevant to different sub-rewards. Our principles encourage sub-rewards with minimal relevant features, while maintaining the uniqueness of each sub-reward. We derive a deep learning algorithm based on our principle, and term our method as RD$^2$, since we learn reward decomposition and representation decomposition jointly. RD$^2$ is evaluated on a toy case, where we have the true reward structure, and some Atari environments where reward structure exists but is unknown to the agent to demonstrate the effectiveness of RD$^2$ against existing reward decomposition methods. |
Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID | https://papers.nips.cc/paper_files/paper/2020/hash/821fa74b50ba3f7cba1e6c53e8fa6845-Abstract.html | Yixiao Ge, Feng Zhu, Dapeng Chen, Rui Zhao, hongsheng Li | https://papers.nips.cc/paper_files/paper/2020/hash/821fa74b50ba3f7cba1e6c53e8fa6845-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/821fa74b50ba3f7cba1e6c53e8fa6845-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10673-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/821fa74b50ba3f7cba1e6c53e8fa6845-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/821fa74b50ba3f7cba1e6c53e8fa6845-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/821fa74b50ba3f7cba1e6c53e8fa6845-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/821fa74b50ba3f7cba1e6c53e8fa6845-Supplemental.pdf | Domain adaptive object re-ID aims to transfer the learned knowledge from the labeled source domain to the unlabeled target domain to tackle the open-class re-identification problems. Although state-of-the-art pseudo-label-based methods have achieved great success, they did not make full use of all valuable information because of the domain gap and unsatisfying clustering performance. To solve these problems, we propose a novel self-paced contrastive learning framework with hybrid memory. The hybrid memory dynamically generates source-domain class-level, target-domain cluster-level and un-clustered instance-level supervisory signals for learning feature representations. Different from the conventional contrastive learning strategy, the proposed framework jointly distinguishes source-domain classes, and target-domain clusters and un-clustered instances. Most importantly, the proposed self-paced method gradually creates more reliable clusters to refine the hybrid memory and learning targets, and is shown to be the key to our outstanding performance. Our method outperforms state-of-the-art methods on multiple domain adaptation tasks of object re-ID and even boosts the performance on the source domain without any extra annotations. Our generalized version on unsupervised object re-ID surpasses state-of-the-art algorithms by a considerable 16.7% and 7.9% on Market-1501 and MSMT17 benchmarks. |
Fairness constraints can help exact inference in structured prediction | https://papers.nips.cc/paper_files/paper/2020/hash/8248a99e81e752cb9b41da3fc43fbe7f-Abstract.html | Kevin Bello, Jean Honorio | https://papers.nips.cc/paper_files/paper/2020/hash/8248a99e81e752cb9b41da3fc43fbe7f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8248a99e81e752cb9b41da3fc43fbe7f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10674-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8248a99e81e752cb9b41da3fc43fbe7f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8248a99e81e752cb9b41da3fc43fbe7f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8248a99e81e752cb9b41da3fc43fbe7f-Review.html | null | Many inference problems in structured prediction can be modeled as maximizing a score function on a space of labels, where graphs are a natural representation to decompose the total score into a sum of unary (nodes) and pairwise (edges) scores. Given a generative model with an undirected connected graph G and true vector of binary labels $\bar{y}$, it has been previously shown that when G has good expansion properties, such as complete graphs or d-regular expanders, one can exactly recover $\bar{y}$ (with high probability and in polynomial time) from a single noisy observation of each edge and node. We analyze the previously studied generative model by Globerson et al. (2015) under a notion of statistical parity.
That is, given a fair binary node labeling, we ask the question whether it is possible to recover the fair assignment, with high probability and in polynomial time, from single edge and node observations. We find that, in contrast to the known trade-offs between fairness and model performance, the addition of the fairness constraint improves the probability of exact recovery. We effectively explain this phenomenon and empirically show how graphs with poor expansion properties, such as grids, are now capable of achieving exact recovery. Finally, as a byproduct of our analysis, we provide a tighter minimum-eigenvalue bound than that which can be derived from Weyl's inequality. |
Instance-based Generalization in Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/82674fc29bc0d9895cee346548c2cb5c-Abstract.html | Martin Bertran, Natalia Martinez, Mariano Phielipp, Guillermo Sapiro | https://papers.nips.cc/paper_files/paper/2020/hash/82674fc29bc0d9895cee346548c2cb5c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/82674fc29bc0d9895cee346548c2cb5c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10675-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/82674fc29bc0d9895cee346548c2cb5c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/82674fc29bc0d9895cee346548c2cb5c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/82674fc29bc0d9895cee346548c2cb5c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/82674fc29bc0d9895cee346548c2cb5c-Supplemental.pdf | Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels. Understanding the generalization properties of RL is one of the challenges of modern machine learning. Towards this goal, we analyze policy learning in the context of Partially Observable Markov Decision Processes (POMDPs) and formalize the dynamics of training levels as instances. We prove that, independently of the exploration strategy, reusing instances introduces significant changes on the effective Markov dynamics the agent observes during training. Maximizing expected rewards impacts the learned belief state of the agent by inducing undesired instance-specific speed-running policies instead of generalizable ones, which are sub-optimal on the training set.
We provide generalization bounds to the value gap in train and test environments based on the number of training instances, and use insights based on these to improve performance on unseen levels. We propose training a shared belief representation over an ensemble of specialized policies, from which we compute a consensus policy that is used for data collection, disallowing instance-specific exploitation. We experimentally validate our theory, observations, and the proposed computational solution over the CoinRun benchmark. |
Smooth And Consistent Probabilistic Regression Trees | https://papers.nips.cc/paper_files/paper/2020/hash/8289889263db4a40463e3f358bb7c7a1-Abstract.html | Sami Alkhoury, Emilie Devijver, Marianne Clausel, Myriam Tami, Eric Gaussier, georges Oppenheim | https://papers.nips.cc/paper_files/paper/2020/hash/8289889263db4a40463e3f358bb7c7a1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8289889263db4a40463e3f358bb7c7a1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10676-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8289889263db4a40463e3f358bb7c7a1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8289889263db4a40463e3f358bb7c7a1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8289889263db4a40463e3f358bb7c7a1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8289889263db4a40463e3f358bb7c7a1-Supplemental.zip | We propose here a generalization of regression trees, referred to as Probabilistic Regression (PR) trees, that adapt to the smoothness of the prediction function relating input and output variables while preserving the interpretability of the prediction and being robust to noise. In PR trees, an observation is associated with all regions of a tree through a probability distribution that reflects how far the observation is from a region. We show that such trees are consistent, meaning that their error tends to 0 when the sample size tends to infinity, a property that has not been established for similar previous proposals such as Soft trees and Smooth Transition Regression trees. We further explain how PR trees can be used in different ensemble methods, namely Random Forests and Gradient Boosted Trees. Lastly, we assess their performance through extensive experiments that illustrate their benefits in terms of performance, interpretability and robustness to noise. |
Computing Valid p-value for Optimal Changepoint by Selective Inference using Dynamic Programming | https://papers.nips.cc/paper_files/paper/2020/hash/82b04cd5aa016d979fe048f3ddf0e8d3-Abstract.html | Vo Nguyen Le Duy, Hiroki Toda, Ryota Sugiyama, Ichiro Takeuchi | https://papers.nips.cc/paper_files/paper/2020/hash/82b04cd5aa016d979fe048f3ddf0e8d3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/82b04cd5aa016d979fe048f3ddf0e8d3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10677-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/82b04cd5aa016d979fe048f3ddf0e8d3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/82b04cd5aa016d979fe048f3ddf0e8d3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/82b04cd5aa016d979fe048f3ddf0e8d3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/82b04cd5aa016d979fe048f3ddf0e8d3-Supplemental.pdf | Although there is a vast body of literature related to methods for detecting change-points (CPs), less attention has been paid to assessing the statistical reliability of the detected CPs. In this paper, we introduce a novel method to perform statistical inference on the significance of the CPs, estimated by a Dynamic Programming (DP)-based optimal CP detection algorithm. Our main idea is to employ a Selective Inference (SI) approach---a new statistical inference framework that has recently received a lot of attention---to compute exact (non-asymptotic) valid p-values for the detected optimal CPs. Although it is well-known that SI has low statistical power because of over-conditioning, we address this drawback by introducing a novel method called parametric DP, which enables SI to be conducted with the minimum amount of conditioning, leading to high statistical power. We conduct experiments on both synthetic and real-world datasets, through which we offer evidence that our proposed method is more powerful than existing methods, has decent performance in terms of computational efficiency, and provides good results in many practical applications. |
Factorized Neural Processes for Neural Processes: K-Shot Prediction of Neural Responses | https://papers.nips.cc/paper_files/paper/2020/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html | Ronald (James) Cotton, Fabian Sinz, Andreas Tolias | https://papers.nips.cc/paper_files/paper/2020/hash/82e9e7a12665240d13d0b928be28f230-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/82e9e7a12665240d13d0b928be28f230-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10678-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/82e9e7a12665240d13d0b928be28f230-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/82e9e7a12665240d13d0b928be28f230-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/82e9e7a12665240d13d0b928be28f230-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/82e9e7a12665240d13d0b928be28f230-Supplemental.pdf | In recent years, artificial neural networks have achieved state-of-the-art performance for predicting the responses of neurons in the visual cortex to natural stimuli. However, they require a time-consuming parameter optimization process for accurately modeling the tuning function of newly observed neurons, which prohibits many applications, including real-time, closed-loop experiments. We overcome this limitation by formulating the problem as $K$-shot prediction to directly infer a neuron's tuning function from a small set of stimulus-response pairs using a Neural Process. This required us to develop a Factorized Neural Process, which embeds the observed set into a latent space partitioned into the receptive field location and the tuning function properties. We show on simulated responses that the predictions and reconstructed receptive fields from the Factorized Neural Process approach ground truth with an increasing number of trials.
Critically, the latent representation that summarizes the tuning function of a neuron is inferred in a quick, single forward pass through the network. Finally, we validate this approach on real neural data from visual cortex and find that the predictive accuracy is comparable to --- and for small $K$ even greater than --- optimization based approaches, while being substantially faster. We believe this novel deep learning systems identification framework will facilitate better real-time integration of artificial neural network modeling into neuroscience experiments. |
Winning the Lottery with Continuous Sparsification | https://papers.nips.cc/paper_files/paper/2020/hash/83004190b1793d7aa15f8d0d49a13eba-Abstract.html | Pedro Savarese, Hugo Silva, Michael Maire | https://papers.nips.cc/paper_files/paper/2020/hash/83004190b1793d7aa15f8d0d49a13eba-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/83004190b1793d7aa15f8d0d49a13eba-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10679-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/83004190b1793d7aa15f8d0d49a13eba-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/83004190b1793d7aa15f8d0d49a13eba-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/83004190b1793d7aa15f8d0d49a13eba-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/83004190b1793d7aa15f8d0d49a13eba-Supplemental.pdf | The search for efficient, sparse deep neural network models is most prominently performed by pruning: training a dense, overparameterized network and removing parameters, usually via following a manually-crafted heuristic. Additionally, the recent Lottery Ticket Hypothesis conjectures that, for a typically-sized neural network, it is possible to find small sub-networks which, when trained from scratch on a comparable budget, match the performance of the original dense counterpart. We revisit fundamental aspects of pruning algorithms, pointing out missing ingredients in previous approaches, and develop a method, Continuous Sparsification, which searches for sparse networks based on a novel approximation of an intractable $\ell_0$ regularization. We compare against dominant heuristic-based methods on pruning as well as ticket search -- finding sparse subnetworks that can be successfully re-trained from an early iterate. Empirical results show that we surpass the state-of-the-art for both objectives, across models and datasets, including VGG trained on CIFAR-10 and ResNet-50 trained on ImageNet. In addition to setting a new standard for pruning, Continuous Sparsification also offers fast parallel ticket search, opening doors to new applications of the Lottery Ticket Hypothesis. |
Adversarial robustness via robust low rank representations | https://papers.nips.cc/paper_files/paper/2020/hash/837a7924b8c0aa866e41b2721f66135c-Abstract.html | Pranjal Awasthi, Himanshu Jain, Ankit Singh Rawat, Aravindan Vijayaraghavan | https://papers.nips.cc/paper_files/paper/2020/hash/837a7924b8c0aa866e41b2721f66135c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/837a7924b8c0aa866e41b2721f66135c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10680-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/837a7924b8c0aa866e41b2721f66135c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/837a7924b8c0aa866e41b2721f66135c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/837a7924b8c0aa866e41b2721f66135c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/837a7924b8c0aa866e41b2721f66135c-Supplemental.pdf | A key technical ingredient for our certification guarantees is a fast algorithm with provable guarantees based on the multiplicative weights update method to provide upper bounds on the above matrix norm. Our algorithmic guarantees improve upon the state of the art for this problem, and may be of independent interest. |
Joints in Random Forests | https://papers.nips.cc/paper_files/paper/2020/hash/8396b14c5dff55d13eea57487bf8ed26-Abstract.html | Alvaro Correia, Robert Peharz, Cassio P. de Campos | https://papers.nips.cc/paper_files/paper/2020/hash/8396b14c5dff55d13eea57487bf8ed26-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8396b14c5dff55d13eea57487bf8ed26-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10681-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8396b14c5dff55d13eea57487bf8ed26-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8396b14c5dff55d13eea57487bf8ed26-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8396b14c5dff55d13eea57487bf8ed26-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8396b14c5dff55d13eea57487bf8ed26-Supplemental.pdf | Decision Trees (DTs) and Random Forests (RFs) are powerful discriminative learners and tools of central importance to the everyday machine learning practitioner and data scientist. Due to their discriminative nature, however, they lack principled methods to process inputs with missing features or to detect outliers, which requires pairing them with imputation techniques or a separate generative model. In this paper, we demonstrate that DTs and RFs can naturally be interpreted as generative models, by drawing a connection to Probabilistic Circuits, a prominent class of tractable probabilistic models. This reinterpretation equips them with a full joint distribution over the feature space and leads to Generative Decision Trees (GeDTs) and Generative Forests (GeFs), a family of novel hybrid generative-discriminative models. This family of models retains the overall characteristics of DTs and RFs while additionally being able to handle missing features by means of marginalisation. Under certain assumptions, frequently made for Bayes consistency results, we show that consistency in GeDTs and GeFs extends to any pattern of missing input features, if missing at random. Empirically, we show that our models often outperform common routines to treat missing data, such as K-nearest neighbour imputation, and moreover, that our models can naturally detect outliers by monitoring the marginal probability of input features. |
Compositional Generalization by Learning Analytical Expressions | https://papers.nips.cc/paper_files/paper/2020/hash/83adc9225e4deb67d7ce42d58fe5157c-Abstract.html | Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/83adc9225e4deb67d7ce42d58fe5157c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/83adc9225e4deb67d7ce42d58fe5157c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10682-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/83adc9225e4deb67d7ce42d58fe5157c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/83adc9225e4deb67d7ce42d58fe5157c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/83adc9225e4deb67d7ce42d58fe5157c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/83adc9225e4deb67d7ce42d58fe5157c-Supplemental.pdf | Compositional generalization is a basic and essential intellective capability of human beings, which allows us to recombine known parts readily. However, existing neural network based models have been proven to be extremely deficient in such a capability. Inspired by work in cognition which argues compositionality can be captured by variable slots with symbolic functions, we present a refreshing view that connects a memory-augmented neural model with analytical expressions, to achieve compositional generalization. Our model consists of two cooperative neural modules, Composer and Solver, fitting well with the cognitive argument while being able to be trained in an end-to-end manner via a hierarchical reinforcement learning algorithm. Experiments on the well-known benchmark SCAN demonstrate that our model achieves strong compositional generalization, solving all challenges addressed by previous works with 100% accuracy. |
JAX MD: A Framework for Differentiable Physics | https://papers.nips.cc/paper_files/paper/2020/hash/83d3d4b6c9579515e1679aca8cbc8033-Abstract.html | Samuel Schoenholz, Ekin Dogus Cubuk | https://papers.nips.cc/paper_files/paper/2020/hash/83d3d4b6c9579515e1679aca8cbc8033-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/83d3d4b6c9579515e1679aca8cbc8033-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10683-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/83d3d4b6c9579515e1679aca8cbc8033-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/83d3d4b6c9579515e1679aca8cbc8033-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/83d3d4b6c9579515e1679aca8cbc8033-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/83d3d4b6c9579515e1679aca8cbc8033-Supplemental.zip | We introduce JAX MD, a software package for performing differentiable physics simulations with a focus on molecular dynamics. JAX MD includes a number of statistical physics simulation environments as well as interaction potentials and neural networks that can be integrated into these environments without writing any additional code. Since the simulations themselves are differentiable functions, entire trajectories can be differentiated to perform meta-optimization. These features are built on primitive operations, such as spatial partitioning, that allow simulations to scale to hundreds-of-thousands of particles on a single GPU. These primitives are flexible enough that they can be used to scale up workloads outside of molecular dynamics. We present several examples that highlight the features of JAX MD including: integration of graph neural networks into traditional simulations, meta-optimization through minimization of particle packings, and a multi-agent flocking simulation. JAX MD is available at www.github.com/google/jax-md. |
An implicit function learning approach for parametric modal regression | https://papers.nips.cc/paper_files/paper/2020/hash/83eaa6722798a773dd55e8fc7443aa09-Abstract.html | Yangchen Pan, Ehsan Imani, Amir-massoud Farahmand, Martha White | https://papers.nips.cc/paper_files/paper/2020/hash/83eaa6722798a773dd55e8fc7443aa09-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/83eaa6722798a773dd55e8fc7443aa09-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10684-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/83eaa6722798a773dd55e8fc7443aa09-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/83eaa6722798a773dd55e8fc7443aa09-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/83eaa6722798a773dd55e8fc7443aa09-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/83eaa6722798a773dd55e8fc7443aa09-Supplemental.pdf | For multi-valued functions---such as when the conditional distribution on targets given the inputs is multi-modal---standard regression approaches are not always desirable because they provide the conditional mean. Modal regression algorithms address this issue by instead finding the conditional mode(s). Most, however, are nonparametric approaches and so can be difficult to scale. Further, parametric approximators, like neural networks, facilitate learning complex relationships between inputs and targets. In this work, we propose a parametric modal regression algorithm. We use the implicit function theorem to develop an objective, for learning a joint function over inputs and targets. We empirically demonstrate on several synthetic problems that our method (i) can learn multi-valued functions and produce the conditional modes, (ii) scales well to high-dimensional inputs, and (iii) can even be more effective for certain uni-modal problems, particularly for high-frequency functions. We demonstrate that our method is competitive in a real-world modal regression problem and two regular regression datasets. |
SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images | https://papers.nips.cc/paper_files/paper/2020/hash/83fa5a432ae55c253d0e60dbfa716723-Abstract.html | Chen-Hsuan Lin, Chaoyang Wang, Simon Lucey | https://papers.nips.cc/paper_files/paper/2020/hash/83fa5a432ae55c253d0e60dbfa716723-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/83fa5a432ae55c253d0e60dbfa716723-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10685-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/83fa5a432ae55c253d0e60dbfa716723-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/83fa5a432ae55c253d0e60dbfa716723-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/83fa5a432ae55c253d0e60dbfa716723-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/83fa5a432ae55c253d0e60dbfa716723-Supplemental.zip | Dense 3D object reconstruction from a single image has recently witnessed remarkable advances, but supervising neural networks with ground-truth 3D shapes is impractical due to the laborious process of creating paired image-shape datasets. Recent efforts have turned to learning 3D reconstruction without 3D supervision from RGB images with annotated 2D silhouettes, dramatically reducing the cost and effort of annotation. These techniques, however, remain impractical as they still require multi-view annotations of the same object instance during training. As a result, most experimental efforts to date have been limited to synthetic datasets.
In this paper, we address this issue and propose SDF-SRN, an approach that requires only a single view of objects at training time, offering greater utility for real-world scenarios. SDF-SRN learns implicit 3D shape representations to handle arbitrary shape topologies that may exist in the datasets. To this end, we derive a novel differentiable rendering formulation for learning signed distance functions (SDF) from 2D silhouettes. Our method outperforms the state of the art under challenging single-view supervision settings on both synthetic and real-world datasets. |
Coresets for Robust Training of Deep Neural Networks against Noisy Labels | https://papers.nips.cc/paper_files/paper/2020/hash/8493eeaccb772c0878f99d60a0bd2bb3-Abstract.html | Baharan Mirzasoleiman, Kaidi Cao, Jure Leskovec | https://papers.nips.cc/paper_files/paper/2020/hash/8493eeaccb772c0878f99d60a0bd2bb3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8493eeaccb772c0878f99d60a0bd2bb3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10686-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8493eeaccb772c0878f99d60a0bd2bb3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8493eeaccb772c0878f99d60a0bd2bb3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8493eeaccb772c0878f99d60a0bd2bb3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8493eeaccb772c0878f99d60a0bd2bb3-Supplemental.pdf | Modern neural networks have the capacity to overfit noisy labels frequently found in real-world datasets. Although great progress has been made, existing techniques are very limited in providing theoretical guarantees for the performance of the neural networks trained with noisy labels. To tackle this challenge, we propose a novel approach with strong theoretical guarantees for robust training of neural networks trained with noisy labels. The key idea behind our method is to select subsets of clean data points that provide an approximately low-rank Jacobian matrix. We then prove that gradient descent applied to the subsets cannot overfit the noisy labels, without regularization or early stopping. Our extensive experiments corroborate our theory and demonstrate that deep networks trained on our subsets achieve a significantly superior performance, e.g., 7% increase in accuracy on mini Webvision with 50% noisy labels, compared to state-of-the art. |
Adapting to Misspecification in Contextual Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/84c230a5b1bc3495046ef916957c7238-Abstract.html | Dylan J. Foster, Claudio Gentile, Mehryar Mohri, Julian Zimmert | https://papers.nips.cc/paper_files/paper/2020/hash/84c230a5b1bc3495046ef916957c7238-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/84c230a5b1bc3495046ef916957c7238-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10687-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/84c230a5b1bc3495046ef916957c7238-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/84c230a5b1bc3495046ef916957c7238-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/84c230a5b1bc3495046ef916957c7238-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/84c230a5b1bc3495046ef916957c7238-Supplemental.pdf | A major research direction in contextual bandits is to develop algorithms that are computationally efficient, yet support flexible, general-purpose function approximation. Algorithms based on modeling rewards have shown strong empirical performance, yet typically require a well-specified model, and can fail when this assumption does not hold. Can we design algorithms that are efficient and flexible, yet degrade gracefully in the face of model misspecification? We introduce a new family of oracle-efficient algorithms for $\varepsilon$-misspecified contextual bandits that adapt to unknown model misspecification---both for finite and infinite action settings. Given access to an \emph{online oracle} for square loss regression, our algorithm attains optimal regret and---in particular---optimal dependence on the misspecification level, with \emph{no prior knowledge}. Specializing to linear contextual bandits with infinite actions in $d$ dimensions, we obtain the first algorithm that achieves the optimal $\widetilde{O}(d\sqrt{T} + \varepsilon\sqrt{d}T)$ regret bound for unknown $\varepsilon$. On a conceptual level, our results are enabled by a new optimization-based perspective on the regression oracle reduction framework of Foster and Rakhlin (2020), which we believe will be useful more broadly. |
Convergence of Meta-Learning with Task-Specific Adaptation over Partial Parameters | https://papers.nips.cc/paper_files/paper/2020/hash/84c578f202616448a2f80e6f56d5f16d-Abstract.html | Kaiyi Ji, Jason D. Lee, Yingbin Liang, H. Vincent Poor | https://papers.nips.cc/paper_files/paper/2020/hash/84c578f202616448a2f80e6f56d5f16d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/84c578f202616448a2f80e6f56d5f16d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10688-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/84c578f202616448a2f80e6f56d5f16d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/84c578f202616448a2f80e6f56d5f16d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/84c578f202616448a2f80e6f56d5f16d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/84c578f202616448a2f80e6f56d5f16d-Supplemental.pdf | Although model-agnostic meta-learning (MAML) is a very successful algorithm in meta-learning practice, it can have high computational cost because it updates all model parameters over both the inner loop of task-specific adaptation and the outer-loop of meta initialization training. A more efficient algorithm ANIL (which refers to almost no inner loop) was proposed recently by Raghu et al. 2019, which adapts only a small subset of parameters in the inner loop and thus has substantially less computational cost than MAML as demonstrated by extensive experiments. However, the theoretical convergence of ANIL has not been studied yet. In this paper, we characterize the convergence rate and the computational complexity for ANIL under two representative inner-loop loss geometries, i.e., strong convexity and nonconvexity. Our results show that such a geometric property can significantly affect the overall convergence performance of ANIL. For example, ANIL achieves a faster convergence rate for a strongly-convex inner-loop loss as the number $N$ of inner-loop gradient descent steps increases, but a slower convergence rate for a nonconvex inner-loop loss as $N$ increases. Moreover, our complexity analysis provides a theoretical quantification of the improved efficiency of ANIL over MAML. The experiments on standard few-shot meta-learning benchmarks validate our theoretical findings. |
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures | https://papers.nips.cc/paper_files/paper/2020/hash/84ddfb34126fc3a48ee38d7044e87276-Abstract.html | Jeong Un Ryu, JaeWoong Shin, Hae Beom Lee, Sung Ju Hwang | https://papers.nips.cc/paper_files/paper/2020/hash/84ddfb34126fc3a48ee38d7044e87276-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/84ddfb34126fc3a48ee38d7044e87276-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10689-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/84ddfb34126fc3a48ee38d7044e87276-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/84ddfb34126fc3a48ee38d7044e87276-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/84ddfb34126fc3a48ee38d7044e87276-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/84ddfb34126fc3a48ee38d7044e87276-Supplemental.pdf | Regularization and transfer learning are two popular techniques to enhance model generalization on unseen data, which is a fundamental problem of machine learning. Regularization techniques are versatile, as they are task- and architecture-agnostic, but they do not exploit a large amount of data available. Transfer learning methods learn to transfer knowledge from one domain to another, but may not generalize across tasks and architectures, and may introduce new training cost for adapting to the target task. To bridge the gap between the two, we propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data. MetaPerturb is implemented as a set-based lightweight network that is agnostic to the size and the order of the input, which is shared across the layers. Then, we propose a meta-learning framework, to jointly train the perturbation function over heterogeneous tasks in parallel. As MetaPerturb is a set-function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures. We validate the efficacy and generality of MetaPerturb trained on a specific source domain and architecture, by applying it to the training of diverse neural architectures on heterogeneous target datasets against various regularizers and fine-tuning. The results show that the networks trained with MetaPerturb significantly outperform the baselines on most of the tasks and architectures, with a negligible increase in the parameter size and no hyperparameters to tune. |
Learning to solve TV regularised problems with unrolled algorithms | https://papers.nips.cc/paper_files/paper/2020/hash/84fec9a8e45846340fdf5c7c9f7ed66c-Abstract.html | Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau | https://papers.nips.cc/paper_files/paper/2020/hash/84fec9a8e45846340fdf5c7c9f7ed66c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/84fec9a8e45846340fdf5c7c9f7ed66c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10690-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/84fec9a8e45846340fdf5c7c9f7ed66c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/84fec9a8e45846340fdf5c7c9f7ed66c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/84fec9a8e45846340fdf5c7c9f7ed66c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/84fec9a8e45846340fdf5c7c9f7ed66c-Supplemental.pdf | Total Variation (TV) is a popular regularization strategy that promotes piece-wise constant signals by constraining the ℓ1-norm of the first order derivative of the estimated signal. The resulting optimization problem is usually solved using iterative algorithms such as proximal gradient descent, primal-dual algorithms or ADMM. However, such methods can require a very large number of iterations to converge to a suitable solution. In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems. While this could be done using the synthesis formulation, we demonstrate that this leads to slower performances. The main difficulty in applying such methods in the analysis formulation lies in proposing a way to compute the derivatives through the proximal operator. As our main contribution, we develop and characterize two approaches to do so, describe their benefits and limitations, and discuss the regime where they can actually improve over iterative procedures. We validate those findings with experiments on synthetic and real data. |
Object-Centric Learning with Slot Attention | https://papers.nips.cc/paper_files/paper/2020/hash/8511df98c02ab60aea1b2356c013bc0f-Abstract.html | Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, Thomas Kipf | https://papers.nips.cc/paper_files/paper/2020/hash/8511df98c02ab60aea1b2356c013bc0f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8511df98c02ab60aea1b2356c013bc0f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10691-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8511df98c02ab60aea1b2356c013bc0f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8511df98c02ab60aea1b2356c013bc0f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8511df98c02ab60aea1b2356c013bc0f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8511df98c02ab60aea1b2356c013bc0f-Supplemental.pdf | Learning object-centric representations of complex scenes is a promising step towards enabling efficient abstract reasoning from low-level perceptual features. Yet, most deep learning approaches learn distributed representations that do not capture the compositional properties of natural scenes. In this paper, we present the Slot Attention module, an architectural component that interfaces with perceptual representations such as the output of a convolutional neural network and produces a set of task-dependent abstract representations which we call slots. These slots are exchangeable and can bind to any object in the input by specializing through a competitive procedure over multiple rounds of attention. We empirically demonstrate that Slot Attention can extract object-centric representations that enable generalization to unseen compositions when trained on unsupervised object discovery and supervised property prediction tasks. |
Improving robustness against common corruptions by covariate shift adaptation | https://papers.nips.cc/paper_files/paper/2020/hash/85690f81aadc1749175c187784afc9ee-Abstract.html | Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge | https://papers.nips.cc/paper_files/paper/2020/hash/85690f81aadc1749175c187784afc9ee-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/85690f81aadc1749175c187784afc9ee-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10692-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/85690f81aadc1749175c187784afc9ee-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/85690f81aadc1749175c187784afc9ee-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/85690f81aadc1749175c187784afc9ee-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/85690f81aadc1749175c187784afc9ee-Supplemental.pdf | Today’s state-of-the-art machine vision models are vulnerable to image corruptions like blurring or compression artefacts, limiting their performance in many real-world applications. We here argue that popular benchmarks to measure model robustness against common corruptions (like ImageNet-C) underestimate model robustness in many (but not all) application scenarios. The key insight is that in many scenarios, multiple unlabeled examples of the corruptions are available and can be used for unsupervised online adaptation. Replacing the activation statistics estimated by batch normalization on the training set with the statistics of the corrupted images consistently improves the robustness across 25 different popular computer vision models. Using the corrected statistics, ResNet-50 reaches 62.2% mCE on ImageNet-C compared to 76.7% without adaptation. With the more robust DeepAugment+AugMix model, we improve the state of the art achieved by a ResNet50 model up to date from 53.6% mCE to 45.4% mCE. Even adapting to a single sample improves robustness for the ResNet-50 and AugMix models, and 32 samples are sufficient to improve the current state of the art for a ResNet-50 architecture. We argue that results with adapted statistics should be included whenever reporting scores in corruption benchmarks and other out-of-distribution generalization settings. |
Deep Smoothing of the Implied Volatility Surface | https://papers.nips.cc/paper_files/paper/2020/hash/858e47701162578e5e627cd93ab0938a-Abstract.html | Damien Ackerer, Natasa Tagasovska, Thibault Vatter | https://papers.nips.cc/paper_files/paper/2020/hash/858e47701162578e5e627cd93ab0938a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/858e47701162578e5e627cd93ab0938a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10693-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/858e47701162578e5e627cd93ab0938a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/858e47701162578e5e627cd93ab0938a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/858e47701162578e5e627cd93ab0938a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/858e47701162578e5e627cd93ab0938a-Supplemental.zip | We present a neural network (NN) approach to fit and predict implied volatility surfaces (IVSs).
Unlike in standard NN applications, financial industry practitioners use such models equally to replicate market prices and to value other financial instruments.
In other words, low training losses are as important as generalization capabilities.
Importantly, IVS models need to generate realistic arbitrage-free option prices, meaning that no portfolio can lead to risk-free profits.
We propose an approach guaranteeing the absence of arbitrage opportunities by penalizing the loss using soft constraints.
Furthermore, our method can be combined with standard IVS models in quantitative finance, thus providing a NN-based correction when such models fail at replicating observed market prices.
This lets practitioners use our approach as a plug-in on top of classical methods.
Empirical results show that this approach is particularly useful when only sparse or erroneous data are available.
We also quantify the uncertainty of the model predictions in regions with few or no observations.
We further explore how deeper NNs improve over shallower ones, as well as other properties of the network architecture.
We benchmark our method against standard IVS models.
By evaluating our method on both training and testing sets, we highlight its capacity both to reproduce observed prices and to predict new ones. |
Probabilistic Inference with Algebraic Constraints: Theoretical Limits and Practical Approximations | https://papers.nips.cc/paper_files/paper/2020/hash/85934679f30131d812a8c7475a7d0f74-Abstract.html | Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van den Broeck | https://papers.nips.cc/paper_files/paper/2020/hash/85934679f30131d812a8c7475a7d0f74-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/85934679f30131d812a8c7475a7d0f74-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10694-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/85934679f30131d812a8c7475a7d0f74-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/85934679f30131d812a8c7475a7d0f74-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/85934679f30131d812a8c7475a7d0f74-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/85934679f30131d812a8c7475a7d0f74-Supplemental.pdf | Weighted model integration (WMI) is a framework to perform advanced probabilistic inference on hybrid domains, i.e., on distributions over mixed continuous-discrete random variables and in the presence of complex logical and arithmetic constraints. In this work, we advance the WMI framework on both the theoretical and algorithmic side. First, we exactly trace the boundaries of tractability for WMI inference by proving that to be amenable to exact and efficient inference a WMI problem has to possess a tree-shaped structure with logarithmic diameter. While this result deepens our theoretical understanding of WMI, it hinders the practical applicability of exact WMI solvers to real-world problems. To overcome this, we propose the first approximate WMI solver that does not resort to sampling, but performs exact inference on an approximate model. Our solution iteratively performs message passing in a relaxed problem structure to recover certain lost dependencies and, as our experiments suggest, is competitive with other SOTA WMI solvers. |
Provable Online CP/PARAFAC Decomposition of a Structured Tensor via Dictionary Learning | https://papers.nips.cc/paper_files/paper/2020/hash/85b42dd8aae56e01379be5736db5b496-Abstract.html | Sirisha Rambhatla, Xingguo Li, Jarvis Haupt | https://papers.nips.cc/paper_files/paper/2020/hash/85b42dd8aae56e01379be5736db5b496-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/85b42dd8aae56e01379be5736db5b496-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10695-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/85b42dd8aae56e01379be5736db5b496-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/85b42dd8aae56e01379be5736db5b496-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/85b42dd8aae56e01379be5736db5b496-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/85b42dd8aae56e01379be5736db5b496-Supplemental.zip | We consider the problem of factorizing a structured 3-way tensor into its constituent Canonical Polyadic (CP) factors. This decomposition, which can be viewed as a generalization of singular value decomposition (SVD) for tensors, reveals how the tensor dimensions (features) interact with each other. However, since the factors are a priori unknown, the corresponding optimization problems are inherently non-convex. The existing guaranteed algorithms which handle this non-convexity incur an irreducible error (bias), and only apply to cases where all factors have the same structure. To this end, we develop a provable algorithm for online structured tensor factorization, wherein one of the factors obeys some incoherence conditions, and the others are sparse. Specifically we show that, under some relatively mild conditions on initialization, rank, and sparsity, our algorithm recovers the factors exactly (up to scaling and permutation) at a linear rate. Complementary to our theoretical results, our synthetic and real-world data evaluations showcase superior performance compared to related techniques. |
Look-ahead Meta Learning for Continual Learning | https://papers.nips.cc/paper_files/paper/2020/hash/85b9a5ac91cd629bd3afe396ec07270a-Abstract.html | Gunshi Gupta, Karmesh Yadav, Liam Paull | https://papers.nips.cc/paper_files/paper/2020/hash/85b9a5ac91cd629bd3afe396ec07270a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/85b9a5ac91cd629bd3afe396ec07270a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10696-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/85b9a5ac91cd629bd3afe396ec07270a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/85b9a5ac91cd629bd3afe396ec07270a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/85b9a5ac91cd629bd3afe396ec07270a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/85b9a5ac91cd629bd3afe396ec07270a-Supplemental.zip | The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks.
While meta-learning shows great potential for reducing interference between old and new tasks, the current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory. By incorporating the modulation of per-parameter learning rates in our meta-learning update, our approach also allows us to draw connections to and exploit prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods.
La-MAML achieves performance superior to other replay-based, prior-based and meta-learning based approaches for continual learning on real-world visual classification benchmarks. |
A polynomial-time algorithm for learning nonparametric causal graphs | https://papers.nips.cc/paper_files/paper/2020/hash/85c9f9efab89cee90a95cb98f15feacd-Abstract.html | Ming Gao, Yi Ding, Bryon Aragam | https://papers.nips.cc/paper_files/paper/2020/hash/85c9f9efab89cee90a95cb98f15feacd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/85c9f9efab89cee90a95cb98f15feacd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10697-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/85c9f9efab89cee90a95cb98f15feacd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/85c9f9efab89cee90a95cb98f15feacd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/85c9f9efab89cee90a95cb98f15feacd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/85c9f9efab89cee90a95cb98f15feacd-Supplemental.pdf | We establish finite-sample guarantees for a polynomial-time algorithm for learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from data. The analysis is model-free and does not assume linearity, additivity, independent noise, or faithfulness. Instead, we impose a condition on the residual variances that is closely related to previous work on linear models with equal variances. Compared to an optimal algorithm with oracle knowledge of the variable ordering, the additional cost of the algorithm is linear in the dimension $d$ and the number of samples $n$. Finally, we compare the proposed algorithm to existing approaches in a simulation study. |
Sparse Learning with CART | https://papers.nips.cc/paper_files/paper/2020/hash/85fc37b18c57097425b52fc7afbb6969-Abstract.html | Jason Klusowski | https://papers.nips.cc/paper_files/paper/2020/hash/85fc37b18c57097425b52fc7afbb6969-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/85fc37b18c57097425b52fc7afbb6969-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10698-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/85fc37b18c57097425b52fc7afbb6969-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/85fc37b18c57097425b52fc7afbb6969-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/85fc37b18c57097425b52fc7afbb6969-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/85fc37b18c57097425b52fc7afbb6969-Supplemental.pdf | Decision trees with binary splits are popularly constructed using Classification and Regression Trees (CART) methodology. For regression models, this approach recursively divides the data into two near-homogenous daughter nodes according to a split point that maximizes the reduction in sum of squares error (the impurity) along a particular variable. This paper aims to study the statistical properties of regression trees constructed with CART. In doing so, we find that the training error is governed by the Pearson correlation between the optimal decision stump and response data in each node, which we bound by constructing a prior distribution on the split points and solving a nonlinear optimization problem. We leverage this connection between the training error and Pearson correlation to show that CART with cost-complexity pruning achieves an optimal complexity/goodness-of-fit tradeoff when the depth scales with the logarithm of the sample size. Data dependent quantities, which adapt to the dimensionality and latent structure of the regression model, are seen to govern the rates of convergence of the prediction error. |
Proximal Mapping for Deep Regularization | https://papers.nips.cc/paper_files/paper/2020/hash/8606bdb6f1fa707fc6ca309943eea443-Abstract.html | Mao Li, Yingyi Ma, Xinhua Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/8606bdb6f1fa707fc6ca309943eea443-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8606bdb6f1fa707fc6ca309943eea443-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10699-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8606bdb6f1fa707fc6ca309943eea443-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8606bdb6f1fa707fc6ca309943eea443-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8606bdb6f1fa707fc6ca309943eea443-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8606bdb6f1fa707fc6ca309943eea443-Supplemental.pdf | Underpinning the success of deep learning are effective regularizations that allow a variety of priors in data to be modeled, for example robustness to adversarial perturbations and correlations between multiple modalities. However, most regularizers are specified in terms of hidden layer outputs, which are not themselves optimization variables. In contrast to prevalent methods that optimize them indirectly through model weights, we propose inserting proximal mapping as a new layer to the deep network, which directly and explicitly produces well regularized hidden layer outputs. The resulting technique is shown to be well connected to kernel warping and dropout, and novel algorithms were developed for robust temporal learning and multiview modeling, both outperforming state-of-the-art methods. |
Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models | https://papers.nips.cc/paper_files/paper/2020/hash/860b37e28ec7ba614f00f9246949561d-Abstract.html | Andrew Jesson, Sören Mindermann, Uri Shalit, Yarin Gal | https://papers.nips.cc/paper_files/paper/2020/hash/860b37e28ec7ba614f00f9246949561d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/860b37e28ec7ba614f00f9246949561d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10700-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/860b37e28ec7ba614f00f9246949561d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/860b37e28ec7ba614f00f9246949561d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/860b37e28ec7ba614f00f9246949561d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/860b37e28ec7ba614f00f9246949561d-Supplemental.zip | Recommending the best course of action for an individual is a major application of individual-level causal effect estimation. This application is often needed in safety-critical domains such as healthcare, where estimating and communicating uncertainty to decision-makers is crucial. We introduce a practical approach for integrating uncertainty estimation into a class of state-of-the-art neural network methods used for individual-level causal estimates. We show that our methods enable us to deal gracefully with situations of "no-overlap", common in high-dimensional data, where standard applications of causal effect approaches fail. Further, our methods allow us to handle covariate shift, where the train and test distributions differ, common when systems are deployed in practice. We show that when such a covariate shift occurs, correctly modeling uncertainty can keep us from giving overconfident and potentially harmful recommendations. We demonstrate our methodology with a range of state-of-the-art models. Under both covariate shift and lack of overlap, our uncertainty-equipped methods can alert decision makers when predictions are not to be trusted while outperforming standard methods that use the propensity score to identify lack of overlap. |
Hierarchical Granularity Transfer Learning | https://papers.nips.cc/paper_files/paper/2020/hash/861637a425ef06e6d539aaaff113d1d5-Abstract.html | Shaobo Min, Hongtao Xie, Hantao Yao, Xuran Deng, Zheng-Jun Zha, Yongdong Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/861637a425ef06e6d539aaaff113d1d5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/861637a425ef06e6d539aaaff113d1d5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10701-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/861637a425ef06e6d539aaaff113d1d5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/861637a425ef06e6d539aaaff113d1d5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/861637a425ef06e6d539aaaff113d1d5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/861637a425ef06e6d539aaaff113d1d5-Supplemental.zip | In the real world, object categories usually have a hierarchical granularity tree.
Nowadays, most researchers focus on recognizing categories in a specific granularity, \emph{e.g.,} basic-level or sub(ordinate)-level. Compared with basic-level categories, the sub-level categories provide more valuable information, but their training annotations are harder to acquire. Therefore, an attractive problem is how to transfer the knowledge learned from basic-level annotations to sub-level recognition. In this paper, we introduce a new task, named Hierarchical Granularity Transfer Learning (HGTL), to recognize sub-level categories with basic-level annotations and semantic descriptions for hierarchical categories. Different from other recognition tasks, HGTL has a serious granularity gap,~\emph{i.e.,} the two granularities share an image space but have different category domains, which impedes knowledge transfer. To this end, we propose a novel Bi-granularity Semantic Preserving Network (BigSPN) to bridge the granularity gap for robust knowledge transfer. Explicitly, BigSPN constructs specific visual encoders for different granularities, which are aligned with a shared semantic interpreter via a novel subordinate entropy loss. Experiments on three benchmarks with hierarchical granularities show that BigSPN is an effective framework for Hierarchical Granularity Transfer Learning. |
Deep active inference agents using Monte-Carlo methods | https://papers.nips.cc/paper_files/paper/2020/hash/865dfbde8a344b44095495f3591f7407-Abstract.html | Zafeirios Fountas, Noor Sajid, Pedro Mediano, Karl Friston | https://papers.nips.cc/paper_files/paper/2020/hash/865dfbde8a344b44095495f3591f7407-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/865dfbde8a344b44095495f3591f7407-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10702-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/865dfbde8a344b44095495f3591f7407-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/865dfbde8a344b44095495f3591f7407-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/865dfbde8a344b44095495f3591f7407-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/865dfbde8a344b44095495f3591f7407-Supplemental.pdf | Active inference is a Bayesian framework for understanding biological intelligence. The underlying theory brings together perception and action under one single imperative: minimizing free energy. However, despite its theoretical utility in explaining intelligence, computational implementations have been restricted to low-dimensional and idealized situations. In this paper, we present a neural architecture for building deep active inference agents operating in complex, continuous state-spaces using multiple forms of Monte-Carlo (MC) sampling. For this, we introduce a number of techniques, novel to active inference. These include: i) selecting free-energy-optimal policies via MC tree search, ii) approximating this optimal policy distribution via a feed-forward `habitual' network, iii) predicting future parameter belief updates using MC dropouts and, finally, iv) optimizing state transition precision (a high-end form of attention). Our approach enables agents to learn environmental dynamics efficiently, while maintaining task performance, in relation to reward-based counterparts. We illustrate this in a new toy environment, based on the dSprites data-set, and demonstrate that active inference agents automatically create disentangled representations that are apt for modeling state transitions. In a more complex Animal-AI environment, our agents (using the same neural architecture) are able to simulate future state transitions and actions (i.e., plan), to evince reward-directed navigation - despite temporary suspension of visual input. These results show that deep active inference - equipped with MC methods - provides a flexible framework to develop biologically-inspired intelligent agents, with applications in both machine learning and cognitive science. |
Consistent Estimation of Identifiable Nonparametric Mixture Models from Grouped Observations | https://papers.nips.cc/paper_files/paper/2020/hash/866d90e0921ac7b024b47d672445a086-Abstract.html | Alexander Ritchie, Robert A. Vandermeulen, Clayton Scott | https://papers.nips.cc/paper_files/paper/2020/hash/866d90e0921ac7b024b47d672445a086-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/866d90e0921ac7b024b47d672445a086-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10703-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/866d90e0921ac7b024b47d672445a086-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/866d90e0921ac7b024b47d672445a086-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/866d90e0921ac7b024b47d672445a086-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/866d90e0921ac7b024b47d672445a086-Supplemental.zip | Recent research has established sufficient conditions for finite mixture models to be identifiable from grouped observations. These conditions allow the mixture components to be nonparametric and have substantial (or even total) overlap. This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations. Our analysis leverages an oracle inequality for weighted kernel density estimators of the distribution on groups, together with a general result showing that consistent estimation of the distribution on groups implies consistent estimation of mixture components. A practical implementation is provided for paired observations, and the approach is shown to outperform existing methods, especially when mixture components overlap significantly. |
Manifold structure in graph embeddings | https://papers.nips.cc/paper_files/paper/2020/hash/8682cc30db9c025ecd3fee433f8ab54c-Abstract.html | Patrick Rubin-Delanchy | https://papers.nips.cc/paper_files/paper/2020/hash/8682cc30db9c025ecd3fee433f8ab54c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8682cc30db9c025ecd3fee433f8ab54c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10704-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8682cc30db9c025ecd3fee433f8ab54c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8682cc30db9c025ecd3fee433f8ab54c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8682cc30db9c025ecd3fee433f8ab54c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8682cc30db9c025ecd3fee433f8ab54c-Supplemental.pdf | Statistical analysis of a graph often starts with embedding, the process of representing its nodes as points in space. How to choose the embedding dimension is a nuanced decision in practice, but in theory a notion of true dimension is often available. In spectral embedding, this dimension may be very high. However, this paper shows that existing random graph models, including graphon and other latent position models, predict the data should live near a much lower-dimensional set. One may therefore circumvent the curse of dimensionality by employing methods which exploit hidden manifold structure. |
Adaptive Learned Bloom Filter (Ada-BF): Efficient Utilization of the Classifier with Application to Real-Time Information Filtering on the Web | https://papers.nips.cc/paper_files/paper/2020/hash/86b94dae7c6517ec1ac767fd2c136580-Abstract.html | Zhenwei Dai, Anshumali Shrivastava | https://papers.nips.cc/paper_files/paper/2020/hash/86b94dae7c6517ec1ac767fd2c136580-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/86b94dae7c6517ec1ac767fd2c136580-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10705-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/86b94dae7c6517ec1ac767fd2c136580-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/86b94dae7c6517ec1ac767fd2c136580-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/86b94dae7c6517ec1ac767fd2c136580-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/86b94dae7c6517ec1ac767fd2c136580-Supplemental.zip | Recent work suggests improving the performance of Bloom filter by incorporating a machine learning model as a binary classifier. However, such learned Bloom filter does not take full advantage of the predicted probability scores. We propose new algorithms that generalize the learned Bloom filter by using the complete spectrum of the score regions. We prove our algorithms have lower false positive rate (FPR) and memory usage compared with the existing approaches to learned Bloom filter. We also demonstrate the improved performance of our algorithms on real-world information filtering tasks over the web. |
MCUNet: Tiny Deep Learning on IoT Devices | https://papers.nips.cc/paper_files/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html | Ji Lin, Wei-Ming Chen, Yujun Lin, john cohn, Chuang Gan, Song Han | https://papers.nips.cc/paper_files/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/86c51678350f656dcc7f490a43946ee5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10706-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/86c51678350f656dcc7f490a43946ee5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/86c51678350f656dcc7f490a43946ee5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/86c51678350f656dcc7f490a43946ee5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/86c51678350f656dcc7f490a43946ee5-Supplemental.zip | Machine learning on tiny IoT devices based on microcontroller units (MCU) is appealing but challenging: the memory of microcontrollers is 2-3 orders of magnitude smaller even than mobile phones. We propose MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers. TinyNAS adopts a two-stage neural architecture search approach that first optimizes the search space to fit the resource constraints, then specializes the network architecture in the optimized search space. TinyNAS can automatically handle diverse constraints (i.e. device, latency, energy, memory) under low search costs. TinyNAS is co-designed with TinyEngine, a memory-efficient inference library to expand the search space and fit a larger model. TinyEngine adapts the memory scheduling according to the overall network topology rather than layer-wise optimization, reducing the memory usage by 3.4×, and accelerating the inference by 1.7-3.3× compared to TF-Lite Micro [3] and CMSIS-NN [28]. MCUNet is the first to achieve >70% ImageNet top1 accuracy on an off-the-shelf commercial microcontroller, using 3.5× less SRAM and 5.7× less Flash compared to quantized MobileNetV2 and ResNet-18. On visual&audio wake words tasks, MCUNet achieves state-of-the-art accuracy and runs 2.4-3.4× faster than MobileNetV2 and ProxylessNAS-based solutions with 3.7-4.1× smaller peak SRAM. Our study suggests that the era of always-on tiny machine learning on IoT devices has arrived. |
In search of robust measures of generalization | https://papers.nips.cc/paper_files/paper/2020/hash/86d7c8a08b4aaa1bc7c599473f5dddda-Abstract.html | Gintare Karolina Dziugaite, Alexandre Drouin, Brady Neal, Nitarshan Rajkumar, Ethan Caballero, Linbo Wang, Ioannis Mitliagkas, Daniel M. Roy | https://papers.nips.cc/paper_files/paper/2020/hash/86d7c8a08b4aaa1bc7c599473f5dddda-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/86d7c8a08b4aaa1bc7c599473f5dddda-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10707-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/86d7c8a08b4aaa1bc7c599473f5dddda-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/86d7c8a08b4aaa1bc7c599473f5dddda-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/86d7c8a08b4aaa1bc7c599473f5dddda-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/86d7c8a08b4aaa1bc7c599473f5dddda-Supplemental.pdf | One of the principal scientific challenges in deep learning is explaining generalization, i.e., why the particular way the community now trains networks to achieve small training error also leads to small error on held-out data from the same population. It is widely appreciated that some worst-case theories -- such as those based on the VC dimension of the class of predictors induced by modern neural network architectures -- are unable to explain empirical performance. A large volume of work aims to close this gap, primarily by developing bounds on generalization error, optimization error, and excess risk. When evaluated empirically, however, most of these bounds are numerically vacuous. Focusing on generalization bounds, this work addresses the question of how to evaluate such bounds empirically. Jiang et al. (2020) recently described a large-scale empirical study aimed at uncovering potential causal relationships between bounds/measures and generalization. Building on their study, we highlight where their proposed methods can obscure failures and successes of generalization measures in explaining generalization. We argue that generalization measures should instead be evaluated within the framework of distributional robustness. |
Task-agnostic Exploration in Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/8763d72bba4a7ade23f9ae1f09f4efc7-Abstract.html | Xuezhou Zhang, Yuzhe Ma, Adish Singla | https://papers.nips.cc/paper_files/paper/2020/hash/8763d72bba4a7ade23f9ae1f09f4efc7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8763d72bba4a7ade23f9ae1f09f4efc7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10708-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8763d72bba4a7ade23f9ae1f09f4efc7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8763d72bba4a7ade23f9ae1f09f4efc7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8763d72bba4a7ade23f9ae1f09f4efc7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8763d72bba4a7ade23f9ae1f09f4efc7-Supplemental.pdf | Efficient exploration is one of the main challenges in reinforcement learning (RL). Most existing sample-efficient algorithms assume the existence of a single reward function during exploration. In many practical scenarios, however, there is not a single underlying reward function to guide the exploration, for instance, when an agent needs to learn many skills simultaneously, or multiple conflicting objectives need to be balanced. To address these challenges, we propose the \textit{task-agnostic RL} framework: In the exploration phase, the agent first collects trajectories by exploring the MDP without the guidance of a reward function. After exploration, it aims at finding near-optimal policies for $N$ tasks, given the collected trajectories augmented with \textit{sampled rewards} for each task. We present an efficient task-agnostic RL algorithm, \textsc{UCBZero}, that finds $\epsilon$-optimal policies for $N$ arbitrary tasks after at most $\tilde O(\log(N)H^5SA/\epsilon^2)$ exploration episodes. We also provide an $\Omega(\log (N)H^2SA/\epsilon^2)$ lower bound, showing that the $\log$ dependency on $N$ is unavoidable. Furthermore, we provide an $N$-independent sample complexity bound of \textsc{UCBZero} in the statistically easier setting when the ground truth reward functions are known. |
Multi-task Additive Models for Robust Estimation and Automatic Structure Discovery | https://papers.nips.cc/paper_files/paper/2020/hash/8767bccb1ff4231a9962e3914f4f1f8f-Abstract.html | Yingjie Wang, Hong Chen, Feng Zheng, Chen Xu, Tieliang Gong, Yanhong Chen | https://papers.nips.cc/paper_files/paper/2020/hash/8767bccb1ff4231a9962e3914f4f1f8f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8767bccb1ff4231a9962e3914f4f1f8f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10709-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8767bccb1ff4231a9962e3914f4f1f8f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8767bccb1ff4231a9962e3914f4f1f8f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8767bccb1ff4231a9962e3914f4f1f8f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8767bccb1ff4231a9962e3914f4f1f8f-Supplemental.pdf | Additive models have attracted much attention for high-dimensional regression estimation and variable selection. However, the existing models are usually limited to the single-task learning framework under the mean squared error (MSE) criterion, where the utilization of variable structure depends heavily on prior knowledge among variables. For high-dimensional observations in real environments, e.g., Coronal Mass Ejections (CMEs) data, the learning performance of previous methods may be seriously degraded due to the complex non-Gaussian noise and the insufficiency of prior knowledge on variable structure. To tackle this problem, we propose a new class of additive models, called Multi-task Additive Models (MAM), by integrating the mode-induced metric, the structure-based regularizer, and additive hypothesis spaces into a bilevel optimization framework. Our approach does not require any prior knowledge of variable structure and is well suited to high-dimensional data with complex noise, e.g., skewed noise, heavy-tailed noise, and outliers. A smooth iterative optimization algorithm with convergence guarantees is provided to implement MAM efficiently. Experiments on simulations and the CMEs analysis demonstrate the competitive performance of our approach for robust estimation and automatic structure discovery. |
Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration | https://papers.nips.cc/paper_files/paper/2020/hash/87736972ed2fb48230f1052699dedbe7-Abstract.html | Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill | https://papers.nips.cc/paper_files/paper/2020/hash/87736972ed2fb48230f1052699dedbe7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/87736972ed2fb48230f1052699dedbe7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10710-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/87736972ed2fb48230f1052699dedbe7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/87736972ed2fb48230f1052699dedbe7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/87736972ed2fb48230f1052699dedbe7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/87736972ed2fb48230f1052699dedbe7-Supplemental.pdf | There has been growing progress on theoretical analyses for provably efficient learning in MDPs with linear function approximation, but much of the existing work has made strong assumptions to enable exploration by conventional exploration frameworks. Typically these assumptions are stronger than what is needed to find good solutions in the batch setting. In this work, we show how under a more standard notion of low inherent Bellman error, typically employed in least-square value iteration-style algorithms, we can provide strong PAC guarantees on learning a near optimal value function provided that the linear space is sufficiently ``explorable''.
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near optimal policy for any (linear) reward function, which is revealed only once learning has completed. If this reward function is also estimated from the samples gathered during pure exploration, our results also provide same-order PAC guarantees on the performance of the resulting policy for this setting. |
Softmax Deep Double Deterministic Policy Gradients | https://papers.nips.cc/paper_files/paper/2020/hash/884d247c6f65a96a7da4d1105d584ddd-Abstract.html | Ling Pan, Qingpeng Cai, Longbo Huang | https://papers.nips.cc/paper_files/paper/2020/hash/884d247c6f65a96a7da4d1105d584ddd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/884d247c6f65a96a7da4d1105d584ddd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10711-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/884d247c6f65a96a7da4d1105d584ddd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/884d247c6f65a96a7da4d1105d584ddd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/884d247c6f65a96a7da4d1105d584ddd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/884d247c6f65a96a7da4d1105d584ddd-Supplemental.pdf | A widely-used actor-critic reinforcement learning algorithm for continuous control, Deep Deterministic Policy Gradients (DDPG), suffers from the overestimation problem, which can negatively affect the performance. Although the state-of-the-art Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm mitigates the overestimation issue, it can lead to a large underestimation bias. In this paper, we propose to use the Boltzmann softmax operator for value function estimation in continuous control. We first theoretically analyze the softmax operator in continuous action space. Then, we uncover an important property of the softmax operator in actor-critic algorithms, i.e., it helps to smooth the optimization landscape, which sheds new light on the benefits of the operator. We also design two new algorithms, Softmax Deep Deterministic Policy Gradients (SD2) and Softmax Deep Double Deterministic Policy Gradients (SD3), by building the softmax operator upon single and double estimators, which can effectively improve the overestimation and underestimation bias. We conduct extensive experiments on challenging continuous control tasks, and results show that SD3 outperforms state-of-the-art methods. |
Online Decision Based Visual Tracking via Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/885b2c7a6deb4fea10f319c4ce993e02-Abstract.html | ke Song, Wei Zhang, Ran Song, Yibin Li | https://papers.nips.cc/paper_files/paper/2020/hash/885b2c7a6deb4fea10f319c4ce993e02-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/885b2c7a6deb4fea10f319c4ce993e02-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10712-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/885b2c7a6deb4fea10f319c4ce993e02-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/885b2c7a6deb4fea10f319c4ce993e02-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/885b2c7a6deb4fea10f319c4ce993e02-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/885b2c7a6deb4fea10f319c4ce993e02-Supplemental.zip | A deep visual tracker is typically based on either object detection or template matching while each of them is only suitable for a particular group of scenes. It is straightforward to consider fusing them together to pursue more reliable tracking. However, this is not wise as they follow different tracking principles. Unlike previous fusion-based methods, we propose a novel ensemble framework, named DTNet, with an online decision mechanism for visual tracking based on hierarchical reinforcement learning. The decision mechanism substantiates an intelligent switching strategy where the detection and the template trackers have to compete with each other to conduct tracking within different scenes that they are adept in. Besides, we present a novel detection tracker which avoids the common issue of incorrect proposal. Extensive results show that our DTNet achieves state-of-the-art tracking performance as well as good balance between accuracy and efficiency. The project website is available at https://vsislab.github.io/DTNet/. |
Efficient Marginalization of Discrete and Structured Latent Variables via Sparsity | https://papers.nips.cc/paper_files/paper/2020/hash/887caadc3642e304ede659b734f79b00-Abstract.html | Gonçalo Correia, Vlad Niculae, Wilker Aziz, André Martins | https://papers.nips.cc/paper_files/paper/2020/hash/887caadc3642e304ede659b734f79b00-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/887caadc3642e304ede659b734f79b00-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10713-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/887caadc3642e304ede659b734f79b00-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/887caadc3642e304ede659b734f79b00-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/887caadc3642e304ede659b734f79b00-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/887caadc3642e304ede659b734f79b00-Supplemental.pdf | Training neural network models with discrete (categorical or structured) latent variables can be computationally challenging, due to the need for marginalization over large or combinatorial sets. To circumvent this issue, one typically resorts to sampling-based approximations of the true marginal, requiring noisy gradient estimators (e.g., score function estimator) or continuous relaxations with lower-variance reparameterized gradients (e.g., Gumbel-Softmax). In this paper, we propose a new training strategy which replaces these estimators by an exact yet efficient marginalization. To achieve this, we parameterize discrete distributions over latent assignments using differentiable sparse mappings: sparsemax and its structured counterparts. In effect, the support of these distributions is greatly reduced, which enables efficient marginalization. We report successful results in three tasks covering a range of latent variable modeling applications: a semisupervised deep generative model, a latent communication game, and a generative model with a bit-vector latent representation. In all cases, we obtain good performance while still achieving the practicality of sampling-based approximations. |
DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs | https://papers.nips.cc/paper_files/paper/2020/hash/88855547570f7ff053fff7c54e5148cc-Abstract.html | yaxing wang, Lu Yu, Joost van de Weijer | https://papers.nips.cc/paper_files/paper/2020/hash/88855547570f7ff053fff7c54e5148cc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/88855547570f7ff053fff7c54e5148cc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10714-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/88855547570f7ff053fff7c54e5148cc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/88855547570f7ff053fff7c54e5148cc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/88855547570f7ff053fff7c54e5148cc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/88855547570f7ff053fff7c54e5148cc-Supplemental.pdf | Image-to-image translation has recently achieved remarkable results. But despite current success, it suffers from inferior performance when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks which are used by current state-of-the-art image-to-image methods.
Therefore, in this work, we propose a novel deep hierarchical Image-to-Image Translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the bottom layers and (b) semantic information extracted from the top layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs. Specifically, we leverage the discriminator of a pre-trained GAN (i.e., BigGAN or StyleGAN) to initialize both the encoder and the discriminator, and the pre-trained generator to initialize the generator of our model. Applying knowledge transfer leads to an alignment problem between the encoder and generator. We introduce an adaptor network to address this. On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods), we decrease mFID by at least 35% when compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets.
Finally, we are the first to perform I2I translations for domains with over 100 classes. |
Distributional Robustness with IPMs and links to Regularization and GANs | https://papers.nips.cc/paper_files/paper/2020/hash/8929c70f8d710e412d38da624b21c3c8-Abstract.html | Hisham Husain | https://papers.nips.cc/paper_files/paper/2020/hash/8929c70f8d710e412d38da624b21c3c8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8929c70f8d710e412d38da624b21c3c8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10715-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8929c70f8d710e412d38da624b21c3c8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8929c70f8d710e412d38da624b21c3c8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8929c70f8d710e412d38da624b21c3c8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8929c70f8d710e412d38da624b21c3c8-Supplemental.pdf | Robustness to adversarial attacks is an important concern due to the fragility of deep neural networks to small perturbations, and has received an abundance of attention in recent years. Distributional Robust Optimization (DRO), a particularly promising way of addressing this challenge, studies robustness via divergence-based uncertainty sets and has provided valuable insights into robustification strategies such as regularisation. In the context of machine learning, majority of existing results have chosen $f$-divergences, Wasserstein distances and more recently, the Maximum Mean Discrepancy (MMD) to construct uncertainty sets. We extend this line of work for the purposes of understanding robustness via regularization by studying uncertainty sets constructed with Integral Probability Metrics (IPMs) - a large family of divergences including the MMD, Total Variation and Wasserstein distances. Our main result shows that DRO under \textit{any} choice of IPM corresponds to a family of regularization penalties, which recover and improve upon existing results in the setting of MMD and Wasserstein distances. Due to the generality of our result, we show that other choices of IPMs correspond to other commonly used penalties in machine learning. Furthermore, we extend our results to shed light on adversarial generative modelling via $f$-GANs, constituting the first study of distributional robustness for the $f$-GAN objective. Our results unveil the inductive properties of the discriminator set with regards to robustness, allowing us to give positive comments for a number of existing penalty-based GAN methods such as Wasserstein-, MMD- and Sobolev-GANs. In summary, our results intimately link GANs to distributional robustness, extend previous results on DRO and contribute to our understanding of the link between regularization and robustness at large. |
A shooting formulation of deep learning | https://papers.nips.cc/paper_files/paper/2020/hash/89562dccfeb1d0394b9ae7e09544dc70-Abstract.html | François-Xavier Vialard, Roland Kwitt, Susan Wei, Marc Niethammer | https://papers.nips.cc/paper_files/paper/2020/hash/89562dccfeb1d0394b9ae7e09544dc70-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/89562dccfeb1d0394b9ae7e09544dc70-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10716-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/89562dccfeb1d0394b9ae7e09544dc70-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/89562dccfeb1d0394b9ae7e09544dc70-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/89562dccfeb1d0394b9ae7e09544dc70-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/89562dccfeb1d0394b9ae7e09544dc70-Supplemental.pdf | A residual network may be regarded as a discretization of an ordinary differential equation (ODE) which, in the limit of time discretization, defines a continuous-depth network. Although important steps have been taken to realize the advantages of such continuous formulations, most current techniques assume identical layers. Indeed, existing works throw into relief the myriad difficulties of learning an infinite-dimensional parameter in a continuous-depth neural network. To this end, we introduce a shooting formulation which shifts the perspective from parameterizing a network layer-by-layer to parameterizing over optimal networks described only by a set of initial conditions. For scalability, we propose a novel particle-ensemble parameterization which fully specifies the optimal weight trajectory of the continuous-depth neural network. Our experiments show that our particle-ensemble shooting formulation can achieve competitive performance. Finally, though the current work is inspired by continuous-depth neural networks, the particle-ensemble shooting formulation also applies to discrete-time networks and may lead to a new fertile area of research in deep learning parameterization. |
CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances | https://papers.nips.cc/paper_files/paper/2020/hash/8965f76632d7672e7d3cf29c87ecaa0c-Abstract.html | Jihoon Tack, Sangwoo Mo, Jongheon Jeong, Jinwoo Shin | https://papers.nips.cc/paper_files/paper/2020/hash/8965f76632d7672e7d3cf29c87ecaa0c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8965f76632d7672e7d3cf29c87ecaa0c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10717-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8965f76632d7672e7d3cf29c87ecaa0c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8965f76632d7672e7d3cf29c87ecaa0c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8965f76632d7672e7d3cf29c87ecaa0c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8965f76632d7672e7d3cf29c87ecaa0c-Supplemental.pdf | Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning. To this end, there have been many attempts at learning a representation well-suited for novelty detection and designing a score based on such representation. In this paper, we propose a simple, yet effective method named contrasting shifted instances (CSI), inspired by the recent success on contrastive learning of visual representations. Specifically, in addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself. Based on this, we propose a new detection score that is specific to the proposed training scheme. Our experiments demonstrate the superiority of our method under various novelty detection scenarios, including unlabeled one-class, unlabeled multi-class and labeled multi-class settings, with various image benchmark datasets. Code and pre-trained models are available at https://github.com/alinlab/CSI. |
Learning Implicit Credit Assignment for Cooperative Multi-Agent Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/8977ecbb8cb82d77fb091c7a7f186163-Abstract.html | Meng Zhou, Ziyu Liu, Pengwei Sui, Yixuan Li, Yuk Ying Chung | https://papers.nips.cc/paper_files/paper/2020/hash/8977ecbb8cb82d77fb091c7a7f186163-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8977ecbb8cb82d77fb091c7a7f186163-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10718-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8977ecbb8cb82d77fb091c7a7f186163-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8977ecbb8cb82d77fb091c7a7f186163-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8977ecbb8cb82d77fb091c7a7f186163-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8977ecbb8cb82d77fb091c7a7f186163-Supplemental.pdf | We present a multi-agent actor-critic method that aims to implicitly address the credit assignment problem under fully cooperative settings. Our key motivation is that credit assignment among agents may not require an explicit formulation as long as (1) the policy gradients derived from a centralized critic carry sufficient information for the decentralized agents to maximize their joint action value through optimal cooperation and (2) a sustained level of exploration is enforced throughout training. Under the centralized training with decentralized execution (CTDE) paradigm, we achieve the former by formulating the centralized critic as a hypernetwork such that a latent state representation is integrated into the policy gradients through its multiplicative association with the stochastic policies; to achieve the latter, we derive a simple technique called adaptive entropy regularization where magnitudes of the entropy gradients are dynamically rescaled based on the current policy stochasticity to encourage consistent levels of exploration. Our algorithm, referred to as LICA, is evaluated on several benchmarks including the multi-agent particle environments and a set of challenging StarCraft II micromanagement tasks, and we show that LICA significantly outperforms previous methods. |
MATE: Plugging in Model Awareness to Task Embedding for Meta Learning | https://papers.nips.cc/paper_files/paper/2020/hash/8989e07fc124e7a9bcbdebcc8ace2bc0-Abstract.html | Xiaohan Chen, Zhangyang Wang, Siyu Tang, Krikamol Muandet | https://papers.nips.cc/paper_files/paper/2020/hash/8989e07fc124e7a9bcbdebcc8ace2bc0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8989e07fc124e7a9bcbdebcc8ace2bc0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10719-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8989e07fc124e7a9bcbdebcc8ace2bc0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8989e07fc124e7a9bcbdebcc8ace2bc0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8989e07fc124e7a9bcbdebcc8ace2bc0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8989e07fc124e7a9bcbdebcc8ace2bc0-Supplemental.pdf | Meta-learning improves generalization of machine learning models when faced with previously unseen tasks by leveraging experiences from different, yet related prior tasks. To allow for better generalization, we propose a novel task representation called model-aware task embedding (MATE) that incorporates not only the data distributions of different tasks, but also the complexity of the tasks through the models used. The task complexity is taken into account by a novel variant of kernel mean embedding, combined with an instance-adaptive attention mechanism inspired by an SVM-based feature selection algorithm. Together with conditioning layers in deep neural networks, MATE can be easily incorporated into existing meta learners as a plug-and-play module. While MATE is widely applicable to general tasks where the concept of task/environment is involved, we demonstrate its effectiveness in few-shot learning by improving a state-of-the-art model consistently on two benchmarks. Source codes for this paper are available at https://github.com/VITA-Group/MATE. |
Restless-UCB, an Efficient and Low-complexity Algorithm for Online Restless Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/89ae0fe22c47d374bc9350ef99e01685-Abstract.html | Siwei Wang, Longbo Huang, John C. S. Lui | https://papers.nips.cc/paper_files/paper/2020/hash/89ae0fe22c47d374bc9350ef99e01685-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/89ae0fe22c47d374bc9350ef99e01685-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10720-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/89ae0fe22c47d374bc9350ef99e01685-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/89ae0fe22c47d374bc9350ef99e01685-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/89ae0fe22c47d374bc9350ef99e01685-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/89ae0fe22c47d374bc9350ef99e01685-Supplemental.pdf | We study the online restless bandit problem, where the state of each arm evolves according to a Markov chain, and the reward of pulling an arm depends on both the pulled arm and the current state of the corresponding Markov chain. In this paper, we propose Restless-UCB, a learning policy that follows the explore-then-commit framework. In Restless-UCB, we present a novel method to construct offline instances, which only requires $O(N)$ time-complexity ($N$ is the number of arms) and is exponentially better than the complexity of existing learning policies. We also prove that Restless-UCB achieves a regret upper bound of $\tilde{O}((N+M^3)T^{2\over 3})$, where $M$ is the Markov chain state space size and $T$ is the time horizon. Compared to existing algorithms, our result eliminates the exponential factor (in $M,N$) in the regret upper bound, due to a novel exploitation of the sparsity in transitions in general restless bandit problems. As a result, our analysis technique can also be adopted to tighten the regret bounds of existing algorithms. Finally, we conduct experiments based on a real-world dataset to compare the Restless-UCB policy with state-of-the-art benchmarks. Our results show that Restless-UCB outperforms existing algorithms in regret, and significantly reduces the running time. |
Predictive Information Accelerates Learning in RL | https://papers.nips.cc/paper_files/paper/2020/hash/89b9e0a6f6d1505fe13dea0f18a2dcfa-Abstract.html | Kuang-Huei Lee, Ian Fischer, Anthony Liu, Yijie Guo, Honglak Lee, John Canny, Sergio Guadarrama | https://papers.nips.cc/paper_files/paper/2020/hash/89b9e0a6f6d1505fe13dea0f18a2dcfa-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/89b9e0a6f6d1505fe13dea0f18a2dcfa-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10721-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/89b9e0a6f6d1505fe13dea0f18a2dcfa-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/89b9e0a6f6d1505fe13dea0f18a2dcfa-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/89b9e0a6f6d1505fe13dea0f18a2dcfa-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/89b9e0a6f6d1505fe13dea0f18a2dcfa-Supplemental.pdf | The Predictive Information is the mutual information between the past and the future, I(Xpast; Xfuture). We hypothesize that capturing the predictive information is useful in RL, since the ability to model what will happen next is necessary for success on many tasks. To test our hypothesis, we train Soft Actor-Critic (SAC) agents from pixels with an auxiliary task that learns a compressed representation of the predictive information of the RL environment dynamics using a contrastive version of the Conditional Entropy Bottleneck (CEB) objective. We refer to these as Predictive Information SAC (PI-SAC) agents. We show that PI-SAC agents can substantially improve sample efficiency over challenging baselines on tasks from the DM Control suite of continuous control environments. We evaluate PI-SAC agents by comparing against uncompressed PI-SAC agents, other compressed and uncompressed agents, and SAC agents directly trained from pixels. Our implementation is given on GitHub. |
Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization | https://papers.nips.cc/paper_files/paper/2020/hash/8a1276c25f5efe85f0fc4020fbf5b4f8-Abstract.html | Sam Hopkins, Jerry Li, Fred Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/8a1276c25f5efe85f0fc4020fbf5b4f8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8a1276c25f5efe85f0fc4020fbf5b4f8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10722-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8a1276c25f5efe85f0fc4020fbf5b4f8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8a1276c25f5efe85f0fc4020fbf5b4f8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8a1276c25f5efe85f0fc4020fbf5b4f8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8a1276c25f5efe85f0fc4020fbf5b4f8-Supplemental.pdf | Our analysis of Filter is through the classic regret bound of the multiplicative weights update method. This connection allows us to avoid the technical complications in previous works and improve upon the run-time analysis of a gradient-descent-based algorithm for robust mean estimation by Cheng, Diakonikolas, Ge and Soltanolkotabi (ICML '20). |
High-Fidelity Generative Image Compression | https://papers.nips.cc/paper_files/paper/2020/hash/8a50bae297807da9e97722a0b3fd8f27-Abstract.html | Fabian Mentzer, George D. Toderici, Michael Tschannen, Eirikur Agustsson | https://papers.nips.cc/paper_files/paper/2020/hash/8a50bae297807da9e97722a0b3fd8f27-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8a50bae297807da9e97722a0b3fd8f27-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10723-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8a50bae297807da9e97722a0b3fd8f27-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8a50bae297807da9e97722a0b3fd8f27-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8a50bae297807da9e97722a0b3fd8f27-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8a50bae297807da9e97722a0b3fd8f27-Supplemental.pdf | We extensively study how to combine Generative Adversarial Networks and learned compression to obtain a state-of-the-art generative lossy compression system. In particular, we investigate normalization layers, generator and discriminator architectures, training strategies, as well as perceptual losses. In contrast to previous work, i) we obtain visually pleasing reconstructions that are perceptually similar to the input, ii) we operate in a broad range of bitrates, and iii) our approach can be applied to high-resolution images. We bridge the gap between rate-distortion-perception theory and practice by evaluating our approach both quantitatively with various perceptual metrics, and with a user study. The study shows that our method is preferred to previous approaches even if they use more than 2x the bitrate. |
A Statistical Mechanics Framework for Task-Agnostic Sample Design in Machine Learning | https://papers.nips.cc/paper_files/paper/2020/hash/8a7129b8f3edd95b7d969dfc2c8e9d9d-Abstract.html | Bhavya Kailkhura, Jayaraman Thiagarajan, Qunwei Li, Jize Zhang, Yi Zhou, Timo Bremer | https://papers.nips.cc/paper_files/paper/2020/hash/8a7129b8f3edd95b7d969dfc2c8e9d9d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8a7129b8f3edd95b7d969dfc2c8e9d9d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10724-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8a7129b8f3edd95b7d969dfc2c8e9d9d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8a7129b8f3edd95b7d969dfc2c8e9d9d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8a7129b8f3edd95b7d969dfc2c8e9d9d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8a7129b8f3edd95b7d969dfc2c8e9d9d-Supplemental.pdf | In this paper, we present a statistical mechanics framework to understand the effect of sampling properties of training data on the generalization gap of machine learning (ML) algorithms. We connect the generalization gap to the spatial properties of a sample design characterized by the pair correlation function (PCF). In particular, we express generalization gap in terms of the power spectra of the sample design and that of the function to be learned. Using this framework, we show that space-filling sample designs, such as blue noise and Poisson disk sampling, which optimize spectral properties, outperform random designs in terms of the generalization gap and characterize this gain in a closed-form. Our analysis also sheds light on design principles for constructing optimal task-agnostic sample designs that minimize the generalization gap. We corroborate our findings using regression experiments with neural networks on: a) synthetic functions, and b) a complex scientific simulator for inertial confinement fusion (ICF). |