Dataset schema — each record below lists the following fields in this order, one per line (empty fields appear as "null"):

title
url
authors
detail_url
tags (constant value: "NIPS 2021")
Bibtex
Paper
Reviews And Public Comment »
Supplemental
abstract
Supplemental Errata
Grounding Spatio-Temporal Language with Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/29daf9442f3c0b60642b14c081b4a556-Abstract.html
Tristan Karch, Laetitia Teodorescu, Katja Hofmann, Clément Moulin-Frier, Pierre-Yves Oudeyer
https://papers.nips.cc/paper_files/paper/2021/hash/29daf9442f3c0b60642b14c081b4a556-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12024-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/29daf9442f3c0b60642b14c081b4a556-Paper.pdf
https://openreview.net/forum?id=ZQQqo8H1qjC
null
Language is an interface to the outside world. In order for embodied agents to use it, language must be grounded in other, sensorimotor modalities. While there is an extended literature studying how machines can learn grounded language, the topic of how to learn spatio-temporal linguistic concepts is still largely uncharted. To make progress in this direction, we here introduce a novel spatio-temporal language grounding task where the goal is to learn the meaning of spatio-temporal descriptions of behavioral traces of an embodied agent. This is achieved by training a truth function that predicts if a description matches a given history of observations. The descriptions involve time-extended predicates in past and present tense as well as spatio-temporal references to objects in the scene. To study the role of architectural biases in this task, we train several models including multimodal Transformer architectures; the latter implement different attention computations between words and objects across space and time. We test models on two classes of generalization: 1) generalization to new sentences, 2) generalization to grammar primitives. We observe that maintaining object identity in the attention computation of our Transformers is instrumental to achieving good performance on generalization overall, and that summarizing object traces in a single token has little influence on performance. We then discuss how this opens new perspectives for language-guided autonomous embodied agents.
null
Learning where to learn: Gradient sparsity in meta and continual learning
https://papers.nips.cc/paper_files/paper/2021/hash/2a10665525774fa2501c2c8c4985ce61-Abstract.html
Johannes von Oswald, Dominic Zhao, Seijin Kobayashi, Simon Schug, Massimo Caccia, Nicolas Zucchet, João Sacramento
https://papers.nips.cc/paper_files/paper/2021/hash/2a10665525774fa2501c2c8c4985ce61-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12025-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2a10665525774fa2501c2c8c4985ce61-Paper.pdf
https://openreview.net/forum?id=CxefshFHEqh
null
Finding neural network weights that generalize well from small datasets is difficult. A promising approach is to learn a weight initialization such that a small number of weight changes results in low generalization error. We show that this form of meta-learning can be improved by letting the learning algorithm decide which weights to change, i.e., by learning where to learn. We find that patterned sparsity emerges from this process, with the pattern of sparsity varying on a problem-by-problem basis. This selective sparsity results in better generalization and less interference in a range of few-shot and continual learning problems. Moreover, we find that sparse learning also emerges in a more expressive model where learning rates are meta-learned. Our results shed light on an ongoing debate on whether meta-learning can discover adaptable features and suggest that learning by sparse gradient descent is a powerful inductive bias for meta-learning systems.
null
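The following is a minimal, illustrative sketch (not the authors' code) of the idea in the abstract above: each weight is paired with its own meta-learnable learning rate, so sparsity in those learning rates decides which weights are adapted in the inner loop. All module choices and hyperparameters below are assumptions made for the toy example.

```python
import torch
import torch.nn as nn

# Toy "learning where to learn": every weight gets its own meta-learnable
# step size; entries driven to zero mean those weights are not adapted
# in the inner loop (the sparsity discussed in the abstract).
model = nn.Linear(10, 1)
meta_lr = {name: nn.Parameter(0.1 * torch.ones_like(p))
           for name, p in model.named_parameters()}

def inner_update(params, lrs, loss):
    """One inner-loop step where each parameter uses its own learning rate."""
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {n: p - lrs[n] * g for (n, p), g in zip(params.items(), grads)}

x, y = torch.randn(16, 10), torch.randn(16, 1)
params = dict(model.named_parameters())
inner_loss = ((x @ params["weight"].T + params["bias"] - y) ** 2).mean()
fast_params = inner_update(params, meta_lr, inner_loss)
# An outer (meta) loss computed with fast_params would be backpropagated into
# meta_lr; sparsity-inducing pressure on meta_lr yields the gradient sparsity.
```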
Domain Invariant Representation Learning with Domain Density Transformations
https://papers.nips.cc/paper_files/paper/2021/hash/2a2717956118b4d223ceca17ce3865e2-Abstract.html
A. Tuan Nguyen, Toan Tran, Yarin Gal, Atilim Gunes Baydin
https://papers.nips.cc/paper_files/paper/2021/hash/2a2717956118b4d223ceca17ce3865e2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12026-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2a2717956118b4d223ceca17ce3865e2-Paper.pdf
https://openreview.net/forum?id=l3vp7IDY6PZ
https://papers.nips.cc/paper_files/paper/2021/file/2a2717956118b4d223ceca17ce3865e2-Supplemental.zip
Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains. Naively training a model on the aggregate set of data (pooled from all source domains) has been shown to perform suboptimally, since the information learned by that model might be domain-specific and generalize imperfectly to target domains. To tackle this problem, a predominant domain generalization approach is to learn some domain-invariant information for the prediction task, aiming at a good generalization across domains. In this paper, we propose a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains. We next introduce the use of generative adversarial networks to learn such domain transformations in a possible implementation of our method in practice. We demonstrate the effectiveness of our method on several widely used datasets for the domain generalization problem, on all of which we achieve competitive results with state-of-the-art models.
null
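Below is a small, hedged sketch of the core constraint described in the abstract above: the representation network is penalized whenever a sample and its domain-transformed counterpart map to different representations. The translator `t_fn` stands in for the GAN-learned domain transformation and is purely illustrative.

```python
import torch
import torch.nn as nn

def invariance_loss(encoder: nn.Module, x: torch.Tensor, t_fn) -> torch.Tensor:
    """Penalize differences between the representation of x and of its
    domain-translated counterpart t_fn(x)."""
    return ((encoder(x) - encoder(t_fn(x))) ** 2).mean()

# Toy usage: a noisy identity map stands in for a learned GAN domain translator.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
x = torch.randn(64, 16)
loss = invariance_loss(encoder, x, lambda a: a + 0.1 * torch.randn_like(a))
loss.backward()  # combined in practice with a task (classification) loss
```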
PlayVirtual: Augmenting Cycle-Consistent Virtual Trajectories for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/2a38a4a9316c49e5a833517c45d31070-Abstract.html
Tao Yu, Cuiling Lan, Wenjun Zeng, Mingxiao Feng, Zhizheng Zhang, Zhibo Chen
https://papers.nips.cc/paper_files/paper/2021/hash/2a38a4a9316c49e5a833517c45d31070-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12027-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2a38a4a9316c49e5a833517c45d31070-Paper.pdf
https://openreview.net/forum?id=InYbKA26YG2
null
Learning good feature representations is important for deep reinforcement learning (RL). However, with limited experience, RL often suffers from data inefficiency for training. For un-experienced or less-experienced trajectories (i.e., state-action sequences), the lack of data limits their use for better feature learning. In this work, we propose a novel method, dubbed PlayVirtual, which augments cycle-consistent virtual trajectories to enhance the data efficiency for RL feature representation learning. Specifically, PlayVirtual predicts future states in a latent space based on the current state and action by a dynamics model and then predicts the previous states by a backward dynamics model, which forms a trajectory cycle. Based on this, we augment the actions to generate a large number of virtual state-action trajectories. Being free of ground-truth state supervision, we enforce a trajectory to meet the cycle consistency constraint, which can significantly enhance the data efficiency. We validate the effectiveness of our designs on the Atari and DeepMind Control Suite benchmarks. Our method achieves state-of-the-art performance on both benchmarks. Our code is available at https://github.com/microsoft/Playvirtual.
null
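A toy sketch of the trajectory cycle described in the abstract above: a forward dynamics model rolls latent states forward through a (virtual) action sequence, a backward dynamics model rolls them back, and the mismatch with the starting state gives a cycle-consistency loss. The module and dimensions below are illustrative assumptions, not the released PlayVirtual code.

```python
import torch
import torch.nn as nn

class Dyn(nn.Module):
    """Small MLP dynamics model mapping (latent state, action) -> latent state."""
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, 128),
                                 nn.ReLU(), nn.Linear(128, state_dim))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def cycle_consistency_loss(forward_dyn, backward_dyn, s0, actions):
    """Roll forward through a (virtual) action sequence, roll back with the
    backward model, and penalize the distance to the starting latent state."""
    s = s0
    for a in actions:                      # forward predictions
        s = forward_dyn(s, a)
    for a in reversed(actions):            # backward predictions
        s = backward_dyn(s, a)
    return ((s - s0) ** 2).mean()

fwd, bwd = Dyn(32, 4), Dyn(32, 4)
s0 = torch.randn(8, 32)                    # latent states from an encoder
virtual_actions = [torch.randn(8, 4) for _ in range(3)]  # augmented actions
loss = cycle_consistency_loss(fwd, bwd, s0, virtual_actions)
loss.backward()
```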
Efficient Equivariant Network
https://papers.nips.cc/paper_files/paper/2021/hash/2a79ea27c279e471f4d180b08d62b00a-Abstract.html
Lingshen He, Yuxuan Chen, zhengyang shen, Yiming Dong, Yisen Wang, Zhouchen Lin
https://papers.nips.cc/paper_files/paper/2021/hash/2a79ea27c279e471f4d180b08d62b00a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12028-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2a79ea27c279e471f4d180b08d62b00a-Paper.pdf
https://openreview.net/forum?id=4-Py8BiJwHI
https://papers.nips.cc/paper_files/paper/2021/file/2a79ea27c279e471f4d180b08d62b00a-Supplemental.pdf
Convolutional neural networks (CNNs) have dominated the field of Computer Vision and achieved great success due to their built-in translation equivariance. Group equivariant CNNs (G-CNNs) that incorporate more equivariance can significantly improve the performance of conventional CNNs. However, G-CNNs are faced with two major challenges: \emph{spatial-agnostic problem} and \emph{expensive computational cost}. In this work, we propose a general framework of previous equivariant models, which includes G-CNNs and equivariant self-attention layers as special cases. Under this framework, we explicitly decompose the feature aggregation operation into a kernel generator and an encoder, and decouple the spatial and extra geometric dimensions in the computation. Therefore, our filters are essentially dynamic rather than being spatial-agnostic. We further show that our \emph{E}quivariant model is parameter \emph{E}fficient and computation \emph{E}fficient by complexity analysis, and also data \emph{E}fficient by experiments, so we call our model $E^4$-Net. Extensive experiments verify that our model can significantly improve previous works with smaller model size. Especially, under the setting of training on $1/5$ data of CIFAR10, our model improves G-CNNs by $5\%+$ accuracy, while using only $56\%$ parameters and $68\%$ FLOPs.
null
Unifying Gradient Estimators for Meta-Reinforcement Learning via Off-Policy Evaluation
https://papers.nips.cc/paper_files/paper/2021/hash/2a8009525763356ad5e3bb48b7475b4d-Abstract.html
Yunhao Tang, Tadashi Kozuno, Mark Rowland, Remi Munos, Michal Valko
https://papers.nips.cc/paper_files/paper/2021/hash/2a8009525763356ad5e3bb48b7475b4d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12029-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2a8009525763356ad5e3bb48b7475b4d-Paper.pdf
https://openreview.net/forum?id=GPYHMC-MXl
https://papers.nips.cc/paper_files/paper/2021/file/2a8009525763356ad5e3bb48b7475b4d-Supplemental.pdf
Model-agnostic meta-reinforcement learning requires estimating the Hessian matrix of value functions. This is challenging from an implementation perspective, as repeatedly differentiating policy gradient estimates may lead to biased Hessian estimates. In this work, we provide a unifying framework for estimating higher-order derivatives of value functions, based on off-policy evaluation. Our framework interprets a number of prior approaches as special cases and elucidates the bias and variance trade-off of Hessian estimates. This framework also opens the door to a new family of estimates, which can be easily implemented with auto-differentiation libraries, and lead to performance gains in practice.
null
Even your Teacher Needs Guidance: Ground-Truth Targets Dampen Regularization Imposed by Self-Distillation
https://papers.nips.cc/paper_files/paper/2021/hash/2adcefe38fbcd3dcd45908fbab1bf628-Abstract.html
Kenneth Borup, Lars N Andersen
https://papers.nips.cc/paper_files/paper/2021/hash/2adcefe38fbcd3dcd45908fbab1bf628-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12030-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2adcefe38fbcd3dcd45908fbab1bf628-Paper.pdf
https://openreview.net/forum?id=yTJtgA1Gh2
https://papers.nips.cc/paper_files/paper/2021/file/2adcefe38fbcd3dcd45908fbab1bf628-Supplemental.pdf
Knowledge distillation is classically a procedure where a neural network is trained on the output of another network along with the original targets in order to transfer knowledge between the architectures. The special case of self-distillation, where the network architectures are identical, has been observed to improve generalization accuracy. In this paper, we consider an iterative variant of self-distillation in a kernel regression setting, in which successive steps incorporate both model outputs and the ground-truth targets. This allows us to provide the first theoretical results on the importance of using the weighted ground-truth targets in self-distillation. Our focus is on fitting nonlinear functions to training data with a weighted mean square error objective function suitable for distillation, subject to $\ell_2$ regularization of the model parameters. We show that any such function obtained with self-distillation can be calculated directly as a function of the initial fit, and that infinitely many distillation steps yield the same optimization problem as the original with amplified regularization. Furthermore, we provide a closed-form solution for the optimal choice of weighting parameter at each step, and show how to efficiently estimate this weighting parameter for deep learning and significantly reduce the computational requirements compared to a grid search.
null
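An illustrative numpy sketch of the iterative scheme the abstract above analyzes: each step refits a kernel ridge regressor to a weighted mix of the ground-truth targets and the previous model's predictions on the training set. The RBF kernel and the fixed weighting `alpha` are assumptions for the toy example; the paper derives the optimal weighting in closed form.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def self_distill(X, y, steps=5, lam=0.1, alpha=0.5):
    """Iterative self-distillation for kernel ridge regression: each step
    refits to a weighted mix of ground-truth targets and the previous
    iteration's predictions on the training set."""
    K = rbf_kernel(X, X)
    targets = y.copy()
    coef = None
    for _ in range(steps):
        coef = np.linalg.solve(K + lam * np.eye(len(K)), targets)  # ridge fit
        preds = K @ coef
        targets = alpha * y + (1.0 - alpha) * preds  # weighted ground-truth targets
    return coef

X = np.random.randn(50, 3)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(50)
coef = self_distill(X, y)
# Predictions at new points: rbf_kernel(X_new, X) @ coef
```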
Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition
https://papers.nips.cc/paper_files/paper/2021/hash/2adcfc3929e7c03fac3100d3ad51da26-Abstract.html
Lucas Liebenwein, Alaa Maalouf, Dan Feldman, Daniela Rus
https://papers.nips.cc/paper_files/paper/2021/hash/2adcfc3929e7c03fac3100d3ad51da26-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12031-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2adcfc3929e7c03fac3100d3ad51da26-Paper.pdf
https://openreview.net/forum?id=BvJkwMhyInm
https://papers.nips.cc/paper_files/paper/2021/file/2adcfc3929e7c03fac3100d3ad51da26-Supplemental.pdf
We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart–Young–Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem where we wish to minimize the maximum compression error across layers and propose an efficient algorithm towards a solution. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and data sets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
null
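A minimal sketch of the building block named in the abstract above: truncating a weight matrix's SVD gives the best low-rank approximation (Eckart–Young–Mirsky), and the discarded singular values bound the layer-wise error. The rank-selection rule below is a simple per-layer error budget, not the paper's global optimization.

```python
import numpy as np

def lowrank_compress(W, max_rel_error=0.3):
    """Truncated-SVD compression of one weight matrix. By the
    Eckart-Young-Mirsky theorem, keeping the top-r singular values gives the
    best rank-r approximation; we keep the smallest r whose relative
    Frobenius error fits the budget."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sq = s ** 2
    suffix = np.concatenate([np.cumsum(sq[::-1])[::-1], [0.0]])  # sum of sq[r:]
    rel_err = np.sqrt(suffix / sq.sum())           # error if we keep rank r
    r = max(int(np.argmax(rel_err <= max_rel_error)), 1)
    W_left, W_right = U[:, :r] * s[:r], Vt[:r, :]  # W ~= W_left @ W_right
    return W_left, W_right

W = np.random.randn(256, 512)                      # e.g. a fully-connected layer
W_left, W_right = lowrank_compress(W)
print(W_left.shape, W_right.shape,
      np.linalg.norm(W - W_left @ W_right) / np.linalg.norm(W))
```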
Equilibrium and non-Equilibrium regimes in the learning of Restricted Boltzmann Machines
https://papers.nips.cc/paper_files/paper/2021/hash/2aedcba61ca55ceb62d785c6b7f10a83-Abstract.html
Aurélien Decelle, Cyril Furtlehner, Beatriz Seoane
https://papers.nips.cc/paper_files/paper/2021/hash/2aedcba61ca55ceb62d785c6b7f10a83-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12032-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2aedcba61ca55ceb62d785c6b7f10a83-Paper.pdf
https://openreview.net/forum?id=Bq_RoftLEeN
https://papers.nips.cc/paper_files/paper/2021/file/2aedcba61ca55ceb62d785c6b7f10a83-Supplemental.pdf
Training Restricted Boltzmann Machines (RBMs) has been challenging for a long time due to the difficulty of computing precisely the log-likelihood gradient. Over the past decades, many works have proposed more or less successful recipes but without studying systematically the crucial quantity of the problem: the mixing time, i.e., the number of MCMC iterations needed to sample completely new configurations from a model. In this work, we show that this mixing time plays a crucial role in the behavior and stability of the trained model, and that RBMs operate in two well-defined distinct regimes, namely equilibrium and out-of-equilibrium, depending on the interplay between this mixing time of the model and the number of MCMC steps, $k$, used to approximate the gradient. We further show empirically that this mixing time increases during learning, which often implies a transition from one regime to another as soon as $k$ becomes smaller than this time. In particular, we show that using the popular $k$-step (persistent) contrastive divergence approaches, with $k$ small, the dynamics of the fitted model are extremely slow and often dominated by strong out-of-equilibrium effects. On the contrary, RBMs trained in equilibrium display much faster dynamics, and a smooth convergence to dataset-like configurations during the sampling. We then discuss how to exploit in practice both regimes depending on the task one aims to fulfill: (i) short $k$s can be used to generate convincing samples in short learning times, (ii) large (or increasingly large) $k$ must be used to learn the correct equilibrium distribution of the RBM. Finally, the existence of these two operational regimes seems to be a general property of energy-based models trained via likelihood maximization.
null
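For reference, a textbook-style sketch of the k-step contrastive divergence update discussed in the abstract above, for a Bernoulli-Bernoulli RBM; the abstract's point is about how the number of Gibbs steps `k` compares to the model's mixing time. This is illustrative code, not the authors' implementation, and it restarts the chain at the data (plain CD rather than persistent CD).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd_k_step(v_data, W, b, c, k=1, lr=0.01):
    """One CD-k update for a Bernoulli-Bernoulli RBM with visible bias b and
    hidden bias c. k is the number of Gibbs steps used for the model term;
    whether k exceeds the mixing time decides the regime in the abstract."""
    h_prob = sigmoid(v_data @ W + c)                       # positive phase
    pos_grad = v_data.T @ h_prob

    v = v_data.copy()                                      # negative phase
    for _ in range(k):                                     # k Gibbs steps
        h = rng.binomial(1, sigmoid(v @ W + c))
        v = rng.binomial(1, sigmoid(h @ W.T + b))
    h_prob_neg = sigmoid(v @ W + c)
    neg_grad = v.T @ h_prob_neg

    n = len(v_data)
    W += lr * (pos_grad - neg_grad) / n
    b += lr * (v_data - v).mean(axis=0)
    c += lr * (h_prob - h_prob_neg).mean(axis=0)
    return W, b, c

n_vis, n_hid = 20, 10
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b, c = np.zeros(n_vis), np.zeros(n_hid)
v_batch = rng.binomial(1, 0.5, size=(32, n_vis)).astype(float)
W, b, c = cd_k_step(v_batch, W, b, c, k=5)
```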
Imitation with Neural Density Models
https://papers.nips.cc/paper_files/paper/2021/hash/2b0aa0d9e30ea3a55fc271ced8364536-Abstract.html
Kuno Kim, Akshat Jindal, Yang Song, Jiaming Song, Yanan Sui, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2021/hash/2b0aa0d9e30ea3a55fc271ced8364536-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12033-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b0aa0d9e30ea3a55fc271ced8364536-Paper.pdf
https://openreview.net/forum?id=cMv0gvg88a
https://papers.nips.cc/paper_files/paper/2021/file/2b0aa0d9e30ea3a55fc271ced8364536-Supplemental.pdf
We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Maximum Occupancy Entropy Reinforcement Learning (RL) using the density as a reward. Our approach maximizes a non-adversarial model-free RL objective that provably lower bounds reverse Kullback–Leibler divergence between occupancy measures of the expert and imitator. We present a practical IL algorithm, Neural Density Imitation (NDI), which obtains state-of-the-art demonstration efficiency on benchmark control tasks.
null
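A rough sketch of the reward construction described in the abstract above: estimate a density over expert state-action pairs and use its log-density as the reward for a maximum-occupancy-entropy RL learner. A Gaussian KDE is used here only as a simple stand-in for the neural density models in the paper; all sizes and names are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Expert (state, action) samples define an occupancy density; its log-density
# is then used as the reward for a max-occupancy-entropy RL learner.
expert_sa = rng.standard_normal((500, 4))        # concatenated (state, action)
density = gaussian_kde(expert_sa.T)              # stand-in for a neural density model

def imitation_reward(state, action):
    """r(s, a) = log rho_expert(s, a), evaluated with the fitted density."""
    sa = np.concatenate([state, action])[:, None]
    return float(density.logpdf(sa))

print(imitation_reward(np.zeros(2), np.zeros(2)))
```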
Accurate Point Cloud Registration with Robust Optimal Transport
https://papers.nips.cc/paper_files/paper/2021/hash/2b0f658cbffd284984fb11d90254081f-Abstract.html
Zhengyang Shen, Jean Feydy, Peirong Liu, Ariel H Curiale, Ruben San Jose Estepar, Raul San Jose Estepar, Marc Niethammer
https://papers.nips.cc/paper_files/paper/2021/hash/2b0f658cbffd284984fb11d90254081f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12034-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b0f658cbffd284984fb11d90254081f-Paper.pdf
https://openreview.net/forum?id=TlE6Ar1sRsR
https://papers.nips.cc/paper_files/paper/2021/file/2b0f658cbffd284984fb11d90254081f-Supplemental.zip
This work investigates the use of robust optimal transport (OT) for shape matching. Specifically, we show that recent OT solvers improve both optimization-based and deep learning methods for point cloud registration, boosting accuracy at an affordable computational cost. This manuscript starts with a practical overview of modern OT theory. We then provide solutions to the main difficulties in using this framework for shape matching. Finally, we showcase the performance of transport-enhanced registration models on a wide range of challenging tasks: rigid registration for partial shapes; scene flow estimation on the Kitti dataset; and nonparametric registration of lung vascular trees between inspiration and expiration. Our OT-based methods achieve state-of-the-art results on Kitti and for the challenging lung registration task, both in terms of accuracy and scalability. We also release PVT1010, a new public dataset of 1,010 pairs of lung vascular trees with densely sampled points. This dataset provides a challenging use case for point cloud registration algorithms with highly complex shapes and deformations. Our work demonstrates that robust OT enables fast pre-alignment and fine-tuning for a wide range of registration models, thereby providing a new key method for the computer vision toolbox. Our code and dataset are available online at: https://github.com/uncbiag/robot.
null
Simple steps are all you need: Frank-Wolfe and generalized self-concordant functions
https://papers.nips.cc/paper_files/paper/2021/hash/2b323d6eb28422cef49b266557dd31ad-Abstract.html
Alejandro Carderera, Mathieu Besançon, Sebastian Pokutta
https://papers.nips.cc/paper_files/paper/2021/hash/2b323d6eb28422cef49b266557dd31ad-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12035-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b323d6eb28422cef49b266557dd31ad-Paper.pdf
https://openreview.net/forum?id=rq_UD6IiBpX
https://papers.nips.cc/paper_files/paper/2021/file/2b323d6eb28422cef49b266557dd31ad-Supplemental.pdf
Generalized self-concordance is a key property present in the objective function of many important learning problems. We establish the convergence rate of a simple Frank-Wolfe variant that uses the open-loop step size strategy $\gamma_t = 2/(t+2)$, obtaining a $\mathcal{O}(1/t)$ convergence rate for this class of functions in terms of primal gap and Frank-Wolfe gap, where $t$ is the iteration count. This avoids the use of second-order information or the need to estimate local smoothness parameters of previous work. We also show improved convergence rates for various common cases, e.g., when the feasible region under consideration is uniformly convex or polyhedral.
null
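A small numpy sketch of the open-loop step-size rule $\gamma_t = 2/(t+2)$ quoted in the abstract above, applied to Frank-Wolfe over the probability simplex. The quadratic objective is a toy choice (and not generalized self-concordant); it only illustrates the update.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, iters=200):
    """Frank-Wolfe over the probability simplex using the open-loop step
    size gamma_t = 2 / (t + 2) from the abstract."""
    x = x0.copy()
    for t in range(iters):
        g = grad_f(x)
        v = np.zeros_like(x)
        v[np.argmin(g)] = 1.0            # linear minimization oracle: a vertex
        gamma = 2.0 / (t + 2.0)          # open-loop step size
        x = (1.0 - gamma) * x + gamma * v
    return x

rng = np.random.default_rng(1)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
grad = lambda x: A.T @ (A @ x - b)       # gradient of 0.5 * ||A x - b||^2
x_hat = frank_wolfe_simplex(grad, np.ones(10) / 10)
print(x_hat.sum(), x_hat.min())          # stays a valid point of the simplex
```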
Automatic Data Augmentation for Generalization in Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/2b38c2df6a49b97f706ec9148ce48d86-Abstract.html
Roberta Raileanu, Maxwell Goldstein, Denis Yarats, Ilya Kostrikov, Rob Fergus
https://papers.nips.cc/paper_files/paper/2021/hash/2b38c2df6a49b97f706ec9148ce48d86-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12036-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b38c2df6a49b97f706ec9148ce48d86-Paper.pdf
https://openreview.net/forum?id=FChSjfcJZVW
https://papers.nips.cc/paper_files/paper/2021/file/2b38c2df6a49b97f706ec9148ce48d86-Supplemental.pdf
Deep reinforcement learning (RL) agents often fail to generalize beyond their training environments. To alleviate this problem, recent work has proposed the use of data augmentation. However, different tasks tend to benefit from different types of augmentations and selecting the right one typically requires expert knowledge. In this paper, we introduce three approaches for automatically finding an effective augmentation for any RL task. These are combined with two novel regularization terms for the policy and value function, required to make the use of data augmentation theoretically sound for actor-critic algorithms. Our method achieves a new state-of-the-art on the Procgen benchmark and outperforms popular RL algorithms on DeepMind Control tasks with distractors. In addition, our agent learns policies and representations which are more robust to changes in the environment that are irrelevant for solving the task, such as the background.
null
Blending Anti-Aliasing into Vision Transformer
https://papers.nips.cc/paper_files/paper/2021/hash/2b3bf3eee2475e03885a110e9acaab61-Abstract.html
Shengju Qian, Hao Shao, Yi Zhu, Mu Li, Jiaya Jia
https://papers.nips.cc/paper_files/paper/2021/hash/2b3bf3eee2475e03885a110e9acaab61-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12037-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b3bf3eee2475e03885a110e9acaab61-Paper.pdf
https://openreview.net/forum?id=0-0Wk0t6A_Z
https://papers.nips.cc/paper_files/paper/2021/file/2b3bf3eee2475e03885a110e9acaab61-Supplemental.pdf
The transformer architectures, based on the self-attention mechanism and a convolution-free design, have recently found superior performance and booming applications in computer vision. However, the discontinuous patch-wise tokenization process implicitly introduces jagged artifacts into attention maps, raising the traditional problem of aliasing for vision transformers. The aliasing effect occurs when discrete patterns are used to produce high-frequency or continuous information, resulting in indistinguishable distortions. Recent research has found that modern convolution networks still suffer from this phenomenon. In this work, we analyze the uncharted problem of aliasing in vision transformers and explore how to incorporate anti-aliasing properties. Specifically, we propose a plug-and-play Aliasing-Reduction Module (ARM) to alleviate the aforementioned issue. We investigate the effectiveness and generalization of the proposed method across multiple tasks and various vision transformer families. This lightweight design consistently attains a clear boost over several famous structures. Furthermore, our module also improves the data efficiency and robustness of vision transformers.
null
A Trainable Spectral-Spatial Sparse Coding Model for Hyperspectral Image Restoration
https://papers.nips.cc/paper_files/paper/2021/hash/2b515e2bdd63b7f034269ad747c93a42-Abstract.html
Theo Bodrito, Alexandre Zouaoui, Jocelyn Chanussot, Julien Mairal
https://papers.nips.cc/paper_files/paper/2021/hash/2b515e2bdd63b7f034269ad747c93a42-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12038-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b515e2bdd63b7f034269ad747c93a42-Paper.pdf
https://openreview.net/forum?id=8Yrcy55iHE
https://papers.nips.cc/paper_files/paper/2021/file/2b515e2bdd63b7f034269ad747c93a42-Supplemental.pdf
Hyperspectral imaging offers new perspectives for diverse applications, ranging from environmental monitoring using airborne or satellite remote sensing to precision farming, food safety, planetary exploration, and astrophysics. Unfortunately, the spectral diversity of information comes at the expense of various sources of degradation, and the lack of accurate ground-truth "clean" hyperspectral signals acquired on the spot makes restoration tasks challenging. In particular, training deep neural networks for restoration is difficult, in contrast to traditional RGB imaging problems where deep models tend to shine. In this paper, we advocate instead for a hybrid approach based on sparse coding principles that retains the interpretability of classical techniques encoding domain knowledge with handcrafted image priors, while allowing model parameters to be trained end-to-end without massive amounts of data. We show on various denoising benchmarks that our method is computationally efficient and significantly outperforms the state of the art.
null
Posterior Collapse and Latent Variable Non-identifiability
https://papers.nips.cc/paper_files/paper/2021/hash/2b6921f2c64dee16ba21ebf17f3c2c92-Abstract.html
Yixin Wang, David Blei, John P. Cunningham
https://papers.nips.cc/paper_files/paper/2021/hash/2b6921f2c64dee16ba21ebf17f3c2c92-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12039-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b6921f2c64dee16ba21ebf17f3c2c92-Paper.pdf
https://openreview.net/forum?id=ejAu7ugNj_M
https://papers.nips.cc/paper_files/paper/2021/file/2b6921f2c64dee16ba21ebf17f3c2c92-Supplemental.pdf
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
null
The Benefits of Implicit Regularization from SGD in Least Squares Problems
https://papers.nips.cc/paper_files/paper/2021/hash/2b6bb5354a56ce256116b6b307a1ea10-Abstract.html
Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, Sham Kakade
https://papers.nips.cc/paper_files/paper/2021/hash/2b6bb5354a56ce256116b6b307a1ea10-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12040-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b6bb5354a56ce256116b6b307a1ea10-Paper.pdf
https://openreview.net/forum?id=4XOrn_Y-dqp
https://papers.nips.cc/paper_files/paper/2021/file/2b6bb5354a56ce256116b6b307a1ea10-Supplemental.pdf
Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice, which has been hypothesized to play an important role in the generalization of modern machine learning approaches. In this work, we seek to understand these issues in the simpler setting of linear regression (including both underparameterized and overparameterized regimes), where our goal is to make sharp instance-based comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression. For a broad class of least squares problem instances (that are natural in high-dimensional settings), we show: (1) for every problem instance and for every ridge parameter, (unregularized) SGD, when provided with \emph{logarithmically} more samples than that provided to the ridge algorithm, generalizes no worse than the ridge solution (provided SGD uses a tuned constant stepsize); (2) conversely, there exist instances (in this wide problem class) where optimally-tuned ridge regression requires \emph{quadratically} more samples than SGD in order to have the same generalization performance. Taken together, our results show that, up to the logarithmic factors, the generalization performance of SGD is always no worse than that of ridge regression in a wide range of overparameterized problems, and, in fact, could be much better for some problem instances. More generally, our results show how algorithmic regularization has important consequences even in simpler (overparameterized) convex settings.
null
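A toy numpy comparison of the two procedures contrasted in the abstract above: constant-stepsize SGD with iterate averaging on a least-squares problem versus the closed-form ridge solution. Sample sizes, the step size, and the ridge parameter are arbitrary illustrative choices, not the tuned settings the theory refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 50
w_true = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Explicit regularization: closed-form ridge regression.
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Implicit regularization: one pass of constant-stepsize SGD with averaging.
w, w_avg, step = np.zeros(d), np.zeros(d), 0.01
for i in range(n):
    xi, yi = X[i], y[i]
    w -= step * (xi @ w - yi) * xi        # stochastic gradient of the squared loss
    w_avg += w
w_avg /= n

print("ridge error  :", np.linalg.norm(w_ridge - w_true))
print("avg-SGD error:", np.linalg.norm(w_avg - w_true))
```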
Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks
https://papers.nips.cc/paper_files/paper/2021/hash/2b763288faedb7707c0748abe015ab6c-Abstract.html
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
https://papers.nips.cc/paper_files/paper/2021/hash/2b763288faedb7707c0748abe015ab6c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12041-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2b763288faedb7707c0748abe015ab6c-Paper.pdf
https://openreview.net/forum?id=09-zkOYoVof
https://papers.nips.cc/paper_files/paper/2021/file/2b763288faedb7707c0748abe015ab6c-Supplemental.pdf
In this paper, we study the generalization properties of Model-Agnostic Meta-Learning (MAML) algorithms for supervised learning problems. We focus on the setting in which we train the MAML model over $m$ tasks, each with $n$ data points, and characterize its generalization error from two points of view: First, we assume the new task at test time is one of the training tasks, and we show that, for strongly convex objective functions, the expected excess population loss is bounded by $\mathcal{O}(1/mn)$. Second, we consider the MAML algorithm's generalization to an unseen task and show that the resulting generalization error depends on the total variation distance between the underlying distributions of the new task and the tasks observed during the training process. Our proof techniques rely on the connections between algorithmic stability and generalization bounds of algorithms. In particular, we propose a new definition of stability for meta-learning algorithms, which allows us to capture the role of both the number of tasks $m$ and number of samples per task $n$ on the generalization error of MAML.
null
Factored Policy Gradients: Leveraging Structure for Efficient Learning in MOMDPs
https://papers.nips.cc/paper_files/paper/2021/hash/2ba8698b79439589fdd2b0f7218d8b07-Abstract.html
Thomas Spooner, Nelson Vadori, Sumitra Ganesh
https://papers.nips.cc/paper_files/paper/2021/hash/2ba8698b79439589fdd2b0f7218d8b07-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12042-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2ba8698b79439589fdd2b0f7218d8b07-Paper.pdf
https://openreview.net/forum?id=NXGnwTLlWiR
https://papers.nips.cc/paper_files/paper/2021/file/2ba8698b79439589fdd2b0f7218d8b07-Supplemental.pdf
Policy gradient methods can solve complex tasks but often fail when the dimensionality of the action space or objective multiplicity grows very large. This occurs, in part, because the variance of score-based gradient estimators scales quadratically. In this paper, we address this problem through a factor baseline which exploits independence structure encoded in a novel action-target influence network. The resulting factored policy gradients (FPGs) provide a common framework for analysing key state-of-the-art algorithms, are shown to generalise traditional policy gradients, and yield a principled way of incorporating prior knowledge of a problem domain's generative processes. We provide an analysis of the proposed estimator and identify the conditions under which variance is reduced. The algorithmic aspects of FPGs are discussed, including optimal policy factorisation, as characterised by minimum biclique coverings, and the implications of incorrectly specifying the network for the bias-variance trade-off. Finally, we demonstrate the performance advantages of our algorithm on large-scale bandit and traffic intersection problems, providing a novel contribution to the latter in the form of a spatial approximation.
null
MarioNette: Self-Supervised Sprite Learning
https://papers.nips.cc/paper_files/paper/2021/hash/2bcab9d935d219641434683dd9d18a03-Abstract.html
Dmitriy Smirnov, MICHAEL GHARBI, Matthew Fisher, Vitor Guizilini, Alexei Efros, Justin M. Solomon
https://papers.nips.cc/paper_files/paper/2021/hash/2bcab9d935d219641434683dd9d18a03-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12043-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2bcab9d935d219641434683dd9d18a03-Paper.pdf
https://openreview.net/forum?id=3zP6RrQtNa
null
Artists and video game designers often construct 2D animations using libraries of sprites---textured patches of objects and characters. We propose a deep learning approach that decomposes sprite-based video animations into a disentangled representation of recurring graphic elements in a self-supervised manner. By jointly learning a dictionary of possibly transparent patches and training a network that places them onto a canvas, we deconstruct sprite-based content into a sparse, consistent, and explicit representation that can be easily used in downstream tasks, like editing or analysis. Our framework offers a promising approach for discovering recurring visual patterns in image collections without supervision.
null
RLlib Flow: Distributed Reinforcement Learning is a Dataflow Problem
https://papers.nips.cc/paper_files/paper/2021/hash/2bce32ed409f5ebcee2a7b417ad9beed-Abstract.html
Eric Liang, Zhanghao Wu, Michael Luo, Sven Mika, Joseph E. Gonzalez, Ion Stoica
https://papers.nips.cc/paper_files/paper/2021/hash/2bce32ed409f5ebcee2a7b417ad9beed-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12044-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2bce32ed409f5ebcee2a7b417ad9beed-Paper.pdf
https://openreview.net/forum?id=trNDfee72NQ
https://papers.nips.cc/paper_files/paper/2021/file/2bce32ed409f5ebcee2a7b417ad9beed-Supplemental.pdf
Researchers and practitioners in the field of reinforcement learning (RL) frequently leverage parallel computation, which has led to a plethora of new algorithms and systems in the last few years. In this paper, we re-examine the challenges posed by distributed RL and try to view it through the lens of an old idea: distributed dataflow. We show that viewing RL as a dataflow problem leads to highly composable and performant implementations. We propose RLlib Flow, a hybrid actor-dataflow programming model for distributed RL, and validate its practicality by porting the full suite of algorithms in RLlib, a widely adopted distributed RL library. Concretely, RLlib Flow provides 2-9$\times$ code savings in real production code and enables the composition of multi-agent algorithms not possible by end users before. The open-source code is available as part of RLlib at https://github.com/ray-project/ray/tree/master/rllib.
null
Improve Agents without Retraining: Parallel Tree Search with Off-Policy Correction
https://papers.nips.cc/paper_files/paper/2021/hash/2bd235c31c97855b7ef2dc8b414779af-Abstract.html
Gal Dalal, Assaf Hallak, Steven Dalton, iuri frosio, Shie Mannor, Gal Chechik
https://papers.nips.cc/paper_files/paper/2021/hash/2bd235c31c97855b7ef2dc8b414779af-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12045-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2bd235c31c97855b7ef2dc8b414779af-Paper.pdf
https://openreview.net/forum?id=VjC4uY3_3I
https://papers.nips.cc/paper_files/paper/2021/file/2bd235c31c97855b7ef2dc8b414779af-Supplemental.pdf
Tree Search (TS) is crucial to some of the most influential successes in reinforcement learning. Here, we tackle two major challenges with TS that limit its usability: \textit{distribution shift} and \textit{scalability}. We first discover and analyze a counter-intuitive phenomenon: action selection through TS and a pre-trained value function often leads to lower performance compared to the original pre-trained agent, even when having access to the exact state and reward in future steps. We show this is due to a distribution shift to areas where value estimates are highly inaccurate and analyze this effect using Extreme Value theory. To overcome this problem, we introduce a novel off-policy correction term that accounts for the mismatch between the pre-trained value and its corresponding TS policy by penalizing under-sampled trajectories. We prove that our correction eliminates the above mismatch and bound the probability of sub-optimal action selection. Our correction significantly improves pre-trained Rainbow agents without any further training, often more than doubling their scores on Atari games. Next, we address the scalability issue arising from the computational complexity of exhaustive TS that scales exponentially with the tree depth. We introduce Batch-BFS: a GPU breadth-first search that advances all nodes at each depth of the tree simultaneously. Batch-BFS reduces runtime by two orders of magnitude and, beyond inference, also enables training with TS of depths that were not feasible before. We train DQN agents from scratch using TS and show improvement in several Atari games compared to both the original DQN and the more advanced Rainbow. We will share the code upon publication.
null
Redesigning the Transformer Architecture with Insights from Multi-particle Dynamical Systems
https://papers.nips.cc/paper_files/paper/2021/hash/2bd388f731f26312bfc0fe30da009595-Abstract.html
Subhabrata Dutta, Tanya Gautam, Soumen Chakrabarti, Tanmoy Chakraborty
https://papers.nips.cc/paper_files/paper/2021/hash/2bd388f731f26312bfc0fe30da009595-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12046-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2bd388f731f26312bfc0fe30da009595-Paper.pdf
https://openreview.net/forum?id=e2gqGkFjDHg
https://papers.nips.cc/paper_files/paper/2021/file/2bd388f731f26312bfc0fe30da009595-Supplemental.pdf
The Transformer and its variants have been proven to be efficient sequence learners in many different domains. Despite their staggering success, a critical issue has been the enormous number of parameters that must be trained (ranging from $10^7$ to $10^{11}$) along with the quadratic complexity of dot-product attention. In this work, we investigate the problem of approximating the two central components of the Transformer --- multi-head self-attention and point-wise feed-forward transformation, with reduced parameter space and computational complexity. We build upon recent developments in analyzing deep neural networks as numerical solvers of ordinary differential equations. Taking advantage of an analogy between Transformer stages and the evolution of a dynamical system of multiple interacting particles, we formulate a temporal evolution scheme, TransEvolve, to bypass costly dot-product attention over multiple stacked layers. We perform exhaustive experiments with TransEvolve on well-known encoder-decoder as well as encoder-only tasks. We observe that the degree of approximation (or inversely, the degree of parameter reduction) has different effects on the performance, depending on the task. While in the encoder-decoder regime TransEvolve delivers performance comparable to the original Transformer, in encoder-only tasks it consistently outperforms the Transformer along with several subsequent variants.
null
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/2bd7f907b7f5b6bbd91822c0c7b835f6-Abstract.html
Hanxun Huang, Yisen Wang, Sarah Erfani, Quanquan Gu, James Bailey, Xingjun Ma
https://papers.nips.cc/paper_files/paper/2021/hash/2bd7f907b7f5b6bbd91822c0c7b835f6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12047-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2bd7f907b7f5b6bbd91822c0c7b835f6-Paper.pdf
https://openreview.net/forum?id=OdklztJBBYH
https://papers.nips.cc/paper_files/paper/2021/file/2bd7f907b7f5b6bbd91822c0c7b835f6-Supplemental.pdf
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of defense methods have been proposed to train adversarially robust DNNs, among which adversarial training has demonstrated promising results. However, despite preliminary understandings developed for adversarial training, it is still not clear, from the architectural perspective, what configurations can lead to more robust DNNs. In this paper, we address this gap via a comprehensive investigation on the impact of network width and depth on the robustness of adversarially trained DNNs. Specifically, we make the following key observations: 1) more parameters (higher model capacity) do not necessarily help adversarial robustness; 2) reducing capacity at the last stage (the last group of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness. We also provide a theoretical analysis explaining why such network configurations can help robustness. These architectural insights can help design adversarially robust DNNs.
null
Center Smoothing: Certified Robustness for Networks with Structured Outputs
https://papers.nips.cc/paper_files/paper/2021/hash/2be8328f41144106f7144802f2367487-Abstract.html
Aounon Kumar, Tom Goldstein
https://papers.nips.cc/paper_files/paper/2021/hash/2be8328f41144106f7144802f2367487-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12048-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2be8328f41144106f7144802f2367487-Paper.pdf
https://openreview.net/forum?id=sxjpM-kvVv_
https://papers.nips.cc/paper_files/paper/2021/file/2be8328f41144106f7144802f2367487-Supplemental.pdf
The study of provable adversarial robustness has mostly been limited to classification tasks and models with one-dimensional real-valued outputs. We extend the scope of certifiable robustness to problems with more general and structured outputs like sets, images, language, etc. We model the output space as a metric space under a distance/similarity function, such as intersection-over-union, perceptual similarity, total variation distance, etc. Such models are used in many machine learning problems like image segmentation, object detection, generative models, image/audio-to-text systems, etc. Based on a robustness technique called randomized smoothing, our center smoothing procedure can produce models with the guarantee that the change in the output, as measured by the distance metric, remains small for any norm-bounded adversarial perturbation of the input. We apply our method to create certifiably robust models with disparate output spaces -- from sets to images -- and show that it yields meaningful certificates without significantly degrading the performance of the base model.
null
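A crude sketch of the smoothing step described in the abstract above: evaluate the model on Gaussian-perturbed copies of the input and return a "central" output. The center is approximated here by the sample output with the smallest median distance to the others; the paper's actual procedure and its statistical certificates differ and are defined there.

```python
import numpy as np

def center_smooth(f, x, sigma=0.25, n=200, rng=None):
    """Evaluate f on Gaussian-perturbed copies of x and return the sample
    output with the smallest median distance to the others, used here as a
    crude proxy for the 'center' of the smoothed output distribution."""
    rng = rng or np.random.default_rng(0)
    outs = np.stack([f(x + sigma * rng.standard_normal(x.shape)) for _ in range(n)])
    dists = np.linalg.norm(outs[:, None, :] - outs[None, :, :], axis=-1)
    return outs[np.argmin(np.median(dists, axis=1))]

# Toy structured-output "model": maps a 5-D input to a 2-D output.
f = lambda v: np.array([np.sin(v).sum(), np.cos(v).sum()])
print(center_smooth(f, np.ones(5)))
```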
Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures
https://papers.nips.cc/paper_files/paper/2021/hash/2c27a260f16ad3098393cc529f391f4a-Abstract.html
Zhaozhuo Xu, Zhao Song, Anshumali Shrivastava
https://papers.nips.cc/paper_files/paper/2021/hash/2c27a260f16ad3098393cc529f391f4a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12049-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2c27a260f16ad3098393cc529f391f4a-Paper.pdf
https://openreview.net/forum?id=TrgTdDW4ta
https://papers.nips.cc/paper_files/paper/2021/file/2c27a260f16ad3098393cc529f391f4a-Supplemental.pdf
Conditional gradient methods (CGM) are widely used in modern machine learning. A CGM's overall running time usually consists of two parts: the number of iterations and the cost of each iteration. Most efforts focus on reducing the number of iterations as a means to reduce the overall running time. In this work, we focus on improving the per-iteration cost of CGM. The bottleneck step in most CGMs is maximum inner product search (MaxIP), which requires a linear scan over the parameters. In practice, approximate MaxIP data structures are found to be helpful heuristics. However, theoretically, nothing is known about the combination of approximate MaxIP data structures and CGM. In this work, we answer this question positively by providing a formal framework to combine locality-sensitive-hashing-type approximate MaxIP data structures with CGM algorithms. As a result, we obtain the first algorithms whose cost per iteration is sublinear in the number of parameters for many fundamental optimization methods, e.g., Frank-Wolfe, the Herding algorithm, and policy gradient.
null
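For concreteness, a sketch of the exact maximum inner product search that the abstract above identifies as the per-iteration bottleneck: a linear scan over all atoms/parameters. The LSH-based approximate MaxIP data structures in the paper replace this scan with a sublinear lookup; none of that machinery is shown here.

```python
import numpy as np

def exact_maxip(query, atoms):
    """Exact maximum inner product search: a linear scan over all atoms.
    This O(#parameters) step is the per-iteration bottleneck; approximate
    (e.g. LSH-based) MaxIP structures replace it with a sublinear lookup."""
    scores = atoms @ query
    best = int(np.argmax(scores))
    return best, float(scores[best])

rng = np.random.default_rng(0)
atoms = rng.standard_normal((100_000, 64))   # e.g. Frank-Wolfe vertices/atoms
query = rng.standard_normal(64)              # e.g. a negative gradient direction
idx, val = exact_maxip(query, atoms)
print(idx, val)
```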
Neural Regression, Representational Similarity, Model Zoology & Neural Taskonomy at Scale in Rodent Visual Cortex
https://papers.nips.cc/paper_files/paper/2021/hash/2c29d89cc56cdb191c60db2f0bae796b-Abstract.html
Colin Conwell, David Mayo, Andrei Barbu, Michael Buice, George Alvarez, Boris Katz
https://papers.nips.cc/paper_files/paper/2021/hash/2c29d89cc56cdb191c60db2f0bae796b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12050-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2c29d89cc56cdb191c60db2f0bae796b-Paper.pdf
https://openreview.net/forum?id=rJwDMui8DI
https://papers.nips.cc/paper_files/paper/2021/file/2c29d89cc56cdb191c60db2f0bae796b-Supplemental.pdf
How well do deep neural networks fare as models of mouse visual cortex? A majority of research to date suggests results far more mixed than those produced in the modeling of primate visual cortex. Here, we perform a large-scale benchmarking of dozens of deep neural network models in mouse visual cortex with both representational similarity analysis and neural regression. Using the Allen Brain Observatory's 2-photon calcium-imaging dataset of activity in over 6,000 reliable rodent visual cortical neurons recorded in response to natural scenes, we replicate previous findings and resolve previous discrepancies, ultimately demonstrating that modern neural networks can in fact be used to explain activity in the mouse visual cortex to a more reasonable degree than previously suggested. Using our benchmark as an atlas, we offer preliminary answers to overarching questions about levels of analysis (e.g. do models that better predict the representations of individual neurons also predict representational similarity across neural populations?); questions about the properties of models that best predict the visual system overall (e.g. is convolution or category-supervision necessary to better predict neural activity?); and questions about the mapping between biological and artificial representations (e.g. does the information processing hierarchy in deep nets match the anatomical hierarchy of mouse visual cortex?). Along the way, we catalogue a number of models (including vision transformers, MLP-Mixers, normalization free networks, Taskonomy encoders and self-supervised models) outside the traditional circuit of convolutional object recognition. Taken together, our results provide a reference point for future ventures in the deep neural network modeling of mouse visual cortex, hinting at novel combinations of mapping method, architecture, and task to more fully characterize the computational motifs of visual representation in a species so central to neuroscience, but with a perceptual physiology and ecology markedly different from the ones we study in primates.
null
A Topological Perspective on Causal Inference
https://papers.nips.cc/paper_files/paper/2021/hash/2c463dfdde588f3bfc60d53118c10d6b-Abstract.html
Duligur Ibeling, Thomas Icard
https://papers.nips.cc/paper_files/paper/2021/hash/2c463dfdde588f3bfc60d53118c10d6b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12051-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2c463dfdde588f3bfc60d53118c10d6b-Paper.pdf
https://openreview.net/forum?id=pMvBiSLGTeU
https://papers.nips.cc/paper_files/paper/2021/file/2c463dfdde588f3bfc60d53118c10d6b-Supplemental.pdf
This paper presents a topological learning-theoretic perspective on causal inference by introducing a series of topologies defined on general spaces of structural causal models (SCMs). As an illustration of the framework we prove a topological causal hierarchy theorem, showing that substantive assumption-free causal inference is possible only in a meager set of SCMs. Thanks to a known correspondence between open sets in the weak topology and statistically verifiable hypotheses, our results show that inductive assumptions sufficient to license valid causal inferences are statistically unverifiable in principle. Similar to no-free-lunch theorems for statistical inference, the present results clarify the inevitability of substantial assumptions for causal inference. An additional benefit of our topological approach is that it easily accommodates SCMs with infinitely many variables. We finally suggest that our framework may be helpful for the positive project of exploring and assessing alternative causal-inductive assumptions.
null
Parameter Inference with Bifurcation Diagrams
https://papers.nips.cc/paper_files/paper/2021/hash/2c6ae45a3e88aee548c0714fad7f8269-Abstract.html
Gregory Szep, Neil Dalchau, Attila Csikász-Nagy
https://papers.nips.cc/paper_files/paper/2021/hash/2c6ae45a3e88aee548c0714fad7f8269-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12052-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2c6ae45a3e88aee548c0714fad7f8269-Paper.pdf
https://openreview.net/forum?id=MvTnc_c4xYj
https://papers.nips.cc/paper_files/paper/2021/file/2c6ae45a3e88aee548c0714fad7f8269-Supplemental.pdf
Estimation of parameters in differential equation models can be achieved by applying learning algorithms to quantitative time-series data. However, sometimes it is only possible to measure qualitative changes of a system in response to a controlled condition. In dynamical systems theory, such change points are known as bifurcations and lie on a function of the controlled condition called the bifurcation diagram. In this work, we propose a gradient-based approach for inferring the parameters of differential equations that produce a user-specified bifurcation diagram. The cost function contains an error term that is minimal when the model bifurcations match the specified targets and a bifurcation measure which has gradients that push optimisers towards bifurcating parameter regimes. The gradients can be computed without the need to differentiate through the operations of the solver that was used to compute the diagram. We demonstrate parameter inference with minimal models which explore the space of saddle-node and pitchfork diagrams and the genetic toggle switch from synthetic biology. Furthermore, the cost landscape allows us to organise models in terms of topological and geometric equivalence.
null
Scalable Thompson Sampling using Sparse Gaussian Process Models
https://papers.nips.cc/paper_files/paper/2021/hash/2c7f9ccb5a39073e24babc3a4cb45e60-Abstract.html
Sattar Vakili, Henry Moss, Artem Artemev, Vincent Dutordoir, Victor Picheny
https://papers.nips.cc/paper_files/paper/2021/hash/2c7f9ccb5a39073e24babc3a4cb45e60-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12053-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2c7f9ccb5a39073e24babc3a4cb45e60-Paper.pdf
https://openreview.net/forum?id=A3TwMRCqWUn
https://papers.nips.cc/paper_files/paper/2021/file/2c7f9ccb5a39073e24babc3a4cb45e60-Supplemental.pdf
Thompson Sampling (TS) from Gaussian Process (GP) models is a powerful tool for the optimization of black-box functions. Although TS enjoys strong theoretical guarantees and convincing empirical performance, it incurs a large computational overhead that scales polynomially with the optimization budget. Recently, scalable TS methods based on sparse GP models have been proposed to increase the scope of TS, enabling its application to problems that are sufficiently multi-modal, noisy or combinatorial to require more than a few hundred evaluations to be solved. However, the approximation error introduced by sparse GPs invalidates all existing regret bounds. In this work, we perform a theoretical and empirical analysis of scalable TS. We provide theoretical guarantees and show that the drastic reduction in computational complexity of scalable TS can be enjoyed without loss in the regret performance over the standard TS. These conceptual claims are validated for practical implementations of scalable TS on synthetic benchmarks and as part of a real-world high-throughput molecular design task.
null
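A compact sketch of the standard exact-GP Thompson sampling loop that the sparse-GP methods in the abstract above are designed to scale up: draw one sample from the posterior over a candidate set, query its argmax, and repeat. The kernel, noise level, and toy objective are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ls=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    """Posterior mean and covariance of an exact GP at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    sol = np.linalg.solve(K, Ks)
    return sol.T @ y, Kss - Ks.T @ sol

f = lambda x: np.sin(6 * x[:, 0]) + 0.1 * rng.standard_normal(len(x))  # black box
cand = rng.uniform(0, 1, size=(200, 1))       # candidate points
X = rng.uniform(0, 1, size=(3, 1))
y = f(X)

for _ in range(20):                           # Thompson sampling loop
    mu, cov = gp_posterior(X, y, cand)
    draw = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(cand)))
    x_next = cand[[np.argmax(draw)]]          # maximizer of the posterior sample
    X, y = np.vstack([X, x_next]), np.concatenate([y, f(x_next)])

print("best observed value:", y.max())
```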
Robust Counterfactual Explanations on Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/2c8c3a57383c63caef6724343eb62257-Abstract.html
Mohit Bajaj, Lingyang Chu, Zi Yu Xue, Jian Pei, Lanjun Wang, Peter Cho-Ho Lam, Yong Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/2c8c3a57383c63caef6724343eb62257-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12054-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2c8c3a57383c63caef6724343eb62257-Paper.pdf
https://openreview.net/forum?id=wGmOLwb8ClT
https://papers.nips.cc/paper_files/paper/2021/file/2c8c3a57383c63caef6724343eb62257-Supplemental.pdf
Massive deployment of Graph Neural Networks (GNNs) in high-stake applications generates a strong demand for explanations that are robust to noise and align well with human intuition. Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction. These explanations are not robust to noise because independently optimizing the correlation for a single input can easily overfit noise. Moreover, they are not counterfactual because removing an identified subgraph from an input graph does not necessarily change the prediction result. In this paper, we propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs. Our explanations are naturally robust to noise because they are produced from the common decision boundaries of a GNN that govern the predictions of many similar input graphs. The explanations are also counterfactual because removing the set of edges identified by an explanation from the input graph changes the prediction significantly. Exhaustive experiments on many public datasets demonstrate the superior performance of our method.
null
Similarity and Matching of Neural Network Representations
https://papers.nips.cc/paper_files/paper/2021/hash/2cb274e6ce940f47beb8011d8ecb1462-Abstract.html
Adrián Csiszárik, Péter Kőrösi-Szabó, Ákos Matszangosz, Gergely Papp, Dániel Varga
https://papers.nips.cc/paper_files/paper/2021/hash/2cb274e6ce940f47beb8011d8ecb1462-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12055-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2cb274e6ce940f47beb8011d8ecb1462-Paper.pdf
https://openreview.net/forum?id=aedFIIRRfXr
https://papers.nips.cc/paper_files/paper/2021/file/2cb274e6ce940f47beb8011d8ecb1462-Supplemental.pdf
We employ a toolset --- dubbed Dr. Frankenstein --- to analyse the similarity of representations in deep neural networks. With this toolset we aim to match the activations on given layers of two trained neural networks by joining them with a stitching layer. We demonstrate that the inner representations emerging in deep convolutional neural networks with the same architecture but different initialisations can be matched with a surprisingly high degree of accuracy even with a single, affine stitching layer. We choose the stitching layer from several possible classes of linear transformations and investigate their performance and properties. The task of matching representations is closely related to notions of similarity. Using this toolset we also provide a novel viewpoint on the current line of research regarding similarity indices of neural network representations: the perspective of the performance on a task.
null
DOCTOR: A Simple Method for Detecting Misclassification Errors
https://papers.nips.cc/paper_files/paper/2021/hash/2cb6b10338a7fc4117a80da24b582060-Abstract.html
Federica Granese, Marco Romanelli, Daniele Gorla, Catuscia Palamidessi, Pablo Piantanida
https://papers.nips.cc/paper_files/paper/2021/hash/2cb6b10338a7fc4117a80da24b582060-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12056-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2cb6b10338a7fc4117a80da24b582060-Paper.pdf
https://openreview.net/forum?id=FHQBDiMwvK
https://papers.nips.cc/paper_files/paper/2021/file/2cb6b10338a7fc4117a80da24b582060-Supplemental.pdf
Deep neural networks (DNNs) have been shown to perform very well on large-scale object recognition problems, leading to widespread use in real-world applications, including situations where DNNs are implemented as “black boxes”. A promising approach to secure their use is to accept decisions that are likely to be correct while discarding the others. In this work, we propose DOCTOR, a simple method that aims to identify whether the prediction of a DNN classifier should (or should not) be trusted so that, consequently, it would be possible to accept it or to reject it. Two scenarios are investigated: Totally Black Box (TBB), where only the soft-predictions are available, and Partially Black Box (PBB), where gradient-propagation to perform input pre-processing is allowed. Empirically, we show that DOCTOR outperforms all state-of-the-art methods on various well-known image and sentiment analysis datasets. In particular, we observe a reduction of up to 4% of the false rejection rate (FRR) in the PBB scenario. DOCTOR can be applied to any pre-trained model, does not require prior information about the underlying dataset, and is as simple as the simplest available methods in the literature.
null
Contrastive Laplacian Eigenmaps
https://papers.nips.cc/paper_files/paper/2021/hash/2d1b2a5ff364606ff041650887723470-Abstract.html
Hao Zhu, Ke Sun, Peter Koniusz
https://papers.nips.cc/paper_files/paper/2021/hash/2d1b2a5ff364606ff041650887723470-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12057-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d1b2a5ff364606ff041650887723470-Paper.pdf
https://openreview.net/forum?id=iLn-bhP-kKH
https://papers.nips.cc/paper_files/paper/2021/file/2d1b2a5ff364606ff041650887723470-Supplemental.pdf
Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. In this paper, we extend the celebrated Laplacian Eigenmaps with contrastive learning, and call them COntrastive Laplacian EigenmapS (COLES). Starting from a GAN-inspired contrastive formulation, we show that the Jensen-Shannon divergence underlying many contrastive graph embedding models fails under disjoint positive and negative distributions, which may naturally emerge during sampling in the contrastive setting. In contrast, we demonstrate analytically that COLES essentially minimizes a surrogate of Wasserstein distance, which is known to cope well under disjoint distributions. Moreover, we show that the loss of COLES belongs to the family of so-called block-contrastive losses, previously shown to be superior compared to pair-wise losses typically used by contrastive methods. We show on popular benchmarks/backbones that COLES offers favourable accuracy/scalability compared to DeepWalk, GCN, Graph2Gauss, DGI and GRACE baselines.
null
Machine learning structure preserving brackets for forecasting irreversible processes
https://papers.nips.cc/paper_files/paper/2021/hash/2d1bcedd27b586d2a9562a0f8e076b41-Abstract.html
Kookjin Lee, Nathaniel Trask, Panos Stinis
https://papers.nips.cc/paper_files/paper/2021/hash/2d1bcedd27b586d2a9562a0f8e076b41-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12058-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d1bcedd27b586d2a9562a0f8e076b41-Paper.pdf
https://openreview.net/forum?id=ntAkYRaIfox
https://papers.nips.cc/paper_files/paper/2021/file/2d1bcedd27b586d2a9562a0f8e076b41-Supplemental.pdf
Forecasting of time-series data requires imposition of inductive biases to obtain predictive extrapolation, and recent works have imposed Hamiltonian/Lagrangian form to preserve structure for systems with \emph{reversible} dynamics. In this work we present a novel parameterization of dissipative brackets from metriplectic dynamical systems appropriate for learning \emph{irreversible} dynamics with unknown a priori model form. The process learns generalized Casimirs for energy and entropy guaranteed to be conserved and nondecreasing, respectively. Furthermore, for the case of added thermal noise, we guarantee exact preservation of a fluctuation-dissipation theorem, ensuring thermodynamic consistency. We provide benchmarks for dissipative systems demonstrating learned dynamics are more robust and generalize better than either "black-box" or penalty-based approaches.
null
On the Variance of the Fisher Information for Deep Learning
https://papers.nips.cc/paper_files/paper/2021/hash/2d290e496d16c9dcaa9b4ded5cac10cc-Abstract.html
Alexander Soen, Ke Sun
https://papers.nips.cc/paper_files/paper/2021/hash/2d290e496d16c9dcaa9b4ded5cac10cc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12059-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d290e496d16c9dcaa9b4ded5cac10cc-Paper.pdf
https://openreview.net/forum?id=XGSQfOVxVp4
https://papers.nips.cc/paper_files/paper/2021/file/2d290e496d16c9dcaa9b4ded5cac10cc-Supplemental.pdf
In the realm of deep learning, the Fisher information matrix (FIM) gives novel insights and useful tools to characterize the loss landscape, perform second-order optimization, and build geometric learning theories. The exact FIM is either unavailable in closed form or too expensive to compute. In practice, it is almost always estimated based on empirical samples. We investigate two such estimators based on two equivalent representations of the FIM --- both unbiased and consistent. Their estimation quality is naturally gauged by their variance given in closed form. We analyze how the parametric structure of a deep neural network can affect the variance. The meaning of this variance measure and its upper bounds are then discussed in the context of deep learning.
null
A$^2$-Net: Learning Attribute-Aware Hash Codes for Large-Scale Fine-Grained Image Retrieval
https://papers.nips.cc/paper_files/paper/2021/hash/2d3acd3e240c61820625fff66a19938f-Abstract.html
Xiu-Shen Wei, Yang Shen, Xuhao Sun, Han-Jia Ye, Jian Yang
https://papers.nips.cc/paper_files/paper/2021/hash/2d3acd3e240c61820625fff66a19938f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12060-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d3acd3e240c61820625fff66a19938f-Paper.pdf
https://openreview.net/forum?id=mIki_kyHpLb
https://papers.nips.cc/paper_files/paper/2021/file/2d3acd3e240c61820625fff66a19938f-Supplemental.pdf
Our work focuses on tackling large-scale fine-grained image retrieval, where images depicting the concept of interest (i.e., the same sub-category label) should be ranked highest based on the fine-grained details in the query. For such a practical task, it is desirable to alleviate both the challenge posed by the fine-grained nature of the data (small inter-class variations combined with large intra-class variations) and the challenge posed by the explosive growth of fine-grained data. In this paper, we propose an Attribute-Aware hashing Network (A$^2$-Net) for generating attribute-aware hash codes that not only make the retrieval process efficient, but also establish explicit correspondences between hash codes and visual attributes. Specifically, based on the visual representations captured by attention, we develop an encoder-decoder network trained on a reconstruction task to distill, without attribute annotations, high-level attribute-specific vectors from the appearance-specific visual representations. A$^2$-Net is also equipped with a feature decorrelation constraint upon these attribute vectors to enhance their representation abilities. Finally, the required hash codes are generated from the attribute vectors while preserving the original similarities. Quantitative experiments on five benchmark fine-grained datasets show our superiority over competing methods. More importantly, qualitative results demonstrate that the obtained hash codes can strongly correspond to certain kinds of crucial properties of fine-grained objects.
null
Shape Registration in the Time of Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/2d3d9d5373f378108cdbd30a3c52bd3e-Abstract.html
Giovanni Trappolini, Luca Cosmo, Luca Moschella, Riccardo Marin, Simone Melzi, Emanuele Rodolà
https://papers.nips.cc/paper_files/paper/2021/hash/2d3d9d5373f378108cdbd30a3c52bd3e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12061-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d3d9d5373f378108cdbd30a3c52bd3e-Paper.pdf
https://openreview.net/forum?id=ui4xChWcA4R
https://papers.nips.cc/paper_files/paper/2021/file/2d3d9d5373f378108cdbd30a3c52bd3e-Supplemental.pdf
In this paper, we propose a transformer-based procedure for the efficient registration of non-rigid 3D point clouds. The proposed approach is data-driven and adopts, for the first time, the transformer architecture for the registration task. Our method is general and applies to different settings. Given a fixed template with some desired properties (e.g. skinning weights or other animation cues), we can register raw acquired data to it, thereby transferring all the template properties to the input geometry. Alternatively, given a pair of shapes, our method can register the first onto the second (or vice-versa), obtaining a high-quality dense correspondence between the two. In both contexts, the quality of our results enables us to target real applications such as texture transfer and shape interpolation. Furthermore, we also show that including an estimation of the underlying density of the surface eases the learning process. By exploiting the potential of this architecture, we can train our model requiring only a sparse set of ground truth correspondences ($10\sim20\%$ of the total points). The proposed model and the analysis that we perform pave the way for future exploration of transformer-based architectures for registration and matching applications. Qualitative and quantitative evaluations demonstrate that our pipeline outperforms state-of-the-art methods for deformable and unordered 3D data registration on different datasets and scenarios.
null
Brick-by-Brick: Combinatorial Construction with Deep Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/2d4027d6df9c0256b8d4474ce88f8c88-Abstract.html
Hyunsoo Chung, Jungtaek Kim, Boris Knyazev, Jinhwi Lee, Graham W. Taylor, Jaesik Park, Minsu Cho
https://papers.nips.cc/paper_files/paper/2021/hash/2d4027d6df9c0256b8d4474ce88f8c88-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12062-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d4027d6df9c0256b8d4474ce88f8c88-Paper.pdf
https://openreview.net/forum?id=c1p817YZAx6
https://papers.nips.cc/paper_files/paper/2021/file/2d4027d6df9c0256b8d4474ce88f8c88-Supplemental.pdf
Discovering a solution in a combinatorial space is prevalent in many real-world problems but it is also challenging due to diverse complex constraints and the vast number of possible combinations. To address such a problem, we introduce a novel formulation, combinatorial construction, which requires a building agent to assemble unit primitives (i.e., LEGO bricks) sequentially -- every connection between two bricks must follow a fixed rule, while no bricks mutually overlap. To construct a target object, we provide incomplete knowledge about the desired target (i.e., 2D images) instead of exact and explicit volumetric information to the agent. This problem requires a comprehensive understanding of partial information and long-term planning to append a brick sequentially, which leads us to employ reinforcement learning. The approach has to consider a variable-sized action space where a large number of invalid actions, which would cause overlap between bricks, exist. To resolve these issues, our model, dubbed Brick-by-Brick, adopts an action validity prediction network that efficiently filters invalid actions for an actor-critic network. We demonstrate that the proposed method successfully learns to construct an unseen object conditioned on a single image or multiple views of a target object.
null
Dissecting the Diffusion Process in Linear Graph Convolutional Networks
https://papers.nips.cc/paper_files/paper/2021/hash/2d95666e2649fcfc6e3af75e09f5adb9-Abstract.html
Yifei Wang, Yisen Wang, Jiansheng Yang, Zhouchen Lin
https://papers.nips.cc/paper_files/paper/2021/hash/2d95666e2649fcfc6e3af75e09f5adb9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12063-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d95666e2649fcfc6e3af75e09f5adb9-Paper.pdf
https://openreview.net/forum?id=N51zJ7F3mw
https://papers.nips.cc/paper_files/paper/2021/file/2d95666e2649fcfc6e3af75e09f5adb9-Supplemental.pdf
Graph Convolutional Networks (GCNs) have attracted more and more attention in recent years. A typical GCN layer consists of a linear feature propagation step and a nonlinear transformation step. Recent works show that a linear GCN can achieve comparable performance to the original non-linear GCN while being much more computationally efficient. In this paper, we dissect the feature propagation steps of linear GCNs from the perspective of continuous graph diffusion, and analyze why linear GCNs fail to benefit from more propagation steps. Following that, we propose Decoupled Graph Convolution (DGC), which decouples the terminal time and the feature propagation steps, making it more flexible and capable of exploiting a very large number of feature propagation steps. Experiments demonstrate that our proposed DGC improves linear GCNs by a large margin and makes them competitive with many modern variants of non-linear GCNs.
null
Dynamic Grained Encoder for Vision Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/2d969e2cee8cfa07ce7ca0bb13c7a36d-Abstract.html
Lin Song, Songyang Zhang, Songtao Liu, Zeming Li, Xuming He, Hongbin Sun, Jian Sun, Nanning Zheng
https://papers.nips.cc/paper_files/paper/2021/hash/2d969e2cee8cfa07ce7ca0bb13c7a36d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12064-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2d969e2cee8cfa07ce7ca0bb13c7a36d-Paper.pdf
https://openreview.net/forum?id=gnAIV-EKw2
https://papers.nips.cc/paper_files/paper/2021/file/2d969e2cee8cfa07ce7ca0bb13c7a36d-Supplemental.pdf
Transformers, the de-facto standard for language modeling, have recently been applied to vision tasks. This paper introduces sparse queries for vision transformers to exploit the intrinsic spatial redundancy of natural images and save computational costs. Specifically, we propose a Dynamic Grained Encoder for vision transformers, which can adaptively assign a suitable number of queries to each spatial region. Thus it achieves a fine-grained representation in discriminative regions while keeping high efficiency. Besides, the dynamic grained encoder is compatible with most vision transformer frameworks. Without bells and whistles, our encoder allows the state-of-the-art vision transformers to reduce computational complexity by 40%-60% while maintaining comparable performance on image classification. Extensive experiments on object detection and segmentation further demonstrate the generalizability of our approach. Code is available at https://github.com/StevenGrove/vtpack.
null
Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/2dace78f80bc92e6d7493423d729448e-Abstract.html
Kento Nozawa, Issei Sato
https://papers.nips.cc/paper_files/paper/2021/hash/2dace78f80bc92e6d7493423d729448e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12065-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2dace78f80bc92e6d7493423d729448e-Paper.pdf
https://openreview.net/forum?id=pZ5X_svdPQ
https://papers.nips.cc/paper_files/paper/2021/file/2dace78f80bc92e6d7493423d729448e-Supplemental.pdf
Instance discriminative self-supervised representation learning has attracted attention thanks to its unsupervised nature and informative feature representation for downstream tasks. In practice, it commonly uses a larger number of negative samples than the number of supervised classes. However, there is an inconsistency in the existing analysis: theoretically, a large number of negative samples degrade classification performance on a downstream supervised task, while empirically, they improve the performance. We provide a novel framework to analyze this empirical result regarding negative samples using the coupon collector's problem. Our bound can implicitly incorporate the supervised loss of the downstream task in the self-supervised loss by increasing the number of negative samples. We confirm that our proposed analysis holds on real-world benchmark datasets.
null
On UMAP's True Loss Function
https://papers.nips.cc/paper_files/paper/2021/hash/2de5d16682c3c35007e4e92982f1a2ba-Abstract.html
Sebastian Damrich, Fred A. Hamprecht
https://papers.nips.cc/paper_files/paper/2021/hash/2de5d16682c3c35007e4e92982f1a2ba-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12066-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2de5d16682c3c35007e4e92982f1a2ba-Paper.pdf
https://openreview.net/forum?id=DKRcikndMGC
https://papers.nips.cc/paper_files/paper/2021/file/2de5d16682c3c35007e4e92982f1a2ba-Supplemental.pdf
UMAP has supplanted $t$-SNE as state-of-the-art for visualizing high-dimensional datasets in many disciplines, but the reason for its success is not well understood. In this work, we investigate UMAP's sampling based optimization scheme in detail. We derive UMAP's true loss function in closed form and find that it differs from the published one in a dataset size dependent way. As a consequence, we show that UMAP does not aim to reproduce its theoretically motivated high-dimensional UMAP similarities. Instead, it tries to reproduce similarities that only encode the $k$ nearest neighbor graph, thereby challenging the previous understanding of UMAP's effectiveness. Alternatively, we consider the implicit balancing of attraction and repulsion due to the negative sampling to be key to UMAP's success. We corroborate our theoretical findings on toy and single cell RNA sequencing data.
null
Fast Pure Exploration via Frank-Wolfe
https://papers.nips.cc/paper_files/paper/2021/hash/2dffbc474aa176b6dc957938c15d0c8b-Abstract.html
Po-An Wang, Ruo-Chun Tzeng, Alexandre Proutiere
https://papers.nips.cc/paper_files/paper/2021/hash/2dffbc474aa176b6dc957938c15d0c8b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12067-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2dffbc474aa176b6dc957938c15d0c8b-Paper.pdf
https://openreview.net/forum?id=cD2Ls4qXTc
https://papers.nips.cc/paper_files/paper/2021/file/2dffbc474aa176b6dc957938c15d0c8b-Supplemental.pdf
We study the problem of active pure exploration with fixed confidence in generic stochastic bandit environments. The goal of the learner is to answer a query about the environment with a given level of certainty while minimizing her sampling budget. For this problem, instance-specific lower bounds on the expected sample complexity reveal the optimal proportions of arm draws an Oracle algorithm would apply. These proportions solve an optimization problem whose tractability strongly depends on the structural properties of the environment, but may be instrumental in the design of efficient learning algorithms. We devise Frank-Wolfe-based Sampling (FWS), a simple algorithm whose sample complexity matches the lower bounds for a wide class of pure exploration problems. The algorithm is computationally efficient as, to learn and track the optimal proportion of arm draws, it relies on a single iteration of the Frank-Wolfe algorithm applied to the lower-bound optimization problem. We apply FWS to various pure exploration tasks, including best arm identification in unstructured, thresholded, linear, and Lipschitz bandits. Despite its simplicity, FWS is competitive compared to state-of-the-art algorithms.
null
iFlow: Numerically Invertible Flows for Efficient Lossless Compression via a Uniform Coder
https://papers.nips.cc/paper_files/paper/2021/hash/2e3d2c4f33a7a1f58bc6c81cacd21e9c-Abstract.html
Shifeng Zhang, Ning Kang, Tom Ryder, Zhenguo Li
https://papers.nips.cc/paper_files/paper/2021/hash/2e3d2c4f33a7a1f58bc6c81cacd21e9c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12068-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e3d2c4f33a7a1f58bc6c81cacd21e9c-Paper.pdf
https://openreview.net/forum?id=UQsbDkuGM0N
https://papers.nips.cc/paper_files/paper/2021/file/2e3d2c4f33a7a1f58bc6c81cacd21e9c-Supplemental.pdf
It was estimated that the world produced 59 ZB ($5.9 \times 10^{13}$ GB) of data in 2020, resulting in enormous costs for both data storage and transmission. Fortunately, recent advances in deep generative models have spearheaded a new class of so-called "neural compression" algorithms, which significantly outperform traditional codecs in terms of compression ratio. Unfortunately, the application of neural compression garners little commercial interest due to its limited bandwidth; therefore, developing highly efficient frameworks is of critical practical importance. In this paper, we discuss lossless compression using normalizing flows, which have demonstrated a great capacity for achieving high compression ratios. As such, we introduce iFlow, a new method for achieving efficient lossless compression. We first propose Modular Scale Transform (MST) and a novel family of numerically invertible flow transformations based on MST. Then we introduce the Uniform Base Conversion System (UBCS), a fast uniform-distribution codec incorporated into iFlow, enabling efficient compression. iFlow achieves state-of-the-art compression ratios and is $5 \times$ quicker than other high-performance schemes. Furthermore, the techniques presented in this paper can be used to accelerate coding time for a broad class of flow-based algorithms.
null
History Aware Multimodal Transformer for Vision-and-Language Navigation
https://papers.nips.cc/paper_files/paper/2021/hash/2e5c2cb8d13e8fba78d95211440ba326-Abstract.html
Shizhe Chen, Pierre-Louis Guhur, Cordelia Schmid, Ivan Laptev
https://papers.nips.cc/paper_files/paper/2021/hash/2e5c2cb8d13e8fba78d95211440ba326-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12069-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e5c2cb8d13e8fba78d95211440ba326-Paper.pdf
https://openreview.net/forum?id=SQxuiYf2TT
https://papers.nips.cc/paper_files/paper/2021/file/2e5c2cb8d13e8fba78d95211440ba326-Supplemental.pdf
Vision-and-language navigation (VLN) aims to build autonomous visual agents that follow instructions and navigate in real scenes. To remember previously visited locations and actions taken, most approaches to VLN implement memory using recurrent states. Instead, we introduce a History Aware Multimodal Transformer (HAMT) to incorporate a long-horizon history into multimodal decision making. HAMT efficiently encodes all the past panoramic observations via a hierarchical vision transformer (ViT), which first encodes individual images with ViT, then models the spatial relation between images in a panoramic observation, and finally takes into account the temporal relation between panoramas in the history. It then jointly combines text, history, and the current observation to predict the next action. We first train HAMT end-to-end using several proxy tasks, including single-step action prediction and spatial relation prediction, and then use reinforcement learning to further improve the navigation policy. HAMT achieves a new state of the art on a broad range of VLN tasks, including VLN with fine-grained instructions (R2R, RxR), high-level instructions (R2R-Last, REVERIE), dialogs (CVDN), as well as long-horizon VLN (R4R, R2R-Back). We demonstrate HAMT to be particularly effective for navigation tasks with longer trajectories.
null
Meta Two-Sample Testing: Learning Kernels for Testing with Limited Data
https://papers.nips.cc/paper_files/paper/2021/hash/2e6d9c6052e99fcdfa61d9b9da273ca2-Abstract.html
Feng Liu, Wenkai Xu, Jie Lu, Danica J. Sutherland
https://papers.nips.cc/paper_files/paper/2021/hash/2e6d9c6052e99fcdfa61d9b9da273ca2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12070-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Paper.pdf
https://openreview.net/forum?id=EUlAerrk47Y
https://papers.nips.cc/paper_files/paper/2021/file/2e6d9c6052e99fcdfa61d9b9da273ca2-Supplemental.pdf
Modern kernel-based two-sample tests have shown great success in distinguishing complex, high-dimensional distributions by learning appropriate kernels (or, as a special case, classifiers). Previous work, however, has assumed that many samples are observed from both of the distributions being distinguished. In realistic scenarios with very limited numbers of data samples, it can be challenging to identify a kernel powerful enough to distinguish complex distributions. We address this issue by introducing the problem of meta two-sample testing (M2ST), which aims to exploit (abundant) auxiliary data on related tasks to find an algorithm that can quickly identify a powerful test on new target tasks. We propose two specific algorithms for this task: a generic scheme which improves over baselines, and a more tailored approach which performs even better. We provide both theoretical justification and empirical evidence that our proposed meta-testing schemes outperform learning kernel-based tests directly from scarce observations, and identify when such schemes will be successful.
null
Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets
https://papers.nips.cc/paper_files/paper/2021/hash/2e855f9489df0712b4bd8ea9e2848c5a-Abstract.html
Irene Solaiman, Christy Dennison
https://papers.nips.cc/paper_files/paper/2021/hash/2e855f9489df0712b4bd8ea9e2848c5a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12071-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e855f9489df0712b4bd8ea9e2848c5a-Paper.pdf
https://openreview.net/forum?id=k-ghaB9VZBw
https://papers.nips.cc/paper_files/paper/2021/file/2e855f9489df0712b4bd8ea9e2848c5a-Supplemental.pdf
Language models can generate harmful and biased outputs and exhibit undesirable behavior according to a given cultural context. We propose a Process for Adapting Language Models to Society (PALMS) with Values-Targeted Datasets, an iterative process to significantly change model behavior by crafting and fine-tuning on a dataset that reflects a predetermined set of target values. We evaluate our process using three metrics: quantitative metrics with human evaluations that score output adherence to a target value; toxicity scoring on outputs; and qualitative metrics analyzing the most common word associated with a given social category. Through each iteration, we add additional training dataset examples based on observed shortcomings from evaluations. PALMS performs significantly better on all metrics compared to baseline and control models for a broad range of GPT-3 language model sizes without compromising capability integrity. We find that the effectiveness of PALMS increases with model size. We show that significantly adjusting language model behavior is feasible with a small, hand-curated dataset.
null
The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains
https://papers.nips.cc/paper_files/paper/2021/hash/2e907f44e0a9616314cf3d964d4e3c93-Abstract.html
Daron Anderson, Douglas Leith
https://papers.nips.cc/paper_files/paper/2021/hash/2e907f44e0a9616314cf3d964d4e3c93-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12072-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e907f44e0a9616314cf3d964d4e3c93-Paper.pdf
https://openreview.net/forum?id=3YYmDQpT0p
https://papers.nips.cc/paper_files/paper/2021/file/2e907f44e0a9616314cf3d964d4e3c93-Supplemental.zip
We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt N)$ regret against adversarial opponents; here we show it is universal in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. This improves upon the more complex meta-algorithm of Huang et al. \cite{FTLBall} that only gets $O(\sqrt {N \log N})$ and $O(\log N)$ bounds. In addition we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent for strongly convex domains.
null
Computer-Aided Design as Language
https://papers.nips.cc/paper_files/paper/2021/hash/2e92962c0b6996add9517e4242ea9bdc-Abstract.html
Yaroslav Ganin, Sergey Bartunov, Yujia Li, Ethan Keller, Stefano Saliceti
https://papers.nips.cc/paper_files/paper/2021/hash/2e92962c0b6996add9517e4242ea9bdc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12073-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e92962c0b6996add9517e4242ea9bdc-Paper.pdf
https://openreview.net/forum?id=z-X_PpwaroO
https://papers.nips.cc/paper_files/paper/2021/file/2e92962c0b6996add9517e4242ea9bdc-Supplemental.pdf
Computer-Aided Design (CAD) applications are used in manufacturing to model everything from coffee mugs to sports cars. These programs are complex and require years of training and experience to master. A component of all CAD models that is particularly difficult to make is the highly structured 2D sketch that lies at the heart of every 3D construction. In this work, we propose a machine learning model capable of automatically generating such sketches. Through this, we pave the way for developing intelligent tools that would help engineers create better designs with less effort. The core of our method is a combination of a general-purpose language modeling technique alongside an off-the-shelf data serialization protocol. Additionally, we explore several extensions allowing us to gain finer control over the generation process. We show that our approach has enough flexibility to accommodate the complexity of the domain and performs well for both unconditional synthesis and image-to-sketch translation.
null
COHESIV: Contrastive Object and Hand Embedding Segmentation In Video
https://papers.nips.cc/paper_files/paper/2021/hash/2e976ab88a42d723d9f2ee6027b707f5-Abstract.html
Dandan Shan, Richard Higgins, David Fouhey
https://papers.nips.cc/paper_files/paper/2021/hash/2e976ab88a42d723d9f2ee6027b707f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12074-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e976ab88a42d723d9f2ee6027b707f5-Paper.pdf
https://openreview.net/forum?id=D-ti-5lgbG
https://papers.nips.cc/paper_files/paper/2021/file/2e976ab88a42d723d9f2ee6027b707f5-Supplemental.pdf
In this paper we learn to segment hands and hand-held objects from motion. Our system takes a single RGB image and hand location as input to segment the hand and hand-held object. For learning, we generate responsibility maps that show how well a hand's motion explains other pixels' motion in video. We use these responsibility maps as pseudo-labels to train a weakly-supervised neural network using an attention-based similarity loss and contrastive loss. Our system outperforms alternate methods, achieving good performance on the 100DOH, EPIC-KITCHENS, and HO3D datasets.
null
ByPE-VAE: Bayesian Pseudocoresets Exemplar VAE
https://papers.nips.cc/paper_files/paper/2021/hash/2e9f978b222a956ba6bdf427efbd9ab3-Abstract.html
Qingzhong Ai, Lirong He, Shiyu Liu, Zenglin Xu
https://papers.nips.cc/paper_files/paper/2021/hash/2e9f978b222a956ba6bdf427efbd9ab3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12075-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2e9f978b222a956ba6bdf427efbd9ab3-Paper.pdf
https://openreview.net/forum?id=SBiKnJW9fy
https://papers.nips.cc/paper_files/paper/2021/file/2e9f978b222a956ba6bdf427efbd9ab3-Supplemental.pdf
Recent studies show that advanced priors play a major role in deep generative models. Exemplar VAE, as a variant of VAE with an exemplar-based prior, has achieved impressive results. However, due to the nature of the model design, an exemplar-based model usually requires vast amounts of data to participate in training, which leads to huge computational complexity. To address this issue, we propose Bayesian Pseudocoresets Exemplar VAE (ByPE-VAE), a new variant of VAE with a prior based on a Bayesian pseudocoreset. The proposed prior is conditioned on a small-scale pseudocoreset rather than the whole dataset, reducing the computational cost and avoiding overfitting. Simultaneously, we obtain the optimal pseudocoreset via a stochastic optimization algorithm during VAE training, aiming to minimize the Kullback-Leibler divergence between the prior based on the pseudocoreset and that based on the whole dataset. Experimental results show that ByPE-VAE can achieve competitive improvements over the state-of-the-art VAEs in the tasks of density estimation, representation learning, and generative data augmentation. Particularly, on a basic VAE architecture, ByPE-VAE is up to 3 times faster than Exemplar VAE while nearly matching its performance. Code is available at \url{https://github.com/Aiqz/ByPE-VAE}.
null
Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition
https://papers.nips.cc/paper_files/paper/2021/hash/2ea1202aed1e0ce30d41be4919b0cc99-Abstract.html
Jiaming Liu, Salman Asif, Brendt Wohlberg, Ulugbek Kamilov
https://papers.nips.cc/paper_files/paper/2021/hash/2ea1202aed1e0ce30d41be4919b0cc99-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12076-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2ea1202aed1e0ce30d41be4919b0cc99-Paper.pdf
https://openreview.net/forum?id=a62JHQKHVv
https://papers.nips.cc/paper_files/paper/2021/file/2ea1202aed1e0ce30d41be4919b0cc99-Supplemental.pdf
The plug-and-play priors (PnP) and regularization by denoising (RED) methods have become widely used for solving inverse problems by leveraging pre-trained deep denoisers as image priors. While the empirical imaging performance and the theoretical convergence properties of these algorithms have been widely investigated, their recovery properties have not previously been theoretically analyzed. We address this gap by showing how to establish theoretical recovery guarantees for PnP/RED by assuming that the solution of these methods lies near the fixed-points of a deep neural network. We also present numerical results comparing the recovery performance of PnP/RED in compressive sensing against that of recent compressive sensing algorithms based on generative models. Our numerical results suggest that PnP with a pre-trained artifact removal network provides significantly better results compared to the existing state-of-the-art methods.
null
Group Equivariant Subsampling
https://papers.nips.cc/paper_files/paper/2021/hash/2ea6241cf767c279cf1e80a790df1885-Abstract.html
Jin Xu, Hyunjik Kim, Thomas Rainforth, Yee Teh
https://papers.nips.cc/paper_files/paper/2021/hash/2ea6241cf767c279cf1e80a790df1885-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12077-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2ea6241cf767c279cf1e80a790df1885-Paper.pdf
https://openreview.net/forum?id=CtaDl9L0bIQ
https://papers.nips.cc/paper_files/paper/2021/file/2ea6241cf767c279cf1e80a790df1885-Supplemental.pdf
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions, to reduce the spatial dimensions of feature maps and to allow the receptive fields to grow exponentially with depth. However, it is known that such subsampling operations are not translation equivariant, unlike convolutions that are translation equivariant. Here, we first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs. We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling. We use these layers to construct group equivariant autoencoders (GAEs) that allow us to learn low-dimensional equivariant representations. We empirically verify on images that the representations are indeed equivariant to input translations and rotations, and thus generalise well to unseen positions and orientations. We further use GAEs in models that learn object-centric representations on multi-object datasets, and show improved data efficiency and decomposition compared to non-equivariant baselines.
null
Data Sharing and Compression for Cooperative Networked Control
https://papers.nips.cc/paper_files/paper/2021/hash/2eb5657d37f474e4c4cf01e4882b8962-Abstract.html
Jiangnan Cheng, Marco Pavone, Sachin Katti, Sandeep Chinchali, Ao Tang
https://papers.nips.cc/paper_files/paper/2021/hash/2eb5657d37f474e4c4cf01e4882b8962-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12078-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2eb5657d37f474e4c4cf01e4882b8962-Paper.pdf
https://openreview.net/forum?id=RpEANv3iv8
https://papers.nips.cc/paper_files/paper/2021/file/2eb5657d37f474e4c4cf01e4882b8962-Supplemental.pdf
Sharing forecasts of network timeseries data, such as cellular or electricity load patterns, can improve independent control applications ranging from traffic scheduling to power generation. Typically, forecasts are designed without knowledge of a downstream controller's task objective, and thus simply optimize for mean prediction error. However, such task-agnostic representations are often too large to stream over a communication network and do not emphasize salient temporal features for cooperative control. This paper presents a solution to learn succinct, highly-compressed forecasts that are co-designed with a modular controller's task objective. Our simulations with real cellular, Internet-of-Things (IoT), and electricity load data show we can improve a model predictive controller's performance by at least 25% while transmitting 80% less data than the competing method. Further, we present theoretical compression results for a networked variant of the classical linear quadratic regulator (LQR) control problem.
null
Hyperbolic Procrustes Analysis Using Riemannian Geometry
https://papers.nips.cc/paper_files/paper/2021/hash/2ed80f6311c1825feb854d78fa969d34-Abstract.html
Ya-Wei Eileen Lin, Yuval Kluger, Ronen Talmon
https://papers.nips.cc/paper_files/paper/2021/hash/2ed80f6311c1825feb854d78fa969d34-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12079-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2ed80f6311c1825feb854d78fa969d34-Paper.pdf
https://openreview.net/forum?id=Ai73e_POVd
https://papers.nips.cc/paper_files/paper/2021/file/2ed80f6311c1825feb854d78fa969d34-Supplemental.zip
Label-free alignment between datasets collected at different times, locations, or by different instruments is a fundamental scientific task. Hyperbolic spaces have recently provided a fruitful foundation for the development of informative representations of hierarchical data. Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA). HPA consists of new implementations of the three prototypical Procrustes analysis components: translation, scaling, and rotation, based on the Riemannian geometry of the Lorentz model of hyperbolic space. We analyze the proposed components, highlighting their useful properties for alignment. The efficacy of HPA, its theoretical properties, stability and computational efficiency are demonstrated in simulations. In addition, we showcase its performance on three batch correction tasks involving gene expression and mass cytometry data. Specifically, we demonstrate high-quality unsupervised batch effect removal from data acquired at different sites and with different technologies that outperforms recent methods for label-free alignment in hyperbolic spaces.
null
No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
https://papers.nips.cc/paper_files/paper/2021/hash/2f2b265625d76a6704b08093c652fd79-Abstract.html
Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, Jiashi Feng
https://papers.nips.cc/paper_files/paper/2021/hash/2f2b265625d76a6704b08093c652fd79-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12080-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2f2b265625d76a6704b08093c652fd79-Paper.pdf
https://openreview.net/forum?id=AFiH_CNnVhS
https://papers.nips.cc/paper_files/paper/2021/file/2f2b265625d76a6704b08093c652fd79-Supplemental.pdf
A central challenge in training classification models in real-world federated systems is learning with non-IID data. To cope with this, most of the existing works involve enforcing regularization in local optimization or improving the model aggregation scheme at the server. Other works also share public datasets or synthesized samples to supplement the training of under-represented classes or introduce a certain level of personalization. Though effective, they lack a deep understanding of how the data heterogeneity affects each layer of a deep classification model. In this paper, we bridge this gap by performing an experimental analysis of the representations learned by different layers. Our observations are surprising: (1) there exists a greater bias in the classifier than in other layers, and (2) the classification performance can be significantly improved by post-calibrating the classifier after federated training. Motivated by the above findings, we propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model. Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10. We hope that our simple yet effective method can shed some light on future research on federated learning with non-IID data.
null
Preconditioned Gradient Descent for Over-Parameterized Nonconvex Matrix Factorization
https://papers.nips.cc/paper_files/paper/2021/hash/2f2cd5c753d3cee48e47dbb5bbaed331-Abstract.html
Jialun Zhang, Salar Fattahi, Richard Y Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/2f2cd5c753d3cee48e47dbb5bbaed331-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12081-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2f2cd5c753d3cee48e47dbb5bbaed331-Paper.pdf
https://openreview.net/forum?id=5-Of1DTlq
https://papers.nips.cc/paper_files/paper/2021/file/2f2cd5c753d3cee48e47dbb5bbaed331-Supplemental.pdf
In practical instances of nonconvex matrix factorization, the rank of the true solution $r^{\star}$ is often unknown, so the rank $r$ of the model can be over-specified as $r>r^{\star}$. This over-parameterized regime of matrix factorization significantly slows down the convergence of local search algorithms, from a linear rate with $r=r^{\star}$ to a sublinear rate when $r>r^{\star}$. We propose an inexpensive preconditioner for the matrix sensing variant of nonconvex matrix factorization that restores the convergence rate of gradient descent back to linear, even in the over-parameterized case, while also making it agnostic to possible ill-conditioning in the ground truth. Classical gradient descent in a neighborhood of the solution slows down due to the need for the model matrix factor to become singular. Our key result is that this singularity can be corrected by $\ell_{2}$ regularization with a specific range of values for the damping parameter. In fact, a good damping parameter can be inexpensively estimated from the current iterate. The resulting algorithm, which we call preconditioned gradient descent or PrecGD, is stable under noise, and converges linearly to an information-theoretically optimal error bound. Our numerical experiments find that PrecGD works equally well in restoring the linear convergence of other variants of nonconvex matrix factorization in the over-parameterized regime.
null
Improving Contrastive Learning on Imbalanced Data via Open-World Sampling
https://papers.nips.cc/paper_files/paper/2021/hash/2f37d10131f2a483a8dd005b3d14b0d9-Abstract.html
Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang
https://papers.nips.cc/paper_files/paper/2021/hash/2f37d10131f2a483a8dd005b3d14b0d9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12082-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2f37d10131f2a483a8dd005b3d14b0d9-Paper.pdf
https://openreview.net/forum?id=EIfV-XAggKo
https://papers.nips.cc/paper_files/paper/2021/file/2f37d10131f2a483a8dd005b3d14b0d9-Supplemental.pdf
Contrastive learning approaches have achieved great success in learning visual representations with few labels of the target classes. That implies a tantalizing possibility of scaling them up beyond a curated “seed” benchmark, to incorporating more unlabeled images from internet-scale external sources to enhance performance. However, in practice, a larger amount of unlabeled data will require more computing resources due to the bigger model size and longer training needed. Moreover, open-world unlabeled data usually follows an implicit long-tail class or attribute distribution, many of which also do not belong to the target classes. Blindly leveraging all unlabeled data hence can lead to data imbalance as well as distraction issues. This motivates us to seek a principled approach to strategically select unlabeled data from an external source, in order to learn generalizable, balanced and diverse representations for relevant classes. In this work, we present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK), which follows three simple principles: (1) tailness, which encourages sampling of examples from tail classes, by sorting the empirical contrastive loss expectation (ECLE) of samples over random data augmentations; (2) proximity, which rejects the out-of-distribution outliers that may distract training; and (3) diversity, which ensures diversity in the set of sampled examples. Empirically, using ImageNet-100-LT (without labels) as the seed dataset and two “noisy” external data sources, we demonstrate that MAK can consistently improve both the overall representation quality and the class balancedness of the learned features, as evaluated via linear classifier evaluation in full-shot and few-shot settings. The code is available at: https://github.com/VITA-Group/MAK.
null
Searching for Efficient Transformers for Language Modeling
https://papers.nips.cc/paper_files/paper/2021/hash/2f3c6a4cd8af177f6456e7e51a916ff3-Abstract.html
David So, Wojciech Mańke, Hanxiao Liu, Zihang Dai, Noam Shazeer, Quoc V Le
https://papers.nips.cc/paper_files/paper/2021/hash/2f3c6a4cd8af177f6456e7e51a916ff3-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12083-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2f3c6a4cd8af177f6456e7e51a916ff3-Paper.pdf
https://openreview.net/forum?id=bzpkxS_JVsI
https://papers.nips.cc/paper_files/paper/2021/file/2f3c6a4cd8af177f6456e7e51a916ff3-Supplemental.pdf
Large Transformer models have been central to recent advances in natural language processing. The training and inference costs of these models, however, have grown rapidly and become prohibitively expensive. Here we aim to reduce the costs of Transformers by searching for a more efficient variant. Compared to previous approaches, our search is performed at a lower level, over the primitives that define a Transformer TensorFlow program. We identify an architecture, named Primer, that has a smaller training cost than the original Transformer and other variants for auto-regressive language modeling. Primer’s improvements can be mostly attributed to two simple modifications: squaring ReLU activations and adding a depthwise convolution layer after each Q, K, and V projection in self-attention. Experiments show Primer’s gains over Transformer increase as compute scale grows and follow a power law with respect to quality at optimal model sizes. We also verify empirically that Primer can be dropped into different codebases to significantly speed up training without additional tuning. For example, at a 500M parameter size, Primer improves the original T5 architecture on C4 auto-regressive language modeling, reducing the training cost by 4X. Furthermore, the reduced training cost means Primer needs much less compute to reach a target one-shot performance. For instance, in a 1.9B parameter configuration similar to GPT-3 XL, Primer uses 1/3 of the training compute to achieve the same one-shot performance as Transformer. We open source our models and several comparisons in T5 to help with reproducibility.
null
Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets
https://papers.nips.cc/paper_files/paper/2021/hash/2f4ccb0f7a84f335affb418aee08a6df-Abstract.html
Max Ryabinin, Andrey Malinin, Mark Gales
https://papers.nips.cc/paper_files/paper/2021/hash/2f4ccb0f7a84f335affb418aee08a6df-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12084-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2f4ccb0f7a84f335affb418aee08a6df-Paper.pdf
https://openreview.net/forum?id=7S3RMGVS5vO
https://papers.nips.cc/paper_files/paper/2021/file/2f4ccb0f7a84f335affb418aee08a6df-Supplemental.pdf
Ensembles of machine learning models yield improved system performance as well as robust and interpretable uncertainty estimates; however, their inference costs can be prohibitively high. Ensemble Distribution Distillation (EnD$^2$) is an approach that allows a single model to efficiently capture both the predictive performance and uncertainty estimates of an ensemble. For classification, this is achieved by training a Dirichlet distribution over the ensemble members' output distributions via the maximum likelihood criterion. Although theoretically principled, this work shows that the criterion exhibits poor convergence when applied to large-scale tasks where the number of classes is very high. Specifically, we show that for the Dirichlet log-likelihood criterion classes with low probability induce larger gradients than high-probability classes. Hence during training the model focuses on the distribution of the ensemble tail-class probabilities rather than the probability of the correct and closely related classes. We propose a new training objective which minimizes the reverse KL-divergence to a \emph{Proxy-Dirichlet} target derived from the ensemble. This loss resolves the gradient issues of EnD$^2$, as we demonstrate both theoretically and empirically on the ImageNet, LibriSpeech, and WMT17 En-De datasets containing 1000, 5000, and 40,000 classes, respectively.
null
Multi-Person 3D Motion Prediction with Multi-Range Transformers
https://papers.nips.cc/paper_files/paper/2021/hash/2fd5d41ec6cfab47e32164d5624269b1-Abstract.html
Jiashun Wang, Huazhe Xu, Medhini Narasimhan, Xiaolong Wang
https://papers.nips.cc/paper_files/paper/2021/hash/2fd5d41ec6cfab47e32164d5624269b1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12085-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/2fd5d41ec6cfab47e32164d5624269b1-Paper.pdf
https://openreview.net/forum?id=rrf6XgIS_Ek
null
We propose a novel framework for multi-person 3D motion trajectory prediction. Our key observation is that a human's actions and behaviors may highly depend on the other persons around. Thus, instead of predicting each human pose trajectory in isolation, we introduce a Multi-Range Transformers model which consists of a local-range encoder for individual motion and a global-range encoder for social interactions. The Transformer decoder then performs prediction for each person by taking a corresponding pose as a query which attends to both local and global-range encoder features. Our model not only outperforms state-of-the-art methods on long-term 3D motion prediction, but also generates diverse social interactions. More interestingly, our model can even predict 15-person motion simultaneously by automatically dividing the persons into different interaction groups. The project page with code is available at https://jiashunwang.github.io/MRT/.
null
STEM: A Stochastic Two-Sided Momentum Algorithm Achieving Near-Optimal Sample and Communication Complexities for Federated Learning
https://papers.nips.cc/paper_files/paper/2021/hash/3016a447172f3045b65f5fc83e04b554-Abstract.html
Prashant Khanduri, PRANAY SHARMA, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod Varshney
https://papers.nips.cc/paper_files/paper/2021/hash/3016a447172f3045b65f5fc83e04b554-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12086-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3016a447172f3045b65f5fc83e04b554-Paper.pdf
https://openreview.net/forum?id=J28lNO4p3ki
https://papers.nips.cc/paper_files/paper/2021/file/3016a447172f3045b65f5fc83e04b554-Supplemental.pdf
Federated Learning (FL) refers to the paradigm where multiple worker nodes (WNs) build a joint model by using local data. Despite extensive research, for a generic non-convex FL problem, it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency, so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution. This work addresses the above question and considers a class of stochastic algorithms where the WNs perform a few local updates before communication. We show that when both the WNs' and the server's directions are chosen based on a certain stochastic momentum estimator, the algorithm requires $\tilde{\mathcal{O}}(\epsilon^{-3/2})$ samples and $\tilde{\mathcal{O}}(\epsilon^{-1})$ communication rounds to compute an $\epsilon$-stationary solution. To the best of our knowledge, this is the first FL algorithm that achieves such {\it near-optimal} sample and communication complexities simultaneously. Further, we show that there is a trade-off curve between local update frequencies and local minibatch sizes, on which the above sample and communication complexities can be maintained. Finally, we show that for the classical FedAvg (a.k.a. Local SGD, which is a momentum-less special case of STEM), a similar trade-off curve exists, albeit with worse sample and communication complexities. Our insights on this trade-off provide guidelines for choosing the four important design elements of FL algorithms (the WNs' and the server's update directions, the minibatch sizes, and the local update frequency) to achieve the best performance.
null
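A rough numpy illustration of the STORM-style momentum estimator that, per the STEM abstract above, drives both the worker and server update directions. This is a toy single-worker quadratic problem with made-up step sizes, not the full federated algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def stoch_grad(x, batch):
    """Minibatch gradient of 0.5 * mean((A x - b)^2)."""
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ x - bb) / len(batch)

x_prev = x = np.zeros(10)
d = None
eta, a = 0.05, 0.3                       # step size and momentum parameter (illustrative)
for t in range(200):
    batch = rng.choice(50, size=8, replace=False)
    g = stoch_grad(x, batch)
    if d is None:
        d = g                            # first iterate: plain stochastic gradient
    else:
        # momentum-based variance-reduced correction, re-evaluated on the same minibatch
        d = g + (1 - a) * (d - stoch_grad(x_prev, batch))
    x_prev, x = x, x - eta * d

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```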
Bubblewrap: Online tiling and real-time flow prediction on neural manifolds
https://papers.nips.cc/paper_files/paper/2021/hash/307eb8ee16198da891c521eca21464c1-Abstract.html
Anne Draelos, Pranjal Gupta, Na Young Jun, Chaichontat Sriworarat, John Pearson
https://papers.nips.cc/paper_files/paper/2021/hash/307eb8ee16198da891c521eca21464c1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12087-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/307eb8ee16198da891c521eca21464c1-Paper.pdf
https://openreview.net/forum?id=SjxC07jABZ4
https://papers.nips.cc/paper_files/paper/2021/file/307eb8ee16198da891c521eca21464c1-Supplemental.pdf
While most classic studies of function in experimental neuroscience have focused on the coding properties of individual neurons, recent developments in recording technologies have resulted in an increasing emphasis on the dynamics of neural populations. This has given rise to a wide variety of models for analyzing population activity in relation to experimental variables, but direct testing of many neural population hypotheses requires intervening in the system based on current neural state, necessitating models capable of inferring neural state online. Existing approaches, primarily based on dynamical systems, require strong parametric assumptions that are easily violated in the noise-dominated regime and do not scale well to the thousands of data channels in modern experiments. To address this problem, we propose a method that combines fast, stable dimensionality reduction with a soft tiling of the resulting neural manifold, allowing dynamics to be approximated as a probability flow between tiles. This method can be fit efficiently using online expectation maximization, scales to tens of thousands of tiles, and outperforms existing methods when dynamics are noise-dominated or feature multi-modal transition probabilities. The resulting model can be trained at kilohertz data rates, produces accurate approximations of neural dynamics within minutes, and generates predictions on submillisecond time scales. It retains predictive performance over many time steps into the future and is fast enough to serve as a component of closed-loop causal experiments.
null
The Semi-Random Satisfaction of Voting Axioms
https://papers.nips.cc/paper_files/paper/2021/hash/3083202a936b7d0ef8b680d7ae73fa1a-Abstract.html
Lirong Xia
https://papers.nips.cc/paper_files/paper/2021/hash/3083202a936b7d0ef8b680d7ae73fa1a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12088-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3083202a936b7d0ef8b680d7ae73fa1a-Paper.pdf
https://openreview.net/forum?id=uw4mcO8nz3n
https://papers.nips.cc/paper_files/paper/2021/file/3083202a936b7d0ef8b680d7ae73fa1a-Supplemental.pdf
We initiate the work towards a comprehensive picture of the worst average-case satisfaction of voting axioms in semi-random models, to provide a finer and more realistic foundation for comparing voting rules. We adopt the semi-random model and formulation in [Xia 2020], where an adversary chooses arbitrarily correlated ``ground truth'' preferences for the agents, on top of which random noises are added. We focus on characterizing the semi-random satisfaction of two well-studied voting axioms: Condorcet criterion and participation. We prove that for any fixed number of alternatives, when the number of voters $n$ is sufficiently large, the semi-random satisfaction of the Condorcet criterion under a wide range of voting rules is $1$, $1-\exp(-\Theta(n))$, $\Theta(n^{-0.5})$, $ \exp(-\Theta(n))$, or being $\Theta(1)$ and $1-\Theta(1)$ at the same time; and the semi-random satisfaction of participation is $1-\Theta(n^{-0.5})$. Our results address open questions by Berg and Lepelley in 1994, and also confirm the following high-level message: the Condorcet criterion is a bigger concern than participation under realistic models.
null
Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/30a237d18c50f563cba4531f1db44acf-Abstract.html
Tianchang Shen, Jun Gao, Kangxue Yin, Ming-Yu Liu, Sanja Fidler
https://papers.nips.cc/paper_files/paper/2021/hash/30a237d18c50f563cba4531f1db44acf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12089-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/30a237d18c50f563cba4531f1db44acf-Paper.pdf
https://openreview.net/forum?id=xN3XX6pKSD5
https://papers.nips.cc/paper_files/paper/2021/file/30a237d18c50f563cba4531f1db44acf-Supplemental.zip
We introduce DMTet, a deep 3D conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels. It marries the merits of implicit and explicit 3D representations by leveraging a novel hybrid 3D representation. Compared to the current implicit approaches, which are trained to regress the signed distance values, DMTet directly optimizes for the reconstructed surface, which enables us to synthesize finer geometric details with fewer artifacts. Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology. The core of DMTet includes a deformable tetrahedral grid that encodes a discretized signed distance function and a differentiable marching tetrahedra layer that converts the implicit signed distance representation to the explicit surface mesh representation. This combination allows joint optimization of the surface geometry and topology as well as generation of the hierarchy of subdivisions using reconstruction and adversarial losses defined explicitly on the surface mesh. Our approach significantly outperforms existing work on conditional shape synthesis from coarse voxel inputs, trained on a dataset of complex 3D animal shapes. Project page: https://nv-tlabs.github.io/DMTet/.
null
Learning to Combine Per-Example Solutions for Neural Program Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/30d411fdc0e6daf092a74354094359bb-Abstract.html
Disha Shrivastava, Hugo Larochelle, Daniel Tarlow
https://papers.nips.cc/paper_files/paper/2021/hash/30d411fdc0e6daf092a74354094359bb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12090-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/30d411fdc0e6daf092a74354094359bb-Paper.pdf
https://openreview.net/forum?id=4PK-St2iVZn
https://papers.nips.cc/paper_files/paper/2021/file/30d411fdc0e6daf092a74354094359bb-Supplemental.pdf
The goal of program synthesis from examples is to find a computer program that is consistent with a given set of input-output examples. Most learning-based approaches try to find a program that satisfies all examples at once. Our work, by contrast, considers an approach that breaks the problem into two stages: (a) find programs that satisfy only one example, and (b) leverage these per-example solutions to yield a program that satisfies all examples. We introduce the Cross Aggregator neural network module based on a multi-head attention mechanism that learns to combine the cues present in these per-example solutions to synthesize a global solution. Evaluation across programs of different lengths and under two different experimental settings reveals that when given the same time budget, our technique significantly improves the success rate over PCCoder [Zohar et al. 2018] and other ablation baselines.
null
On Success and Simplicity: A Second Look at Transferable Targeted Attacks
https://papers.nips.cc/paper_files/paper/2021/hash/30d454f09b771b9f65e3eaf6e00fa7bd-Abstract.html
Zhengyu Zhao, Zhuoran Liu, Martha Larson
https://papers.nips.cc/paper_files/paper/2021/hash/30d454f09b771b9f65e3eaf6e00fa7bd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12091-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/30d454f09b771b9f65e3eaf6e00fa7bd-Paper.pdf
https://openreview.net/forum?id=LVWcGZr-8h
null
Achieving transferability of targeted attacks is reputed to be remarkably difficult. The current state of the art has resorted to resource-intensive solutions that necessitate training model(s) for each target class with additional data. In our investigation, we find, however, that simple transferable attacks which require neither model training nor additional data can achieve surprisingly strong targeted transferability. This insight has been overlooked until now, mainly because the widespread practice of attacking with only a few iterations has largely kept attacks from converging to their optimal targeted transferability. In particular, we, for the first time, identify that a very simple logit loss can largely surpass the commonly adopted cross-entropy loss, and yield even better results than the resource-intensive state of the art. Our analysis spans a variety of transfer scenarios, especially including three new, realistic scenarios: an ensemble transfer scenario with little model similarity, a worst-case scenario with low-ranked target classes, and also a real-world attack on the Google Cloud Vision API. Results in these new transfer scenarios demonstrate that the commonly adopted, easy scenarios cannot fully reveal the actual strength of different attacks and may cause misleading comparative results. We also show the usefulness of the simple logit loss for generating targeted universal adversarial perturbations in a data-free manner. Overall, the aim of our analysis is to inspire a more meaningful evaluation on targeted transferability. Code is available at https://github.com/ZhengyuZhao/Targeted-Tansfer.
null
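A compact PyTorch sketch of the simple logit-loss idea highlighted in the abstract above: run many iterations that maximize the target-class logit of a surrogate model under an L-infinity constraint. The surrogate model, epsilon, step size, and step count below are placeholders, not the paper's exact settings.

```python
import torch

def targeted_logit_attack(model, x, target, eps=16 / 255, alpha=2 / 255, steps=300):
    """Maximize the target-class logit of `model` within an L_inf ball of radius eps."""
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        loss = logits.gather(1, target.view(-1, 1)).sum()    # simple logit loss, no softmax
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()                # ascend on the target logit
            delta.clamp_(-eps, eps)                           # stay inside the L_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)          # keep pixel values valid
        delta.grad.zero_()
    return (x + delta).detach()

# usage sketch with a toy surrogate model (purely illustrative)
surrogate = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
target = torch.randint(0, 10, (4,))
x_adv = targeted_logit_attack(surrogate, x, target)
```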
Provably efficient, succinct, and precise explanations
https://papers.nips.cc/paper_files/paper/2021/hash/30d4e6422cd65c7913bc9ce62e078b79-Abstract.html
Guy Blanc, Jane Lange, Li-Yang Tan
https://papers.nips.cc/paper_files/paper/2021/hash/30d4e6422cd65c7913bc9ce62e078b79-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12092-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/30d4e6422cd65c7913bc9ce62e078b79-Paper.pdf
https://openreview.net/forum?id=9UjRw5bqURS
null
We consider the problem of explaining the predictions of an arbitrary blackbox model $f$: given query access to $f$ and an instance $x$, output a small set of $x$'s features that in conjunction essentially determines $f(x)$. We design an efficient algorithm with provable guarantees on the succinctness and precision of the explanations that it returns. Prior algorithms were either efficient but lacked such guarantees, or achieved such guarantees but were inefficient. We obtain our algorithm via a connection to the problem of {\sl implicitly} learning decision trees. The implicit nature of this learning task allows for efficient algorithms even when the complexity of~$f$ necessitates an intractably large surrogate decision tree. We solve the implicit learning problem by bringing together techniques from learning theory, local computation algorithms, and complexity theory. Our approach of “explaining by implicit learning” shares elements of two previously disparate methods for post-hoc explanations, global and local explanations, and we make the case that it enjoys advantages of both.
null
Refined Learning Bounds for Kernel and Approximate $k$-Means
https://papers.nips.cc/paper_files/paper/2021/hash/30f8f6b940d1073d8b6a5eebc46dd6e5-Abstract.html
Yong Liu
https://papers.nips.cc/paper_files/paper/2021/hash/30f8f6b940d1073d8b6a5eebc46dd6e5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12093-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/30f8f6b940d1073d8b6a5eebc46dd6e5-Paper.pdf
https://openreview.net/forum?id=_tQns0wUl_3
https://papers.nips.cc/paper_files/paper/2021/file/30f8f6b940d1073d8b6a5eebc46dd6e5-Supplemental.pdf
Kernel $k$-means is one of the most popular approaches to clustering and its theoretical properties have been investigated for decades. However, the existing state-of-the-art risk bounds are of order $\mathcal{O}(k/\sqrt{n})$, which do not match the stated lower bound $\Omega(\sqrt{k/n})$ in terms of $k$, where $k$ is the number of clusters and $n$ is the size of the training set. In this paper, we study the statistical properties of kernel $k$-means and Nystr\"{o}m-based kernel $k$-means, and obtain optimal clustering risk bounds, which improve the existing risk bounds. Particularly, based on a refined upper bound of Rademacher complexity [21], we first derive an optimal risk bound of rate $\mathcal{O}(\sqrt{k/n})$ for the empirical risk minimizer (ERM), and further extend it to general cases beyond ERM. Then, we analyze the statistical effect of computational approximations of Nystr\"{o}m kernel $k$-means, and prove that it achieves the same statistical accuracy as the original kernel $k$-means using only $\Omega(\sqrt{nk})$ Nystr\"{o}m landmark points. We further relax the restriction on landmark points from $\Omega(\sqrt{nk})$ to $\Omega(\sqrt{n})$ under a mild condition. Finally, we validate the theoretical findings via numerical experiments.
null
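A small numpy sketch of Nyström-approximated kernel $k$-means as discussed in the abstract above: map the data through features built from a handful of landmark points, then run ordinary Lloyd iterations in that feature space. The toy data, landmark count, and bandwidth are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (-2, 0, 2)])  # toy data, 3 clusters

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

m, k = 20, 3                                   # number of landmarks and clusters
landmarks = X[rng.choice(len(X), m, replace=False)]
Kmm = rbf(landmarks, landmarks)
Knm = rbf(X, landmarks)
# Nystrom feature map Phi = K_nm K_mm^{-1/2}, so that Phi Phi^T approximates K
vals, vecs = np.linalg.eigh(Kmm + 1e-8 * np.eye(m))
Phi = Knm @ vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

centers = Phi[rng.choice(len(X), k, replace=False)]
for _ in range(50):                            # Lloyd's iterations in the feature space
    dists = ((Phi[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dists.argmin(1)
    centers = np.stack([Phi[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])

print(np.bincount(labels))                     # cluster sizes
```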
Learning Causal Semantic Representation for Out-of-Distribution Prediction
https://papers.nips.cc/paper_files/paper/2021/hash/310614fca8fb8e5491295336298c340f-Abstract.html
Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/310614fca8fb8e5491295336298c340f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12094-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/310614fca8fb8e5491295336298c340f-Paper.pdf
https://openreview.net/forum?id=-msETI57gCH
https://papers.nips.cc/paper_files/paper/2021/file/310614fca8fb8e5491295336298c340f-Supplemental.pdf
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output. To address the problem, we propose a Causal Semantic Generative model (CSG) based on causal reasoning so that the two factors are modeled separately, and develop methods for OOD prediction from a single training domain, which is common and challenging. The methods are based on the causal invariance principle, with a novel design in variational Bayes for both efficient learning and easy prediction. Theoretically, we prove that under certain conditions, CSG can identify the semantic factor by fitting training data, and this semantic identification guarantees the boundedness of the OOD generalization error and the success of adaptation. An empirical study shows improved OOD performance over prevailing baselines.
null
A first-order primal-dual method with adaptivity to local smoothness
https://papers.nips.cc/paper_files/paper/2021/hash/310b60949d2b6096903d7e8a539b20f5-Abstract.html
Maria-Luiza Vladarean, Yura Malitsky, Volkan Cevher
https://papers.nips.cc/paper_files/paper/2021/hash/310b60949d2b6096903d7e8a539b20f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12095-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/310b60949d2b6096903d7e8a539b20f5-Paper.pdf
https://openreview.net/forum?id=DtXBYsSOxCD
https://papers.nips.cc/paper_files/paper/2021/file/310b60949d2b6096903d7e8a539b20f5-Supplemental.pdf
We consider the problem of finding a saddle point for the convex-concave objective $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$, where $f$ is a convex function with locally Lipschitz gradient and $g$ is convex and possibly non-smooth. We propose an adaptive version of the Condat-Vũ algorithm, which alternates between primal gradient steps and dual proximal steps. The method achieves stepsize adaptivity through a simple rule involving $\|A\|$ and the norm of recently computed gradients of $f$. Under standard assumptions, we prove an $\mathcal{O}(k^{-1})$ ergodic convergence rate. Furthermore, when $f$ is also locally strongly convex and $A$ has full row rank we show that our method converges with a linear rate. Numerical experiments are provided for illustrating the practical performance of the algorithm.
null
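A plain numpy sketch of the (non-adaptive) Condat-Vu iteration that the abstract above builds on, applied to a toy instance $\min_x 0.5\|x - b\|^2 + \|Ax\|_1$ written in the saddle form $\min_x \max_y f(x) + \langle Ax, y\rangle - g^*(y)$. The paper's adaptive step-size rule is not reproduced here; fixed steps satisfying the usual convergence condition are used instead, and the problem data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(20)

# f(x) = 0.5*||x - b||^2 (smooth), g = ||.||_1, so g*(y) is the indicator of the
# unit L_inf ball and the dual proximal step is a coordinatewise projection.
L = np.linalg.norm(A, 2)
tau = sigma = 1.0 / (L + 1.0)          # fixed steps; the paper adapts these on the fly

x = np.zeros(20)
y = np.zeros(30)
for _ in range(500):
    x_new = x - tau * ((x - b) + A.T @ y)                       # primal gradient step
    y = np.clip(y + sigma * A @ (2 * x_new - x), -1.0, 1.0)     # dual proximal (projection) step
    x = x_new

print("objective:", 0.5 * np.sum((x - b) ** 2) + np.abs(A @ x).sum())
```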
A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/310ce61c90f3a46e340ee8257bc70e93-Abstract.html
Pan Zhou, Caiming Xiong, Xiaotong Yuan, Steven Chu Hong Hoi
https://papers.nips.cc/paper_files/paper/2021/hash/310ce61c90f3a46e340ee8257bc70e93-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12096-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/310ce61c90f3a46e340ee8257bc70e93-Paper.pdf
https://openreview.net/forum?id=P84bifNCpFQ
https://papers.nips.cc/paper_files/paper/2021/file/310ce61c90f3a46e340ee8257bc70e93-Supplemental.pdf
For an image query, unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives. Although intuitive, such a naive label assignment strategy cannot reveal the underlying semantic similarity between a query and its positives and negatives, and impairs performance, since some negatives are semantically similar to the query or even share the same semantic class as the query. In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination, while accurate labels benefit its generalization. Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning. It improves the label quality via two complementary modules: (i) self-labeling refinery (SLR) to generate accurate labels and (ii) momentum mixup (MM) to enhance the similarity between a query and its positive. SLR uses a positive of a query to estimate the semantic similarity between the query and its positive and negatives, and combines the estimated similarity with the vanilla label assignment in contrastive learning to iteratively generate more accurate and informative soft labels. We theoretically show that our SLR can exactly recover the true semantic labels of label-corrupted data, and supervises networks to achieve zero prediction error on classification tasks. MM randomly combines queries and positives to increase the semantic similarity between the generated virtual queries and their positives so as to improve label accuracy. Experimental results on CIFAR10, ImageNet, VOC and COCO show the effectiveness of our method.
null
Adversarial Robustness with Semi-Infinite Constrained Learning
https://papers.nips.cc/paper_files/paper/2021/hash/312ecfdfa8b239e076b114498ce21905-Abstract.html
Alexander Robey, Luiz Chamon, George J. Pappas, Hamed Hassani, Alejandro Ribeiro
https://papers.nips.cc/paper_files/paper/2021/hash/312ecfdfa8b239e076b114498ce21905-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12097-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/312ecfdfa8b239e076b114498ce21905-Paper.pdf
https://openreview.net/forum?id=e5RK939Zz1S
https://papers.nips.cc/paper_files/paper/2021/file/312ecfdfa8b239e076b114498ce21905-Supplemental.pdf
Despite strong performance in numerous applications, the fragility of deep learning to input perturbations has raised serious questions about its use in safety-critical domains. While adversarial training can mitigate this issue in practice, state-of-the-art methods are increasingly application-dependent, heuristic in nature, and suffer from fundamental trade-offs between nominal performance and robustness. Moreover, the problem of finding worst-case perturbations is non-convex and underparameterized, both of which engender an unfavorable optimization landscape. Thus, there is a gap between the theory and practice of robust learning, particularly with respect to when and why adversarial training works. In this paper, we take a constrained learning approach to address these questions and to provide a theoretical foundation for robust learning. In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions. Notably, we show that a myriad of previous robust training techniques can be recovered for particular, sub-optimal choices of these distributions. Using these insights, we then propose a hybrid Langevin Markov Chain Monte Carlo approach for which several common algorithms (e.g., PGD) are special cases. Finally, we show that our approach can mitigate the trade-off between nominal and robust performance, yielding state-of-the-art results on MNIST and CIFAR-10. Our code is available at: https://github.com/arobey1/advbench.
null
Conformal Time-series Forecasting
https://papers.nips.cc/paper_files/paper/2021/hash/312f1ba2a72318edaaa995a67835fad5-Abstract.html
Kamile Stankeviciute, Ahmed M. Alaa, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2021/hash/312f1ba2a72318edaaa995a67835fad5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12098-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/312f1ba2a72318edaaa995a67835fad5-Paper.pdf
https://openreview.net/forum?id=Rx9dBZaV_IP
https://papers.nips.cc/paper_files/paper/2021/file/312f1ba2a72318edaaa995a67835fad5-Supplemental.pdf
Current approaches for multi-horizon time series forecasting using recurrent neural networks (RNNs) focus on issuing point estimates, which is insufficient for decision-making in critical application domains where an uncertainty estimate is also required. Existing approaches for uncertainty quantification in RNN-based time-series forecasts are limited as they may require significant alterations to the underlying model architecture, may be computationally complex, may be difficult to calibrate, may incur high sample complexity, and may not provide theoretical guarantees on frequentist coverage. In this paper, we extend the inductive conformal prediction framework to the time-series forecasting setup, and propose a lightweight algorithm to address all of the above limitations, providing uncertainty estimates with theoretical guarantees for any multi-horizon forecast predictor and any dataset with minimal exchangeability assumptions. We demonstrate the effectiveness of our approach by comparing it with existing benchmarks on a variety of synthetic and real-world datasets.
null
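A small numpy sketch of the split/inductive conformal recipe that the abstract above extends to multi-horizon forecasts: calibrate per-horizon residual quantiles around any point forecaster, splitting the miscoverage budget across the H horizons (a Bonferroni-style correction, used here as one simple choice). The persistence forecaster and synthetic series are placeholders, not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(0)
H, alpha = 5, 0.1

def forecast(history):
    """Placeholder point forecaster: repeat the last observed value H times."""
    return np.repeat(history[-1], H)

# synthetic calibration series of shape (n_series, T); the last H points are targets
data = np.cumsum(rng.standard_normal((200, 30)), axis=1)
histories, targets = data[:, :-H], data[:, -H:]

residuals = np.abs(np.stack([targets[i] - forecast(histories[i]) for i in range(len(data))]))
# one quantile per horizon; alpha is split across horizons (Bonferroni)
q = np.quantile(residuals, 1 - alpha / H, axis=0)

new_history = np.cumsum(rng.standard_normal(25))
point = forecast(new_history)
lower, upper = point - q, point + q     # joint coverage ~ 1 - alpha under exchangeability
print(np.round(lower, 2), np.round(upper, 2))
```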
A 3D Generative Model for Structure-Based Drug Design
https://papers.nips.cc/paper_files/paper/2021/hash/314450613369e0ee72d0da7f6fee773c-Abstract.html
Shitong Luo, Jiaqi Guan, Jianzhu Ma, Jian Peng
https://papers.nips.cc/paper_files/paper/2021/hash/314450613369e0ee72d0da7f6fee773c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12099-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/314450613369e0ee72d0da7f6fee773c-Paper.pdf
https://openreview.net/forum?id=yDwfVD_odRo
https://papers.nips.cc/paper_files/paper/2021/file/314450613369e0ee72d0da7f6fee773c-Supplemental.pdf
We study a fundamental problem in structure-based drug design --- generating molecules that bind to specific protein binding sites. While we have witnessed the great success of deep generative models in drug design, the existing methods are mostly string-based or graph-based. They are limited by the lack of spatial information and thus unable to be applied to structure-based design tasks. In particular, such models have little or no knowledge of how molecules interact with their target proteins in 3D space. In this paper, we propose a 3D generative model that generates molecules given a designated 3D protein binding site. Specifically, given a binding site as the 3D context, our model estimates the probability density of atoms' occurrences in 3D space --- positions that are more likely to have atoms will be assigned higher probability. To generate 3D molecules, we propose an auto-regressive sampling scheme --- atoms are sampled sequentially from the learned distribution until there is no room for new atoms. Combined with this sampling scheme, our model can generate valid and diverse molecules, which could be applicable to various structure-based molecular design tasks such as molecule sampling and linker design. Experimental results demonstrate that molecules sampled from our model exhibit high binding affinity to specific targets and good drug properties such as drug-likeness even if the model is not explicitly optimized for them.
null
Bootstrapping the Error of Oja's Algorithm
https://papers.nips.cc/paper_files/paper/2021/hash/3152e3b1e52e2cb123363787d5f76c95-Abstract.html
Robert Lunde, Purnamrita Sarkar, Rachel Ward
https://papers.nips.cc/paper_files/paper/2021/hash/3152e3b1e52e2cb123363787d5f76c95-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12100-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3152e3b1e52e2cb123363787d5f76c95-Paper.pdf
https://openreview.net/forum?id=GPwmbxtG9Ow
https://papers.nips.cc/paper_files/paper/2021/file/3152e3b1e52e2cb123363787d5f76c95-Supplemental.pdf
We consider the problem of quantifying uncertainty for the estimation error of the leading eigenvector from Oja's algorithm for streaming principal component analysis, where the data are generated IID from some unknown distribution. By combining classical tools from the U-statistics literature with recent results on high-dimensional central limit theorems for quadratic forms of random vectors and concentration of matrix products, we establish a weighted $\chi^2$ approximation result for the $\sin^2$ error between the population eigenvector and the output of Oja’s algorithm. Since estimating the covariance matrix associated with the approximating distribution requires knowledge of unknown model parameters, we propose a multiplier bootstrap algorithm that may be updated in an online manner. We establish conditions under which the bootstrap distribution is close to the corresponding sampling distribution with high probability, thereby establishing the bootstrap as a consistent inferential method in an appropriate asymptotic regime.
null
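For reference, a few lines of numpy showing the streaming Oja update whose $\sin^2$ error the paper above studies and bootstraps. The step-size schedule and the simple diagonal-covariance data model are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 5000
true_dir = np.eye(d)[0]
cov = np.diag([4.0] + [1.0] * (d - 1))          # leading eigenvector is e_1

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for t in range(1, n + 1):
    x = rng.multivariate_normal(np.zeros(d), cov)
    w += (2.0 / t) * x * (x @ w)                # Oja step with a decaying learning rate
    w /= np.linalg.norm(w)                      # project back onto the unit sphere

sin2_err = 1.0 - (w @ true_dir) ** 2            # the sin^2 error between estimate and truth
print(sin2_err)
```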
Landscape analysis of an improved power method for tensor decomposition
https://papers.nips.cc/paper_files/paper/2021/hash/31784d9fc1fa0d25d04eae50ac9bf787-Abstract.html
Joe Kileel, Timo Klock, João M Pereira
https://papers.nips.cc/paper_files/paper/2021/hash/31784d9fc1fa0d25d04eae50ac9bf787-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12101-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31784d9fc1fa0d25d04eae50ac9bf787-Paper.pdf
https://openreview.net/forum?id=Aqzn23LfwT
https://papers.nips.cc/paper_files/paper/2021/file/31784d9fc1fa0d25d04eae50ac9bf787-Supplemental.pdf
In this work, we consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method (SPM) of Kileel and Pereira. Unlike popular alternative functionals for tensor decomposition, the SPM objective function has the desirable properties that its maximal value is known in advance, and its global optima are exactly the rank-1 components of the tensor when the input is sufficiently low-rank. We analyze the non-convex optimization landscape associated with the SPM objective. Our analysis accounts for working with noisy tensors. We derive quantitative bounds such that any second-order critical point with SPM objective value exceeding the bound must equal a tensor component in the noiseless case, and must approximate a tensor component in the noisy case. For decomposing tensors of size $D^{\times m}$, we obtain a near-global guarantee up to rank $\widetilde{o}(D^{\lfloor m/2 \rfloor})$ under a random tensor model, and a global guarantee up to rank $\mathcal{O}(D)$ assuming deterministic frame conditions. This implies that SPM with suitable initialization is a provable, efficient, robust algorithm for low-rank symmetric tensor decomposition. We conclude with numerics that show a practical preferability for using the SPM functional over a more established counterpart.
null
Curriculum Offline Imitating Learning
https://papers.nips.cc/paper_files/paper/2021/hash/31839b036f63806cba3f47b93af8ccb5-Abstract.html
Minghuan Liu, Hanye Zhao, Zhengyu Yang, Jian Shen, Weinan Zhang, Li Zhao, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/31839b036f63806cba3f47b93af8ccb5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12102-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31839b036f63806cba3f47b93af8ccb5-Paper.pdf
https://openreview.net/forum?id=q6Kknb68dQf
https://papers.nips.cc/paper_files/paper/2021/file/31839b036f63806cba3f47b93af8ccb5-Supplemental.pdf
Offline reinforcement learning (RL) tasks require the agent to learn from a pre-collected dataset with no further interactions with the environment. Despite the potential to surpass the behavioral policies, RL-based methods are generally impractical due to training instability and the bootstrapping of extrapolation errors, which always require careful hyperparameter tuning via online evaluation. In contrast, offline imitation learning (IL) has no such issues, since it learns the policy directly without estimating the value function by bootstrapping. However, IL is usually limited by the capability of the behavioral policy and tends to learn a mediocre behavior from a dataset collected by a mixture of policies. In this paper, we aim to take advantage of IL but mitigate such a drawback. Observing that behavior cloning is able to imitate neighboring policies with less data, we propose \textit{Curriculum Offline Imitation Learning (COIL)}, which utilizes an experience picking strategy to make the agent imitate adaptive neighboring policies with a higher return, and improves the current policy along curriculum stages. On continuous control benchmarks, we compare COIL against both imitation-based and RL-based methods, showing that COIL not only avoids learning a mediocre behavior on mixed datasets but is even competitive with state-of-the-art offline RL methods.
null
Robust Pose Estimation in Crowded Scenes with Direct Pose-Level Inference
https://papers.nips.cc/paper_files/paper/2021/hash/31857b449c407203749ae32dd0e7d64a-Abstract.html
Dongkai Wang, Shiliang Zhang, Gang Hua
https://papers.nips.cc/paper_files/paper/2021/hash/31857b449c407203749ae32dd0e7d64a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12103-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31857b449c407203749ae32dd0e7d64a-Paper.pdf
https://openreview.net/forum?id=AvHeCmK2fsE
https://papers.nips.cc/paper_files/paper/2021/file/31857b449c407203749ae32dd0e7d64a-Supplemental.pdf
Multi-person pose estimation in crowded scenes is challenging because overlapping and occlusions make it difficult to detect person bounding boxes and infer pose cues from individual keypoints. To address those issues, this paper proposes a direct pose-level inference strategy that is free of bounding box detection and keypoint grouping. Instead of inferring individual keypoints, the Pose-level Inference Network (PINet) directly infers the complete pose cues for a person from his/her visible body parts. PINet first applies the Part-based Pose Generation (PPG) to infer multiple coarse poses for each person from his/her body parts. Those coarse poses are refined by the Pose Refinement module through incorporating pose priors, and finally are fused in the Pose Fusion module. PINet relies on discriminative body parts to differentiate overlapped persons, and applies visual body cues to infer the global pose cues. Experiments on several crowded scenes pose estimation benchmarks demonstrate the superiority of PINet. For instance, it achieves 59.8% AP on the OCHuman dataset, outperforming the recent works by a large margin.
null
Ising Model Selection Using $\ell_{1}$-Regularized Linear Regression: A Statistical Mechanics Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/31917677a66c6eddd3ab1f68b0679e2f-Abstract.html
Xiangming Meng, Tomoyuki Obuchi, Yoshiyuki Kabashima
https://papers.nips.cc/paper_files/paper/2021/hash/31917677a66c6eddd3ab1f68b0679e2f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12104-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31917677a66c6eddd3ab1f68b0679e2f-Paper.pdf
https://openreview.net/forum?id=yTXtUSV-gk4
https://papers.nips.cc/paper_files/paper/2021/file/31917677a66c6eddd3ab1f68b0679e2f-Supplemental.pdf
We theoretically analyze the typical learning performance of $\ell_{1}$-regularized linear regression ($\ell_1$-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of $\ell_1$-LinR is obtained. Remarkably, despite the model misspecification, $\ell_1$-LinR is model selection consistent with the same order of sample complexity as $\ell_{1}$-regularized logistic regression ($\ell_1$-LogR), i.e., $M=\mathcal{O}\left(\log N\right)$, where $N$ is the number of variables of the Ising model. Moreover, we provide an efficient method to accurately predict the non-asymptotic behavior of $\ell_1$-LinR for moderate $M, N$, such as precision and recall. Simulations show a fairly good agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Although this paper mainly focuses on $\ell_1$-LinR, our method is readily applicable for precisely characterizing the typical learning performances of a wide class of $\ell_{1}$-regularized $M$-estimators including $\ell_1$-LogR and interaction screening.
null
Conformal Prediction using Conditional Histograms
https://papers.nips.cc/paper_files/paper/2021/hash/31b3b31a1c2f8a370206f111127c0dbd-Abstract.html
Matteo Sesia, Yaniv Romano
https://papers.nips.cc/paper_files/paper/2021/hash/31b3b31a1c2f8a370206f111127c0dbd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12105-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31b3b31a1c2f8a370206f111127c0dbd-Paper.pdf
https://openreview.net/forum?id=EvhsTX6GMyM
https://papers.nips.cc/paper_files/paper/2021/file/31b3b31a1c2f8a370206f111127c0dbd-Supplemental.pdf
This paper develops a conformal method to compute prediction intervals for non-parametric regression that can automatically adapt to skewed data. Leveraging black-box machine learning algorithms to estimate the conditional distribution of the outcome using histograms, it translates their output into the shortest prediction intervals with approximate conditional coverage. The resulting prediction intervals provably have marginal coverage in finite samples, while asymptotically achieving conditional coverage and optimal length if the black-box model is consistent. Numerical experiments with simulated and real data demonstrate improved performance compared to state-of-the-art alternatives, including conformalized quantile regression and other distributional conformal prediction approaches.
null
Contrastive Graph Poisson Networks: Semi-Supervised Learning with Extremely Limited Labels
https://papers.nips.cc/paper_files/paper/2021/hash/31c0b36aef265d9221af80872ceb62f9-Abstract.html
Sheng Wan, Yibing Zhan, Liu Liu, Baosheng Yu, Shirui Pan, Chen Gong
https://papers.nips.cc/paper_files/paper/2021/hash/31c0b36aef265d9221af80872ceb62f9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12106-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf
https://openreview.net/forum?id=ek0RuhPoGiD
https://papers.nips.cc/paper_files/paper/2021/file/31c0b36aef265d9221af80872ceb62f9-Supplemental.pdf
Graph Neural Networks (GNNs) have achieved remarkable performance in the task of semi-supervised node classification. However, most existing GNN models require sufficient labeled data for effective network training, and their performance can be seriously degraded when labels are extremely limited. To address this issue, we propose a new framework termed Contrastive Graph Poisson Networks (CGPN) for node classification under extremely limited labeled data. Specifically, our CGPN derives from variational inference; it integrates a newly designed Graph Poisson Network (GPN), which effectively propagates the limited labels to the entire graph, with a normal GNN, such as a Graph Attention Network, that flexibly guides the propagation of the GPN, and applies a contrastive objective to further exploit the supervision information from the learning processes of the GPN and GNN models. Essentially, our CGPN can enhance the learning performance of GNNs under extremely limited labels by contrastively propagating the limited labels to the entire graph. We conducted extensive experiments on different types of datasets to demonstrate the superiority of CGPN.
null
Collaborative Uncertainty in Multi-Agent Trajectory Forecasting
https://papers.nips.cc/paper_files/paper/2021/hash/31ca0ca71184bbdb3de7b20a51e88e90-Abstract.html
Bohan Tang, Yiqi Zhong, Ulrich Neumann, Gang Wang, Siheng Chen, Ya Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/31ca0ca71184bbdb3de7b20a51e88e90-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12107-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/31ca0ca71184bbdb3de7b20a51e88e90-Paper.pdf
https://openreview.net/forum?id=sO4tOk2lg9I
https://papers.nips.cc/paper_files/paper/2021/file/31ca0ca71184bbdb3de7b20a51e88e90-Supplemental.zip
Uncertainty modeling is critical in trajectory-forecasting systems for both interpretation and safety reasons. To better predict the future trajectories of multiple agents, recent works have introduced interaction modules to capture interactions among agents. This approach leads to correlations among the predicted trajectories. However, the uncertainty brought by such correlations is neglected. To fill this gap, we propose a novel concept, collaborative uncertainty (CU), which models the uncertainty resulting from the interaction module. We build a general CU-based framework to make a prediction model learn the future trajectory and the corresponding uncertainty. The CU-based framework is integrated as a plugin module into current state-of-the-art (SOTA) systems and deployed in two special cases based on multivariate Gaussian and Laplace distributions. In each case, we conduct extensive experiments on two synthetic datasets and two public, large-scale benchmarks of trajectory forecasting. The results are promising: 1) The results on synthetic datasets show that the CU-based framework allows the model to nicely rebuild the ground-truth distribution. 2) The results on trajectory forecasting benchmarks demonstrate that the CU-based framework steadily helps SOTA systems improve their performance. In particular, the proposed CU-based framework helps VectorNet improve Final Displacement Error by 57 cm on the nuScenes dataset. 3) The visualization results of CU illustrate that the value of CU is highly related to the amount of interactive information among agents.
null
Network-to-Network Regularization: Enforcing Occam's Razor to Improve Generalization
https://papers.nips.cc/paper_files/paper/2021/hash/321cf86b4c9f5ddd04881a44067c2a5a-Abstract.html
Rohan Ghosh, Mehul Motani
https://papers.nips.cc/paper_files/paper/2021/hash/321cf86b4c9f5ddd04881a44067c2a5a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12108-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/321cf86b4c9f5ddd04881a44067c2a5a-Paper.pdf
https://openreview.net/forum?id=swdfQTe_X9
https://papers.nips.cc/paper_files/paper/2021/file/321cf86b4c9f5ddd04881a44067c2a5a-Supplemental.pdf
What makes a classifier have the ability to generalize? There have been many important attempts to address this question, but a clear answer is still elusive. Proponents of complexity theory find that the complexity of the classifier's function space is key to deciding generalization, whereas other recent work reveals that classifiers which extract invariant feature representations are likely to generalize better. Recent theoretical and empirical studies, however, have shown that even within a classifier's function space, there can be significant differences in the ability to generalize. Specifically, empirical studies have shown that among functions which have a good training data fit, functions with lower Kolmogorov complexity (KC) are likely to generalize better, while the opposite is true for functions of higher KC. Motivated by these findings, we propose, in this work, a novel measure of complexity called Kolmogorov Growth (KG), which we use to derive new generalization error bounds that only depend on the final choice of the classification function. Guided by the bounds, we propose a novel way of regularizing neural networks by constraining the network trajectory to remain in the low-KG zone during training. Minimizing KG while learning is akin to applying Occam's razor to neural networks. The proposed approach, called network-to-network (N2N) regularization, leads to clear improvements in the generalization ability of classifiers. We verify this for three popular image datasets (MNIST, CIFAR-10, CIFAR-100) across varying training data sizes. Empirical studies find that conventional training of neural networks, unlike network-to-network regularization, leads to networks of high KG and lower test accuracies. Furthermore, we present the benefits of N2N regularization in the scenario where the training data labels are noisy. Using N2N regularization, we achieve competitive performance on MNIST, CIFAR-10 and CIFAR-100 datasets with corrupted training labels, significantly improving network performance compared to standard cross-entropy baselines in most cases. These findings illustrate the many benefits obtained from imposing a function complexity prior like Kolmogorov Growth during the training process.
null
Generalized and Discriminative Few-Shot Object Detection via SVD-Dictionary Enhancement
https://papers.nips.cc/paper_files/paper/2021/hash/325995af77a0e8b06d1204a171010b3a-Abstract.html
Aming WU, Suqi Zhao, Cheng Deng, Wei Liu
https://papers.nips.cc/paper_files/paper/2021/hash/325995af77a0e8b06d1204a171010b3a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12109-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/325995af77a0e8b06d1204a171010b3a-Paper.pdf
https://openreview.net/forum?id=Kr6jWI4PSRd
https://papers.nips.cc/paper_files/paper/2021/file/325995af77a0e8b06d1204a171010b3a-Supplemental.pdf
Few-shot object detection (FSOD) aims to detect new objects based on few annotated samples. To alleviate the impact of few samples, enhancing the generalization and discrimination abilities of detectors on new objects plays an important role. In this paper, we explore employing Singular Value Decomposition (SVD) to boost both the generalization and discrimination abilities. Specifically, we propose a novel method, namely SVD-Dictionary enhancement, to build two separated spaces based on the sorted singular values. Concretely, the eigenvectors corresponding to larger singular values are used to build the generalization space in which localization is performed, as these eigenvectors generally suppress certain variations (e.g., the variation of styles) and contain intrinsic characteristics of objects. Meanwhile, since the eigenvectors corresponding to relatively smaller singular values may contain richer category-related information, we can utilize them to build the discrimination space in which classification is performed. Dictionary learning is further leveraged to capture high-level discriminative information from the discrimination space, which is beneficial for improving detection accuracy. In the experiments, we separately verify the effectiveness of our method on the PASCAL VOC and COCO benchmarks. Particularly, for the 2-shot case in VOC split1, our method significantly outperforms the baseline by 6.2\%. Moreover, visualization analysis shows that our method is instrumental for FSOD.
null
Conditioning Sparse Variational Gaussian Processes for Online Decision-making
https://papers.nips.cc/paper_files/paper/2021/hash/325eaeac5bef34937cfdc1bd73034d17-Abstract.html
Wesley J. Maddox, Samuel Stanton, Andrew G. Wilson
https://papers.nips.cc/paper_files/paper/2021/hash/325eaeac5bef34937cfdc1bd73034d17-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12110-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/325eaeac5bef34937cfdc1bd73034d17-Paper.pdf
https://openreview.net/forum?id=CCvpHGFOzC3
https://papers.nips.cc/paper_files/paper/2021/file/325eaeac5bef34937cfdc1bd73034d17-Supplemental.pdf
With a principled representation of uncertainty and closed form posterior updates, Gaussian processes (GPs) are a natural choice for online decision making. However, Gaussian processes typically require at least $\mathcal{O}(n^2)$ computations for $n$ training points, limiting their general applicability. Stochastic variational Gaussian processes (SVGPs) can provide scalable inference for a dataset of fixed size, but are difficult to efficiently condition on new data. We propose online variational conditioning (OVC), a procedure for efficiently conditioning SVGPs in an online setting that does not require re-training through the evidence lower bound with the addition of new data. OVC enables the pairing of SVGPs with advanced look-ahead acquisition functions for black-box optimization, even with non-Gaussian likelihoods. We show OVC provides compelling performance in a range of applications including active learning of malaria incidence, and reinforcement learning on MuJoCo simulated robotic control tasks.
null
Spherical Motion Dynamics: Learning Dynamics of Normalized Neural Network using SGD and Weight Decay
https://papers.nips.cc/paper_files/paper/2021/hash/326a8c055c0d04f5b06544665d8bb3ea-Abstract.html
Ruosi Wan, Zhanxing Zhu, Xiangyu Zhang, Jian Sun
https://papers.nips.cc/paper_files/paper/2021/hash/326a8c055c0d04f5b06544665d8bb3ea-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12111-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/326a8c055c0d04f5b06544665d8bb3ea-Paper.pdf
https://openreview.net/forum?id=RcbphT7qjTq
https://papers.nips.cc/paper_files/paper/2021/file/326a8c055c0d04f5b06544665d8bb3ea-Supplemental.pdf
In this paper, we comprehensively reveal the learning dynamics of normalized neural networks using Stochastic Gradient Descent (with momentum) and Weight Decay (WD), named Spherical Motion Dynamics (SMD). Most related works focus on studying the behavior of the "effective learning rate" in the "equilibrium" state, i.e., assuming the weight norm remains unchanged. However, their discussion on why this equilibrium can be reached is either absent or less convincing. Our work directly explores the cause of equilibrium, as a special state of SMD. Specifically, 1) we introduce the assumptions that can lead to the equilibrium state in SMD, and prove that equilibrium can be reached at a linear rate under the given assumptions; 2) we propose the "angular update" as a substitute for the effective learning rate to depict the state of SMD, and derive the theoretical value of the angular update in the equilibrium state; 3) we verify our assumptions and theoretical results on various large-scale computer vision tasks including ImageNet and MSCOCO with standard settings. Experiment results show that our theoretical findings agree well with empirical observations. We also show that the behavior of the angular update in SMD can produce interesting effects on the optimization of neural networks in practice.
null
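A short PyTorch sketch of measuring the "angular update" discussed in the abstract above: the angle between a layer's weight vector before and after one SGD step with momentum and weight decay. The tiny linear layer, the stand-in loss, and the hyperparameters are placeholders, not the paper's experimental setup.

```python
import torch

torch.manual_seed(0)
layer = torch.nn.Linear(32, 16, bias=False)
opt = torch.optim.SGD(layer.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

angles = []
for _ in range(100):
    w_before = layer.weight.detach().flatten().clone()
    x = torch.randn(64, 32)
    loss = layer(x).pow(2).mean()                 # stand-in loss for illustration only
    opt.zero_grad()
    loss.backward()
    opt.step()
    w_after = layer.weight.detach().flatten()
    cos = torch.nn.functional.cosine_similarity(w_before, w_after, dim=0)
    angles.append(torch.acos(cos.clamp(-1, 1)).item())   # angular update of this step

print("mean angular update (rad):", sum(angles) / len(angles))
```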
Imitating Deep Learning Dynamics via Locally Elastic Stochastic Differential Equations
https://papers.nips.cc/paper_files/paper/2021/hash/327af0f71f7acdfd882774225f04775f-Abstract.html
Jiayao Zhang, Hua Wang, Weijie Su
https://papers.nips.cc/paper_files/paper/2021/hash/327af0f71f7acdfd882774225f04775f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12112-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/327af0f71f7acdfd882774225f04775f-Paper.pdf
https://openreview.net/forum?id=zEuLFJCRk4X
https://papers.nips.cc/paper_files/paper/2021/file/327af0f71f7acdfd882774225f04775f-Supplemental.pdf
Understanding the training dynamics of deep learning models is perhaps a necessary step toward demystifying the effectiveness of these models. In particular, how do training data from different classes gradually become separable in their feature spaces when training neural networks using stochastic gradient descent? In this paper, we model the evolution of features during deep learning training using a set of stochastic differential equations (SDEs), each corresponding to a training sample. As a crucial ingredient in our modeling strategy, each SDE contains a drift term that reflects the impact of backpropagation at an input on the features of all samples. Our main finding uncovers a sharp phase transition phenomenon regarding the intra-class impact: if the SDEs are locally elastic in the sense that the impact is more significant on samples from the same class as the input, the features of the training data become linearly separable---meaning vanishing training loss; otherwise, the features are not separable, no matter how long the training time is. In the presence of local elasticity, moreover, an analysis of our SDEs shows the emergence of a simple geometric structure called neural collapse of the features. Taken together, our results shed light on the decisive role of local elasticity underlying the training dynamics of neural networks. We corroborate our theoretical analysis with experiments on a synthesized dataset of geometric shapes as well as on CIFAR-10.
null
Probabilistic Forecasting: A Level-Set Approach
https://papers.nips.cc/paper_files/paper/2021/hash/32b127307a606effdcc8e51f60a45922-Abstract.html
Hilaf Hasson, Bernie Wang, Tim Januschowski, Jan Gasthaus
https://papers.nips.cc/paper_files/paper/2021/hash/32b127307a606effdcc8e51f60a45922-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12113-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32b127307a606effdcc8e51f60a45922-Paper.pdf
https://openreview.net/forum?id=VD3TMzyxKK
https://papers.nips.cc/paper_files/paper/2021/file/32b127307a606effdcc8e51f60a45922-Supplemental.pdf
Large-scale time series panels have become ubiquitous over the last years in areas such as retail, operational metrics, IoT, and medical domain (to name only a few). This has resulted in a need for forecasting techniques that effectively leverage all available data by learning across all time series in each panel. Among the desirable properties of forecasting techniques, being able to generate probabilistic predictions ranks among the top. In this paper, we therefore present Level Set Forecaster (LSF), a simple yet effective general approach to transform a point estimator into a probabilistic one. By recognizing the connection of our algorithm to random forests (RFs) and quantile regression forests (QRFs), we are able to prove consistency guarantees of our approach under mild assumptions on the underlying point estimator. As a byproduct, we prove the first consistency results for QRFs under the CART-splitting criterion. Empirical experiments show that our approach, equipped with tree-based models as the point estimator, rivals state-of-the-art deep learning models in terms of forecasting accuracy.
null
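A rough numpy sketch of the level-set idea in the abstract above: group training examples by their point forecasts and use the empirical targets inside each group as the predictive distribution for new inputs that land in the same group. The equal-count binning, the toy data, and the placeholder point model are simplifications of the paper's construction, not its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, 2000)
y_train = np.sin(x_train) + 0.3 * rng.standard_normal(2000)

def point_model(x):
    """Placeholder point estimator standing in for any trained regressor."""
    return np.sin(x)

# Build level sets: equal-count bins over the point predictions on the training set
preds = point_model(x_train)
n_bins = 20
edges = np.quantile(preds, np.linspace(0, 1, n_bins + 1))
bin_id = np.clip(np.searchsorted(edges, preds, side="right") - 1, 0, n_bins - 1)
level_sets = [y_train[bin_id == b] for b in range(n_bins)]

def predictive_quantiles(x_new, qs=(0.1, 0.5, 0.9)):
    b = np.clip(np.searchsorted(edges, point_model(x_new), side="right") - 1, 0, n_bins - 1)
    return np.quantile(level_sets[b], qs)    # empirical quantiles of the matched level set

print(predictive_quantiles(1.2))
```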
Roto-translated Local Coordinate Frames For Interacting Dynamical Systems
https://papers.nips.cc/paper_files/paper/2021/hash/32b991e5d77ad140559ffb95522992d0-Abstract.html
Miltiadis Kofinas, Naveen Nagaraja, Efstratios Gavves
https://papers.nips.cc/paper_files/paper/2021/hash/32b991e5d77ad140559ffb95522992d0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12114-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32b991e5d77ad140559ffb95522992d0-Paper.pdf
https://openreview.net/forum?id=c3RKZas9am
https://papers.nips.cc/paper_files/paper/2021/file/32b991e5d77ad140559ffb95522992d0-Supplemental.pdf
Modelling interactions is critical in learning complex dynamical systems, namely systems of interacting objects with highly non-linear and time-dependent behaviour. A large class of such systems can be formalized as $\textit{geometric graphs}$, $\textit{i.e.}$ graphs with nodes positioned in the Euclidean space given an $\textit{arbitrarily}$ chosen global coordinate system, for instance vehicles in a traffic scene. Notwithstanding the arbitrary global coordinate system, the governing dynamics of the respective dynamical systems are invariant to rotations and translations, also known as $\textit{Galilean invariance}$. As ignoring these invariances leads to worse generalization, in this work we propose local coordinate systems per node-object to induce roto-translation invariance to the geometric graph of the interacting dynamical system. Further, the local coordinate systems allow for a natural definition of anisotropic filtering in graph neural networks. Experiments in traffic scenes, 3D motion capture, and colliding particles demonstrate the proposed approach comfortably outperforms the recent state-of-the-art.
null
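A small numpy sketch of the roto-translated local frame idea from the abstract above: each node expresses its neighbours' states in a coordinate system centred at its own position and rotated to align with its own velocity, so the features are invariant to global rotations and translations. The 2D setting and the position/velocity state layout are illustrative assumptions.

```python
import numpy as np

def local_frame_features(pos, vel):
    """pos, vel: (N, 2) arrays. Returns (N, N, 4) relative positions and velocities
    expressed in each node's own roto-translated local frame."""
    n = len(pos)
    headings = np.arctan2(vel[:, 1], vel[:, 0])
    feats = np.zeros((n, n, 4))
    for i in range(n):
        c, s = np.cos(-headings[i]), np.sin(-headings[i])
        R = np.array([[c, -s], [s, c]])                 # rotation into node i's frame
        rel_pos = (pos - pos[i]) @ R.T                  # translate, then rotate positions
        rel_vel = vel @ R.T                             # velocities only need rotating
        feats[i] = np.concatenate([rel_pos, rel_vel], axis=1)
    return feats

pos = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])
vel = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
f = local_frame_features(pos, vel)

# invariance check: rotate and translate the whole scene, features stay the same
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
f2 = local_frame_features(pos @ Q.T + np.array([5.0, -3.0]), vel @ Q.T)
print(np.allclose(f, f2))   # True
```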
ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions
https://papers.nips.cc/paper_files/paper/2021/hash/32b9e74c8f60958158eba8d1fa372971-Abstract.html
Luigi Carratino, Stefano Vigogna, Daniele Calandriello, Lorenzo Rosasco
https://papers.nips.cc/paper_files/paper/2021/hash/32b9e74c8f60958158eba8d1fa372971-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12115-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32b9e74c8f60958158eba8d1fa372971-Paper.pdf
https://openreview.net/forum?id=AIIzCpn_GJ
null
We introduce ParK, a new large-scale solver for kernel ridge regression. Our approach combines partitioning with random projections and iterative optimization to reduce space and time complexity while provably maintaining the same statistical accuracy. In particular, constructing suitable partitions directly in the feature space rather than in the input space, we promote orthogonality between the local estimators, thus ensuring that key quantities such as local effective dimension and bias remain under control. We characterize the statistical-computational tradeoff of our model, and demonstrate the effectiveness of our method by numerical experiments on large-scale datasets.
null
Scaling Gaussian Processes with Derivative Information Using Variational Inference
https://papers.nips.cc/paper_files/paper/2021/hash/32bbf7b2bc4ed14eb1e9c2580056a989-Abstract.html
Misha Padidar, Xinran Zhu, Leo Huang, Jacob Gardner, David Bindel
https://papers.nips.cc/paper_files/paper/2021/hash/32bbf7b2bc4ed14eb1e9c2580056a989-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12116-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32bbf7b2bc4ed14eb1e9c2580056a989-Paper.pdf
https://openreview.net/forum?id=mV4hBipdm5l
https://papers.nips.cc/paper_files/paper/2021/file/32bbf7b2bc4ed14eb1e9c2580056a989-Supplemental.pdf
Gaussian processes with derivative information are useful in many settings where derivative information is available, including numerous Bayesian optimization and regression tasks that arise in the natural sciences. Incorporating derivative observations, however, comes with a dominating $O(N^3D^3)$ computational cost when training on $N$ points in $D$ input dimensions. This is intractable for even moderately sized problems. While recent work has addressed this intractability in the low-$D$ setting, the high-$N$, high-$D$ setting is still unexplored and of great value, particularly as machine learning problems increasingly become high dimensional. In this paper, we introduce methods to achieve fully scalable Gaussian process regression with derivatives using variational inference. Analogous to the use of inducing values to sparsify the labels of a training set, we introduce the concept of inducing directional derivatives to sparsify the partial derivative information of the training set. This enables us to construct a variational posterior that incorporates derivative information but whose size depends neither on the full dataset size $N$ nor the full dimensionality $D$. We demonstrate the full scalability of our approach on a variety of tasks, ranging from a high dimensional Stellarator fusion regression task to training graph convolutional neural networks on PubMed using Bayesian optimization. Surprisingly, we additionally find that our approach can improve regression performance even in settings where only label data is available.
null
On the Representation of Solutions to Elliptic PDEs in Barron Spaces
https://papers.nips.cc/paper_files/paper/2021/hash/32cfdce9631d8c7906e8e9d6e68b514b-Abstract.html
Ziang Chen, Jianfeng Lu, Yulong Lu
https://papers.nips.cc/paper_files/paper/2021/hash/32cfdce9631d8c7906e8e9d6e68b514b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12117-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32cfdce9631d8c7906e8e9d6e68b514b-Paper.pdf
https://openreview.net/forum?id=ST1P270dwOE
https://papers.nips.cc/paper_files/paper/2021/file/32cfdce9631d8c7906e8e9d6e68b514b-Supplemental.pdf
Numerical solutions to high-dimensional partial differential equations (PDEs) based on neural networks have seen exciting developments. This paper derives complexity estimates of the solutions of $d$-dimensional second-order elliptic PDEs in the Barron space, that is, a set of functions admitting an integral representation in terms of certain parametric ridge functions against a probability measure on the parameters. We prove, under some appropriate assumptions, that if the coefficients and the source term of the elliptic PDE lie in Barron spaces, then the solution of the PDE is $\epsilon$-close with respect to the $H^1$ norm to a Barron function. Moreover, we prove dimension-explicit bounds for the Barron norm of this approximate solution, depending at most polynomially on the dimension $d$ of the PDE. As a direct consequence of the complexity estimates, the solution of the PDE can be approximated on any bounded domain by a two-layer neural network with respect to the $H^1$ norm with a dimension-explicit convergence rate.
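For readers unfamiliar with the function class, one commonly used definition of a Barron function and its norm is sketched below; several equivalent variants exist in the literature, and the exact convention adopted in the paper is an assumption here.

```latex
% A Barron function admits an integral representation over ridge functions:
f(x) \;=\; \int a\,\sigma\!\left(w^{\top}x + b\right)\,\rho(\mathrm{d}a,\mathrm{d}w,\mathrm{d}b),
% with sigma an activation (e.g., ReLU) and rho a probability measure on the parameters.
% One common Barron norm takes the infimum over all such representations:
\|f\|_{\mathcal{B}} \;=\; \inf_{\rho}\; \mathbb{E}_{(a,w,b)\sim\rho}\big[\,|a|\,\big(\|w\|_{1}+|b|\big)\big].
```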
null
A/B Testing for Recommender Systems in a Two-sided Marketplace
https://papers.nips.cc/paper_files/paper/2021/hash/32e19424b63cc63077a4031b87fb1010-Abstract.html
Preetam Nandy, Divya Venugopalan, Chun Lo, Shaunak Chatterjee
https://papers.nips.cc/paper_files/paper/2021/hash/32e19424b63cc63077a4031b87fb1010-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12118-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32e19424b63cc63077a4031b87fb1010-Paper.pdf
https://openreview.net/forum?id=GOnkx08Gm6
https://papers.nips.cc/paper_files/paper/2021/file/32e19424b63cc63077a4031b87fb1010-Supplemental.pdf
Two-sided marketplaces are standard business models of many online platforms (e.g., Amazon, Facebook, LinkedIn), wherein the platforms have consumers, buyers or content viewers on one side and producers, sellers or content-creators on the other. Consumer-side measurement of the impact of a treatment variant can be done via simple online A/B testing. Producer-side measurement is more challenging because the producer experience depends on the treatment assignment of the consumers. Existing approaches for producer-side measurement are either based on graph cluster-based randomization or on certain treatment propagation assumptions. The former approach results in low-powered experiments as the producer-consumer network density increases, and the latter approach lacks a strict notion of error control. In this paper, we propose (i) a quantification of the quality of a producer-side experiment design, and (ii) a new experiment design mechanism that generates high-quality experiments based on this quantification. Our approach, called UniCoRn (Unifying Counterfactual Rankings), provides explicit control over the quality of the experiment and its computation cost. Further, we prove that our experiment design is optimal with respect to the proposed design quality measure. Our approach is agnostic to the density of the producer-consumer network and does not rely on any treatment propagation assumption. Moreover, unlike existing approaches, we do not need to know the underlying network in advance, making the method widely applicable in industrial settings where the underlying network is unknown and challenging to predict a priori due to its dynamic nature. We use simulations to validate our approach and compare it against existing methods. We also deployed UniCoRn in an edge recommendation application that serves tens of millions of members and billions of edge recommendations daily.
null
Retiring Adult: New Datasets for Fair Machine Learning
https://papers.nips.cc/paper_files/paper/2021/hash/32e54441e6382a7fbacbbbaf3c450059-Abstract.html
Frances Ding, Moritz Hardt, John Miller, Ludwig Schmidt
https://papers.nips.cc/paper_files/paper/2021/hash/32e54441e6382a7fbacbbbaf3c450059-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12119-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/32e54441e6382a7fbacbbbaf3c450059-Paper.pdf
https://openreview.net/forum?id=bYi_2708mKK
https://papers.nips.cc/paper_files/paper/2021/file/32e54441e6382a7fbacbbbaf3c450059-Supplemental.pdf
Although the fairness community has recognized the importance of data, researchers in the area primarily rely on UCI Adult when it comes to tabular data. Derived from a 1994 US Census survey, this dataset has appeared in hundreds of research papers where it served as the basis for the development and comparison of many algorithmic fairness interventions. We reconstruct a superset of the UCI Adult data from available US Census sources and reveal idiosyncrasies of the UCI Adult dataset that limit its external validity. Our primary contribution is a suite of new datasets derived from US Census surveys that extend the existing data ecosystem for research on fair machine learning. We create prediction tasks relating to income, employment, health, transportation, and housing. The data span multiple years and all states of the United States, allowing researchers to study temporal shift and geographic variation. We highlight a broad initial sweep of new empirical insights relating to trade-offs between fairness criteria, performance of algorithmic interventions, and the role of distribution shift based on our new datasets. Our findings inform ongoing debates, challenge some existing narratives, and point to future research directions.
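The datasets described above are distributed through the authors' accompanying `folktables` Python package; the snippet below is a minimal usage sketch under the assumption of its documented interface, with the state, year, and downstream model chosen arbitrarily.

```python
# pip install folktables scikit-learn   (package name assumed from the authors' release)
from folktables import ACSDataSource, ACSIncome
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pull one year of ACS person records for a single state.
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# ACSIncome plays the role that UCI Adult used to: predict income above $50k.
features, label, group = ACSIncome.df_to_numpy(acs_data)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    features, label, group, test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))
print("accuracy by group:",
      {g: clf.score(X_te[g_te == g], y_te[g_te == g]) for g in set(g_te)})
```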
null
Cardinality constrained submodular maximization for random streams
https://papers.nips.cc/paper_files/paper/2021/hash/333222170ab9edca4785c39f55221fe7-Abstract.html
Paul Liu, Aviad Rubinstein, Jan Vondrak, Junyao Zhao
https://papers.nips.cc/paper_files/paper/2021/hash/333222170ab9edca4785c39f55221fe7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12120-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/333222170ab9edca4785c39f55221fe7-Paper.pdf
https://openreview.net/forum?id=7_t4Gvubkeo
https://papers.nips.cc/paper_files/paper/2021/file/333222170ab9edca4785c39f55221fe7-Supplemental.pdf
We consider the problem of maximizing submodular functions in the single-pass streaming and secretaries-with-shortlists models, both with random arrival order. For cardinality-constrained monotone functions, Agrawal, Shadravan, and Stein~\cite{SMC19} gave a single-pass $(1-1/e-\varepsilon)$-approximation algorithm using only linear memory, but their exponential dependence on $\varepsilon$ makes it impractical even for $\varepsilon=0.1$. We simplify both the algorithm and the analysis, obtaining an exponential improvement in the $\varepsilon$-dependence (in particular, $O(k/\varepsilon)$ memory). Extending these techniques, we also give a simple $(1/e-\varepsilon)$-approximation for non-monotone functions in $O(k/\varepsilon)$ memory. For the monotone case, we also give a corresponding unconditional hardness barrier of $1-1/e+\varepsilon$ for single-pass algorithms in randomly ordered streams, even assuming unlimited computation. Finally, we show that the algorithms are simple to implement and work well on real-world datasets.
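To make the streaming model concrete, the snippet below runs a standard threshold-based single-pass baseline (in the spirit of Sieve-Streaming) on a toy monotone coverage function with a random arrival order. It is not the algorithm from the paper, and for brevity it assumes the largest singleton value is known up front rather than estimated on the fly.

```python
import numpy as np

def coverage(S, sets):
    """Monotone submodular toy objective: size of the union of the chosen sets."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def sieve_stream(stream, sets, k, eps=0.1):
    """Single-pass thresholding baseline for max f(S) subject to |S| <= k."""
    m = max(coverage([e], sets) for e in range(len(sets)))   # assumed known up front
    thresholds, v = [], m                                    # geometric guesses of OPT
    while v <= 2 * k * m:
        thresholds.append(v)
        v *= 1 + eps
    cand = {v: [] for v in thresholds}
    for e in stream:                                         # one pass over the stream
        for v, S in cand.items():
            if len(S) < k:
                gain = coverage(S + [e], sets) - coverage(S, sets)
                if gain >= (v / 2 - coverage(S, sets)) / (k - len(S)):
                    S.append(e)
    return max(cand.values(), key=lambda S: coverage(S, sets))

rng = np.random.default_rng(0)
sets = [set(rng.choice(100, size=8, replace=False).tolist()) for _ in range(60)]
stream = rng.permutation(60)                                 # random arrival order
best = sieve_stream(stream, sets, k=5)
print(sorted(best), coverage(best, sets))
```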
null
Self-Instantiated Recurrent Units with Dynamic Soft Recursion
https://papers.nips.cc/paper_files/paper/2021/hash/3341f6f048384ec73a7ba2e77d2db48b-Abstract.html
Aston Zhang, Yi Tay, Yikang Shen, Alvin Chan, SHUAI ZHANG
https://papers.nips.cc/paper_files/paper/2021/hash/3341f6f048384ec73a7ba2e77d2db48b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12121-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/3341f6f048384ec73a7ba2e77d2db48b-Paper.pdf
https://openreview.net/forum?id=7Da3azsjjlh
https://papers.nips.cc/paper_files/paper/2021/file/3341f6f048384ec73a7ba2e77d2db48b-Supplemental.pdf
While standard recurrent neural networks explicitly impose a chain structure on different forms of data, they do not have an explicit bias towards recursive self-instantiation where the extent of recursion is dynamic. Given diverse and even growing data modalities (e.g., logic, algorithmic input and output, music, code, images, and language) that can be expressed in sequences and may benefit from more architectural flexibility, we propose the self-instantiated recurrent unit (Self-IRU) with a novel inductive bias towards dynamic soft recursion. On one hand, the Self-IRU is characterized by recursive self-instantiation via its gating functions, i.e., gating mechanisms of the Self-IRU are controlled by instances of the Self-IRU itself, which are repeatedly invoked in a recursive fashion. On the other hand, the extent of the Self-IRU recursion is controlled by gates whose values are between 0 and 1 and may vary across the temporal dimension of sequences, enabling dynamic soft recursion depth at each time step. The architectural flexibility and effectiveness of our proposed approach are demonstrated across multiple data modalities. For example, the Self-IRU achieves state-of-the-art performance on the logical inference dataset [Bowman et al., 2014] even when comparing with competitive models that have access to ground-truth syntactic information.
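A loose sketch of the recursive-gating idea described above: a gated cell whose gate can itself be produced by a recursively instantiated child cell, blended in with a soft scalar in $[0,1]$ that plays the role of a soft recursion depth. The equations, dimensions, and single-gate simplification are assumptions for illustration and do not reproduce the Self-IRU.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ToyRecursiveCell:
    """Gated recurrent cell whose gate is partly computed by a child instance."""
    def __init__(self, dim, depth, rng):
        self.dim, self.depth = dim, depth
        self.Wf = rng.normal(scale=0.1, size=(dim, 2 * dim))   # gate weights
        self.Wc = rng.normal(scale=0.1, size=(dim, 2 * dim))   # candidate weights
        self.alpha = 0.5                                       # soft recursion gate in [0, 1]
        if depth > 0:
            self.child = ToyRecursiveCell(dim, depth - 1, rng)

    def step(self, x, h):
        xh = np.concatenate([x, h])
        f = sigmoid(self.Wf @ xh)                              # ordinary gate
        if self.depth > 0:                                     # blend in the child's output
            f = self.alpha * sigmoid(self.child.step(x, h)) + (1 - self.alpha) * f
        c = np.tanh(self.Wc @ xh)                              # candidate state
        return f * h + (1 - f) * c                             # gated update

rng = np.random.default_rng(0)
cell, h = ToyRecursiveCell(dim=8, depth=2, rng=rng), np.zeros(8)
for x in rng.normal(size=(5, 8)):                              # short input sequence
    h = cell.step(x, h)
print(h.round(3))
```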
null
Sparse Uncertainty Representation in Deep Learning with Inducing Weights
https://papers.nips.cc/paper_files/paper/2021/hash/334467d41d5cf21e234465a1530ba647-Abstract.html
Hippolyt Ritter, Martin Kukla, Cheng Zhang, Yingzhen Li
https://papers.nips.cc/paper_files/paper/2021/hash/334467d41d5cf21e234465a1530ba647-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12122-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/334467d41d5cf21e234465a1530ba647-Paper.pdf
https://openreview.net/forum?id=SkU3kbKTrb6
https://papers.nips.cc/paper_files/paper/2021/file/334467d41d5cf21e234465a1530ba647-Supplemental.pdf
Bayesian neural networks and deep ensembles represent two modern paradigms of uncertainty quantification in deep learning. Yet these approaches struggle to scale, mainly due to memory inefficiency: they require parameter storage several times that of their deterministic counterparts. To address this, we augment each weight matrix with a small inducing weight matrix, projecting uncertainty quantification into a lower-dimensional space. We further extend Matheron's conditional Gaussian sampling rule to enable fast weight sampling, which allows our inference method to maintain a reasonable run-time compared with ensembles. Importantly, our approach achieves performance competitive with the state-of-the-art in prediction and uncertainty estimation tasks with fully connected neural networks and ResNets, while reducing the parameter size to $\leq 24.3\%$ of that of a single neural network.
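Matheron's rule, mentioned above, is a generic identity for sampling a conditional Gaussian without forming the conditional covariance: draw a sample from the joint prior and correct it using the conditioning variable. The numpy illustration below uses an arbitrary joint covariance and is not the paper's inducing-weight parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint zero-mean Gaussian over (w, u): w large ("weights"), u small ("inducing").
dw, du = 50, 5
A = rng.normal(size=(dw + du, dw + du))
Sigma = A @ A.T + 1e-3 * np.eye(dw + du)             # random SPD joint covariance
Swu, Suu = Sigma[:dw, dw:], Sigma[dw:, dw:]
L = np.linalg.cholesky(Sigma)

def sample_w_given_u(u0):
    """Matheron's rule: w | u0 = w~ + Swu Suu^{-1} (u0 - u~), with (w~, u~) ~ prior."""
    joint = L @ rng.normal(size=dw + du)
    w_t, u_t = joint[:dw], joint[dw:]
    return w_t + Swu @ np.linalg.solve(Suu, u0 - u_t)

# Monte-Carlo check of the conditional mean against the closed form.
u0 = rng.normal(size=du)
samples = np.stack([sample_w_given_u(u0) for _ in range(20000)])
print(np.abs(samples.mean(axis=0) - Swu @ np.linalg.solve(Suu, u0)).max())
```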
null
Scalable Inference of Sparsely-changing Gaussian Markov Random Fields
https://papers.nips.cc/paper_files/paper/2021/hash/33853141e0873909be88f5c3e6144cc6-Abstract.html
Salar Fattahi, Andres Gomez
https://papers.nips.cc/paper_files/paper/2021/hash/33853141e0873909be88f5c3e6144cc6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/12123-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/33853141e0873909be88f5c3e6144cc6-Paper.pdf
https://openreview.net/forum?id=Krtz-LgTYIt
https://papers.nips.cc/paper_files/paper/2021/file/33853141e0873909be88f5c3e6144cc6-Supplemental.pdf
We study the problem of inferring time-varying Gaussian Markov random fields, where the underlying graphical model is both sparse and changes sparsely over time. Most existing methods for the inference of time-varying Markov random fields (MRFs) rely on \textit{regularized maximum likelihood estimation} (MLE), which typically suffers from weak statistical guarantees and high computational cost. Instead, we introduce a new class of constrained optimization problems for the inference of sparsely-changing Gaussian MRFs (GMRFs). The proposed optimization problem is formulated based on exact $\ell_0$ regularization, and can be solved in near-linear time and memory. Moreover, we show that the proposed estimator enjoys a provably small estimation error. We derive sharp statistical guarantees in the high-dimensional regime, showing that such problems can be learned with as few as one sample per time period. Our proposed method is extremely efficient in practice: it can accurately estimate sparsely-changing GMRFs with more than 500 million variables in less than one hour.
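One way to write down the constraint structure the abstract alludes to, with a generic per-period loss $\mathcal{L}_t$ standing in for the paper's actual objective (the exact loss and constraint form used in the paper are not specified here and are treated as assumptions):

```latex
% Sparsely-changing GMRF estimation: each precision matrix is sparse, and so is
% its change between consecutive time periods.
\min_{\Theta_1,\dots,\Theta_T \succ 0}\;\; \sum_{t=1}^{T} \mathcal{L}_t(\Theta_t)
\quad\text{s.t.}\quad
\|\Theta_t\|_{0} \le s_1, \qquad \|\Theta_t - \Theta_{t-1}\|_{0} \le s_2 .
```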
null