Columns: title · url · authors · detail_url · tags · Bibtex · Paper · Reviews And Public Comment » · Supplemental · abstract · Supplemental Errata
Weisfeiler and Lehman Go Cellular: CW Networks
https://papers.nips.cc/paper_files/paper/2021/hash/157792e4abb490f99dbd738483e0d2d4-Abstract.html
Cristian Bodnar, Fabrizio Frasca, Nina Otter, Yuguang Wang, Pietro Liò, Guido F. Montufar, Michael Bronstein
https://papers.nips.cc/paper_files/paper/2021/hash/157792e4abb490f99dbd738483e0d2d4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11824-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/157792e4abb490f99dbd738483e0d2d4-Paper.pdf
https://openreview.net/forum?id=uVPZCMVtsSG
https://papers.nips.cc/paper_files/paper/2021/file/157792e4abb490f99dbd738483e0d2d4-Supplemental.pdf
Graph Neural Networks (GNNs) are limited in their expressive power, struggle with long-range interactions and lack a principled way to model higher-order structures. These problems can be attributed to the strong coupling between the computational graph and the input graph structure. The recently proposed Message Passing Simplicial Networks naturally decouple these elements by performing message passing on the clique complex of the graph. Nevertheless, these models can be severely constrained by the rigid combinatorial structure of Simplicial Complexes (SCs). In this work, we extend recent theoretical results on SCs to regular Cell Complexes, topological objects that flexibly subsume SCs and graphs. We show that this generalisation provides a powerful set of graph "lifting" transformations, each leading to a unique hierarchical message passing procedure. The resulting methods, which we collectively call CW Networks (CWNs), are strictly more powerful than the WL test and not less powerful than the 3-WL test. In particular, we demonstrate the effectiveness of one such scheme, based on rings, when applied to molecular graph problems. The proposed architecture benefits from provably larger expressivity than commonly used GNNs, principled modelling of higher-order signals and from compressing the distances between nodes. We demonstrate that our model achieves state-of-the-art results on a variety of molecular datasets.
null
Learning Conjoint Attentions for Graph Neural Nets
https://papers.nips.cc/paper_files/paper/2021/hash/1587965fb4d4b5afe8428a4a024feb0d-Abstract.html
Tiantian He, Yew Soon Ong, L Bai
https://papers.nips.cc/paper_files/paper/2021/hash/1587965fb4d4b5afe8428a4a024feb0d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11825-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1587965fb4d4b5afe8428a4a024feb0d-Paper.pdf
https://openreview.net/forum?id=SMU_hbhhEQ
https://papers.nips.cc/paper_files/paper/2021/file/1587965fb4d4b5afe8428a4a024feb0d-Supplemental.pdf
In this paper, we present Conjoint Attentions (CAs), a class of novel learning-to-attend strategies for graph neural networks (GNNs). Besides considering the layer-wise node features propagated within the GNN, CAs can additionally incorporate various structural interventions, such as node cluster embedding, and higher-order structural correlations that can be learned outside of GNN, when computing attention scores. The node features that are regarded as significant by the conjoint criteria are therefore more likely to be propagated in the GNN. Given the novel Conjoint Attention strategies, we then propose Graph conjoint attention networks (CATs) that can learn representations embedded with significant latent features deemed by the Conjoint Attentions. Besides, we theoretically validate the discriminative capacity of CATs. CATs utilizing the proposed Conjoint Attention strategies have been extensively tested in well-established benchmarking datasets and comprehensively compared with state-of-the-art baselines. The obtained notable performance demonstrates the effectiveness of the proposed Conjoint Attentions.
null
Hybrid Regret Bounds for Combinatorial Semi-Bandits and Adversarial Linear Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/15a50c8ba6a0002a2fa7e5d8c0a40bd9-Abstract.html
Shinji Ito
https://papers.nips.cc/paper_files/paper/2021/hash/15a50c8ba6a0002a2fa7e5d8c0a40bd9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11826-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/15a50c8ba6a0002a2fa7e5d8c0a40bd9-Paper.pdf
https://openreview.net/forum?id=h3M00I96Ed
https://papers.nips.cc/paper_files/paper/2021/file/15a50c8ba6a0002a2fa7e5d8c0a40bd9-Supplemental.pdf
This study aims to develop bandit algorithms that automatically exploit tendencies of certain environments to improve performance, without any prior knowledge regarding the environments. We first propose an algorithm for combinatorial semi-bandits with a hybrid regret bound that includes two main features: a best-of-three-worlds guarantee and multiple data-dependent regret bounds. The former means that the algorithm will work nearly optimally in all environments in an adversarial setting, a stochastic setting, or a stochastic setting with adversarial corruptions. The latter implies that, even if the environment is far from exhibiting stochastic behavior, the algorithm will perform better as long as the environment is "easy" in terms of certain metrics. The metrics w.r.t. the easiness referred to in this paper include cumulative loss for optimal actions, total quadratic variation of losses, and path-length of a loss sequence. We also show hybrid data-dependent regret bounds for adversarial linear bandits, which include a first path-length regret bound that is tight up to logarithmic factors.
null
Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling
https://papers.nips.cc/paper_files/paper/2021/hash/15c00b5250ddedaabc203b67f8b034fd-Abstract.html
Hongyu Gong, Yun Tang, Juan Pino, Xian Li
https://papers.nips.cc/paper_files/paper/2021/hash/15c00b5250ddedaabc203b67f8b034fd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11827-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/15c00b5250ddedaabc203b67f8b034fd-Paper.pdf
https://openreview.net/forum?id=4J_H903nUE
null
Multi-head attention has each of the attention heads collect salient information from different parts of an input sequence, making it a powerful mechanism for sequence modeling. Multilingual and multi-domain learning are common scenarios for sequence modeling, where the key challenge is to maximize positive transfer and mitigate negative interference across languages and domains. In this paper, we find that non-selective attention sharing is sub-optimal for achieving good generalization across all languages and domains. We further propose attention sharing strategies to facilitate parameter sharing and specialization in multilingual and multi-domain sequence modeling. Our approach automatically learns shared and specialized attention heads for different languages and domains. Evaluated in various tasks including speech recognition, text-to-text and speech-to-text translation, the proposed attention sharing strategies consistently bring gains to sequence models built upon multi-head attention. For speech-to-text translation, our approach yields an average of $+2.0$ BLEU over $13$ language directions in multilingual setting and $+2.0$ BLEU over $3$ domains in multi-domain setting.
null
Cardinality-Regularized Hawkes-Granger Model
https://papers.nips.cc/paper_files/paper/2021/hash/15cf76466b97264765356fcc56d801d1-Abstract.html
Tsuyoshi Ide, Georgios Kollias, Dzung Phan, Naoki Abe
https://papers.nips.cc/paper_files/paper/2021/hash/15cf76466b97264765356fcc56d801d1-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11828-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/15cf76466b97264765356fcc56d801d1-Paper.pdf
https://openreview.net/forum?id=gkyg2aOE6MU
https://papers.nips.cc/paper_files/paper/2021/file/15cf76466b97264765356fcc56d801d1-Supplemental.pdf
We propose a new sparse Granger-causal learning framework for temporal event data. We focus on a specific class of point processes called the Hawkes process. We begin by pointing out that most of the existing sparse causal learning algorithms for the Hawkes process suffer from a singularity in maximum likelihood estimation. As a result, their sparse solutions can appear only as numerical artifacts. In this paper, we propose a mathematically well-defined sparse causal learning framework based on a cardinality-regularized Hawkes process, which remedies the pathological issues of existing approaches. We leverage the proposed algorithm for the task of instance-wise causal event analysis, where sparsity plays a critical role. We validate the proposed framework with two real use-cases, one from the power grid and the other from the cloud data center management domain.
null
Aligned Structured Sparsity Learning for Efficient Image Super-Resolution
https://papers.nips.cc/paper_files/paper/2021/hash/15de21c670ae7c3f6f3f1f37029303c9-Abstract.html
Yulun Zhang, Huan Wang, Can Qin, Yun Fu
https://papers.nips.cc/paper_files/paper/2021/hash/15de21c670ae7c3f6f3f1f37029303c9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11829-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/15de21c670ae7c3f6f3f1f37029303c9-Paper.pdf
https://openreview.net/forum?id=zAuDbrHC6fq
https://papers.nips.cc/paper_files/paper/2021/file/15de21c670ae7c3f6f3f1f37029303c9-Supplemental.pdf
Lightweight image super-resolution (SR) networks have obtained promising results with moderate model size. Many SR methods have focused on designing lightweight architectures, which neglect to further reduce the redundancy of network parameters. On the other hand, model compression techniques, like neural architecture search and knowledge distillation, typically consume considerable memory and computation resources. In contrast, network pruning is a cheap and effective model compression technique. However, it is hard to be applied to SR networks directly, because filter pruning for residual blocks is well-known tricky. To address the above issues, we propose aligned structured sparsity learning (ASSL), which introduces a weight normalization layer and applies $L_2$ regularization to the scale parameters for sparsity. To align the pruned locations across different layers, we propose a \emph{sparsity structure alignment} penalty term, which minimizes the norm of soft mask gram matrix. We apply aligned structured sparsity learning strategy to train efficient image SR network, named as ASSLN, with smaller model size and lower computation than state-of-the-art methods. We conduct extensive comparisons with lightweight SR networks. Our ASSLN achieves superior performance gains over recent methods quantitatively and visually.
null
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/15f99f2165aa8c86c9dface16fefd281-Abstract.html
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
https://papers.nips.cc/paper_files/paper/2021/hash/15f99f2165aa8c86c9dface16fefd281-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11830-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/15f99f2165aa8c86c9dface16fefd281-Paper.pdf
https://openreview.net/forum?id=UAjh00C0BhT
https://papers.nips.cc/paper_files/paper/2021/file/15f99f2165aa8c86c9dface16fefd281-Supplemental.pdf
The lottery ticket hypothesis (LTH) states that learning on a properly pruned network (the winning ticket) has improved test accuracy over the original unpruned network. Although LTH has been justified empirically in a broad range of deep neural network (DNN) involved applications like computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, as the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of the non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are acquired from learning a pruned neural network of one hidden layer, while experimental results are further provided to justify the implications in pruning multi-layer neural networks.
null
Constrained Robust Submodular Partitioning
https://papers.nips.cc/paper_files/paper/2021/hash/161882dd2d19c716819081aee2c08b98-Abstract.html
Shengjie Wang, Tianyi Zhou, Chandrashekhar Lavania, Jeff A Bilmes
https://papers.nips.cc/paper_files/paper/2021/hash/161882dd2d19c716819081aee2c08b98-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11831-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/161882dd2d19c716819081aee2c08b98-Paper.pdf
https://openreview.net/forum?id=_1HETTYd7Wr
https://papers.nips.cc/paper_files/paper/2021/file/161882dd2d19c716819081aee2c08b98-Supplemental.pdf
In the robust submodular partitioning problem, we aim to allocate a set of items into $m$ blocks, so that the evaluation of the minimum block according to a submodular function is maximized. Robust submodular partitioning promotes the diversity of every block in the partition. It has many applications in machine learning, e.g., partitioning data for distributed training so that the gradients computed on every block are consistent. We study an extension of the robust submodular partition problem with additional constraints (e.g., cardinality, multiple matroids, and/or knapsack) on every block. For example, when partitioning data for distributed training, we can add a constraint that the number of samples of each class is the same in each partition block, ensuring data balance. We present two classes of algorithms, i.e., Min-Block Greedy based algorithms (with an $\Omega(1/m)$ bound), and Round-Robin Greedy based algorithms (with a constant bound) and show that under various constraints, they still have good approximation guarantees. Interestingly, while normally the latter runs in only weakly polynomial time, we show that using the two together yields strongly polynomial running time while preserving the approximation guarantee. Lastly, we apply the algorithms on a real-world machine learning data partitioning problem showing good results.
null
Online Knapsack with Frequency Predictions
https://papers.nips.cc/paper_files/paper/2021/hash/161c5c5ad51fcc884157890511b3c8b0-Abstract.html
Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, Manish Purohit
https://papers.nips.cc/paper_files/paper/2021/hash/161c5c5ad51fcc884157890511b3c8b0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11832-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/161c5c5ad51fcc884157890511b3c8b0-Paper.pdf
https://openreview.net/forum?id=rMm9d_aDtOa
https://papers.nips.cc/paper_files/paper/2021/file/161c5c5ad51fcc884157890511b3c8b0-Supplemental.pdf
There has been recent interest in using machine-learned predictions to improve the worst-case guarantees of online algorithms. In this paper we continue this line of work by studying the online knapsack problem, but with very weak predictions: in the form of knowing an upper and lower bound for the number of items of each value. We systematically derive online algorithms that attain the best possible competitive ratio for any fixed prediction; we also extend the results to more general settings such as generalized one-way trading and two-stage online knapsack. Our work shows that even seemingly weak predictions can be utilized effectively to provably improve the performance of online algorithms.
null
On Component Interactions in Two-Stage Recommender Systems
https://papers.nips.cc/paper_files/paper/2021/hash/162d18156abe38a3b32851b72b1d44f5-Abstract.html
Jiri Hron, Karl Krauth, Michael Jordan, Niki Kilbertus
https://papers.nips.cc/paper_files/paper/2021/hash/162d18156abe38a3b32851b72b1d44f5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11833-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/162d18156abe38a3b32851b72b1d44f5-Paper.pdf
https://openreview.net/forum?id=zO6Q8q2AmbV
https://papers.nips.cc/paper_files/paper/2021/file/162d18156abe38a3b32851b72b1d44f5-Supplemental.pdf
Thanks to their scalability, two-stage recommenders are used by many of today's largest online platforms, including YouTube, LinkedIn, and Pinterest. These systems produce recommendations in two steps: (i) multiple nominators—tuned for low prediction latency—preselect a small subset of candidates from the whole item pool; (ii) a slower but more accurate ranker further narrows down the nominated items, and serves to the user. Despite their popularity, the literature on two-stage recommenders is relatively scarce, and the algorithms are often treated as mere sums of their parts. Such treatment presupposes that the two-stage performance is explained by the behavior of the individual components in isolation. This is not the case: using synthetic and real-world data, we demonstrate that interactions between the ranker and the nominators substantially affect the overall performance. Motivated by these findings, we derive a generalization lower bound which shows that independent nominator training can lead to performance on par with uniformly random recommendations. We find that careful design of item pools, each assigned to a different nominator, alleviates these issues. As manual search for a good pool allocation is difficult, we propose to learn one instead using a Mixture-of-Experts based approach. This significantly improves both precision and recall at $K$.
null
Lip to Speech Synthesis with Visual Context Attentional GAN
https://papers.nips.cc/paper_files/paper/2021/hash/16437d40c29a1a7b1e78143c9c38f289-Abstract.html
Minsu Kim, Joanna Hong, Yong Man Ro
https://papers.nips.cc/paper_files/paper/2021/hash/16437d40c29a1a7b1e78143c9c38f289-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11834-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16437d40c29a1a7b1e78143c9c38f289-Paper.pdf
https://openreview.net/forum?id=x6z8J_17LP3
https://papers.nips.cc/paper_files/paper/2021/file/16437d40c29a1a7b1e78143c9c38f289-Supplemental.zip
In this paper, we propose a novel lip-to-speech generative adversarial network, Visual Context Attentional GAN (VCA-GAN), which can jointly model local and global lip movements during speech synthesis. Specifically, the proposed VCA-GAN synthesizes the speech from local lip visual features by finding a mapping function of viseme-to-phoneme, while global visual context is embedded into the intermediate layers of the generator to clarify the ambiguity in the mapping induced by homophene. To achieve this, a visual context attention module is proposed where it encodes global representations from the local visual features, and provides the desired global visual context corresponding to the given coarse speech representation to the generator through audio-visual attention. In addition to the explicit modelling of local and global visual representations, synchronization learning is introduced as a form of contrastive learning that guides the generator to synthesize a speech in sync with the given input lip movements. Extensive experiments demonstrate that the proposed VCA-GAN outperforms existing state-of-the-art and is able to effectively synthesize the speech from multi-speaker that has been barely handled in the previous works.
null
Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis
https://papers.nips.cc/paper_files/paper/2021/hash/164bf317ea19ccfd9e97853edc2389f4-Abstract.html
Jikai Jin, Bohang Zhang, Haiyang Wang, Liwei Wang
https://papers.nips.cc/paper_files/paper/2021/hash/164bf317ea19ccfd9e97853edc2389f4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11835-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/164bf317ea19ccfd9e97853edc2389f4-Paper.pdf
https://openreview.net/forum?id=gZLhHMyxa-
https://papers.nips.cc/paper_files/paper/2021/file/164bf317ea19ccfd9e97853edc2389f4-Supplemental.pdf
Distributionally robust optimization (DRO) is a widely-used approach to learn models that are robust against distribution shift. Compared with the standard optimization setting, the objective function in DRO is more difficult to optimize, and most of the existing theoretical results make strong assumptions on the loss function. In this work we bridge the gap by studying DRO algorithms for general smooth non-convex losses. By carefully exploiting the specific form of the DRO objective, we are able to provide non-asymptotic convergence guarantees even though the objective function is possibly non-convex, non-smooth and has unbounded gradient noise. In particular, we prove that a special algorithm called the mini-batch normalized gradient descent with momentum, can find an $\epsilon$-first-order stationary point within $\mathcal O(\epsilon^{-4})$ gradient complexity. We also discuss the conditional value-at-risk (CVaR) setting, where we propose a penalized DRO objective based on a smoothed version of the CVaR that allows us to obtain a similar convergence guarantee. We finally verify our theoretical results in a number of tasks and find that the proposed algorithm can consistently achieve prominent acceleration.
null
Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/165a59f7cf3b5c4396ba65953d679f17-Abstract.html
Kibeom Kim, Min Whoo Lee, Yoonsung Kim, JeHwan Ryu, Minsu Lee, Byoung-Tak Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/165a59f7cf3b5c4396ba65953d679f17-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11836-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/165a59f7cf3b5c4396ba65953d679f17-Paper.pdf
https://openreview.net/forum?id=bsGr_8zmRos
https://papers.nips.cc/paper_files/paper/2021/file/165a59f7cf3b5c4396ba65953d679f17-Supplemental.pdf
Learning in a multi-target environment without prior knowledge about the targets requires a large amount of samples and makes generalization difficult. To solve this problem, it is important to be able to discriminate targets through semantic understanding. In this paper, we propose goal-aware cross-entropy (GACE) loss, that can be utilized in a self-supervised way using auto-labeled goal states alongside reinforcement learning. Based on the loss, we then devise goal-discriminative attention networks (GDAN) which utilize the goal-relevant information to focus on the given instruction. We evaluate the proposed methods on visual navigation and robot arm manipulation tasks with multi-target environments and show that GDAN outperforms the state-of-the-art methods in terms of task success ratio, sample efficiency, and generalization. Additionally, qualitative analyses demonstrate that our proposed method can help the agent become aware of and focus on the given instruction clearly, promoting goal-directed behavior.
null
Smooth Normalizing Flows
https://papers.nips.cc/paper_files/paper/2021/hash/167434fa6219316417cd4160c0c5e7d2-Abstract.html
Jonas Köhler, Andreas Krämer, Frank Noe
https://papers.nips.cc/paper_files/paper/2021/hash/167434fa6219316417cd4160c0c5e7d2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11837-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/167434fa6219316417cd4160c0c5e7d2-Paper.pdf
https://openreview.net/forum?id=yxsak5ND2pA
https://papers.nips.cc/paper_files/paper/2021/file/167434fa6219316417cd4160c0c5e7d2-Supplemental.zip
Normalizing flows are a promising tool for modeling probability distributions in physical systems. While state-of-the-art flows accurately approximate distributions and energies, applications in physics additionally require smooth energies to compute forces and higher-order derivatives. Furthermore, such densities are often defined on non-trivial topologies. A recent example are Boltzmann Generators for generating 3D-structures of peptides and small proteins. These generative models leverage the space of internal coordinates (dihedrals, angles, and bonds), which is a product of hypertori and compact intervals. In this work, we introduce a class of smooth mixture transformations working on both compact intervals and hypertori. Mixture transformations employ root-finding methods to invert them in practice, which has so far prevented bi-directional flow training. To this end, we show that parameter gradients and forces of such inverses can be computed from forward evaluations via the inverse function theorem. We demonstrate two advantages of such smooth flows: they allow training by force matching to simulation data and can be used as potentials in molecular dynamics simulations.
null
MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
https://papers.nips.cc/paper_files/paper/2021/hash/1680829293f2a8541efa2647a0290f88-Abstract.html
Shaofei Wang, Marko Mihajlovic, Qianli Ma, Andreas Geiger, Siyu Tang
https://papers.nips.cc/paper_files/paper/2021/hash/1680829293f2a8541efa2647a0290f88-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11838-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1680829293f2a8541efa2647a0290f88-Paper.pdf
https://openreview.net/forum?id=Q-PA3D1OsDz
https://papers.nips.cc/paper_files/paper/2021/file/1680829293f2a8541efa2647a0290f88-Supplemental.pdf
In this paper, we aim to create generalizable and controllable neural signed distance fields (SDFs) that represent clothed humans from monocular depth observations. Recent advances in deep learning, especially neural implicit representations, have enabled human shape reconstruction and controllable avatar generation from different sensor inputs. However, to generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs. Furthermore, due to the difficulty of effectively modeling pose-dependent cloth deformations for diverse body shapes and cloth types, existing approaches resort to per-subject/cloth-type optimization from scratch, which is computationally expensive. In contrast, we propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images. We achieve this by using meta-learning to learn an initialization of a hypernetwork that predicts the parameters of neural SDFs. The hypernetwork is conditioned on human poses and represents a clothed neural avatar that deforms non-rigidly according to the input poses. Meanwhile, it is meta-learned to effectively incorporate priors of diverse body shapes and cloth types and thus can be much faster to fine-tune, compared to models trained from scratch. We qualitatively and quantitatively show that our approach outperforms state-of-the-art approaches that require complete meshes as inputs while our approach requires only depth frames as inputs and runs orders of magnitudes faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very robust, being the first to generate avatars with realistic dynamic cloth deformations given as few as 8 monocular depth frames.
null
Distributed Principal Component Analysis with Limited Communication
https://papers.nips.cc/paper_files/paper/2021/hash/1680e9fa7b4dd5d62ece800239bb53bd-Abstract.html
Foivos Alimisis, Peter Davies, Bart Vandereycken, Dan Alistarh
https://papers.nips.cc/paper_files/paper/2021/hash/1680e9fa7b4dd5d62ece800239bb53bd-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11839-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1680e9fa7b4dd5d62ece800239bb53bd-Paper.pdf
https://openreview.net/forum?id=edCFRvlWqV
https://papers.nips.cc/paper_files/paper/2021/file/1680e9fa7b4dd5d62ece800239bb53bd-Supplemental.pdf
We study efficient distributed algorithms for the fundamental problem of principal component analysis and leading eigenvector computation on the sphere, when the data are randomly distributed among a set of computational nodes. We propose a new quantized variant of Riemannian gradient descent to solve this problem, and prove that the algorithm converges with high probability under a set of necessary spherical-convexity properties. We give bounds on the number of bits transmitted by the algorithm under common initialization schemes, and investigate the dependency on the problem dimension in each case.
null
Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
https://papers.nips.cc/paper_files/paper/2021/hash/16837163fee34175358a47e0b51485ff-Abstract.html
Michal Derezinski, Jonathan Lacotte, Mert Pilanci, Michael W. Mahoney
https://papers.nips.cc/paper_files/paper/2021/hash/16837163fee34175358a47e0b51485ff-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11840-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16837163fee34175358a47e0b51485ff-Paper.pdf
https://openreview.net/forum?id=Bl0GlLmNGLV
https://papers.nips.cc/paper_files/paper/2021/file/16837163fee34175358a47e0b51485ff-Supplemental.pdf
In second-order optimization, a potential bottleneck can be computing the Hessian matrix of the optimized function at every iteration. Randomized sketching has emerged as a powerful technique for constructing estimates of the Hessian which can be used to perform approximate Newton steps. This involves multiplication by a random sketching matrix, which introduces a trade-off between the computational cost of sketching and the convergence rate of the optimization. A theoretically desirable but practically much too expensive choice is to use a dense Gaussian sketching matrix, which produces unbiased estimates of the exact Newton step and offers strong problem-independent convergence guarantees. We show that the Gaussian matrix can be drastically sparsified, substantially reducing the computational cost, without affecting its convergence properties in any way. This approach, called Newton-LESS, is based on a recently introduced sketching technique: LEverage Score Sparsified (LESS) embeddings. We prove that Newton-LESS enjoys nearly the same problem-independent local convergence rate as Gaussian embeddings for a large class of functions. In particular, this leads to a new state-of-the-art convergence result for an iterative least squares solver. Finally, we substantially extend LESS embeddings to include uniformly sparsified random sign matrices which can be implemented efficiently and perform well in numerical experiments.
null
Confident Anchor-Induced Multi-Source Free Domain Adaptation
https://papers.nips.cc/paper_files/paper/2021/hash/168908dd3227b8358eababa07fcaf091-Abstract.html
Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, Tongliang Liu
https://papers.nips.cc/paper_files/paper/2021/hash/168908dd3227b8358eababa07fcaf091-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11841-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/168908dd3227b8358eababa07fcaf091-Paper.pdf
https://openreview.net/forum?id=EAdJEN8xKUl
https://papers.nips.cc/paper_files/paper/2021/file/168908dd3227b8358eababa07fcaf091-Supplemental.pdf
Unsupervised domain adaptation has attracted appealing academic attentions by transferring knowledge from labeled source domain to unlabeled target domain. However, most existing methods assume the source data are drawn from a single domain, which cannot be successfully applied to explore complementarily transferable knowledge from multiple source domains with large distribution discrepancies. Moreover, they require access to source data during training, which are inefficient and unpractical due to privacy preservation and memory storage. To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, which is a pioneer exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, but with only pre-trained source models. Specifically, a source-specific transferable perception module is proposed to automatically quantify the contributions of the complementary knowledge transferred from multi-source domains to the target domain. To generate pseudo labels for the target domain without access to the source data, we develop a confident-anchor-induced pseudo label generator by constructing a confident anchor group and assigning each unconfident target sample with a semantic-nearest confident anchor. Furthermore, a class-relationship-aware consistency loss is proposed to preserve consistent inter-class relationships by aligning soft confusion matrices across domains. Theoretical analysis answers why multi-source domains are better than a single source domain, and establishes a novel learning bound to show the effectiveness of exploiting multi-source domains. Experiments on several representative datasets illustrate the superiority of our proposed CAiDA model. The code is available at https://github.com/Learning-group123/CAiDA.
null
Word2Fun: Modelling Words as Functions for Diachronic Word Representation
https://papers.nips.cc/paper_files/paper/2021/hash/16a5cdae362b8d27a1d8f8c7b78b4330-Abstract.html
Benyou Wang, Emanuele Di Buccio, Massimo Melucci
https://papers.nips.cc/paper_files/paper/2021/hash/16a5cdae362b8d27a1d8f8c7b78b4330-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11842-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16a5cdae362b8d27a1d8f8c7b78b4330-Paper.pdf
https://openreview.net/forum?id=vy9jsg8VyoG
null
Word meaning may change over time as a reflection of changes in human society. Therefore, modeling time in word representation is necessary for some diachronic tasks. Most existing diachronic word representation approaches train the embeddings separately for each pre-grouped time-stamped corpus and align these embeddings, e.g., by orthogonal projections, vector initialization, temporal referencing, and compass. However, not only does word meaning change in a short time, word meaning may also be subject to evolution over long timespans, thus resulting in a unified continuous process. A recent approach called `DiffTime' models semantic evolution as functions parameterized by multiple-layer nonlinear neural networks over time. In this paper, we will carry on this line of work by learning explicit functions over time for each word. Our approach, called `Word2Fun', reduces the space complexity from $\mathcal{O}(TVD)$ to $\mathcal{O}(kVD)$ where $k$ is a small constant ($k \ll T $). In particular, a specific instance based on polynomial functions could provably approximate any function modeling word evolution with a given negligible error thanks to the Weierstrass Approximation Theorem. The effectiveness of the proposed approach is evaluated in diverse tasks including time-aware word clustering, temporal analogy, and semantic change detection. Code at: {\url{https://github.com/wabyking/Word2Fun.git}}.
null
Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate
https://papers.nips.cc/paper_files/paper/2021/hash/16bda725ae44af3bb9316f416bd13b1b-Abstract.html
Christian Kümmerle, Claudio Mayrink Verdun, Dominik Stöger
https://papers.nips.cc/paper_files/paper/2021/hash/16bda725ae44af3bb9316f416bd13b1b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11843-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16bda725ae44af3bb9316f416bd13b1b-Paper.pdf
https://openreview.net/forum?id=-S1V_oEOE52
https://papers.nips.cc/paper_files/paper/2021/file/16bda725ae44af3bb9316f416bd13b1b-Supplemental.pdf
The recovery of sparse data is at the core of many applications in machine learning and signal processing. While such problems can be tackled using $\ell_1$-regularization as in the LASSO estimator and in the Basis Pursuit approach, specialized algorithms are typically required to solve the corresponding high-dimensional non-smooth optimization for large instances. Iteratively Reweighted Least Squares (IRLS) is a widely used algorithm for this purpose due to its excellent numerical performance. However, while existing theory is able to guarantee convergence of this algorithm to the minimizer, it does not provide a global convergence rate. In this paper, we prove that a variant of IRLS converges \emph{with a global linear rate} to a sparse solution, i.e., with a linear error decrease occurring immediately from any initialization if the measurements fulfill the usual null space property assumption. We support our theory by numerical experiments showing that our linear rate captures the correct dimension dependence. We anticipate that our theoretical findings will lead to new insights for many other use cases of the IRLS algorithm, such as in low-rank matrix recovery.
null
Low-Rank Constraints for Fast Inference in Structured Models
https://papers.nips.cc/paper_files/paper/2021/hash/16c0d78ef6a76b5c247113a4c9514059-Abstract.html
Justin Chiu, Yuntian Deng, Alexander Rush
https://papers.nips.cc/paper_files/paper/2021/hash/16c0d78ef6a76b5c247113a4c9514059-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11844-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16c0d78ef6a76b5c247113a4c9514059-Paper.pdf
https://openreview.net/forum?id=Mcldz4OJ6QB
https://papers.nips.cc/paper_files/paper/2021/file/16c0d78ef6a76b5c247113a4c9514059-Supplemental.pdf
Structured distributions, i.e. distributions over combinatorial spaces, are commonly used to learn latent probabilistic representations from observed data. However, scaling these models is bottlenecked by the high computational and memory complexity with respect to the size of the latent representations. Common models such as Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) require time and space quadratic and cubic in the number of hidden states respectively. This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models. We show that by viewing the central inference step as a matrix-vector product and using a low-rank constraint, we can trade off model expressivity and speed via the rank. Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces while providing practical speedups.
null
Accumulative Poisoning Attacks on Real-time Data
https://papers.nips.cc/paper_files/paper/2021/hash/16d11e9595188dbad0418a85f0351aba-Abstract.html
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
https://papers.nips.cc/paper_files/paper/2021/hash/16d11e9595188dbad0418a85f0351aba-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11845-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16d11e9595188dbad0418a85f0351aba-Paper.pdf
https://openreview.net/forum?id=4CrjylrL9vM
https://papers.nips.cc/paper_files/paper/2021/file/16d11e9595188dbad0418a85f0351aba-Supplemental.pdf
Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy. When trained on offline datasets, poisoning adversaries have to inject the poisoned data in advance before training, and the order of feeding these poisoned batches into the model is stochastic. In contrast, practical systems are more usually trained/fine-tuned on sequentially captured real-time data, in which case poisoning adversaries could dynamically poison each data batch according to the current model state. In this paper, we focus on the real-time settings and propose a new attacking strategy, which affiliates an accumulative phase with poisoning attacks to secretly (i.e., without affecting accuracy) magnify the destructive effect of a (poisoned) trigger batch. By mimicking online learning and federated learning on MNIST and CIFAR-10, we show that model accuracy significantly drops by a single update step on the trigger batch after the accumulative phase. Our work validates that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects, with no need to explore complex techniques.
null
UCB-based Algorithms for Multinomial Logistic Regression Bandits
https://papers.nips.cc/paper_files/paper/2021/hash/16f852a6d01b6065c8ff5cc11caae9c6-Abstract.html
Sanae Amani, Christos Thrampoulidis
https://papers.nips.cc/paper_files/paper/2021/hash/16f852a6d01b6065c8ff5cc11caae9c6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11846-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16f852a6d01b6065c8ff5cc11caae9c6-Paper.pdf
https://openreview.net/forum?id=Jhp38rtUTV
https://papers.nips.cc/paper_files/paper/2021/file/16f852a6d01b6065c8ff5cc11caae9c6-Supplemental.pdf
Out of the rich family of generalized linear bandits, perhaps the most well studied ones are logistic bandits that are used in problems with binary rewards: for instance, when the learner aims to maximize the profit over a user that can select one of two possible outcomes (e.g., `click' vs `no-click'). Despite remarkable recent progress and improved algorithms for logistic bandits, existing works do not address practical situations where the number of outcomes that can be selected by the user is larger than two (e.g., `click', `show me later', `never show again', `no click'). In this paper, we study such an extension. We use multinomial logit (MNL) to model the probability of each one of $K+1\geq 2$ possible outcomes (+1 stands for the `not click' outcome): we assume that for a learner's action $\mathbf{x}_t$, the user selects one of $K+1\geq 2$ outcomes, say outcome $i$, with a MNL probabilistic model with corresponding unknown parameter $\bar{\boldsymbol{\theta}}_{\ast i}$. Each outcome $i$ is also associated with a revenue parameter $\rho_i$ and the goal is to maximize the expected revenue. For this problem, we present MNL-UCB, an upper confidence bound (UCB)-based algorithm, that achieves regret $\tilde{\mathcal{O}}(dK\sqrt{T})$ with small dependency on problem-dependent constants that can otherwise be arbitrarily large and lead to loose regret bounds. We present numerical simulations that corroborate our theoretical results.
null
Estimating the Long-Term Effects of Novel Treatments
https://papers.nips.cc/paper_files/paper/2021/hash/16fa2b0294e410b2551c3bf6965c0853-Abstract.html
Keith Battocchi, Eleanor Dillon, Maggie Hei, Greg Lewis, Miruna Oprescu, Vasilis Syrgkanis
https://papers.nips.cc/paper_files/paper/2021/hash/16fa2b0294e410b2551c3bf6965c0853-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11847-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/16fa2b0294e410b2551c3bf6965c0853-Paper.pdf
https://openreview.net/forum?id=nzqoh6FN6sF
https://papers.nips.cc/paper_files/paper/2021/file/16fa2b0294e410b2551c3bf6965c0853-Supplemental.zip
Policy makers often need to estimate the long-term effects of novel treatments, while only having historical data of older treatment options. We propose a surrogate-based approach using a long-term dataset where only past treatments were administered and a short-term dataset where novel treatments have been administered. Our approach generalizes previous surrogate-style methods, allowing for continuous treatments and serially-correlated treatment policies while maintaining consistency and root-n asymptotically normal estimates under a Markovian assumption on the data and the observational policy. Using a semi-synthetic dataset on customer incentives from a major corporation, we evaluate the performance of our method and discuss solutions to practical challenges when deploying our methodology.
null
Dual Progressive Prototype Network for Generalized Zero-Shot Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1700002963a49da13542e0726b7bb758-Abstract.html
Chaoqun Wang, Shaobo Min, Xuejin Chen, Xiaoyan Sun, Houqiang Li
https://papers.nips.cc/paper_files/paper/2021/hash/1700002963a49da13542e0726b7bb758-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11848-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1700002963a49da13542e0726b7bb758-Paper.pdf
https://openreview.net/forum?id=-K4tIyQLaY
https://papers.nips.cc/paper_files/paper/2021/file/1700002963a49da13542e0726b7bb758-Supplemental.pdf
Generalized Zero-Shot Learning (GZSL) aims to recognize new categories with auxiliary semantic information, e.g., category attributes. In this paper, we handle the critical issue of domain shift problem, i.e., confusion between seen and unseen categories, by progressively improving cross-domain transferability and category discriminability of visual representations. Our approach, named Dual Progressive Prototype Network (DPPN), constructs two types of prototypes that record prototypical visual patterns for attributes and categories, respectively. With attribute prototypes, DPPN alternately searches attribute-related local regions and updates corresponding attribute prototypes to progressively explore accurate attribute-region correspondence. This enables DPPN to produce visual representations with accurate attribute localization ability, which benefits the semantic-visual alignment and representation transferability. Besides, along with progressive attribute localization, DPPN further projects category prototypes into multiple spaces to progressively repel visual representations from different categories, which boosts category discriminability. Both attribute and category prototypes are collaboratively learned in a unified framework, which makes visual representations of DPPN transferable and distinctive. Experiments on four benchmarks prove that DPPN effectively alleviates the domain shift problem in GZSL.
null
Derivative-Free Policy Optimization for Linear Risk-Sensitive and Robust Control Design: Implicit Regularization and Sample Complexity
https://papers.nips.cc/paper_files/paper/2021/hash/1714726c817af50457d810aae9d27a2e-Abstract.html
Kaiqing Zhang, Xiangyuan Zhang, Bin Hu, Tamer Basar
https://papers.nips.cc/paper_files/paper/2021/hash/1714726c817af50457d810aae9d27a2e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11849-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1714726c817af50457d810aae9d27a2e-Paper.pdf
https://openreview.net/forum?id=NVAOPWZWYlv
https://papers.nips.cc/paper_files/paper/2021/file/1714726c817af50457d810aae9d27a2e-Supplemental.pdf
Direct policy search serves as one of the workhorses in modern reinforcement learning (RL), and its applications in continuous control tasks have recently attracted increasing attention. In this work, we investigate the convergence theory of policy gradient (PG) methods for learning the linear risk-sensitive and robust controller. In particular, we develop PG methods that can be implemented in a derivative-free fashion by sampling system trajectories, and establish both global convergence and sample complexity results in the solutions of two fundamental settings in risk-sensitive and robust control: the finite-horizon linear exponential quadratic Gaussian, and the finite-horizon linear-quadratic disturbance attenuation problems. As a by-product, our results also provide the first sample complexity for the global convergence of PG methods on solving zero-sum linear-quadratic dynamic games, a nonconvex-nonconcave minimax optimization problem that serves as a baseline setting in multi-agent reinforcement learning (MARL) with continuous spaces. One feature of our algorithms is that during the learning phase, a certain level of robustness/risk-sensitivity of the controller is preserved, which we termed as the implicit regularization property, and is an essential requirement in safety-critical control systems.
null
G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators
https://papers.nips.cc/paper_files/paper/2021/hash/171ae1bbb81475eb96287dd78565b38b-Abstract.html
Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl Gunter, Bo Li
https://papers.nips.cc/paper_files/paper/2021/hash/171ae1bbb81475eb96287dd78565b38b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11850-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/171ae1bbb81475eb96287dd78565b38b-Paper.pdf
https://openreview.net/forum?id=_CmrI7UrmCl
https://papers.nips.cc/paper_files/paper/2021/file/171ae1bbb81475eb96287dd78565b38b-Supplemental.pdf
Recent advances in machine learning have largely benefited from the massive accessible training data. However, large-scale data sharing has raised great privacy concerns. In this work, we propose a novel privacy-preserving data Generative model based on the PATE framework (G-PATE), aiming to train a scalable differentially private data generator that preserves high generated data utility. Our approach leverages generative adversarial nets to generate data, combined with private aggregation among different discriminators to ensure strong privacy guarantees. Compared to existing approaches, G-PATE significantly improves the use of privacy budgets. In particular, we train a student data generator with an ensemble of teacher discriminators and propose a novel private gradient aggregation mechanism to ensure differential privacy on all information that flows from teacher discriminators to the student generator. In addition, with random projection and gradient discretization, the proposed gradient aggregation mechanism is able to effectively deal with high-dimensional gradient vectors. Theoretically, we prove that G-PATE ensures differential privacy for the data generator. Empirically, we demonstrate the superiority of G-PATE over prior work through extensive experiments. We show that G-PATE is the first work being able to generate high-dimensional image data with high data utility under limited privacy budgets ($\varepsilon \le 1$). Our code is available at https://github.com/AI-secure/G-PATE.
null
On the Existence of The Adversarial Bayes Classifier
https://papers.nips.cc/paper_files/paper/2021/hash/172ef5a94b4dd0aa120c6878fc29f70c-Abstract.html
Pranjal Awasthi, Natalie Frank, Mehryar Mohri
https://papers.nips.cc/paper_files/paper/2021/hash/172ef5a94b4dd0aa120c6878fc29f70c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11851-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/172ef5a94b4dd0aa120c6878fc29f70c-Paper.pdf
https://openreview.net/forum?id=iQICgKcrGpE
https://papers.nips.cc/paper_files/paper/2021/file/172ef5a94b4dd0aa120c6878fc29f70c-Supplemental.pdf
Adversarial robustness is a critical property in a variety of modern machine learning applications. While it has been the subject of several recent theoretical studies, many important questions related to adversarial robustness are still open. In this work, we study a fundamental question regarding Bayes optimality for adversarial robustness. We provide general sufficient conditions under which the existence of a Bayes optimal classifier can be guaranteed for adversarial robustness. Our results can provide a useful tool for a subsequent study of surrogate losses in adversarial robustness and their consistency properties.
null
Convex-Concave Min-Max Stackelberg Games
https://papers.nips.cc/paper_files/paper/2021/hash/174a61b0b3eab8c94e0a9e78b912307f-Abstract.html
Denizalp Goktas, Amy Greenwald
https://papers.nips.cc/paper_files/paper/2021/hash/174a61b0b3eab8c94e0a9e78b912307f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11852-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/174a61b0b3eab8c94e0a9e78b912307f-Paper.pdf
https://openreview.net/forum?id=gaftyBQ4Lu
https://papers.nips.cc/paper_files/paper/2021/file/174a61b0b3eab8c94e0a9e78b912307f-Supplemental.pdf
Min-max optimization problems (i.e., min-max games) have been attracting a great deal of attention because of their applicability to a wide range of machine learning problems. Although significant progress has been made recently, the literature to date has focused on games with independent strategy sets; little is known about solving games with dependent strategy sets, which can be characterized as min-max Stackelberg games. We introduce two first-order methods that solve a large class of convex-concave min-max Stackelberg games, and show that our methods converge in polynomial time. Min-max Stackelberg games were first studied by Wald, under the posthumous name of Wald’s maximin model, a variant of which is the main paradigm used in robust optimization, which means that our methods can likewise solve many convex robust optimization problems. We observe that the computation of competitive equilibria in Fisher markets also comprises a min-max Stackelberg game. Further, we demonstrate the efficacy and efficiency of our algorithms in practice by computing competitive equilibria in Fisher markets with varying utility structures. Our experiments suggest potential ways to extend our theoretical results, by demonstrating how different smoothness properties can affect the convergence rate of our algorithms.
null
Misspecified Gaussian Process Bandit Optimization
https://papers.nips.cc/paper_files/paper/2021/hash/177db6acfe388526a4c7bff88e1feb15-Abstract.html
Ilija Bogunovic, Andreas Krause
https://papers.nips.cc/paper_files/paper/2021/hash/177db6acfe388526a4c7bff88e1feb15-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11853-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/177db6acfe388526a4c7bff88e1feb15-Paper.pdf
https://openreview.net/forum?id=kbzx0uNZdS
https://papers.nips.cc/paper_files/paper/2021/file/177db6acfe388526a4c7bff88e1feb15-Supplemental.pdf
We consider the problem of optimizing a black-box function based on noisy bandit feedback. Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem. They heavily rely on the assumption that the model is well-specified, however, and can fail without it. Instead, we introduce and address a \emph{misspecified} kernelized bandit setting where the unknown function can be $\epsilon$--uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS). We design efficient and practical algorithms whose performance degrades minimally in the presence of model misspecification. Specifically, we present two algorithms based on Gaussian process (GP) methods: an optimistic EC-GP-UCB algorithm that requires knowing the misspecification error, and Phased GP Uncertainty Sampling, an elimination-type algorithm that can adapt to unknown model misspecification. We provide upper bounds on their cumulative regret in terms of $\epsilon$, the time horizon, and the underlying kernel, and we show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification. In addition, in a stochastic contextual setting, we show that EC-GP-UCB can be effectively combined with the regret bound balancing strategy and attain similar regret bounds despite not knowing $\epsilon$.
null
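The EC-GP-UCB idea described in the "Misspecified Gaussian Process Bandit Optimization" abstract can be made concrete with a small sketch: an ordinary GP-UCB loop whose confidence band is enlarged by the (assumed known) misspecification level eps. This is a minimal illustration, not the authors' algorithm or constants; the RBF kernel, the grid search over [0, 1], and the helper names `rbf`, `gp_posterior`, and `run_ec_gp_ucb_like` are assumptions of the sketch.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel matrix between 1-D point arrays a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-2):
    """Closed-form GP posterior mean and standard deviation on a candidate grid."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_grid, x_obs)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ y_obs
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def run_ec_gp_ucb_like(f, eps, T=30, beta=2.0, seed=0):
    """Optimistic GP-UCB loop whose confidence width is enlarged by eps."""
    rng = np.random.default_rng(seed)
    x_grid = np.linspace(0.0, 1.0, 200)
    x_obs = np.array([rng.uniform()])
    y_obs = np.array([f(x_obs[0]) + 1e-2 * rng.standard_normal()])
    for _ in range(T):
        mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
        # The extra eps guards against the target lying outside the assumed
        # RKHS ball, i.e. model misspecification.
        ucb = mu + beta * sigma + eps
        x_next = x_grid[np.argmax(ucb)]
        x_obs = np.append(x_obs, x_next)
        y_obs = np.append(y_obs, f(x_next) + 1e-2 * rng.standard_normal())
    return x_obs, y_obs

# Toy target: a smooth function plus a small ripple the RKHS model cannot capture.
xs, ys = run_ec_gp_ucb_like(lambda x: np.sin(6 * x) + 0.05 * np.sign(np.sin(40 * x)), eps=0.05)
print("best observed value:", ys.max())
```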
Visual Adversarial Imitation Learning using Variational Models
https://papers.nips.cc/paper_files/paper/2021/hash/1796a48fa1968edd5c5d10d42c7b1813-Abstract.html
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn
https://papers.nips.cc/paper_files/paper/2021/hash/1796a48fa1968edd5c5d10d42c7b1813-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11854-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1796a48fa1968edd5c5d10d42c7b1813-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-646c8bpgPl
https://papers.nips.cc/paper_files/paper/2021/file/1796a48fa1968edd5c5d10d42c7b1813-Supplemental.pdf
Reward function specification, which requires considerable human effort and iteration, remains a major impediment for learning behaviors through deep reinforcement learning. In contrast, providing visual demonstrations of desired behaviors presents an easier and more natural way to teach agents. We consider a setting where an agent is provided a fixed dataset of visual demonstrations illustrating how to perform a task, and must learn to solve the task using the provided demonstrations and unsupervised environment interactions. This setting presents a number of challenges including representation learning for visual observations, sample complexity due to high dimensional spaces, and learning instability due to the lack of a fixed reward or learning signal. Towards addressing these challenges, we develop a variational model-based adversarial imitation learning (V-MAIL) algorithm. The model-based approach provides a strong signal for representation learning, enables sample efficiency, and improves the stability of adversarial training by enabling on-policy learning. Through experiments involving several vision-based locomotion and manipulation tasks, we find that V-MAIL learns successful visuomotor policies in a sample-efficient manner, has better stability compared to prior work, and also achieves higher asymptotic performance. We further find that by transferring the learned models, V-MAIL can learn new tasks from visual demonstrations without any additional environment interactions. All results including videos can be found online at https://sites.google.com/view/variational-mail
null
Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/17a3120e4e5fbdc3cb5b5f946809b06a-Abstract.html
Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/17a3120e4e5fbdc3cb5b5f946809b06a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11855-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/17a3120e4e5fbdc3cb5b5f946809b06a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=FEhntTXAeHN
https://papers.nips.cc/paper_files/paper/2021/file/17a3120e4e5fbdc3cb5b5f946809b06a-Supplemental.pdf
Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations. However, behavioral cloning often suffers from the causal confusion problem, where a policy relies on the noticeable effects of expert actions, which are strongly correlated with them, rather than on the causes we desire. This paper presents Object-aware REgularizatiOn (OREO), a simple technique that regularizes an imitation policy in an object-aware manner. Our main idea is to encourage a policy to uniformly attend to all semantic objects, in order to prevent the policy from exploiting nuisance variables strongly correlated with expert actions. To this end, we introduce a two-stage approach: (a) we extract semantic objects from images by utilizing discrete codes from a vector-quantized variational autoencoder, and (b) we randomly drop the units that share the same discrete code together, i.e., masking out semantic objects. Our experiments demonstrate that OREO significantly improves the performance of behavioral cloning, outperforming various other regularization and causality-based methods on a variety of Atari environments and a self-driving CARLA environment. We also show that our method even outperforms inverse reinforcement learning methods trained with a considerable amount of environment interaction.
null
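The masking step in the OREO abstract can be sketched as follows, under assumptions of my own: the policy features are spatially aligned with a per-location code map produced by a pre-trained VQ-VAE, and the function name `object_aware_dropout`, the dropout-style rescaling, and all tensor shapes are illustrative rather than the paper's exact procedure.

```python
import torch

def object_aware_dropout(features, codes, drop_prob=0.5):
    """Randomly drop all spatial units that share the same discrete VQ-VAE code.

    features: (B, C, H, W) policy features, spatially aligned with the code map.
    codes:    (B, H, W) integer codes from a pre-trained VQ-VAE encoder.
    Each code value is kept or dropped as a whole, so masking acts on
    (approximate) semantic objects rather than on independent pixels.
    """
    B, _, H, W = features.shape
    mask = torch.ones(B, 1, H, W, device=features.device)
    for b in range(B):
        unique_codes = codes[b].unique()
        keep = torch.rand(len(unique_codes), device=features.device) > drop_prob
        dropped = unique_codes[~keep]
        if len(dropped) > 0:
            drop_region = (codes[b].unsqueeze(-1) == dropped).any(dim=-1)
            mask[b, 0][drop_region] = 0.0
    # Rescale as in standard dropout so the expected activation is preserved.
    kept_frac = mask.mean(dim=(2, 3), keepdim=True).clamp(min=1e-6)
    return features * mask / kept_frac

# Toy usage with random features and a random 8-code map.
feats = torch.randn(2, 16, 21, 21)
code_map = torch.randint(0, 8, (2, 21, 21))
print(object_aware_dropout(feats, code_map).shape)
```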
Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection
https://papers.nips.cc/paper_files/paper/2021/hash/17e23e50bedc63b4095e3d8204ce063b-Abstract.html
Chunjong Park, Anas Awadalla, Tadayoshi Kohno, Shwetak Patel
https://papers.nips.cc/paper_files/paper/2021/hash/17e23e50bedc63b4095e3d8204ce063b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11856-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/17e23e50bedc63b4095e3d8204ce063b-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=hNMOSUxE8o6
https://papers.nips.cc/paper_files/paper/2021/file/17e23e50bedc63b4095e3d8204ce063b-Supplemental.pdf
Unpredictable ML model behavior on unseen data, especially in the health domain, raises serious concerns about its safety as repercussions for mistakes can be fatal. In this paper, we explore the feasibility of using state-of-the-art out-of-distribution detectors for reliable and trustworthy diagnostic predictions. We select publicly available deep learning models relating to various health conditions (e.g., skin cancer, lung sound, and Parkinson's disease) using various input data types (e.g., image, audio, and motion data). We demonstrate that these models show unreasonable predictions on out-of-distribution datasets. We show that Mahalanobis distance- and Gram matrices-based out-of-distribution detection methods are able to detect out-of-distribution data with high accuracy for the health models that operate on different modalities. We then translate the out-of-distribution score into a human-interpretable \textsc{confidence score} to investigate its effect on the users' interaction with health ML applications. Our user study shows that the \textsc{confidence score} helped the participants trust only the results with a high score when making a medical decision and disregard results with a low score. Through this work, we demonstrate that dataset shift is a critical piece of information for high-stakes ML applications, such as medical diagnosis and healthcare, to provide reliable and trustworthy predictions to the users.
null
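A minimal sketch of the Mahalanobis-distance detector mentioned in the "Reliable and Trustworthy Machine Learning for Health" abstract: class-conditional Gaussians with a tied covariance are fit on in-distribution penultimate-layer features, and the negative minimum distance serves as the out-of-distribution score. The percentile-based `confidence_score` mapping is only an illustrative monotone calibration, not the paper's exact transform, and all function names are assumptions.

```python
import numpy as np

def fit_mahalanobis(features, labels):
    """Per-class means and a shared (tied) inverse covariance from in-distribution features."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = np.concatenate([features[labels == c] - means[i]
                               for i, c in enumerate(classes)])
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return means, np.linalg.inv(cov)

def ood_score(x, means, cov_inv):
    """Negative minimum Mahalanobis distance to any class mean (higher = more in-distribution)."""
    diffs = means - x[None, :]
    dists = np.einsum("kd,de,ke->k", diffs, cov_inv, diffs)
    return -dists.min()

def confidence_score(score, in_dist_scores):
    """Map a raw score to [0, 1] via its empirical percentile among in-distribution scores."""
    return float((np.asarray(in_dist_scores) <= score).mean())

# Toy usage: three synthetic feature clusters stand in for penultimate-layer features.
rng = np.random.default_rng(0)
feat = np.concatenate([rng.normal(loc=c, size=(200, 32)) for c in range(3)])
lab = np.repeat(np.arange(3), 200)
means, cov_inv = fit_mahalanobis(feat, lab)
ref = [ood_score(f, means, cov_inv) for f in feat]
in_point, out_point = rng.normal(loc=1.0, size=32), rng.normal(loc=10.0, size=32)
print(confidence_score(ood_score(in_point, means, cov_inv), ref),
      confidence_score(ood_score(out_point, means, cov_inv), ref))
```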
Multiclass Boosting and the Cost of Weak Learning
https://papers.nips.cc/paper_files/paper/2021/hash/17f5e6db87929fb55cebeb7fd58c1d41-Abstract.html
Nataly Brukhim, Elad Hazan, Shay Moran, Indraneel Mukherjee, Robert E. Schapire
https://papers.nips.cc/paper_files/paper/2021/hash/17f5e6db87929fb55cebeb7fd58c1d41-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11857-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/17f5e6db87929fb55cebeb7fd58c1d41-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=fJWmx5i5lOv
https://papers.nips.cc/paper_files/paper/2021/file/17f5e6db87929fb55cebeb7fd58c1d41-Supplemental.pdf
Boosting is an algorithmic approach which is based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. In this work we study multiclass boosting with a possibly large number of classes or categories. Multiclass boosting can be formulated in various ways. Here, we focus on an especially natural formulation in which the weak hypotheses are assumed to belong to an ''easy-to-learn'' base class, and the weak learner is an agnostic PAC learner for that class with respect to the standard classification loss. This is in contrast with other, more complicated losses as have often been considered in the past. The goal of the overall boosting algorithm is then to learn a combination of weak hypotheses by repeatedly calling the weak learner. We study the resources required for boosting, especially how they depend on the number of classes $k$, for both the booster and weak learner. We find that the boosting algorithm itself only requires $O(\log k)$ samples, as we show by analyzing a variant of AdaBoost for our setting. In stark contrast, assuming typical limits on the number of weak-learner calls, we prove that the number of samples required by a weak learner is at least polynomial in $k$, exponentially more than the number of samples needed by the booster. Alternatively, we prove that the weak learner's accuracy parameter must be smaller than an inverse polynomial in $k$, showing that the returned weak hypotheses must be nearly the best in their class when $k$ is large. We also prove a trade-off between the number of oracle calls and the resources required of the weak learner, meaning that the fewer calls to the weak learner the more that is demanded on each call.
null
Partition-Based Formulations for Mixed-Integer Optimization of Trained ReLU Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/17f98ddf040204eda0af36a108cbdea4-Abstract.html
Calvin Tsay, Jan Kronqvist, Alexander Thebelt, Ruth Misener
https://papers.nips.cc/paper_files/paper/2021/hash/17f98ddf040204eda0af36a108cbdea4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11858-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/17f98ddf040204eda0af36a108cbdea4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=jhd62iKzRuj
https://papers.nips.cc/paper_files/paper/2021/file/17f98ddf040204eda0af36a108cbdea4-Supplemental.pdf
This paper introduces a class of mixed-integer formulations for trained ReLU neural networks. The approach balances model size and tightness by partitioning node inputs into a number of groups and forming the convex hull over the partitions via disjunctive programming. At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node. For fewer partitions, we develop smaller relaxations that approximate the convex hull, and show that they outperform existing formulations. Specifically, we propose strategies for partitioning variables based on theoretical motivations and validate these strategies using extensive computational experiments. Furthermore, the proposed scheme complements known algorithmic approaches, e.g., optimization-based bound tightening captures dependencies within a partition.
null
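The following sketch only illustrates the bookkeeping behind the partition idea in the "Partition-Based Formulations for Mixed-Integer Optimization of Trained ReLU Neural Networks" abstract: the inputs of a single ReLU node are split into groups and interval bounds are computed per partial sum, which is what a partition-based MIP encoding attaches auxiliary variables to. It does not build the disjunctive-programming convex hull and does not call a MIP solver, and the helper names are assumptions.

```python
import numpy as np

def group_bounds(w, lo, hi, partition):
    """Interval bounds of each partial sum w_g . x_g over the input box [lo, hi]."""
    bounds = []
    for group in partition:
        wg, lo_g, hi_g = w[group], lo[group], hi[group]
        lower = np.sum(np.where(wg >= 0, wg * lo_g, wg * hi_g))
        upper = np.sum(np.where(wg >= 0, wg * hi_g, wg * lo_g))
        bounds.append((lower, upper))
    return bounds

def node_big_m(w, b, lo, hi, partition):
    """Big-M constants for y = relu(w . x + b) assembled from per-group bounds.

    Summing the group-wise bounds recovers the usual single-node interval bound;
    keeping the groups separate is what lets a partition-based MIP formulation
    attach auxiliary variables to each partial sum.
    """
    per_group = group_bounds(w, lo, hi, partition)
    L = b + sum(l for l, _ in per_group)
    U = b + sum(u for _, u in per_group)
    return L, U, per_group

rng = np.random.default_rng(1)
w = rng.normal(size=8)
lo, hi = -np.ones(8), np.ones(8)
partition = [np.array([0, 1, 2, 3]), np.array([4, 5, 6, 7])]  # two input groups
print(node_big_m(w, 0.1, lo, hi, partition))
```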
Hyperparameter Optimization Is Deceiving Us, and How to Stop It
https://papers.nips.cc/paper_files/paper/2021/hash/17fafe5f6ce2f1904eb09d2e80a4cbf6-Abstract.html
A. Feder Cooper, Yucheng Lu, Jessica Forde, Christopher M. De Sa
https://papers.nips.cc/paper_files/paper/2021/hash/17fafe5f6ce2f1904eb09d2e80a4cbf6-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11859-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/17fafe5f6ce2f1904eb09d2e80a4cbf6-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=2lZdja9xYzh
https://papers.nips.cc/paper_files/paper/2021/file/17fafe5f6ce2f1904eb09d2e80a4cbf6-Supplemental.pdf
Recent empirical work shows that inconsistent results based on choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research. When comparing two algorithms J and K, searching one subspace can yield the conclusion that J outperforms K, whereas searching another can entail the opposite. In short, the way we choose hyperparameters can deceive us. We provide a theoretical complement to this prior work, arguing that, to avoid such deception, the process of drawing conclusions from HPO should be made more rigorous. We call this process epistemic hyperparameter optimization (EHPO), and put forth a logical framework to capture its semantics and how it can lead to inconsistent conclusions about performance. Our framework enables us to prove EHPO methods that are guaranteed to be defended against deception, given a bounded compute time budget t. We demonstrate our framework's utility by proving and empirically validating a defended variant of random search.
null
On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/18085327b86002fc604c323b9a07f997-Abstract.html
Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar
https://papers.nips.cc/paper_files/paper/2021/hash/18085327b86002fc604c323b9a07f997-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11860-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/18085327b86002fc604c323b9a07f997-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Xs-vglI4EBi
https://papers.nips.cc/paper_files/paper/2021/file/18085327b86002fc604c323b9a07f997-Supplemental.pdf
We consider Model-Agnostic Meta-Learning (MAML) methods for Reinforcement Learning (RL) problems, where the goal is to find a policy using data from several tasks represented by Markov Decision Processes (MDPs) that can be updated by one step of \textit{stochastic} policy gradient for the realized MDP. In particular, using stochastic gradients in MAML update steps is crucial for RL problems since computation of exact gradients requires access to a large number of possible trajectories. For this formulation, we propose a variant of the MAML method, named Stochastic Gradient Meta-Reinforcement Learning (SG-MRL), and study its convergence properties. We derive the iteration and sample complexity of SG-MRL to find an $\epsilon$-first-order stationary point, which, to the best of our knowledge, provides the first convergence guarantee for model-agnostic meta-reinforcement learning algorithms. We further show how our results extend to the case where more than one step of stochastic policy gradient method is used at test time. Finally, we empirically compare SG-MRL and MAML in several deep RL environments.
null
3D Pose Transfer with Correspondence Learning and Mesh Refinement
https://papers.nips.cc/paper_files/paper/2021/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html
Chaoyue Song, Jiacheng Wei, Ruibo Li, Fayao Liu, Guosheng Lin
https://papers.nips.cc/paper_files/paper/2021/hash/18a411989b47ed75a60ac69d9da05aa5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11861-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/18a411989b47ed75a60ac69d9da05aa5-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=fG01Z_unHC
https://papers.nips.cc/paper_files/paper/2021/file/18a411989b47ed75a60ac69d9da05aa5-Supplemental.pdf
3D pose transfer is one of the most challenging 3D generation tasks. It aims to transfer the pose of a source mesh to a target mesh while keeping the identity (e.g., body shape) of the target mesh. Some previous works require key point annotations to build reliable correspondence between the source and target meshes, while other methods do not consider any shape correspondence between sources and targets, which leads to limited generation quality. In this work, we propose a correspondence-refinement network to achieve 3D pose transfer for both human and animal meshes. The correspondence between source and target meshes is first established by solving an optimal transport problem. Then, we warp the source mesh according to the dense correspondence and obtain a coarse warped mesh. The warped mesh is further refined with our proposed Elastic Instance Normalization, a conditional normalization layer that helps to generate high-quality meshes. Extensive experimental results show that the proposed architecture can effectively transfer poses from source to target meshes and produces results with better visual quality than state-of-the-art methods.
null
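A rough sketch of the correspondence step described in the "3D Pose Transfer with Correspondence Learning and Mesh Refinement" abstract: an entropic optimal transport plan between source and target vertex sets (plain Sinkhorn iterations) is used to warp the source mesh into a coarse result. The refinement network and Elastic Instance Normalization are not shown here, and the function names, the entropic regulariser, and the toy meshes are assumptions.

```python
import numpy as np

def sinkhorn_plan(src, tgt, eps=0.05, iters=200):
    """Entropic optimal transport plan between two point sets via plain Sinkhorn iterations."""
    C = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1) ** 2
    K = np.exp(-C / (eps * C.max()))          # scale the regulariser to avoid underflow
    a = np.full(len(src), 1.0 / len(src))
    b = np.full(len(tgt), 1.0 / len(tgt))
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # (n_src, n_tgt) transport plan

def warp_by_correspondence(src_verts, plan):
    """Place each target vertex at the plan-weighted average of source vertices."""
    weights = plan / plan.sum(axis=0, keepdims=True)   # column-normalise per target vertex
    return weights.T @ src_verts

rng = np.random.default_rng(0)
source = rng.normal(size=(300, 3))            # vertices of the pose-donor mesh
target = source[rng.permutation(300)] * 1.1   # roughly corresponding identity mesh
plan = sinkhorn_plan(source, target)
coarse = warp_by_correspondence(source, plan)
print(coarse.shape)                            # (300, 3) coarse warped vertices
```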
Framing RNN as a kernel method: A neural ODE approach
https://papers.nips.cc/paper_files/paper/2021/hash/18a9042b3fc5b02fe3d57fea87d6992f-Abstract.html
Adeline Fermanian, Pierre Marion, Jean-Philippe Vert, Gérard Biau
https://papers.nips.cc/paper_files/paper/2021/hash/18a9042b3fc5b02fe3d57fea87d6992f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11862-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/18a9042b3fc5b02fe3d57fea87d6992f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=QT9ulkiN-LX
https://papers.nips.cc/paper_files/paper/2021/file/18a9042b3fc5b02fe3d57fea87d6992f-Supplemental.pdf
Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of a RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature. This connection allows us to frame a RNN as a kernel method in a suitable reproducing kernel Hilbert space. As a consequence, we obtain theoretical guarantees on generalization and stability for a large class of recurrent networks. Our results are illustrated on simulated datasets.
null
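The signature feature set referred to in the "Framing RNN as a kernel method" abstract is easy to make concrete at low depth. The sketch below computes the depth-2 signature of a piecewise-linear path via Chen's relation; dedicated signature libraries exist, and this minimal numpy version is only meant to show what the object is.

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a piecewise-linear path of shape (T, d).

    Returns (level1, level2): level1[i] is the total increment in coordinate i,
    and level2[i, j] is the iterated integral of dX^i dX^j, accumulated
    segment by segment with Chen's relation.
    """
    d = path.shape[1]
    level1 = np.zeros(d)
    level2 = np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        # This segment's own contribution plus the cross term with the prefix path.
        level2 += np.outer(level1, delta) + 0.5 * np.outer(delta, delta)
        level1 += delta
    return level1, level2

# Toy usage: a closed 2-D loop, so the level-1 increments vanish and the
# antisymmetric part of level 2 (the Levy area) is roughly the enclosed area, pi.
t = np.linspace(0.0, 1.0, 200)
path = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
s1, s2 = signature_depth2(path)
print(s1, 0.5 * (s2[0, 1] - s2[1, 0]))
```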
Contextual Similarity Aggregation with Self-attention for Visual Re-ranking
https://papers.nips.cc/paper_files/paper/2021/hash/18d10dc6e666eab6de9215ae5b3d54df-Abstract.html
Jianbo Ouyang, Hui Wu, Min Wang, Wengang Zhou, Houqiang Li
https://papers.nips.cc/paper_files/paper/2021/hash/18d10dc6e666eab6de9215ae5b3d54df-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11863-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/18d10dc6e666eab6de9215ae5b3d54df-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=uOxe0CHI5dq
https://papers.nips.cc/paper_files/paper/2021/file/18d10dc6e666eab6de9215ae5b3d54df-Supplemental.pdf
In content-based image retrieval, the first-round retrieval result by simple visual feature comparison may be unsatisfactory, which can be refined by visual re-ranking techniques. In image retrieval, it is observed that the contextual similarity among the top-ranked images is an important clue to distinguish the semantic relevance. Inspired by this observation, in this paper, we propose a visual re-ranking method by contextual similarity aggregation with self-attention. In our approach, for each image in the top-K ranking list, we represent it into an affinity feature vector by comparing it with a set of anchor images. Then, the affinity features of the top-K images are refined by aggregating the contextual information with a transformer encoder. Finally, the affinity features are used to recalculate the similarity scores between the query and the top-K images for re-ranking of the latter. To further improve the robustness of our re-ranking model and enhance the performance of our method, a new data augmentation scheme is designed. Since our re-ranking model is not directly involved with the visual feature used in the initial retrieval, it is ready to be applied to retrieval result lists obtained from various retrieval algorithms. We conduct comprehensive experiments on four benchmark datasets to demonstrate the generality and effectiveness of our proposed visual re-ranking method.
null
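A compact sketch of the pipeline in the "Contextual Similarity Aggregation with Self-attention for Visual Re-ranking" abstract: affinity features of the top-K images against a set of anchor images, contextual refinement with a transformer encoder, and a rescoring head. The module name `AffinityReranker`, the dimensions, and the linear scoring head are assumptions, not the paper's exact architecture or training objective.

```python
import torch
import torch.nn as nn

class AffinityReranker(nn.Module):
    """Refine top-K affinity features with self-attention and rescore them."""

    def __init__(self, num_anchors=64, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.proj = nn.Linear(num_anchors, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=2 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.score = nn.Linear(d_model, 1)

    def forward(self, topk_feats, anchor_feats):
        # Affinity feature: cosine similarity of each top-K image to every anchor image.
        topk = nn.functional.normalize(topk_feats, dim=-1)
        anchors = nn.functional.normalize(anchor_feats, dim=-1)
        affinity = topk @ anchors.transpose(-1, -2)   # (B, K, num_anchors)
        tokens = self.proj(affinity)                  # (B, K, d_model)
        refined = self.encoder(tokens)                # contextual aggregation over the list
        return self.score(refined).squeeze(-1)        # (B, K) refined query-image scores

# Toy usage: one query with K = 10 candidates described by 512-D global descriptors.
model = AffinityReranker()
topk_descriptors = torch.randn(1, 10, 512)
anchor_descriptors = torch.randn(1, 64, 512)
print(model(topk_descriptors, anchor_descriptors).shape)   # torch.Size([1, 10])
```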
Can Information Flows Suggest Targets for Interventions in Neural Circuits?
https://papers.nips.cc/paper_files/paper/2021/hash/18de4beb01f6a17b6e1dfb9813ba6045-Abstract.html
Praveen Venkatesh, Sanghamitra Dutta, Neil Mehta, Pulkit Grover
https://papers.nips.cc/paper_files/paper/2021/hash/18de4beb01f6a17b6e1dfb9813ba6045-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11864-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/18de4beb01f6a17b6e1dfb9813ba6045-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=jBQaRXpEgO
https://papers.nips.cc/paper_files/paper/2021/file/18de4beb01f6a17b6e1dfb9813ba6045-Supplemental.pdf
Motivated by neuroscientific and clinical applications, we empirically examine whether observational measures of information flow can suggest interventions. We do so by performing experiments on artificial neural networks in the context of fairness in machine learning, where the goal is to induce fairness in the system through interventions. Using our recently developed M-information flow framework, we measure the flow of information about the true label (responsible for accuracy, and hence desirable), and separately, the flow of information about a protected attribute (responsible for bias, and hence undesirable) on the edges of a trained neural network. We then compare the flow magnitudes against the effect of intervening on those edges by pruning. We show that pruning edges that carry larger information flows about the protected attribute reduces bias at the output to a greater extent. This demonstrates that M-information flow can meaningfully suggest targets for interventions, answering the title's question in the affirmative. We also evaluate bias-accuracy tradeoffs for different intervention strategies, to analyze how one might use estimates of desirable and undesirable information flows (here, accuracy and bias flows) to inform interventions that preserve the former while reducing the latter.
null
AutoBalance: Optimized Loss Functions for Imbalanced Data
https://papers.nips.cc/paper_files/paper/2021/hash/191f8f858acda435ae0daf994e2a72c2-Abstract.html
Mingchen Li, Xuechen Zhang, Christos Thrampoulidis, Jiasi Chen, Samet Oymak
https://papers.nips.cc/paper_files/paper/2021/hash/191f8f858acda435ae0daf994e2a72c2-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11865-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/191f8f858acda435ae0daf994e2a72c2-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=ebQXflQre5a
https://papers.nips.cc/paper_files/paper/2021/file/191f8f858acda435ae0daf994e2a72c2-Supplemental.zip
Imbalanced datasets are commonplace in modern machine learning problems. The presence of under-represented classes or groups with sensitive attributes results in concerns about generalization and fairness. Such concerns are further exacerbated by the fact that large capacity deep nets can perfectly fit the training data and appear to achieve perfect accuracy and fairness during training, but perform poorly during test. To address these challenges, we propose AutoBalance, a bi-level optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives. Specifically, a lower-level problem trains the model weights, and an upper-level problem tunes the loss function by monitoring and optimizing the desired objective over the validation data. Our loss design enables personalized treatment for classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes. We evaluate the benefits and performance of our approach for the application scenarios of imbalanced and group-sensitive classification. Extensive empirical evaluations demonstrate the benefits of AutoBalance over state-of-the-art approaches. Our experimental findings are complemented with theoretical insights on loss function design and the benefits of the train-validation split. All code is available open-source.
null
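The parametric cross-entropy at the core of the AutoBalance abstract can be sketched as a per-class additive and multiplicative logit adjustment whose parameters are left differentiable, so that an upper-level loop could update them on validation data. The bi-level search and the individualized data-augmentation component are omitted, and the function name and exact parameterisation are assumptions.

```python
import torch
import torch.nn.functional as F

def parametric_cross_entropy(logits, targets, delta, ell):
    """Cross-entropy on per-class adjusted logits ell_c * z_c + delta_c.

    delta: (num_classes,) additive per-class offsets (can boost rare classes).
    ell:   (num_classes,) multiplicative per-class temperatures.
    Both are the kind of upper-level variables a bi-level procedure would tune
    on validation data while the lower level trains the model weights.
    """
    adjusted = logits * ell.unsqueeze(0) + delta.unsqueeze(0)
    return F.cross_entropy(adjusted, targets)

# Toy usage: 3 classes, with the loss parameters left differentiable so a
# validation objective could update them with an optimizer of their own.
torch.manual_seed(0)
logits = torch.randn(32, 3, requires_grad=True)
targets = torch.randint(0, 3, (32,))
delta = torch.zeros(3, requires_grad=True)
ell = torch.ones(3, requires_grad=True)
loss = parametric_cross_entropy(logits, targets, delta, ell)
loss.backward()
print(loss.item(), delta.grad)
```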
SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes
https://papers.nips.cc/paper_files/paper/2021/hash/19485224d128528da1602ca47383f078-Abstract.html
Zhaozhi Qian, Yao Zhang, Ioana Bica, Angela Wood, Mihaela van der Schaar
https://papers.nips.cc/paper_files/paper/2021/hash/19485224d128528da1602ca47383f078-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11866-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/19485224d128528da1602ca47383f078-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=52YubM-VC6H
https://papers.nips.cc/paper_files/paper/2021/file/19485224d128528da1602ca47383f078-Supplemental.pdf
Most of the medical observational studies estimate the causal treatment effects using electronic health records (EHR), where a patient's covariates and outcomes are both observed longitudinally. However, previous methods focus only on adjusting for the covariates while neglecting the temporal structure in the outcomes. To bridge the gap, this paper develops a new method, SyncTwin, that learns a patient-specific time-constant representation from the pre-treatment observations. SyncTwin issues counterfactual prediction of a target patient by constructing a synthetic twin that closely matches the target in representation. The reliability of the estimated treatment effect can be assessed by comparing the observed and synthetic pre-treatment outcomes. The medical experts can interpret the estimate by examining the most important contributing individuals to the synthetic twin. In the real-data experiment, SyncTwin successfully reproduced the findings of a randomized controlled clinical trial using observational data, which demonstrates its usability in the complex real-world EHR.
null
Statistical Query Lower Bounds for List-Decodable Linear Regression
https://papers.nips.cc/paper_files/paper/2021/hash/19b1b73d63d4c9ea79f8ca57e9d67095-Abstract.html
Ilias Diakonikolas, Daniel Kane, Ankit Pensia, Thanasis Pittas, Alistair Stewart
https://papers.nips.cc/paper_files/paper/2021/hash/19b1b73d63d4c9ea79f8ca57e9d67095-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11867-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/19b1b73d63d4c9ea79f8ca57e9d67095-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=7pU_P1IbePx
https://papers.nips.cc/paper_files/paper/2021/file/19b1b73d63d4c9ea79f8ca57e9d67095-Supplemental.pdf
We study the problem of list-decodable linear regression, where an adversary can corrupt a majority of the examples. Specifically, we are given a set $T$ of labeled examples $(x, y) \in \mathbb{R}^d \times \mathbb{R}$ and a parameter $0< \alpha <1/2$ such that an $\alpha$-fraction of the points in $T$ are i.i.d. samples from a linear regression model with Gaussian covariates, and the remaining $(1-\alpha)$-fraction of the points are drawn from an arbitrary noise distribution. The goal is to output a small list of hypothesis vectors such that at least one of them is close to the target regression vector. Our main result is a Statistical Query (SQ) lower bound of $d^{\mathrm{poly}(1/\alpha)}$ for this problem. Our SQ lower bound qualitatively matches the performance of previously developed algorithms, providing evidence that current upper bounds for this task are nearly best possible.
null
Unsupervised Motion Representation Learning with Capsule Autoencoders
https://papers.nips.cc/paper_files/paper/2021/hash/19ca14e7ea6328a42e0eb13d585e4c22-Abstract.html
Ziwei Xu, Xudong Shen, Yongkang Wong, Mohan S. Kankanhalli
https://papers.nips.cc/paper_files/paper/2021/hash/19ca14e7ea6328a42e0eb13d585e4c22-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11868-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/19ca14e7ea6328a42e0eb13d585e4c22-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=vCthaJ4ywT
https://papers.nips.cc/paper_files/paper/2021/file/19ca14e7ea6328a42e0eb13d585e4c22-Supplemental.pdf
We propose the Motion Capsule Autoencoder (MCAE), which addresses a key challenge in the unsupervised learning of motion representations: transformation invariance. MCAE models motion in a two-level hierarchy. In the lower level, a spatio-temporal motion signal is divided into short, local, and semantic-agnostic snippets. In the higher level, the snippets are aggregated to form full-length semantic-aware segments. For both levels, we represent motion with a set of learned transformation invariant templates and the corresponding geometric transformations by using capsule autoencoders of a novel design. This leads to a robust and efficient encoding of viewpoint changes. MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets. Notably, it achieves better results than baselines on Trajectory20 with considerably fewer parameters and state-of-the-art performance on the unsupervised skeleton-based action recognition task.
null
VigDet: Knowledge Informed Neural Temporal Point Process for Coordination Detection on Social Media
https://papers.nips.cc/paper_files/paper/2021/hash/1a344877f11195aaf947ccfe48ee9c89-Abstract.html
Yizhou Zhang, Karishma Sharma, Yan Liu
https://papers.nips.cc/paper_files/paper/2021/hash/1a344877f11195aaf947ccfe48ee9c89-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11869-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1a344877f11195aaf947ccfe48ee9c89-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=sYNr-OqGC9m
null
Recent years have witnessed an increasing use of coordinated accounts on social media, operated by misinformation campaigns to influence public opinion and manipulate social outcomes. Consequently, there is an urgent need to develop an effective methodology for coordinated group detection to combat misinformation on social media. However, existing works suffer from various drawbacks: either limited performance due to heavy reliance on predefined signatures of coordination, or an inability to address the natural sparsity of account activities on social media with useful prior domain knowledge. Therefore, in this paper, we propose a coordination detection framework that incorporates a neural temporal point process with prior knowledge such as temporal logic or pre-defined filtering functions. Specifically, when modeling the observed data from social media with a neural temporal point process, we jointly learn a Gibbs-like distribution of group assignments based on how consistent an assignment is with (1) the account embedding space and (2) the prior knowledge. To address the challenge that this distribution is hard to compute and sample from efficiently, we design a theoretically guaranteed variational inference approach to learn a mean-field approximation of it. Experimental results on a real-world dataset show the effectiveness of our proposed method compared to the SOTA model in both unsupervised and semi-supervised settings. We further apply our model to a COVID-19 Vaccine Tweets dataset. The detection results suggest the presence of suspicious coordinated efforts to spread misinformation about COVID-19 vaccines.
null
An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders
https://papers.nips.cc/paper_files/paper/2021/hash/1a3650aedfdd3a21444047ed2d89458f-Abstract.html
Xinmeng Huang, Kun Yuan, Xianghui Mao, Wotao Yin
https://papers.nips.cc/paper_files/paper/2021/hash/1a3650aedfdd3a21444047ed2d89458f-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11870-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1a3650aedfdd3a21444047ed2d89458f-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=x2lBl0GRav5
https://papers.nips.cc/paper_files/paper/2021/file/1a3650aedfdd3a21444047ed2d89458f-Supplemental.pdf
When applying a stochastic algorithm, one must choose an order in which to draw samples. The practical choices are without-replacement sampling orders, which are empirically faster and more cache-friendly than uniform-iid sampling but often have inferior theoretical guarantees. Without-replacement sampling is well understood only for SGD without variance reduction. In this paper, we improve the convergence analysis and rates of variance reduction under without-replacement sampling orders for composite finite-sum minimization. Our results are two-fold. First, we develop a damped variant of Finito called Prox-DFinito and establish its convergence rates with random reshuffling, cyclic sampling, and shuffling-once, under both generally and strongly convex scenarios. These rates match full-batch gradient descent and are state-of-the-art compared to existing results for without-replacement sampling with variance reduction. Second, our analysis can gauge how the cyclic order influences the rate of cyclic sampling and thus allows us to derive the optimal fixed ordering. In the highly data-heterogeneous scenario, Prox-DFinito with optimal cyclic sampling can attain a sample-size-independent convergence rate, which, to our knowledge, is the first result that matches uniform-iid sampling with variance reduction. We also propose a practical method to discover the optimal cyclic ordering numerically.
null
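For intuition about the sampling orders compared in the Prox-DFinito abstract, here is a tiny least-squares experiment contrasting random reshuffling, shuffle-once, and cyclic orders for plain SGD. Prox-DFinito itself (the damped Finito variant with variance reduction) is not reproduced; the step size, problem sizes, and function names are assumptions of the sketch.

```python
import numpy as np

def sgd_epochs(A, b, order="random_reshuffling", epochs=50, lr=0.02, seed=0):
    """SGD on the finite sum 0.5 * mean_i (a_i . x - b_i)^2 with a chosen sampling order."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    fixed = rng.permutation(n)                 # reused by the shuffle-once order
    for _ in range(epochs):
        if order == "random_reshuffling":
            idx = rng.permutation(n)           # fresh permutation every epoch
        elif order == "shuffle_once":
            idx = fixed                        # one permutation reused every epoch
        else:                                  # "cyclic": a fixed deterministic ordering
            idx = np.arange(n)
        for i in idx:                          # each sample visited exactly once per epoch
            x -= lr * (A[i] @ x - b[i]) * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
x_star = rng.normal(size=10)
b = A @ x_star                                  # consistent system, so SGD can reach x_star
for order in ["random_reshuffling", "shuffle_once", "cyclic"]:
    print(order, np.linalg.norm(sgd_epochs(A, b, order) - x_star))
```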
Exploring Forensic Dental Identification with Deep Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1a423f7c07a179ec243e82b0c017a034-Abstract.html
Yuan Liang, Weikun Han, Liang Qiu, Chen Wu, Yiting Shao, Kun Wang, Lei He
https://papers.nips.cc/paper_files/paper/2021/hash/1a423f7c07a179ec243e82b0c017a034-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11871-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1a423f7c07a179ec243e82b0c017a034-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=YN4TMf3sv52
https://papers.nips.cc/paper_files/paper/2021/file/1a423f7c07a179ec243e82b0c017a034-Supplemental.pdf
Dental forensic identification aims to identify persons from their dental traces. The task is vital for the investigation of crime scenes and mass disasters because of the resistance of dental structures and the wide availability of dental imaging. However, no widely accepted automated solution is available for this labour-intensive task. In this work, we pioneer the study of deep learning for dental forensic identification based on panoramic radiographs. We construct a comprehensive benchmark with various dental variations that can adequately reflect the difficulties of the task. By considering the task's unique challenges, we propose FoID, a deep learning method featuring: (\textit{i}) clinically-inspired attention localization, (\textit{ii}) domain-specific augmentations that enable instance-discriminative learning, and (\textit{iii}) a transformer-based self-attention mechanism that dynamically reasons about the relative importance of attentions. We show that FoID can outperform traditional approaches by at least \textbf{22.98\%} in terms of Rank-1 accuracy, and outperform strong CNN baselines by at least \textbf{10.50\%} in terms of mean Average Precision (mAP). Moreover, extensive ablation studies verify the effectiveness of each building block of FoID. Our work can be a first step towards automated systems for forensic identification across large-scale multi-site databases. The proposed techniques, \textit{e.g.}, the self-attention mechanism, can also be meaningful for other identification tasks, \textit{e.g.}, pedestrian re-identification. Related data and code can be found at \href{https://github.com/liangyuandg/FoID}{https://github.com/liangyuandg/FoID}.
null
Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training
https://papers.nips.cc/paper_files/paper/2021/hash/1a5b1e4daae265b790965a275b53ae50-Abstract.html
Yuanhao Cai, Xiaowan Hu, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Donglai Wei
https://papers.nips.cc/paper_files/paper/2021/hash/1a5b1e4daae265b790965a275b53ae50-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11872-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1a5b1e4daae265b790965a275b53ae50-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Wua2zjxJdYo
https://papers.nips.cc/paper_files/paper/2021/file/1a5b1e4daae265b790965a275b53ae50-Supplemental.pdf
Existing deep learning methods for real-image denoising require a large amount of noisy-clean image pairs for supervision. Nonetheless, capturing a real noisy-clean dataset is an unacceptably expensive and cumbersome procedure. To alleviate this problem, this work investigates how to generate realistic noisy images. Firstly, we formulate a simple yet reasonable noise model that treats each real noisy pixel as a random variable. This model splits the noisy image generation problem into two sub-problems: image domain alignment and noise domain alignment. Subsequently, we propose a novel framework, namely Pixel-level Noise-aware Generative Adversarial Network (PNGAN). PNGAN employs a pre-trained real denoiser to map the fake and real noisy images into a nearly noise-free solution space to perform image domain alignment. Simultaneously, PNGAN establishes a pixel-level adversarial training to conduct noise domain alignment. Additionally, for better noise fitting, we present an efficient architecture, Simple Multi-scale Network (SMNet), as the generator. Qualitative validation shows that noise generated by PNGAN is highly similar to real noise in terms of intensity and distribution. Quantitative experiments demonstrate that a series of denoisers trained with the generated noisy images achieve state-of-the-art (SOTA) results on four real denoising benchmarks.
null
Multi-Agent Reinforcement Learning for Active Voltage Control on Power Distribution Networks
https://papers.nips.cc/paper_files/paper/2021/hash/1a6727711b84fd1efbb87fc565199d13-Abstract.html
Jianhong Wang, Wangkun Xu, Yunjie Gu, Wenbin Song, Tim C Green
https://papers.nips.cc/paper_files/paper/2021/hash/1a6727711b84fd1efbb87fc565199d13-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11873-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1a6727711b84fd1efbb87fc565199d13-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=hwoK62_GkiT
https://papers.nips.cc/paper_files/paper/2021/file/1a6727711b84fd1efbb87fc565199d13-Supplemental.pdf
This paper presents a problem in power networks that creates an exciting and yet challenging real-world scenario for the application of multi-agent reinforcement learning (MARL). The emerging trend of decarbonisation is placing excessive stress on power distribution networks. Active voltage control is seen as a promising solution to relieve power congestion and improve voltage quality without extra hardware investment, taking advantage of the controllable apparatuses in the network, such as roof-top photovoltaics (PVs) and static var compensators (SVCs). These controllable apparatuses appear in vast numbers and are distributed over a wide geographic area, making MARL a natural candidate. This paper formulates the active voltage control problem in the framework of Dec-POMDP and establishes an open-source environment. It aims to bridge the gap between the power community and the MARL community and to be a driving force towards real-world applications of MARL algorithms. Finally, we analyse the special characteristics of the active voltage control problem that pose challenges (e.g. interpretability) for state-of-the-art MARL approaches, and summarise potential directions.
null
Looking Beyond Single Images for Contrastive Semantic Segmentation Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1a68e5f4ade56ed1d4bf273e55510750-Abstract.html
FEIHU ZHANG, Philip Torr, Rene Ranftl, Stephan Richter
https://papers.nips.cc/paper_files/paper/2021/hash/1a68e5f4ade56ed1d4bf273e55510750-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11874-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1a68e5f4ade56ed1d4bf273e55510750-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=MSVlSMBbBt
https://papers.nips.cc/paper_files/paper/2021/file/1a68e5f4ade56ed1d4bf273e55510750-Supplemental.pdf
We present an approach to contrastive representation learning for semantic segmentation. Our approach leverages the representational power of existing feature extractors to find corresponding regions across images. These cross-image correspondences are used as auxiliary labels to guide the pixel-level selection of positive and negative samples for more effective contrastive learning in semantic segmentation. We show that auxiliary labels can be generated from a variety of feature extractors, ranging from image classification networks that have been trained using unsupervised contrastive learning to segmentation models that have been trained on a small amount of labeled data. We additionally introduce a novel metric for rapidly judging the quality of a given auxiliary-labeling strategy, and empirically analyze various factors that influence the performance of contrastive learning for semantic segmentation. We demonstrate the effectiveness of our method both in the low-data as well as the high-data regime on various datasets. Our experiments show that contrastive learning with our auxiliary-labeling approach consistently boosts semantic segmentation accuracy when compared to standard ImageNet pretraining and outperforms existing approaches of contrastive and semi-supervised semantic segmentation.
null
A Constant Approximation Algorithm for Sequential Random-Order No-Substitution k-Median Clustering
https://papers.nips.cc/paper_files/paper/2021/hash/1aa057313c28fa4a40c5bc084b11d276-Abstract.html
Tom Hess, Michal Moshkovitz, Sivan Sabato
https://papers.nips.cc/paper_files/paper/2021/hash/1aa057313c28fa4a40c5bc084b11d276-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11875-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1aa057313c28fa4a40c5bc084b11d276-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=wFuWSdCD7BN
https://papers.nips.cc/paper_files/paper/2021/file/1aa057313c28fa4a40c5bc084b11d276-Supplemental.pdf
We study k-median clustering under the sequential no-substitution setting. In this setting, a data stream is sequentially observed, and some of the points are selected by the algorithm as cluster centers. However, a point can be selected as a center only immediately after it is observed, before observing the next point. In addition, a selected center cannot be substituted later. We give the first algorithm for this setting that obtains a constant approximation factor on the optimal cost under a random arrival order, an exponential improvement over previous work. This is also the first constant approximation guarantee that holds without any structural assumptions on the input data. Moreover, the number of selected centers is only quasi-linear in k. Our algorithm and analysis are based on a careful cost estimation that avoids outliers, a new concept of a linear bin division, and a multi-scale approach to center selection.
null
Dangers of Bayesian Model Averaging under Covariate Shift
https://papers.nips.cc/paper_files/paper/2021/hash/1ab60b5e8bd4eac8a7537abb5936aadc-Abstract.html
Pavel Izmailov, Patrick Nicholson, Sanae Lotfi, Andrew G. Wilson
https://papers.nips.cc/paper_files/paper/2021/hash/1ab60b5e8bd4eac8a7537abb5936aadc-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11876-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1ab60b5e8bd4eac8a7537abb5936aadc-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=2rR3aBnhCaP
null
Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
null
Learning Equilibria in Matching Markets from Bandit Feedback
https://papers.nips.cc/paper_files/paper/2021/hash/1b89a2e980724cb8997459fadb907712-Abstract.html
Meena Jagadeesan, Alexander Wei, Yixin Wang, Michael Jordan, Jacob Steinhardt
https://papers.nips.cc/paper_files/paper/2021/hash/1b89a2e980724cb8997459fadb907712-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11877-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1b89a2e980724cb8997459fadb907712-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=TgDTMyA9Nk
https://papers.nips.cc/paper_files/paper/2021/file/1b89a2e980724cb8997459fadb907712-Supplemental.pdf
Large-scale, two-sided matching platforms must find market outcomes that align with user preferences while simultaneously learning these preferences from data. But since preferences are inherently uncertain during learning, the classical notion of stability (Gale and Shapley, 1962; Shapley and Shubik, 1971) is unattainable in these settings. To bridge this gap, we develop a framework and algorithms for learning stable market outcomes under uncertainty. Our primary setting is matching with transferable utilities, where the platform both matches agents and sets monetary transfers between them. We design an incentive-aware learning objective that captures the distance of a market outcome from equilibrium. Using this objective, we analyze the complexity of learning as a function of preference structure, casting learning as a stochastic multi-armed bandit problem. Algorithmically, we show that "optimism in the face of uncertainty," the principle underlying many bandit algorithms, applies to a primal-dual formulation of matching with transfers and leads to near-optimal regret bounds. Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
null
Towards Lower Bounds on the Depth of ReLU Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/1b9812b99fe2672af746cefda86be5f9-Abstract.html
Christoph Hertrich, Amitabh Basu, Marco Di Summa, Martin Skutella
https://papers.nips.cc/paper_files/paper/2021/hash/1b9812b99fe2672af746cefda86be5f9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11878-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1b9812b99fe2672af746cefda86be5f9-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=0OWwNh-4in1
https://papers.nips.cc/paper_files/paper/2021/file/1b9812b99fe2672af746cefda86be5f9-Supplemental.pdf
We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes.
null
The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective
https://papers.nips.cc/paper_files/paper/2021/hash/1b9f38268c50805669fd8caf8f3cc84a-Abstract.html
Geoff Pleiss, John P. Cunningham
https://papers.nips.cc/paper_files/paper/2021/hash/1b9f38268c50805669fd8caf8f3cc84a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11879-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1b9f38268c50805669fd8caf8f3cc84a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=YDepgWDUDXx
https://papers.nips.cc/paper_files/paper/2021/file/1b9f38268c50805669fd8caf8f3cc84a-Supplemental.pdf
Large width limits have been a recent focus of deep learning research: modulo computational practicalities, do wider networks outperform narrower ones? Answering this question has been challenging, as conventional networks gain representational power with width, potentially masking any negative effects. Our analysis in this paper decouples capacity and width via the generalization of neural networks to Deep Gaussian Processes (Deep GP), a class of nonparametric hierarchical models that subsume neural nets. In doing so, we aim to understand how width affects (standard) neural networks once they have sufficient capacity for a given modeling task. Our theoretical and empirical results on Deep GP suggest that large width can be detrimental to hierarchical models. Surprisingly, we prove that even nonparametric Deep GP converge to Gaussian processes, effectively becoming shallower without any increase in representational power. The posterior, which corresponds to a mixture of data-adaptable basis functions, becomes less data-dependent with width. Our tail analysis demonstrates that width and depth have opposite effects: depth accentuates a model’s non-Gaussianity, while width makes models increasingly Gaussian. We find there is a “sweet spot” that maximizes test performance before the limiting GP behavior prevents adaptability, occurring at width = 1 or width = 2 for nonparametric Deep GP. These results make strong predictions about the same phenomenon in conventional neural networks trained with L2 regularization (analogous to a Gaussian prior on parameters): we show that such neural networks may need up to 500 − 1000 hidden units for sufficient capacity - depending on the dataset - but further width degrades performance.
null
Exact marginal prior distributions of finite Bayesian neural networks
https://papers.nips.cc/paper_files/paper/2021/hash/1baff70e2669e8376347efd3a874a341-Abstract.html
Jacob Zavatone-Veth, Cengiz Pehlevan
https://papers.nips.cc/paper_files/paper/2021/hash/1baff70e2669e8376347efd3a874a341-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11880-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1baff70e2669e8376347efd3a874a341-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=MxE7xFzv0N8
https://papers.nips.cc/paper_files/paper/2021/file/1baff70e2669e8376347efd3a874a341-Supplemental.pdf
Bayesian neural networks are theoretically well-understood only in the infinite-width limit, where Gaussian priors over network weights yield Gaussian priors over network outputs. Recent work has suggested that finite Bayesian networks may outperform their infinite counterparts, but their non-Gaussian output priors have been characterized only through perturbative approaches. Here, we derive exact solutions for the function space priors for individual input examples of a class of finite fully-connected feedforward Bayesian neural networks. For deep linear networks, the prior has a simple expression in terms of the Meijer $G$-function. The prior of a finite ReLU network is a mixture of the priors of linear networks of smaller widths, corresponding to different numbers of active units in each layer. Our results unify previous descriptions of finite network priors in terms of their tail decay and large-width behavior.
null
Spatiotemporal Joint Filter Decomposition in 3D Convolutional Neural Networks
https://papers.nips.cc/paper_files/paper/2021/hash/1bb91f73e9d31ea2830a5e73ce3ed328-Abstract.html
Zichen Miao, Ze Wang, Xiuyuan Cheng, Qiang Qiu
https://papers.nips.cc/paper_files/paper/2021/hash/1bb91f73e9d31ea2830a5e73ce3ed328-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11881-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1bb91f73e9d31ea2830a5e73ce3ed328-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Alr5_kKmLBX
https://papers.nips.cc/paper_files/paper/2021/file/1bb91f73e9d31ea2830a5e73ce3ed328-Supplemental.pdf
In this paper, we introduce spatiotemporal joint filter decomposition to decouple spatial and temporal learning, while preserving spatiotemporal dependency in a video. A 3D convolutional filter is now jointly decomposed over a set of spatial and temporal filter atoms respectively. In this way, a 3D convolutional layer becomes three: a temporal atom layer, a spatial atom layer, and a joint coefficient layer, all three remaining convolutional. One obvious arithmetic manipulation allowed in our joint decomposition is to swap spatial or temporal atoms with a set of atoms that have the same number but different sizes, while keeping the remaining unchanged. For example, as shown later, we can now achieve tempo-invariance by simply dilating temporal atoms only. To illustrate this useful atom-swapping property, we further demonstrate how such a decomposition permits the direct learning of 3D CNNs with full-size videos through iterations of two consecutive sub-stages of learning: In the temporal stage, full-temporal downsampled-spatial data are used to learn temporal atoms and joint coefficients while fixing spatial atoms. In the spatial stage, full-spatial downsampled-temporal data are used for spatial atoms and joint coefficients while fixing temporal atoms. We show empirically on multiple action recognition datasets that, the decoupled spatiotemporal learning significantly reduces the model memory footprints, and allows deep 3D CNNs to model high-spatial long-temporal dependency with limited computational resources while delivering comparable performance.
null
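A sketch of the joint decomposition in the "Spatiotemporal Joint Filter Decomposition" abstract: a 3D convolution kernel is assembled from temporal atoms, spatial atoms, and joint coefficients, so that either atom set can be swapped (e.g. dilated temporal atoms) without touching the coefficients. The atom counts, shapes, and the helper name are assumptions of the sketch, not the paper's layer definition.

```python
import torch
import torch.nn.functional as F

def joint_decomposed_conv3d(x, temporal_atoms, spatial_atoms, coeffs):
    """3D convolution whose kernel is assembled from shared atoms and joint coefficients.

    temporal_atoms: (M, kt)           M temporal filter atoms
    spatial_atoms:  (N, kh, kw)       N spatial filter atoms
    coeffs:         (Cout, Cin, M, N) joint coefficients coupling the two
    The assembled kernel has shape (Cout, Cin, kt, kh, kw); swapping an atom set
    (e.g. dilating only the temporal atoms) leaves the coefficients untouched.
    """
    kernel = torch.einsum("oimn,mt,nhw->oithw", coeffs, temporal_atoms, spatial_atoms)
    pad = (temporal_atoms.shape[1] // 2, spatial_atoms.shape[1] // 2, spatial_atoms.shape[2] // 2)
    return F.conv3d(x, kernel, padding=pad)

# Toy usage: 8 -> 16 channels, 4 temporal and 6 spatial atoms, 3x3x3 kernels.
video = torch.randn(2, 8, 16, 32, 32)           # (B, C, T, H, W)
t_atoms = torch.randn(4, 3)
s_atoms = torch.randn(6, 3, 3)
c = torch.randn(16, 8, 4, 6)
print(joint_decomposed_conv3d(video, t_atoms, s_atoms, c).shape)
```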
Pooling by Sliced-Wasserstein Embedding
https://papers.nips.cc/paper_files/paper/2021/hash/1bc2029a8851ad344a8d503930dfd7f7-Abstract.html
Navid Naderializadeh, Joseph F Comer, Reed Andrews, Heiko Hoffmann, Soheil Kolouri
https://papers.nips.cc/paper_files/paper/2021/hash/1bc2029a8851ad344a8d503930dfd7f7-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11882-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1bc2029a8851ad344a8d503930dfd7f7-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=1z2T01DKEaE
null
Learning representations from sets has become increasingly important with many applications in point cloud processing, graph learning, image/video recognition, and object detection. We introduce a geometrically-interpretable and generic pooling mechanism for aggregating a set of features into a fixed-dimensional representation. In particular, we treat elements of a set as samples from a probability distribution and propose an end-to-end trainable Euclidean embedding for sliced-Wasserstein distance to learn from set-structured data effectively. We evaluate our proposed pooling method on a wide variety of set-structured data, including point-cloud, graph, and image classification tasks, and demonstrate that our proposed method provides superior performance over existing set representation learning approaches. Our code is available at https://github.com/navid-naderi/PSWE.
null
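A non-learned sketch of sliced-Wasserstein-style pooling related to the "Pooling by Sliced-Wasserstein Embedding" abstract: set elements are projected onto random slices and each slice's empirical quantile function is sampled on a fixed grid, giving a fixed-dimensional embedding for sets of any size. PSWE additionally learns the slicers end to end, which is not shown, and the function name and defaults below are assumptions.

```python
import numpy as np

def sliced_wasserstein_pool(X, num_slices=16, num_quantiles=8, seed=0):
    """Pool a variable-size set X of shape (n, d) into a fixed (num_slices * num_quantiles,) vector.

    Each slice is a random direction; the pooled features are evenly spaced
    quantiles of the projected empirical distribution, so distances between
    pooled vectors roughly track a sliced Wasserstein distance between sets
    (up to the Monte Carlo error of the slicing).
    """
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(num_slices, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    qs = np.linspace(0.0, 1.0, num_quantiles)
    feats = []
    for theta in thetas:
        proj = np.sort(X @ theta)                   # empirical quantile function of the slice
        grid = np.linspace(0.0, 1.0, len(proj))
        feats.append(np.interp(qs, grid, proj))     # sample it at a fixed number of quantiles
    return np.concatenate(feats)

# Toy usage: two point clouds of different sizes map to embeddings of equal length.
rng = np.random.default_rng(1)
emb_a = sliced_wasserstein_pool(rng.normal(size=(120, 5)))
emb_b = sliced_wasserstein_pool(rng.normal(size=(75, 5)) + 2.0)
print(emb_a.shape, np.linalg.norm(emb_a - emb_b))
```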
On the Theory of Reinforcement Learning with Once-per-Episode Feedback
https://papers.nips.cc/paper_files/paper/2021/hash/1bf2efbbe0c49b9f567c2e40f645279a-Abstract.html
Niladri Chatterji, Aldo Pacchiano, Peter Bartlett, Michael Jordan
https://papers.nips.cc/paper_files/paper/2021/hash/1bf2efbbe0c49b9f567c2e40f645279a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11883-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1bf2efbbe0c49b9f567c2e40f645279a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-uFBxNwRHa2
https://papers.nips.cc/paper_files/paper/2021/file/1bf2efbbe0c49b9f567c2e40f645279a-Supplemental.pdf
We study a theory of reinforcement learning (RL) in which the learner receives binary feedback only once at the end of an episode. While this is an extreme test case for theory, it is also arguably more representative of real-world applications than the traditional requirement in RL practice that the learner receive feedback at every time step. Indeed, in many real-world applications of reinforcement learning, such as self-driving cars and robotics, it is easier to evaluate whether a learner's complete trajectory was either ``good'' or ``bad,'' but harder to provide a reward signal at each step. To show that learning is possible in this more challenging setting, we study the case where trajectory labels are generated by an unknown parametric model, and provide a statistically and computationally efficient algorithm that achieves sublinear regret.
null
ResNEsts and DenseNEsts: Block-based DNN Models with Improved Representation Guarantees
https://papers.nips.cc/paper_files/paper/2021/hash/1bf50aaf147b3b0ddd26a820d2ed394d-Abstract.html
Kuan-Lin Chen, Ching-Hua Lee, Harinath Garudadri, Bhaskar D Rao
https://papers.nips.cc/paper_files/paper/2021/hash/1bf50aaf147b3b0ddd26a820d2ed394d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11884-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1bf50aaf147b3b0ddd26a820d2ed394d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=IROqhpEha8
https://papers.nips.cc/paper_files/paper/2021/file/1bf50aaf147b3b0ddd26a820d2ed394d-Supplemental.pdf
Models recently used in the literature proving residual networks (ResNets) are better than linear predictors are actually different from standard ResNets that have been widely used in computer vision. In addition to the assumptions such as scalar-valued output or single residual block, the models fundamentally considered in the literature have no nonlinearities at the final residual representation that feeds into the final affine layer. To codify such a difference in nonlinearities and reveal a linear estimation property, we define ResNEsts, i.e., Residual Nonlinear Estimators, by simply dropping nonlinearities at the last residual representation from standard ResNets. We show that wide ResNEsts with bottleneck blocks can always guarantee a very desirable training property that standard ResNets aim to achieve, i.e., adding more blocks does not decrease performance given the same set of basis elements. To prove that, we first recognize ResNEsts are basis function models that are limited by a coupling problem in basis learning and linear prediction. Then, to decouple prediction weights from basis learning, we construct a special architecture termed augmented ResNEst (A-ResNEst) that always guarantees no worse performance with the addition of a block. As a result, such an A-ResNEst establishes empirical risk lower bounds for a ResNEst using corresponding bases. Our results demonstrate ResNEsts indeed have a problem of diminishing feature reuse; however, it can be avoided by sufficiently expanding or widening the input space, leading to the above-mentioned desirable property. Inspired by the densely connected networks (DenseNets) that have been shown to outperform ResNets, we also propose a corresponding new model called Densely connected Nonlinear Estimator (DenseNEst). We show that any DenseNEst can be represented as a wide ResNEst with bottleneck blocks. Unlike ResNEsts, DenseNEsts exhibit the desirable property without any special architectural re-design.
null
Locally private online change point detection
https://papers.nips.cc/paper_files/paper/2021/hash/1c1d4df596d01da60385f0bb17a4a9e0-Abstract.html
Tom Berrett, Yi Yu
https://papers.nips.cc/paper_files/paper/2021/hash/1c1d4df596d01da60385f0bb17a4a9e0-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11885-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1c1d4df596d01da60385f0bb17a4a9e0-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=KfC0i9Hjvl2
https://papers.nips.cc/paper_files/paper/2021/file/1c1d4df596d01da60385f0bb17a4a9e0-Supplemental.pdf
We study online change point detection problems under the constraint of local differential privacy (LDP) where, in particular, the statistician does not have access to the raw data. As a concrete problem, we study a multivariate nonparametric regression problem. At each time point $t$, the raw data are assumed to be of the form $(X_t, Y_t)$, where $X_t$ is a $d$-dimensional feature vector and $Y_t$ is a response variable. Our primary aim is to detect changes in the regression function $m_t(x)=\mathbb{E}(Y_t |X_t=x)$ as soon as the change occurs. We provide algorithms which respect the LDP constraint, which control the false alarm probability, and which detect changes with a minimal (minimax rate-optimal) delay. To quantify the cost of privacy, we also present the optimal rate in the benchmark, non-private setting. These non-private results are also new to the literature and thus are interesting \emph{per se}. In addition, we study the univariate mean online change point detection problem, under privacy constraints. This serves as the blueprint of studying more complicated private change point detection problems.
null
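As a rough illustration of the setting in the abstract above, the sketch below privatizes a bounded univariate stream with the Laplace mechanism (a standard way to satisfy $\alpha$-LDP for values in $[0,1]$) and runs a toy moving-window detector on the privatized values. The synthetic data, privacy level, window size and threshold are all illustrative assumptions, and the detector is far simpler than the paper's minimax-optimal procedure.

import numpy as np

rng = np.random.default_rng(0)

def privatize(x, alpha):
    # Laplace mechanism: for values bounded in [0, 1] (sensitivity 1), adding
    # Laplace(1/alpha) noise to each value satisfies alpha-local differential privacy.
    return x + rng.laplace(scale=1.0 / alpha, size=x.shape)

# Synthetic univariate-mean change: the mean jumps from 0.2 to 0.8 at t = 500.
raw = np.concatenate([rng.binomial(1, 0.2, 500),
                      rng.binomial(1, 0.8, 500)]).astype(float)
priv = privatize(raw, alpha=2.0)            # the statistician only ever sees `priv`

def detect(stream, window=100, threshold=0.4):
    # Toy moving-window detector on the privatized stream (not the paper's
    # procedure): flag a change when two adjacent windows have very different means.
    for t in range(2 * window, len(stream) + 1):
        left = stream[t - 2 * window : t - window].mean()
        right = stream[t - window : t].mean()
        if abs(right - left) > threshold:
            return t
    return None

print("change declared at t =", detect(priv))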
Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
https://papers.nips.cc/paper_files/paper/2021/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html
Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, Irina Rish
https://papers.nips.cc/paper_files/paper/2021/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11886-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=jlchsFOLfeF
https://papers.nips.cc/paper_files/paper/2021/file/1c336b8080f82bcc2cd2499b4c57261d-Supplemental.pdf
The invariance principle from causality is at the heart of notable approaches such as invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) generalization failures. Despite the promising theory, invariance principle-based approaches fail in common classification tasks, where invariant (causal) features capture all the information about the label. Are these failures due to the methods failing to capture the invariance? Or is the invariance principle itself insufficient? To answer these questions, we revisit the fundamental assumptions in linear regression tasks, where invariance-based approaches were shown to provably generalize OOD. In contrast to the linear regression tasks, we show that for linear classification tasks we need much stronger restrictions on the distribution shifts, or otherwise OOD generalization is impossible. Furthermore, even with appropriate restrictions on distribution shifts in place, we show that the invariance principle alone is insufficient. We prove that a form of the information bottleneck constraint along with invariance helps address the key failures when invariant features capture all the information about the label and also retains the existing success when they do not. We propose an approach that incorporates both of these principles and demonstrate its effectiveness in several experiments.
null
Repulsive Deep Ensembles are Bayesian
https://papers.nips.cc/paper_files/paper/2021/hash/1c63926ebcabda26b5cdb31b5cc91efb-Abstract.html
Francesco D'Angelo, Vincent Fortuin
https://papers.nips.cc/paper_files/paper/2021/hash/1c63926ebcabda26b5cdb31b5cc91efb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11887-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1c63926ebcabda26b5cdb31b5cc91efb-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=LAKplpLMbP8
https://papers.nips.cc/paper_files/paper/2021/file/1c63926ebcabda26b5cdb31b5cc91efb-Supplemental.pdf
Deep ensembles have recently gained popularity in the deep learning community for their conceptual simplicity and efficiency. However, maintaining functional diversity between ensemble members that are independently trained with gradient descent is challenging. This can lead to pathologies when adding more ensemble members, such as a saturation of the ensemble performance, which converges to the performance of a single model. Moreover, this not only affects the quality of the ensemble's predictions, but even more so its uncertainty estimates, and thus its performance on out-of-distribution data. We hypothesize that this limitation can be overcome by discouraging different ensemble members from collapsing to the same function. To this end, we introduce a kernelized repulsive term in the update rule of the deep ensembles. We show that this simple modification not only enforces and maintains diversity among the members but, even more importantly, transforms the maximum a posteriori inference into proper Bayesian inference. Namely, we show that the training dynamics of our proposed repulsive ensembles follow a Wasserstein gradient flow of the KL divergence with the true posterior. We study repulsive terms in weight and function space and empirically compare their performance to standard ensembles and Bayesian baselines on synthetic and real-world prediction tasks.
null
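A minimal sketch of a kernelized repulsive update of the kind described above, assuming a toy 2-D Gaussian log-posterior and an SVGD-style rule with an RBF kernel; the paper studies repulsion in both weight and function space, and its exact update differs from this illustration.

import numpy as np

rng = np.random.default_rng(0)

def grad_log_post(theta):
    # Toy log-posterior: a standard 2-D Gaussian, so grad log p(theta) = -theta.
    return -theta

def rbf(particles, h=0.5):
    diff = particles[:, None, :] - particles[None, :, :]    # (n, n, d), diff[a, b] = x_a - x_b
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))  # kernel matrix (n, n)
    grad_k = -diff / h ** 2 * k[..., None]                  # gradient of k(x_a, x_b) w.r.t. x_a
    return k, grad_k

particles = rng.normal(size=(10, 2)) * 3.0   # one "particle" per ensemble member
for _ in range(500):
    k, grad_k = rbf(particles)
    drive = k @ grad_log_post(particles) / len(particles)   # pulls members towards the posterior
    repulse = grad_k.sum(axis=0) / len(particles)           # pushes members away from each other
    particles += 0.1 * (drive + repulse)

print("spread of the ensemble members:", particles.std(axis=0))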
BayesIMP: Uncertainty Quantification for Causal Data Fusion
https://papers.nips.cc/paper_files/paper/2021/hash/1ca5c750a30312d1919ae6a4d636dcc4-Abstract.html
Siu Lun Chau, Jean-Francois Ton, Javier González, Yee Teh, Dino Sejdinovic
https://papers.nips.cc/paper_files/paper/2021/hash/1ca5c750a30312d1919ae6a4d636dcc4-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11888-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1ca5c750a30312d1919ae6a4d636dcc4-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=aSjbPcve-b
https://papers.nips.cc/paper_files/paper/2021/file/1ca5c750a30312d1919ae6a4d636dcc4-Supplemental.pdf
While causal models are becoming one of the mainstays of machine learning, the problem of uncertainty quantification in causal inference remains challenging. In this paper, we study the causal data fusion problem, where data arising from multiple causal graphs are combined to estimate the average treatment effect of a target variable. As data arises from multiple sources and can vary in quality and sample size, principled uncertainty quantification becomes essential. To that end, we introduce \emph{Bayesian Causal Mean Processes}, a framework that combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space, while taking into account the uncertainty within each causal graph. To demonstrate the informativeness of our uncertainty estimation, we apply our method to the Causal Bayesian Optimisation task and show improvements over state-of-the-art methods.
null
RMM: Reinforced Memory Management for Class-Incremental Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1cbcaa5abbb6b70f378a3a03d0c26386-Abstract.html
Yaoyao Liu, Bernt Schiele, Qianru Sun
https://papers.nips.cc/paper_files/paper/2021/hash/1cbcaa5abbb6b70f378a3a03d0c26386-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11889-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1cbcaa5abbb6b70f378a3a03d0c26386-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=BfPzZSype5M
https://papers.nips.cc/paper_files/paper/2021/file/1cbcaa5abbb6b70f378a3a03d0c26386-Supplemental.pdf
Class-Incremental Learning (CIL) [38] trains classifiers under a strict memory budget: in each incremental phase, learning is done for new data, most of which is abandoned to free space for the next phase. The preserved data are exemplars used for replaying. However, existing methods use a static and ad hoc strategy for memory allocation, which is often sub-optimal. In this work, we propose a dynamic memory management strategy that is optimized for the incremental phases and different object classes. We call our method reinforced memory management (RMM), leveraging reinforcement learning. RMM training is not naturally compatible with CIL, as the past and future data are strictly non-accessible during the incremental phases. We solve this by training the policy function of RMM on pseudo CIL tasks, e.g., the tasks built on the data of the zeroth phase, and then applying it to target tasks. RMM propagates two levels of actions: Level-1 determines how to split the memory between old and new classes, and Level-2 allocates memory for each specific class. In essence, it is an optimizable and general method for memory management that can be used in any replaying-based CIL method. For evaluation, we plug RMM into two top-performing baselines (LUCIR+AANets and POD+AANets [28]) and conduct experiments on three benchmarks (CIFAR-100, ImageNet-Subset, and ImageNet-Full). Our results show clear improvements, e.g., boosting POD+AANets by 3.6%, 4.4%, and 1.9% in the 25-Phase settings of the above benchmarks, respectively. The code is available at https://class-il.mpi-inf.mpg.de/rmm/.
null
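The two-level action structure described above can be pictured with a small allocation helper; the function name, the ratio and the per-class weights below are hypothetical, and in RMM they would be produced by the learned policy rather than fixed by hand.

def allocate_memory(total_slots, level1_ratio, weights_old, weights_new):
    # Hypothetical two-level allocation in the spirit of RMM's action space.
    # Level-1: split the exemplar budget between old and new classes.
    # Level-2: distribute each group's budget across its classes via the weights.
    old_budget = int(round(total_slots * level1_ratio))
    new_budget = total_slots - old_budget
    old_alloc = [int(round(old_budget * w)) for w in weights_old]
    new_alloc = [int(round(new_budget * w)) for w in weights_new]
    return old_alloc, new_alloc

# Example: 2000 exemplar slots, 50 old and 10 new classes, uniform level-2 weights.
old, new = allocate_memory(2000, level1_ratio=0.7,
                           weights_old=[1 / 50] * 50, weights_new=[1 / 10] * 10)
print(sum(old), sum(new))   # about 1400 slots for old classes, 600 for new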
Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM)
https://papers.nips.cc/paper_files/paper/2021/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html
Jie Bu, Arka Daw, M. Maruf, Anuj Karpatne
https://papers.nips.cc/paper_files/paper/2021/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11890-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1cc8a8ea51cd0adddf5dab504a285915-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=jE5UVpKhkUG
https://papers.nips.cc/paper_files/paper/2021/file/1cc8a8ea51cd0adddf5dab504a285915-Supplemental.pdf
A central goal in deep learning is to learn compact representations of features at every layer of a neural network, which is useful for both unsupervised representation learning and structured network pruning. While there is a growing body of work in structured pruning, current state-of-the-art methods suffer from two key limitations: (i) instability during training, and (ii) need for an additional step of fine-tuning, which is resource-intensive. At the core of these limitations is the lack of a systematic approach that jointly prunes and refines weights during training in a single stage, and does not require any fine-tuning upon convergence to achieve state-of-the-art performance. We present a novel single-stage structured pruning method termed DiscriminAtive Masking (DAM). The key intuition behind DAM is to discriminatively prefer some of the neurons to be refined during the training process, while gradually masking out other neurons. We show that our proposed DAM approach has remarkably good performance over a diverse range of applications in representation learning and structured pruning, including dimensionality reduction, recommendation systems, graph representation learning, and structured pruning for image classification. We also theoretically show that the learning objective of DAM is directly related to minimizing the $L_0$ norm of the masking layer. All of our code and datasets are available at https://github.com/jayroxis/dam-pytorch.
null
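A hypothetical sketch of a masking layer in the spirit of the abstract above: each neuron carries a gate, a threshold is annealed upward during training, and the number of gates above the threshold is an $L_0$-style count of surviving neurons. This is not the exact DAM layer, whose gating and annealing scheme differ.

import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    # Hypothetical sketch of a discriminative masking layer (not the exact DAM layer).
    # Each output neuron has a learnable gate; the threshold `beta` is annealed upward
    # during training, so neurons whose gates fall below it are gradually masked out.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.gate = nn.Parameter(torch.rand(d_out))
        self.register_buffer("beta", torch.tensor(0.0))

    def forward(self, x):
        mask = torch.relu(self.gate - self.beta)   # soft, differentiable mask
        return self.linear(x) * mask

    def active_units(self):
        return int((self.gate > self.beta).sum())  # L0-style count of kept neurons

layer = MaskedLinear(16, 32)
out = layer(torch.randn(4, 16))
layer.beta.fill_(0.5)                              # one annealing step: raise the threshold
print(out.shape, layer.active_units())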
Neural Auto-Curricula in Two-Player Zero-Sum Games
https://papers.nips.cc/paper_files/paper/2021/hash/1cd73be1e256a7405516501e94e892ac-Abstract.html
Xidong Feng, Oliver Slumbers, Ziyu Wan, Bo Liu, Stephen McAleer, Ying Wen, Jun Wang, Yaodong Yang
https://papers.nips.cc/paper_files/paper/2021/hash/1cd73be1e256a7405516501e94e892ac-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11891-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1cd73be1e256a7405516501e94e892ac-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=dZWFBYWp6UY
https://papers.nips.cc/paper_files/paper/2021/file/1cd73be1e256a7405516501e94e892ac-Supplemental.pdf
When solving two-player zero-sum games, multi-agent reinforcement learning (MARL) algorithms often create populations of agents where, at each iteration, a new agent is discovered as the best response to a mixture over the opponent population. Within such a process, the update rules of "who to compete with" (i.e., the opponent mixture) and "how to beat them" (i.e., finding best responses) are underpinned by manually developed game theoretical principles such as fictitious play and Double Oracle. In this paper, we introduce a novel framework—Neural Auto-Curricula (NAC)—that leverages meta-gradient descent to automate the discovery of the learning update rule without explicit human design. Specifically, we parameterise the opponent selection module by neural networks and the best-response module by optimisation subroutines, and update their parameters solely via interaction with the game engine, where both players aim to minimise their exploitability. Surprisingly, even without human design, the discovered MARL algorithms achieve performance competitive with or even better than state-of-the-art population-based game solvers (e.g., PSRO) on Games of Skill, differentiable Lotto, non-transitive Mixture Games, Iterated Matching Pennies, and Kuhn Poker. Additionally, we show that NAC is able to generalise from small games to large games, for example training on Kuhn Poker and outperforming PSRO on Leduc Poker. Our work inspires a promising future direction to discover general MARL algorithms solely from data.
null
ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis
https://papers.nips.cc/paper_files/paper/2021/hash/1cdf14d1e3699d61d237cf76ce1c2dca-Abstract.html
Patrick Esser, Robin Rombach, Andreas Blattmann, Bjorn Ommer
https://papers.nips.cc/paper_files/paper/2021/hash/1cdf14d1e3699d61d237cf76ce1c2dca-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11892-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1cdf14d1e3699d61d237cf76ce1c2dca-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=-1AAgrS5FF
https://papers.nips.cc/paper_files/paper/2021/file/1cdf14d1e3699d61d237cf76ce1c2dca-Supplemental.pdf
Autoregressive models and their sequential factorization of the data likelihood have recently demonstrated great potential for image representation and synthesis. Nevertheless, they incorporate image context in a linear 1D order by attending only to previously synthesized image patches above or to the left. This unidirectional, sequential bias of attention is not only unnatural for images, as it disregards large parts of a scene until synthesis is almost complete; it also processes the entire image on a single scale, thus ignoring more global contextual information up to the gist of the entire scene. As a remedy we incorporate a coarse-to-fine hierarchy of context by combining the autoregressive formulation with a multinomial diffusion process: Whereas a multistage diffusion process successively compresses and removes information to coarsen an image, we train a Markov chain to invert this process. In each stage, the resulting autoregressive ImageBART model progressively incorporates context from previous stages in a coarse-to-fine manner. Experiments demonstrate the gain over current autoregressive models, continuous diffusion probabilistic models, and latent variable models. Moreover, the approach makes it possible to control the synthesis process and to trade compression rate against reconstruction accuracy, while still guaranteeing visually plausible results.
null
From global to local MDI variable importances for random forests and when they are Shapley values
https://papers.nips.cc/paper_files/paper/2021/hash/1cfa81af29c6f2d8cacb44921722e753-Abstract.html
Antonio Sutera, Gilles Louppe, Van Anh Huynh-Thu, Louis Wehenkel, Pierre Geurts
https://papers.nips.cc/paper_files/paper/2021/hash/1cfa81af29c6f2d8cacb44921722e753-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11893-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1cfa81af29c6f2d8cacb44921722e753-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=2vyiCxfb6el
https://papers.nips.cc/paper_files/paper/2021/file/1cfa81af29c6f2d8cacb44921722e753-Supplemental.pdf
Random forests have been widely used for their ability to provide so-called importance measures, which give insight at a global (per dataset) level on the relevance of input variables to predict a certain output. On the other hand, methods based on Shapley values have been introduced to refine the analysis of feature relevance in tree-based models to a local (per instance) level. In this context, we first show that the global Mean Decrease of Impurity (MDI) variable importance scores correspond to Shapley values under some conditions. Then, we derive a local MDI importance measure of variable relevance, which has a very natural connection with the global MDI measure and can be related to a new notion of local feature relevance. We further link local MDI importances with Shapley values and discuss them in the light of related measures from the literature. The measures are illustrated through experiments on several classification and regression problems.
null
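For context, the global MDI importances discussed above are what scikit-learn exposes as feature_importances_ on tree ensembles; the snippet below only shows those global scores, since the local MDI measure is the paper's contribution and is not part of standard libraries.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global MDI importances: scikit-learn's feature_importances_ are the (normalized)
# Mean Decrease of Impurity scores discussed in the abstract above.
global_mdi = forest.feature_importances_
print(global_mdi[:5])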
Adversarial Robustness of Streaming Algorithms through Importance Sampling
https://papers.nips.cc/paper_files/paper/2021/hash/1d01bd2e16f57892f0954902899f0692-Abstract.html
Vladimir Braverman, Avinatan Hassidim, Yossi Matias, Mariano Schain, Sandeep Silwal, Samson Zhou
https://papers.nips.cc/paper_files/paper/2021/hash/1d01bd2e16f57892f0954902899f0692-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11894-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1d01bd2e16f57892f0954902899f0692-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=83A-0x6Pfi_
https://papers.nips.cc/paper_files/paper/2021/file/1d01bd2e16f57892f0954902899f0692-Supplemental.zip
Robustness against adversarial attacks has recently been at the forefront of algorithmic design for machine learning tasks. In the adversarial streaming model, an adversary gives an algorithm a sequence of adaptively chosen updates $u_1,\ldots,u_n$ as a data stream. The goal of the algorithm is to compute or approximate some predetermined function for every prefix of the adversarial stream, but the adversary may generate future updates based on previous outputs of the algorithm. In particular, the adversary may gradually learn the random bits internally used by an algorithm to manipulate dependencies in the input. This is especially problematic as many important problems in the streaming model require randomized algorithms, as they are known to not admit any deterministic algorithms that use sublinear space. In this paper, we introduce adversarially robust streaming algorithms for central machine learning and algorithmic tasks, such as regression and clustering, as well as their more general counterparts, subspace embedding, low-rank approximation, and coreset construction. For regression and other numerical linear algebra related tasks, we consider the row arrival streaming model. Our results are based on a simple, but powerful, observation that many importance sampling-based algorithms give rise to adversarial robustness which is in contrast to sketching based algorithms, which are very prevalent in the streaming literature but suffer from adversarial attacks. In addition, we show that the well-known merge and reduce paradigm in streaming is adversarially robust. Since the merge and reduce paradigm allows coreset constructions in the streaming setting, we thus obtain robust algorithms for $k$-means, $k$-median, $k$-center, Bregman clustering, projective clustering, principal component analysis (PCA) and non-negative matrix factorization. To the best of our knowledge, these are the first adversarially robust results for these problems yet require no new algorithmic implementations. Finally, we empirically confirm the robustness of our algorithms on various adversarial attacks and demonstrate that by contrast, some common existing algorithms are not robust.
null
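To give a flavour of why importance sampling helps, the sketch below keeps rows of a streamed matrix with probability proportional to their squared norm and reweights them so that the sampled matrix is an unbiased estimator of $A^T A$. This is a generic norm-based sampler with illustrative parameters, not the paper's adversarially robust construction; for simplicity it computes probabilities from the whole stream, whereas a streaming variant would maintain running norms.

import numpy as np

rng = np.random.default_rng(0)

def sample_rows(stream_rows, k):
    # Toy importance (length-squared) row sampler for a row-arrival stream.
    # Each kept row is rescaled so that S.T @ S is unbiased for A.T @ A.
    rows = np.asarray(stream_rows, dtype=float)
    norms2 = (rows ** 2).sum(axis=1)
    probs = np.minimum(1.0, k * norms2 / norms2.sum())   # inclusion probabilities
    keep = rng.random(len(rows)) < probs
    return rows[keep] / np.sqrt(probs[keep])[:, None]    # reweight kept rows

A = rng.normal(size=(5000, 10))
S = sample_rows(A, k=500)
err = np.linalg.norm(S.T @ S - A.T @ A) / np.linalg.norm(A.T @ A)
print(f"kept {len(S)} rows, relative error in A^T A: {err:.3f}")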
Tractable Regularization of Probabilistic Circuits
https://papers.nips.cc/paper_files/paper/2021/hash/1d0832c4969f6a4cc8e8a8fffe083efb-Abstract.html
Anji Liu, Guy Van den Broeck
https://papers.nips.cc/paper_files/paper/2021/hash/1d0832c4969f6a4cc8e8a8fffe083efb-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11895-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1d0832c4969f6a4cc8e8a8fffe083efb-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=W9oywyjO8VN
https://papers.nips.cc/paper_files/paper/2021/file/1d0832c4969f6a4cc8e8a8fffe083efb-Supplemental.pdf
Probabilistic Circuits (PCs) are a promising avenue for probabilistic modeling. They combine advantages of probabilistic graphical models (PGMs) with those of neural networks (NNs). Crucially, however, they are tractable probabilistic models, supporting efficient and exact computation of many probabilistic inference queries, such as marginals and MAP. Further, since PCs are structured computation graphs, they can take advantage of deep-learning-style parameter updates, which greatly improves their scalability. However, this innovation also makes PCs prone to overfitting, which has been observed in many standard benchmarks. Despite the existence of abundant regularization techniques for both PGMs and NNs, they are not effective enough when applied to PCs. Instead, we re-think regularization for PCs and propose two intuitive techniques, data softening and entropy regularization, that both take advantage of PCs' tractability and still have an efficient implementation as a computation graph. Specifically, data softening provides a principled way to add uncertainty in datasets in closed form, which implicitly regularizes PC parameters. To learn parameters from a softened dataset, PCs only need linear time by virtue of their tractability. In entropy regularization, the exact entropy of the distribution encoded by a PC can be regularized directly, which is again infeasible for most other density estimation models. We show that both methods consistently improve the generalization performance of a wide variety of PCs. Moreover, when paired with a simple PC structure, we achieved state-of-the-art results on 10 out of 20 standard discrete density estimation benchmarks. Open-source code and experiments are available at https://github.com/UCLA-StarAI/Tractable-PC-Regularization.
null
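Data softening, as described above, can be pictured on binary data as replacing each hard observation with soft evidence that holds with some confidence beta; the helper below is an illustrative sketch of that transformation only, not of how a probabilistic circuit consumes the softened dataset.

import numpy as np

def soften(data, beta=0.95):
    # Data softening sketch: each observed binary value is kept with probability
    # `beta` and flipped with probability 1 - beta, expressed as soft evidence.
    # Returns, for every entry, the probability that the variable equals 1.
    data = np.asarray(data, dtype=float)
    return data * beta + (1.0 - data) * (1.0 - beta)

hard = np.array([[1, 0, 1],
                 [0, 0, 1]])
print(soften(hard))   # 1 -> 0.95, 0 -> 0.05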
On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness
https://papers.nips.cc/paper_files/paper/2021/hash/1d49780520898fe37f0cd6b41c5311bf-Abstract.html
Eric Mintun, Alexander Kirillov, Saining Xie
https://papers.nips.cc/paper_files/paper/2021/hash/1d49780520898fe37f0cd6b41c5311bf-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11896-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1d49780520898fe37f0cd6b41c5311bf-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=LOHyqjfyra
https://papers.nips.cc/paper_files/paper/2021/file/1d49780520898fe37f0cd6b41c5311bf-Supplemental.pdf
Invariance to a broad array of image corruptions, such as warping, noise, or color shifts, is an important aspect of building robust models in computer vision. Recently, several new data augmentations have been proposed that significantly improve performance on ImageNet-C, a benchmark of such corruptions. However, there is still a lack of basic understanding of the relationship between data augmentations and test-time corruptions. To this end, we develop a feature space for image transforms, and then use a new measure in this space between augmentations and corruptions, called the Minimal Sample Distance, to demonstrate that there is a strong correlation between similarity and performance. We then investigate recent data augmentations and observe a significant degradation in corruption robustness when the test-time corruptions are sampled to be perceptually dissimilar from ImageNet-C in this feature space. Our results suggest that test error can be improved by training on perceptually similar augmentations, and data augmentations may not generalize well beyond the existing benchmark. We hope our results and tools will allow for more robust progress towards improving robustness to image corruptions. We provide code at https://github.com/facebookresearch/augmentation-corruption.
null
Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data
https://papers.nips.cc/paper_files/paper/2021/hash/1d6408264d31d453d556c60fe7d0459e-Abstract.html
Ashraful Islam, Chun-Fu (Richard) Chen, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Richard J. Radke
https://papers.nips.cc/paper_files/paper/2021/hash/1d6408264d31d453d556c60fe7d0459e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11897-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1d6408264d31d453d556c60fe7d0459e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=_4VxORHq-0g
https://papers.nips.cc/paper_files/paper/2021/file/1d6408264d31d453d556c60fe7d0459e-Supplemental.pdf
Most existing works in few-shot learning rely on meta-learning the network on a large base dataset which is typically from the same domain as the target dataset. We tackle the problem of cross-domain few-shot learning where there is a large shift between the base and target domain. The problem of cross-domain few-shot recognition with unlabeled target data is largely unaddressed in the literature. STARTUP was the first method that tackles this problem using self-training. However, it uses a fixed teacher pretrained on a labeled base dataset to create soft labels for the unlabeled target samples. As the base dataset and unlabeled dataset are from different domains, projecting the target images in the class-domain of the base dataset with a fixed pretrained model might be sub-optimal. We propose a simple dynamic distillation-based approach to facilitate unlabeled images from the novel/base dataset. We impose consistency regularization by calculating predictions from the weakly-augmented versions of the unlabeled images from a teacher network and matching them with the predictions on the strongly augmented versions of the same images from a student network. The parameters of the teacher network are updated as an exponential moving average of the parameters of the student network. We show that the proposed network learns representations that can be easily adapted to the target domain even though it has not been trained with target-specific classes during the pretraining phase. Our model outperforms the current state-of-the-art method by 4.4% for 1-shot and 3.6% for 5-shot classification in the BSCD-FSL benchmark, and also shows competitive performance on the traditional in-domain few-shot learning task.
null
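The consistency-plus-EMA mechanism described above can be sketched in a few lines; the models, augmentations, temperature and EMA rate below are placeholder assumptions, and the full method involves additional losses and training details.

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, optimizer, weak_batch, strong_batch,
                      tau=0.999, temperature=4.0):
    # Sketch of one dynamic-distillation step (hypothetical simplification):
    # the teacher labels weakly augmented unlabeled images, the student is trained
    # to match those soft labels on strongly augmented views, and the teacher is
    # then updated as an exponential moving average (EMA) of the student.
    with torch.no_grad():
        soft_targets = F.softmax(teacher(weak_batch) / temperature, dim=-1)
    log_probs = F.log_softmax(student(strong_batch) / temperature, dim=-1)
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():   # EMA update of the teacher parameters
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(tau).add_(p_s, alpha=1.0 - tau)
    return loss.item()

# Toy usage with linear stand-ins for the networks and random stand-ins for images.
student, teacher = torch.nn.Linear(32, 10), torch.nn.Linear(32, 10)
opt = torch.optim.SGD(student.parameters(), lr=0.01)
x_weak, x_strong = torch.randn(8, 32), torch.randn(8, 32)
print(distillation_step(student, teacher, opt, x_weak, x_strong))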
Hypergraph Propagation and Community Selection for Objects Retrieval
https://papers.nips.cc/paper_files/paper/2021/hash/1da546f25222c1ee710cf7e2f7a3ff0c-Abstract.html
Guoyuan An, Yuchi Huo, Sung-eui Yoon
https://papers.nips.cc/paper_files/paper/2021/hash/1da546f25222c1ee710cf7e2f7a3ff0c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11898-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1da546f25222c1ee710cf7e2f7a3ff0c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=85h_DhXf3v
https://papers.nips.cc/paper_files/paper/2021/file/1da546f25222c1ee710cf7e2f7a3ff0c-Supplemental.pdf
Spatial verification is a crucial technique for particular object retrieval. It utilizes spatial information for the accurate detection of true positive images. However, existing query expansion and diffusion methods cannot efficiently propagate the spatial information in an ordinary graph with scalar edge weights, resulting in low recall or precision. To tackle these problems, we propose a novel hypergraph-based framework that efficiently propagates spatial information in query time and retrieves an object in the database accurately. Additionally, we propose using the image graph's structure information through community selection technique, to measure the accuracy of the initial search result and to provide correct starting points for hypergraph propagation without heavy spatial verification computations. Experiment results on ROxford and RParis show that our method significantly outperforms the existing query expansion and diffusion methods.
null
Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space
https://papers.nips.cc/paper_files/paper/2021/hash/1dacb10f0623c67cb7dbb37587d8b38a-Abstract.html
Taiji Suzuki, Atsushi Nitanda
https://papers.nips.cc/paper_files/paper/2021/hash/1dacb10f0623c67cb7dbb37587d8b38a-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11899-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1dacb10f0623c67cb7dbb37587d8b38a-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=uSQQH7Fj5U
https://papers.nips.cc/paper_files/paper/2021/file/1dacb10f0623c67cb7dbb37587d8b38a-Supplemental.pdf
Deep learning has exhibited superior performance for various tasks, especially for high-dimensional datasets, such as images. To understand this property, we investigate the approximation and estimation ability of deep learning on {\it anisotropic Besov spaces}. The anisotropic Besov space is characterized by direction-dependent smoothness and includes several function classes that have been investigated thus far. We demonstrate that the approximation error and estimation error of deep learning only depend on the average value of the smoothness parameters in all directions. Consequently, the curse of dimensionality can be avoided if the smoothness of the target function is highly anisotropic. Unlike existing studies, our analysis does not require a low-dimensional structure of the input data. We also investigate the minimax optimality of deep learning and compare its performance with that of the kernel method (more generally, linear estimators). The results show that deep learning has better dependence on the input dimensionality if the target function possesses anisotropic smoothness, and it achieves an adaptive rate for functions with spatially inhomogeneous smoothness.
null
QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1dba3025b159cd9354da65e2d0436a31-Abstract.html
Kaan Ozkara, Navjot Singh, Deepesh Data, Suhas Diggavi
https://papers.nips.cc/paper_files/paper/2021/hash/1dba3025b159cd9354da65e2d0436a31-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11900-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1dba3025b159cd9354da65e2d0436a31-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Yowoe1scJOD
null
Traditionally, federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in data across clients and collaboration of clients with diverse resources. In this work, we introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources. For personalization, we allow clients to learn compressed personalized models with different quantization parameters and model dimensions/structures. Towards this, first we propose an algorithm for learning quantized models through a relaxed optimization problem, where quantization values are also optimized over. When each client participating in the (federated) learning process has different requirements of the compressed model (both in model dimension and precision), we formulate a compressed personalization framework by introducing knowledge distillation loss for local client objectives collaborating through a global model. We develop an alternating proximal gradient update for solving this compressed personalization problem, and analyze its convergence properties. Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.
null
Model Adaptation: Historical Contrastive Learning for Unsupervised Domain Adaptation without Source Data
https://papers.nips.cc/paper_files/paper/2021/hash/1dba5eed8838571e1c80af145184e515-Abstract.html
Jiaxing Huang, Dayan Guan, Aoran Xiao, Shijian Lu
https://papers.nips.cc/paper_files/paper/2021/hash/1dba5eed8838571e1c80af145184e515-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11901-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1dba5eed8838571e1c80af145184e515-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=0zXJRJecC_
https://papers.nips.cc/paper_files/paper/2021/file/1dba5eed8838571e1c80af145184e515-Supplemental.pdf
Unsupervised domain adaptation aims to align a labeled source domain and an unlabeled target domain, but it requires access to the source data, which often raises concerns in data privacy, data portability and data transmission efficiency. We study unsupervised model adaptation (UMA), also called Unsupervised Domain Adaptation without Source Data, an alternative setting that aims to adapt source-trained models towards target distributions without accessing source data. To this end, we design an innovative historical contrastive learning (HCL) technique that exploits the historical source hypothesis to make up for the absence of source data in UMA. HCL addresses the UMA challenge from two perspectives. First, it introduces historical contrastive instance discrimination (HCID) that learns from target samples by contrasting their embeddings which are generated by the currently adapted model and the historical models. With the historical models, HCID encourages UMA to learn instance-discriminative target representations while preserving the source hypothesis. Second, it introduces historical contrastive category discrimination (HCCD) that pseudo-labels target samples to learn category-discriminative target representations. Specifically, HCCD re-weights pseudo labels according to their prediction consistency across the current and historical models. Extensive experiments show that HCL outperforms state-of-the-art methods consistently across a variety of visual tasks and setups.
null
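A simplified sketch of the instance-discrimination idea (HCID) described above: embeddings of the same target image from the current model and from a frozen historical model form positive pairs, with the other images in the batch as negatives. The encoders, the temperature and the exact loss form here are illustrative assumptions rather than the paper's formulation.

import torch
import torch.nn.functional as F

def hcid_loss(current_model, historical_model, images, temperature=0.07):
    # Embeddings from the current model are pulled towards embeddings of the same
    # images from a frozen historical model (e.g. the source-trained checkpoint),
    # and pushed away from the embeddings of the other images in the batch.
    q = F.normalize(current_model(images), dim=-1)            # current embeddings
    with torch.no_grad():
        k = F.normalize(historical_model(images), dim=-1)     # historical embeddings
    logits = q @ k.t() / temperature                          # (B, B) similarity matrix
    labels = torch.arange(len(images))                        # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with linear stand-ins for the two encoders.
encoder_now = torch.nn.Linear(64, 32)
encoder_hist = torch.nn.Linear(64, 32)
print(hcid_loss(encoder_now, encoder_hist, torch.randn(16, 64)).item())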
The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
https://papers.nips.cc/paper_files/paper/2021/hash/1def1713ebf17722cbe300cfc1c88558-Abstract.html
Peter Hase, Harry Xie, Mohit Bansal
https://papers.nips.cc/paper_files/paper/2021/hash/1def1713ebf17722cbe300cfc1c88558-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11902-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1def1713ebf17722cbe300cfc1c88558-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=HCrp4pdk2i
https://papers.nips.cc/paper_files/paper/2021/file/1def1713ebf17722cbe300cfc1c88558-Supplemental.pdf
Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time. For example, in the standard Sufficiency metric, only the top-k most important tokens are kept. In this paper, we study several under-explored dimensions of FI explanations, providing conceptual and empirical improvements for this form of explanation. First, we advance a new argument for why it can be problematic to remove features from an input when creating or evaluating explanations: the fact that these counterfactual inputs are out-of-distribution (OOD) to models implies that the resulting explanations are socially misaligned. The crux of the problem is that the model prior and random weight initialization influence the explanations (and explanation metrics) in unintended ways. To resolve this issue, we propose a simple alteration to the model training process, which results in more socially aligned explanations and metrics. Second, we compare among five approaches for removing features from model inputs. We find that some methods produce more OOD counterfactuals than others, and we make recommendations for selecting a feature-replacement function. Finally, we introduce four search-based methods for identifying FI explanations and compare them to strong baselines, including LIME, Anchors, and Integrated Gradients. Through experiments with six diverse text classification datasets, we find that the only method that consistently outperforms random search is a Parallel Local Search (PLS) that we introduce. Improvements over the second best method are as large as 5.4 points for Sufficiency and 17 points for Comprehensiveness.
null
Control Variates for Slate Off-Policy Evaluation
https://papers.nips.cc/paper_files/paper/2021/hash/1e0b802d5c0e1e8434a771ba7ff2c301-Abstract.html
Nikos Vlassis, Ashok Chandrashekar, Fernando Amat, Nathan Kallus
https://papers.nips.cc/paper_files/paper/2021/hash/1e0b802d5c0e1e8434a771ba7ff2c301-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11903-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e0b802d5c0e1e8434a771ba7ff2c301-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=e9_UPqMNfi
https://papers.nips.cc/paper_files/paper/2021/file/1e0b802d5c0e1e8434a771ba7ff2c301-Supplemental.pdf
We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions, often termed slates. The problem is common to recommender systems and user-interface optimization, and it is particularly challenging because of the combinatorially-sized action space. Swaminathan et al. (2017) have proposed the pseudoinverse (PI) estimator under the assumption that the conditional mean rewards are additive in actions. Using control variates, we consider a large class of unbiased estimators that includes as specific cases the PI estimator and (asymptotically) its self-normalized variant. By optimizing over this class, we obtain new estimators with risk improvement guarantees over both the PI and the self-normalized PI estimators. Experiments with real-world recommender data as well as synthetic data validate these improvements in practice.
null
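As a generic illustration of the control-variate idea (not the slate-specific pseudoinverse estimator from the paper), the sketch below adjusts a plain importance-sampling estimate using the known fact that importance weights have expectation one, with the coefficient chosen to minimise variance.

import numpy as np

def cv_ips_estimate(rewards, weights):
    # Control-variate-adjusted importance sampling (generic sketch):
    #   V_hat = mean(w * r) - c * (mean(w) - 1),
    # where c = Cov(w*r, w) / Var(w) minimises the variance of the combination.
    rewards, weights = np.asarray(rewards, float), np.asarray(weights, float)
    wr = weights * rewards
    cov = np.cov(wr, weights, ddof=1)
    c = cov[0, 1] / cov[1, 1]
    return wr.mean() - c * (weights.mean() - 1.0)

rng = np.random.default_rng(0)
w = rng.lognormal(mean=-0.125, sigma=0.5, size=10_000)   # importance weights with E[w] = 1
r = rng.binomial(1, 0.3, size=10_000).astype(float)      # synthetic rewards, true value 0.3
print(cv_ips_estimate(r, w))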
Stabilizing Deep Q-Learning with ConvNets and Vision Transformers under Data Augmentation
https://papers.nips.cc/paper_files/paper/2021/hash/1e0f65eb20acbfb27ee05ddc000b50ec-Abstract.html
Nicklas Hansen, Hao Su, Xiaolong Wang
https://papers.nips.cc/paper_files/paper/2021/hash/1e0f65eb20acbfb27ee05ddc000b50ec-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11904-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e0f65eb20acbfb27ee05ddc000b50ec-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=zQvxc8ul2rR
https://papers.nips.cc/paper_files/paper/2021/file/1e0f65eb20acbfb27ee05ddc000b50ec-Supplemental.pdf
While agents trained by Reinforcement Learning (RL) can solve increasingly challenging tasks directly from visual observations, generalizing learned skills to novel environments remains very challenging. Extensive use of data augmentation is a promising technique for improving generalization in RL, but it is often found to decrease sample efficiency and can even lead to divergence. In this paper, we investigate causes of instability when using data augmentation in common off-policy RL algorithms. We identify two problems, both rooted in high-variance Q-targets. Based on our findings, we propose a simple yet effective technique for stabilizing this class of algorithms under augmentation. We perform extensive empirical evaluation of image-based RL using both ConvNets and Vision Transformers (ViT) on a family of benchmarks based on DeepMind Control Suite, as well as in robotic manipulation tasks. Our method greatly improves stability and sample efficiency of ConvNets under augmentation, and achieves generalization results competitive with state-of-the-art methods for image-based RL in environments with unseen visuals. We further show that our method scales to RL with ViT-based architectures, and that data augmentation may be especially important in this setting.
null
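One simple way to picture the variance issue discussed above is to average the bootstrap value over several augmented views of the next observation, as in the hypothetical sketch below; the paper's actual stabilization technique differs in detail, and the network and augmentation here are stand-ins.

import torch

def augmented_q_target(target_net, next_obs, reward, done, augment, gamma=0.99, k=2):
    # Hypothetical variance reduction: average the bootstrapped max-Q value over
    # k augmented views of the next observation before forming the Q-target.
    with torch.no_grad():
        values = torch.stack([target_net(augment(next_obs)).max(dim=-1).values
                              for _ in range(k)])
        bootstrap = values.mean(dim=0)
    return reward + gamma * (1.0 - done) * bootstrap

target_net = torch.nn.Linear(16, 4)                   # stands in for a Q-network
augment = lambda x: x + 0.01 * torch.randn_like(x)    # stands in for image augmentation
obs = torch.randn(8, 16)
print(augmented_q_target(target_net, obs, torch.zeros(8), torch.zeros(8), augment).shape)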
On Effective Scheduling of Model-based Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1e4d36177d71bbb3558e43af9577d70e-Abstract.html
Hang Lai, Jian Shen, Weinan Zhang, Yimin Huang, Xing Zhang, Ruiming Tang, Yong Yu, Zhenguo Li
https://papers.nips.cc/paper_files/paper/2021/hash/1e4d36177d71bbb3558e43af9577d70e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11905-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e4d36177d71bbb3558e43af9577d70e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=z36cUrI0jKJ
https://papers.nips.cc/paper_files/paper/2021/file/1e4d36177d71bbb3558e43af9577d70e-Supplemental.pdf
Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency. Despite its impressive success so far, it is still unclear how to appropriately schedule the important hyperparameters to achieve adequate performance, such as the real data ratio for policy optimization in Dyna-style model-based algorithms. In this paper, we first theoretically analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance. Inspired by the analysis, we propose a framework named AutoMBPO to automatically schedule the real data ratio as well as other hyperparameters in training model-based policy optimization (MBPO) algorithm, a representative running case of model-based methods. On several continuous control tasks, the MBPO instance trained with hyperparameters scheduled by AutoMBPO can significantly surpass the original one, and the real data ratio schedule found by AutoMBPO shows consistency with our theoretical analysis.
null
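The theoretical takeaway above, that the real-data ratio should grow during training, can be pictured with a simple hand-designed schedule; in AutoMBPO itself the schedule is produced by a learned hyper-controller, so the function below is only an illustrative assumption.

def real_data_ratio(epoch, total_epochs, start=0.05, end=0.5):
    # Hypothetical schedule that gradually increases the fraction of real
    # (environment) data mixed into policy-optimization batches.
    frac = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return start + (end - start) * frac

for e in (0, 50, 100):
    print(e, round(real_data_ratio(e, total_epochs=101), 3))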
Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience
https://papers.nips.cc/paper_files/paper/2021/hash/1e5eeb40a3fce716b244599862fd2200-Abstract.html
Dominic Gonschorek, Larissa Höfling, Klaudia P. Szatko, Katrin Franke, Timm Schubert, Benjamin Dunn, Philipp Berens, David Klindt, Thomas Euler
https://papers.nips.cc/paper_files/paper/2021/hash/1e5eeb40a3fce716b244599862fd2200-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11906-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e5eeb40a3fce716b244599862fd2200-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=lVmIjQiJJSr
https://papers.nips.cc/paper_files/paper/2021/file/1e5eeb40a3fce716b244599862fd2200-Supplemental.pdf
Integrating data from multiple experiments is common practice in systems neuroscience but it requires inter-experimental variability to be negligible compared to the biological signal of interest. This requirement is rarely fulfilled; systematic changes between experiments can drastically affect the outcome of complex analysis pipelines. Modern machine learning approaches designed to adapt models across multiple data domains offer flexible ways of removing inter-experimental variability where classical statistical methods often fail. While applications of these methods have been mostly limited to single-cell genomics, in this work, we develop a theoretical framework for domain adaptation in systems neuroscience. We implement this in an adversarial optimization scheme that removes inter-experimental variability while preserving the biological signal. We compare our method to previous approaches on a large-scale dataset of two-photon imaging recordings of retinal bipolar cell responses to visual stimuli. This dataset provides a unique benchmark as it contains biological signal from well-defined cell types that is obscured by large inter-experimental variability. In a supervised setting, we compare the generalization performance of cell type classifiers across experiments, which we validate with anatomical cell type distributions from electron microscopy data. In an unsupervised setting, we remove inter-experimental variability from the data which can then be fed into arbitrary downstream analyses. In both settings, we find that our method achieves the best trade-off between removing inter-experimental variability and preserving biological signal. Thus, we offer a flexible approach to remove inter-experimental variability and integrate datasets across experiments in systems neuroscience. Code available at https://github.com/eulerlab/rave.
null
Learning Knowledge Graph-based World Models of Textual Environments
https://papers.nips.cc/paper_files/paper/2021/hash/1e747ddbea997a1b933aaf58a7953c3c-Abstract.html
Prithviraj Ammanabrolu, Mark Riedl
https://papers.nips.cc/paper_files/paper/2021/hash/1e747ddbea997a1b933aaf58a7953c3c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11907-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e747ddbea997a1b933aaf58a7953c3c-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=o24k_XfIe6_
https://papers.nips.cc/paper_files/paper/2021/file/1e747ddbea997a1b933aaf58a7953c3c-Supplemental.pdf
World models improve a learning agent's ability to efficiently operate in interactive and situated environments. This work focuses on the task of building world models of text-based game environments. Text-based games, or interactive narratives, are reinforcement learning environments in which agents perceive and interact with the world using textual natural language. These environments contain long, multi-step puzzles or quests woven through a world that is filled with hundreds of characters, locations, and objects. Our world model learns to simultaneously: (1) predict changes in the world caused by an agent's actions when representing the world as a knowledge graph; and (2) generate the set of contextually relevant natural language actions required to operate in the world. We frame this task as a Set of Sequences generation problem by exploiting the inherent structure of knowledge graphs and actions and introduce both a transformer-based multi-task architecture and a loss function to train it. A zero-shot ablation study on never-before-seen textual worlds shows that our methodology significantly outperforms existing textual world modeling techniques and demonstrates the importance of each of our contributions.
null
Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization
https://papers.nips.cc/paper_files/paper/2021/hash/1e79596878b2320cac26dd792a6c51c9-Abstract.html
Ke Sun, Yafei Wang, Yi Liu, yingnan zhao, Bo Pan, Shangling Jui, Bei Jiang, Linglong Kong
https://papers.nips.cc/paper_files/paper/2021/hash/1e79596878b2320cac26dd792a6c51c9-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11908-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e79596878b2320cac26dd792a6c51c9-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=giEMdtueyZn
https://papers.nips.cc/paper_files/paper/2021/file/1e79596878b2320cac26dd792a6c51c9-Supplemental.pdf
Anderson mixing has been heuristically applied to reinforcement learning (RL) algorithms for accelerating convergence and improving the sampling efficiency of deep RL. Despite its heuristic improvement of convergence, a rigorous mathematical justification for the benefits of Anderson mixing in RL has not yet been put forward. In this paper, we provide deeper insights into a class of acceleration schemes built on Anderson mixing that improve the convergence of deep RL algorithms. Our main results establish a connection between Anderson mixing and quasi-Newton methods and prove that Anderson mixing increases the convergence radius of policy iteration schemes by an extra contraction factor. The key focus of the analysis roots in the fixed-point iteration nature of RL. We further propose a stabilization strategy by introducing a stable regularization term in Anderson mixing and a differentiable, non-expansive MellowMax operator that can allow both faster convergence and more stable behavior. Extensive experiments demonstrate that our proposed method enhances the convergence, stability, and performance of RL algorithms.
null
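For readers unfamiliar with Anderson mixing, the sketch below applies a generic damped Anderson acceleration to a toy linear fixed-point problem; the memory size, damping and the penalty-based handling of the affine constraint are choices of this illustration, not of the paper's RL-specific scheme.

import numpy as np

def anderson(g, x0, m=5, beta=1.0, iters=50):
    # Anderson mixing for the fixed-point iteration x <- g(x) (generic sketch).
    # Keeps the last m iterates, solves a small least-squares problem for the
    # mixing coefficients alpha (sum(alpha) = 1), and damps the update with beta.
    xs, gs = [np.asarray(x0, float)], [g(x0)]
    for _ in range(iters):
        k = len(xs)
        F = np.stack([gi - xi for gi, xi in zip(gs, xs)], axis=1)   # residual matrix
        # Softly enforce sum(alpha) = 1 by appending a heavily weighted extra row.
        A = np.vstack([F, 1e4 * np.ones((1, k))])
        b = np.concatenate([np.zeros(F.shape[0]), [1e4]])
        alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
        x_new = (1 - beta) * np.stack(xs, 1) @ alpha + beta * np.stack(gs, 1) @ alpha
        xs.append(x_new)
        gs.append(g(x_new))
        if len(xs) > m:
            xs.pop(0)
            gs.pop(0)
        if np.linalg.norm(gs[-1] - xs[-1]) < 1e-10:
            break
    return xs[-1]

# Example: a linear contraction whose fixed point solves (I - 0.5 B) x = c.
rng = np.random.default_rng(0)
B = rng.normal(size=(20, 20))
B /= 2 * np.linalg.norm(B, 2)
c = rng.normal(size=20)
g = lambda x: 0.5 * B @ x + c
x_star = anderson(g, np.zeros(20))
print("fixed-point residual:", np.linalg.norm(g(x_star) - x_star))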
Approximate Decomposable Submodular Function Minimization for Cardinality-Based Components
https://papers.nips.cc/paper_files/paper/2021/hash/1e8a19426224ca89e83cef47f1e7f53b-Abstract.html
Nate Veldt, Austin R. Benson, Jon Kleinberg
https://papers.nips.cc/paper_files/paper/2021/hash/1e8a19426224ca89e83cef47f1e7f53b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11909-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e8a19426224ca89e83cef47f1e7f53b-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=tKMheNMoi2Y
https://papers.nips.cc/paper_files/paper/2021/file/1e8a19426224ca89e83cef47f1e7f53b-Supplemental.pdf
Minimizing a sum of simple submodular functions of limited support is a special case of general submodular function minimization that has seen numerous applications in machine learning. We develop faster techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set. This variant is one of the most widely applied in practice, encompassing, e.g., common energy functions arising in image segmentation and recent generalized hypergraph cut functions. We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem, with graph sparsity controlled by the desired approximation factor. Our method relies on a new connection between sparse graph reduction techniques and piecewise linear approximations to concave functions. Our sparse reduction technique leads to significant improvements in theoretical runtimes, as well as substantial practical gains in problems ranging from benchmark image segmentation tasks to hypergraph clustering problems.
null
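The connection to piecewise-linear approximations of concave functions can be illustrated on a single cardinality function: the sketch below upper-bounds sqrt(k) by the lower envelope of tangent lines taken at geometrically spaced breakpoints, giving a small multiplicative error with few pieces; the choice of function and spacing is illustrative, not the paper's specific reduction.

import numpy as np

n = 1000
ks = np.arange(1, n + 1)
w = np.sqrt(ks)                       # example concave, cardinality-based penalty w(|S|)
eps = 0.1

# Geometrically spaced breakpoints; one tangent line is taken at each breakpoint.
breaks = [1]
while breaks[-1] < n:
    breaks.append(min(n, int(np.ceil(breaks[-1] * (1 + eps)))))
breaks = np.array(sorted(set(breaks)))

# Tangent of sqrt at b: t_b(k) = sqrt(b) + (k - b) / (2 sqrt(b)); each upper-bounds w.
tangents = np.array([np.sqrt(b) + (ks - b) / (2 * np.sqrt(b)) for b in breaks])
approx = tangents.min(axis=0)         # piecewise-linear upper envelope of w

print(len(breaks), "linear pieces, max relative error:",
      float((approx / w).max() - 1))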
Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration
https://papers.nips.cc/paper_files/paper/2021/hash/1e8ca836c962598551882e689265c1c5-Abstract.html
Lulu Zheng, Jiarui Chen, Jianhao Wang, Jiamin He, Yujing Hu, Yingfeng Chen, Changjie Fan, Yang Gao, Chongjie Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/1e8ca836c962598551882e689265c1c5-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11910-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e8ca836c962598551882e689265c1c5-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=YDGJ5YExiw6
https://papers.nips.cc/paper_files/paper/2021/file/1e8ca836c962598551882e689265c1c5-Supplemental.pdf
Efficient exploration in deep cooperative multi-agent reinforcement learning (MARL) remains challenging in complex coordination problems. In this paper, we introduce a novel Episodic Multi-agent reinforcement learning with Curiosity-driven exploration, called EMC. We leverage an insight of popular factorized MARL algorithms that the "induced" individual Q-values, i.e., the individual utility functions used for local execution, are the embeddings of local action-observation histories, and can capture the interaction between agents due to reward backpropagation during centralized training. Therefore, we use prediction errors of individual Q-values as intrinsic rewards for coordinated exploration and utilize episodic memory to exploit explored informative experience to boost policy training. As the dynamics of an agent's individual Q-value function captures the novelty of states and the influence from other agents, our intrinsic reward can induce coordinated exploration to new or promising states. We illustrate the advantages of our method by didactic examples, and demonstrate its significant outperformance over state-of-the-art MARL baselines on challenging tasks in the StarCraft II micromanagement benchmark.
null
Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution
https://papers.nips.cc/paper_files/paper/2021/hash/1e932f24dc0aa4e7a6ac2beec387416d-Abstract.html
Amrith Setlur, Oscar Li, Virginia Smith
https://papers.nips.cc/paper_files/paper/2021/hash/1e932f24dc0aa4e7a6ac2beec387416d-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11911-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1e932f24dc0aa4e7a6ac2beec387416d-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Gi6SHsbxkgY
null
We categorize meta-learning evaluation into two settings: $\textit{in-distribution}$ [ID], in which the train and test tasks are sampled $\textit{iid}$ from the same underlying task distribution, and $\textit{out-of-distribution}$ [OOD], in which they are not. While most meta-learning theory and some few-shot learning (FSL) applications follow the ID setting, we identify that most existing few-shot classification benchmarks instead reflect OOD evaluation, as they use disjoint sets of train (base) and test (novel) classes for task generation. This discrepancy is problematic because -- as we show on numerous benchmarks -- meta-learning methods that perform better on existing OOD datasets may perform significantly worse in the ID setting. In addition, in the OOD setting, even though current FSL benchmarks seem befitting, our study highlights concerns in 1) reliably performing model selection for a given meta-learning method, and 2) consistently comparing the performance of different methods. To address these concerns, we provide suggestions on how to construct FSL benchmarks to allow for ID evaluation as well as more reliable OOD evaluation. Our work aims to inform the meta-learning community about the importance and distinction of ID vs. OOD evaluation, as well as the subtleties of OOD evaluation with current benchmarks.
null
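The ID/OOD distinction above comes down to how episodes are generated; in the hypothetical sketch below, ID evaluation samples train and test tasks from the same class pool, while OOD evaluation uses disjoint base and novel class splits, as current few-shot benchmarks do.

import random

def sample_task(class_pool, examples_per_class, n_way=5, k_shot=1, q_query=15):
    # Build one few-shot episode: n_way classes, k_shot support and q_query query
    # examples per class, all drawn from the given class pool.
    classes = random.sample(class_pool, n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        ex = random.sample(examples_per_class[cls], k_shot + q_query)
        support += [(x, label) for x in ex[:k_shot]]
        query += [(x, label) for x in ex[k_shot:]]
    return support, query

# Dummy class pool and example ids (hypothetical stand-ins for a real benchmark).
all_classes = [f"class_{i}" for i in range(100)]
examples = {c: list(range(600)) for c in all_classes}
base, novel = all_classes[:64], all_classes[64:]     # disjoint splits, as in OOD benchmarks

id_train = sample_task(all_classes, examples)        # ID: train tasks ...
id_test = sample_task(all_classes, examples)         # ... and test tasks share one class pool
ood_train = sample_task(base, examples)              # OOD: train tasks from base classes,
ood_test = sample_task(novel, examples)              # test tasks from held-out novel classes
print(len(id_test[0]), len(ood_test[1]))             # 5 support and 75 query examples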
Debiased Visual Question Answering from Feature and Sample Perspectives
https://papers.nips.cc/paper_files/paper/2021/hash/1f4477bad7af3616c1f933a02bfabe4e-Abstract.html
Zhiquan Wen, Guanghui Xu, Mingkui Tan, Qingyao Wu, Qi Wu
https://papers.nips.cc/paper_files/paper/2021/hash/1f4477bad7af3616c1f933a02bfabe4e-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11912-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1f4477bad7af3616c1f933a02bfabe4e-Paper.pdf
https://papers.nips.cchttps://openreview.net/forum?id=Z4ry59PVMq8
https://papers.nips.cc/paper_files/paper/2021/file/1f4477bad7af3616c1f933a02bfabe4e-Supplemental.pdf
Visual question answering (VQA) is designed to examine the visual-textual reasoning ability of an intelligent agent. However, recent observations show that many VQA models may only capture the biases between questions and answers in a dataset rather than showing real reasoning abilities. For example, given a question, some VQA models tend to output the answer that occurs frequently in the dataset and ignore the images. To reduce this tendency, existing methods focus on weakening the language bias. Meanwhile, only a few works also consider vision bias implicitly. However, these methods introduce additional annotations or show unsatisfactory performance. Moreover, not all biases are harmful to the models. Some “biases” learnt from datasets represent natural rules of the world and can help limit the range of answers. Thus, how to filter and remove the true negative biases in language and vision modalities remains a major challenge. In this paper, we propose a method named D-VQA to alleviate the above challenges from the feature and sample perspectives. Specifically, from the feature perspective, we build a question-to-answer and vision-to-answer branch to capture the language and vision biases, respectively. Next, we apply two unimodal bias detection modules to explicitly recognise and remove the negative biases. From the sample perspective, we construct two types of negative samples to assist the training of the models, without introducing additional annotations. Extensive experiments on the VQA-CP v2 and VQA v2 datasets demonstrate the effectiveness of our D-VQA method.
null
Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness
https://papers.nips.cc/paper_files/paper/2021/hash/1f4fe6a4411edc2ff625888b4093e917-Abstract.html
Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, Yiting Chen, Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang
https://papers.nips.cc/paper_files/paper/2021/hash/1f4fe6a4411edc2ff625888b4093e917-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11913-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1f4fe6a4411edc2ff625888b4093e917-Paper.pdf
https://openreview.net/forum?id=fMaIxda5Y6K
https://papers.nips.cc/paper_files/paper/2021/file/1f4fe6a4411edc2ff625888b4093e917-Supplemental.pdf
This paper provides a unified view to explain different adversarial attacks and defense methods, namely the view of multi-order interactions between input variables of DNNs. Based on the multi-order interaction, we discover that adversarial attacks mainly affect high-order interactions to fool the DNN. Furthermore, we find that the robustness of adversarially trained DNNs comes from category-specific low-order interactions. Our findings provide a potential method to unify adversarial perturbations and robustness, which can explain the existing robustness-boosting methods in a principled way. Besides, our findings also revise the previous, inaccurate understanding of the shape bias of adversarially learned features. Our code is available online at https://github.com/Jie-Ren/A-Unified-Game-Theoretic-Interpretation-of-Adversarial-Robustness.
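For readers unfamiliar with the notion used above, one common formalisation of the order-$m$ interaction between input variables $i$ and $j$ of a network output $v$ in this line of work is given below; the notation is a hedged reconstruction and may differ from the paper's exact definitions:
$$ I^{(m)}(i,j) = \mathbb{E}_{S \subseteq N \setminus \{i,j\},\, |S| = m}\big[\Delta v(i,j,S)\big], \qquad \Delta v(i,j,S) = v(S \cup \{i,j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S), $$
where $N$ is the set of input variables; low-order interactions use small contexts $S$, while high-order interactions use contexts close to the full input.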
null
On the Out-of-distribution Generalization of Probabilistic Image Modelling
https://papers.nips.cc/paper_files/paper/2021/hash/1f88c7c5d7d94ae08bd752aa3d82108b-Abstract.html
Mingtian Zhang, Andi Zhang, Steven McDonagh
https://papers.nips.cc/paper_files/paper/2021/hash/1f88c7c5d7d94ae08bd752aa3d82108b-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11914-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1f88c7c5d7d94ae08bd752aa3d82108b-Paper.pdf
https://openreview.net/forum?id=q1yLPNF0UFV
https://papers.nips.cc/paper_files/paper/2021/file/1f88c7c5d7d94ae08bd752aa3d82108b-Supplemental.pdf
Out-of-distribution (OOD) detection and lossless compression constitute two problems that can be solved by the training of probabilistic models on a first dataset with subsequent likelihood evaluation on a second dataset, where data distributions differ. By defining the generalization of probabilistic models in terms of likelihood we show that, in the case of image models, the OOD generalization ability is dominated by local features. This motivates our proposal of a Local Autoregressive model that exclusively models local image features towards improving OOD performance. We apply the proposed model to OOD detection tasks and achieve state-of-the-art unsupervised OOD detection performance without the introduction of additional data. Additionally, we employ our model to build a new lossless image compressor: NeLLoC (Neural Local Lossless Compressor) and report state-of-the-art compression rates and model size.
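A minimal sketch, under assumptions, of the kind of locally constrained autoregressive model the abstract describes: each pixel is predicted from a small causal neighbourhood via masked convolutions, and the resulting negative log-likelihood can serve as an OOD score or a code length. This is a toy illustration, not the NeLLoC implementation, and all class names are hypothetical.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Standard PixelCNN-style causal mask (type 'A' also excludes the centre pixel)."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.register_buffer("mask", torch.ones_like(self.weight))
        _, _, h, w = self.weight.shape
        self.mask[:, :, h // 2, w // 2 + (mask_type == "B"):] = 0
        self.mask[:, :, h // 2 + 1:] = 0

    def forward(self, x):
        self.weight.data *= self.mask
        return super().forward(x)

class LocalAR(nn.Module):
    """Two masked 3x3 layers => a receptive field of only a few nearby pixels."""
    def __init__(self, hidden=32, levels=256):
        super().__init__()
        self.levels = levels
        self.net = nn.Sequential(
            MaskedConv2d("A", 1, hidden, 3, padding=1), nn.ReLU(),
            MaskedConv2d("B", hidden, levels, 3, padding=1),
        )

    def bits_per_dim(self, x):                        # x: (B, 1, H, W) uint8 images
        logits = self.net(x.float() / 255.0)          # (B, levels, H, W)
        nll = F.cross_entropy(logits, x.squeeze(1).long(), reduction="mean")
        return nll / math.log(2.0)
```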
null
Exploiting Local Convergence of Quasi-Newton Methods Globally: Adaptive Sample Size Approach
https://papers.nips.cc/paper_files/paper/2021/hash/1f9b616faddedc02339603f3b37d196c-Abstract.html
Qiujiang Jin, Aryan Mokhtari
https://papers.nips.cc/paper_files/paper/2021/hash/1f9b616faddedc02339603f3b37d196c-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11915-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1f9b616faddedc02339603f3b37d196c-Paper.pdf
https://openreview.net/forum?id=zJynVlnoObx
https://papers.nips.cc/paper_files/paper/2021/file/1f9b616faddedc02339603f3b37d196c-Supplemental.pdf
In this paper, we study the application of quasi-Newton methods for solving empirical risk minimization (ERM) problems defined over a large dataset. Traditional deterministic and stochastic quasi-Newton methods can be executed to solve such problems; however, it is known that their global convergence rate may not be better than first-order methods, and their local superlinear convergence only appears towards the end of the learning process. In this paper, we use an adaptive sample size scheme that exploits the superlinear convergence of quasi-Newton methods globally and throughout the entire learning process. The main idea of the proposed adaptive sample size algorithms is to start with a small subset of data points and solve their corresponding ERM problem within its statistical accuracy, and then enlarge the sample size geometrically and use the optimal solution of the problem corresponding to the smaller set as an initial point for solving the subsequent ERM problem with more samples. We show that if the initial sample size is sufficiently large and we use quasi-Newton methods to solve each subproblem, the subproblems can be solved superlinearly fast (after at most three iterations), as we guarantee that the iterates always stay within a neighborhood in which quasi-Newton methods converge superlinearly. Numerical experiments on various datasets confirm our theoretical results and demonstrate the computational advantages of our method.
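The adaptive sample size loop described above admits a compact sketch; the snippet below is an assumption-laden illustration (logistic-regression ERM, SciPy's L-BFGS as the quasi-Newton solver, hypothetical parameter choices), not the authors' algorithm verbatim:

```python
import numpy as np
from scipy.optimize import minimize

def logistic_loss(w, X, y, reg=1e-3):
    # y is assumed to take values in {-1, +1}
    z = X @ w
    return np.mean(np.logaddexp(0.0, -y * z)) + 0.5 * reg * np.dot(w, w)

def adaptive_sample_size_qn(X, y, n0=128, growth=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    order = rng.permutation(n)
    w, m = np.zeros(d), n0
    while True:
        idx = order[:min(m, n)]
        # Quasi-Newton (L-BFGS) solve on the current subsample, warm-started
        # from the solution of the previous, smaller ERM problem.
        res = minimize(logistic_loss, w, args=(X[idx], y[idx]),
                       method="L-BFGS-B", options={"maxiter": 50})
        w = res.x
        if m >= n:
            return w
        m = int(growth * m)   # enlarge the sample size geometrically
```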
null
PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations
https://papers.nips.cc/paper_files/paper/2021/hash/1f9f9d8ff75205aa73ec83e543d8b571-Abstract.html
Moshe Eliasof, Eldad Haber, Eran Treister
https://papers.nips.cc/paper_files/paper/2021/hash/1f9f9d8ff75205aa73ec83e543d8b571-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11916-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1f9f9d8ff75205aa73ec83e543d8b571-Paper.pdf
https://openreview.net/forum?id=wWtk6GxJB2x
https://papers.nips.cc/paper_files/paper/2021/file/1f9f9d8ff75205aa73ec83e543d8b571-Supplemental.pdf
Graph neural networks are increasingly becoming the go-to approach in various fields such as computer vision, computational biology and chemistry, where data are naturally explained by graphs. However, unlike traditional convolutional neural networks, deep graph networks do not necessarily yield better performance than shallow graph networks. This behavior usually stems from the over-smoothing phenomenon. In this work, we propose a family of architectures to control this behavior by design. Our networks are motivated by numerical methods for solving Partial Differential Equations (PDEs) on manifolds, and as such, their behavior can be explained by similar analysis. Moreover, as we demonstrate using an extensive set of experiments, our PDE-motivated networks can generalize and be effective for various types of problems from different fields. Our architectures obtain results that are better than or on par with the current state of the art for problems that are typically approached using different architectures.
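To illustrate, in a hedged and simplified form, the flavour of a PDE-motivated layer (this is a generic heat-equation step on a graph, not the exact PDE-GCN architecture), one explicit Euler step of feature diffusion looks as follows:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian of an adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def diffusion_step(X, L, W, h=0.1):
    """One explicit Euler step of dX/dt = -L X W: node features are smoothed
    along edges; the step size h and the number of steps give explicit control
    over how much smoothing the network applies."""
    return X - h * (L @ X @ W)
```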
null
Information Directed Reward Learning for Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2021/hash/1fa6269f58898f0e809575c9a48747ef-Abstract.html
David Lindner, Matteo Turchetta, Sebastian Tschiatschek, Kamil Ciosek, Andreas Krause
https://papers.nips.cc/paper_files/paper/2021/hash/1fa6269f58898f0e809575c9a48747ef-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11917-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1fa6269f58898f0e809575c9a48747ef-Paper.pdf
https://openreview.net/forum?id=t5-Mszu1UkO
https://papers.nips.cc/paper_files/paper/2021/file/1fa6269f58898f0e809575c9a48747ef-Supplemental.pdf
For many reinforcement learning (RL) applications, specifying a reward is difficult. In this paper, we consider an RL setting where the agent can obtain information about the reward only by querying an expert that can, for example, evaluate individual states or provide binary preferences over trajectories. From such expensive feedback, we aim to learn a model of the reward function that allows standard RL algorithms to achieve high expected return with as few expert queries as possible. For this purpose, we propose Information Directed Reward Learning (IDRL), which uses a Bayesian model of the reward function and selects queries that maximize the information gain about the difference in return between potentially optimal policies. In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types. Moreover, by shifting the focus from reducing the reward approximation error to improving the policy induced by the reward model, it achieves similar or better performance with significantly fewer queries. We support our findings with extensive evaluations in multiple environments and with different types of queries.
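The query selection principle above has a clean closed form in a simplified setting; the sketch below assumes a Bayesian linear reward model $r(s) = w^\top \phi(s)$ with a Gaussian prior, two fixed candidate policies summarised by their feature expectations, and hypothetical variable names, so it illustrates the idea rather than the IDRL implementation:

```python
import numpy as np

def select_query(Sigma, candidate_feats, mu1, mu2, noise_var=0.25):
    """Pick the query feature x that minimises the posterior variance of
    g = w^T (mu1 - mu2), the return gap between two candidate policies.
    Under a Gaussian model this is equivalent to maximising the information
    gained about that return difference."""
    delta = mu1 - mu2
    best_idx, best_var = None, np.inf
    for i, x in enumerate(candidate_feats):
        s = float(x @ Sigma @ x) + noise_var
        Sigma_post = Sigma - np.outer(Sigma @ x, x @ Sigma) / s  # rank-1 posterior update
        var_gap = float(delta @ Sigma_post @ delta)
        if var_gap < best_var:
            best_idx, best_var = i, var_gap
    return best_idx, best_var
```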
null
SSMF: Shifting Seasonal Matrix Factorization
https://papers.nips.cc/paper_files/paper/2021/hash/1fb2a1c37b18aa4611c3949d6148d0f8-Abstract.html
Koki Kawabata, Siddharth Bhatia, Rui Liu, Mohit Wadhwa, Bryan Hooi
https://papers.nips.cc/paper_files/paper/2021/hash/1fb2a1c37b18aa4611c3949d6148d0f8-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11918-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1fb2a1c37b18aa4611c3949d6148d0f8-Paper.pdf
https://openreview.net/forum?id=AqprMSXI1Wn
https://papers.nips.cc/paper_files/paper/2021/file/1fb2a1c37b18aa4611c3949d6148d0f8-Supplemental.zip
Given taxi-ride counts between departure and destination locations, how can we forecast their future demands? In general, given a data stream of events with seasonal patterns that evolve over time, how can we effectively and efficiently forecast future events? In this paper, we propose the Shifting Seasonal Matrix Factorization (SSMF) approach, which can adaptively learn multiple seasonal patterns (called regimes), as well as switch between them. Our proposed method has the following properties: (a) it accurately forecasts future events by detecting regime shifts in seasonal patterns as the data stream evolves; (b) it works in an online setting, i.e., it processes each observation in constant time and memory; (c) it effectively detects regime shifts without human intervention by using a lossless data compression scheme. We demonstrate that our algorithm outperforms state-of-the-art baseline methods by accurately forecasting upcoming events on three real-world data streams.
null
Associative Memories via Predictive Coding
https://papers.nips.cc/paper_files/paper/2021/hash/1fb36c4ccf88f7e67ead155496f02338-Abstract.html
Tommaso Salvatori, Yuhang Song, Yujian Hong, Lei Sha, Simon Frieder, Zhenghua Xu, Rafal Bogacz, Thomas Lukasiewicz
https://papers.nips.cc/paper_files/paper/2021/hash/1fb36c4ccf88f7e67ead155496f02338-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11919-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1fb36c4ccf88f7e67ead155496f02338-Paper.pdf
https://openreview.net/forum?id=VuzPO_TZHPc
https://papers.nips.cc/paper_files/paper/2021/file/1fb36c4ccf88f7e67ead155496f02338-Supplemental.pdf
Associative memories in the brain receive and store patterns of activity registered by the sensory neurons, and are able to retrieve them when necessary. Due to their importance in human intelligence, computational models of associative memories have been developed for several decades. In this paper, we present a novel neural model for realizing associative memories, which is based on a hierarchical generative network that receives external stimuli via sensory neurons. It is trained using predictive coding, an error-based learning algorithm inspired by information processing in the cortex. To test the model's capabilities, we perform multiple retrieval experiments from both corrupted and incomplete data points. In an extensive comparison, we show that this new model outperforms popular associative memory models, such as autoencoders trained via backpropagation and modern Hopfield networks, in retrieval accuracy and robustness. In particular, in completing partial data points, our model achieves remarkable results on natural image datasets, such as ImageNet, with surprisingly high accuracy, even when only a tiny fraction of the pixels of the original images is presented. Our model provides a plausible framework for studying learning and retrieval of memories in the brain, as it closely mimics the behavior of the hippocampus as a memory index and generative model.
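As a heavily simplified sketch of error-minimisation-based retrieval in this spirit (a single linear generative layer rather than the paper's hierarchical network; all names are illustrative), memories are stored by fitting a generative map and retrieved by running inference on the latent so that prediction errors on the observed pixels are driven down:

```python
import numpy as np

def train_memory(patterns, latent_dim, lr=0.05, steps=500, seed=0):
    """Fit x ~= W h to the stored patterns by alternating error-driven updates."""
    rng = np.random.default_rng(seed)
    X = np.asarray(patterns, dtype=float)               # (num_patterns, dim)
    W = 0.01 * rng.standard_normal((X.shape[1], latent_dim))
    H = 0.01 * rng.standard_normal((X.shape[0], latent_dim))
    for _ in range(steps):
        E = X - H @ W.T                                  # prediction errors
        H += lr * E @ W                                  # inference on the latents
        W += lr * E.T @ H                                # learning of the weights
    return W

def retrieve(W, partial, observed_mask, lr=0.05, steps=500):
    """Complete a partial pattern: infer h from the observed pixels only."""
    h = np.zeros(W.shape[1])
    for _ in range(steps):
        e = (partial - W @ h) * observed_mask            # errors on observed pixels
        h += lr * W.T @ e                                # latent inference
    return W @ h                                         # generative completion
```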
null
Robust and differentially private mean estimation
https://papers.nips.cc/paper_files/paper/2021/hash/1fc5309ccc651bf6b5d22470f67561ea-Abstract.html
Xiyang Liu, Weihao Kong, Sham Kakade, Sewoong Oh
https://papers.nips.cc/paper_files/paper/2021/hash/1fc5309ccc651bf6b5d22470f67561ea-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11920-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1fc5309ccc651bf6b5d22470f67561ea-Paper.pdf
https://openreview.net/forum?id=CuQoImkKkIj
https://papers.nips.cc/paper_files/paper/2021/file/1fc5309ccc651bf6b5d22470f67561ea-Supplemental.pdf
In statistical learning and analysis from shared data, which is increasingly widely adopted in platforms such as federated learning and meta-learning, there are two major concerns: privacy and robustness. Each participating individual should be able to contribute without the fear of leaking one's sensitive information. At the same time, the system should be robust in the presence of malicious participants inserting corrupted data. Recent algorithmic advances in learning from shared data focus on either one of these threats, leaving the system vulnerable to the other. We bridge this gap for the canonical problem of estimating the mean from i.i.d.~samples. We introduce PRIME, which is the first efficient algorithm that achieves both privacy and robustness for a wide range of distributions. We further complement this result with a novel exponential time algorithm that improves the sample complexity of PRIME, achieving a near-optimal guarantee and matching that of a known lower bound for (non-robust) private mean estimation. This proves that there is no extra statistical cost to simultaneously guaranteeing privacy and robustness.
null
Adaptable Agent Populations via a Generative Model of Policies
https://papers.nips.cc/paper_files/paper/2021/hash/1fc8c3d03b0021478a8c9ebdcd457c67-Abstract.html
Kenneth Derek, Phillip Isola
https://papers.nips.cc/paper_files/paper/2021/hash/1fc8c3d03b0021478a8c9ebdcd457c67-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11921-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/1fc8c3d03b0021478a8c9ebdcd457c67-Paper.pdf
https://openreview.net/forum?id=73FeFxePGc
https://papers.nips.cc/paper_files/paper/2021/file/1fc8c3d03b0021478a8c9ebdcd457c67-Supplemental.pdf
In the natural world, life has found innumerable ways to survive and often thrive. Between and even within species, each individual is in some manner unique, and this diversity lends adaptability and robustness to life. In this work, we aim to learn a space of diverse and high-reward policies in a given environment. To this end, we introduce a generative model of policies for reinforcement learning, which maps a low-dimensional latent space to an agent policy space. Our method enables learning an entire population of agent policies, without requiring the use of separate policy parameters. Just as real world populations can adapt and evolve via natural selection, our method is able to adapt to changes in our environment solely by selecting for policies in latent space. We test our generative model’s capabilities in a variety of environments, including an open-ended grid-world and a two-player soccer environment. Code, visualizations, and additional experiments can be found at https://kennyderek.github.io/adap/.
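A minimal sketch of a latent-conditioned policy population and of adaptation by latent selection follows; the network layout, the `evaluate_return` callback, and all hyperparameters are hypothetical stand-ins, not the released ADAP code:

```python
import torch
import torch.nn as nn

class LatentConditionedPolicy(nn.Module):
    """One network, many behaviours: the latent z indexes a policy in the population."""
    def __init__(self, obs_dim, act_dim, z_dim=8, hidden=64):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))       # action logits

def adapt_by_latent_selection(policy, evaluate_return, n_candidates=64):
    """Adapt to a changed environment by searching over latents, not weights."""
    best_z, best_ret = None, float("-inf")
    for _ in range(n_candidates):
        z = torch.randn(policy.z_dim)
        ret = evaluate_return(lambda obs: policy(obs, z))   # roll out in the new env
        if ret > best_ret:
            best_z, best_ret = z, ret
    return best_z, best_ret
```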
null
A No-go Theorem for Robust Acceleration in the Hyperbolic Plane
https://papers.nips.cc/paper_files/paper/2021/hash/201d546992726352471cfea6b0df0a48-Abstract.html
Linus Hamilton, Ankur Moitra
https://papers.nips.cc/paper_files/paper/2021/hash/201d546992726352471cfea6b0df0a48-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11922-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/201d546992726352471cfea6b0df0a48-Paper.pdf
https://openreview.net/forum?id=twz1QqzU0Hp
null
In recent years there has been significant effort to adapt the key tools and ideas in convex optimization to the Riemannian setting. One key challenge has remained: Is there a Nesterov-like accelerated gradient method for geodesically convex functions on a Riemannian manifold? Recent work has given partial answers and the hope was that this ought to be possible. Here we prove that in a noisy setting, there is no analogue of accelerated gradient descent for geodesically convex functions on the hyperbolic plane. Our results apply even when the noise is exponentially small. The key intuition behind our proof is short and simple: In negatively curved spaces, the volume of a ball grows so fast that information about the past gradients is not useful in the future.
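The volume-growth intuition can be made precise with a standard fact about the hyperbolic plane of curvature $-1$; this formula is background knowledge rather than a statement taken from the paper:
$$ \mathrm{Area}(B_r) = 2\pi\left(\cosh r - 1\right) \sim \pi e^{r} \quad (r \to \infty), $$
in contrast with the Euclidean $\pi r^2$, so the region consistent with past, noisy gradient information quickly becomes a vanishing fraction of the ball.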
null
Privately Learning Mixtures of Axis-Aligned Gaussians
https://papers.nips.cc/paper_files/paper/2021/hash/201d7288b4c18a679e48b31c72c30ded-Abstract.html
Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw
https://papers.nips.cc/paper_files/paper/2021/hash/201d7288b4c18a679e48b31c72c30ded-Abstract.html
NIPS 2021
https://papers.nips.cc/paper_files/paper/11923-/bibtex
https://papers.nips.cc/paper_files/paper/2021/file/201d7288b4c18a679e48b31c72c30ded-Paper.pdf
https://openreview.net/forum?id=QbYS4dXH0dD
https://papers.nips.cc/paper_files/paper/2021/file/201d7288b4c18a679e48b31c72c30ded-Supplemental.pdf
We consider the problem of learning multivariate Gaussians under the constraint of approximate differential privacy. We prove that $\widetilde{O}(k^2 d \log^{3/2}(1/\delta) / \alpha^2 \varepsilon)$ samples are sufficient to learn a mixture of $k$ axis-aligned Gaussians in $\mathbb{R}^d$ to within total variation distance $\alpha$ while satisfying $(\varepsilon, \delta)$-differential privacy. This is the first result for privately learning mixtures of unbounded axis-aligned (or even unbounded univariate) Gaussians. If the covariance matrix of each Gaussian is the identity matrix, we show that $\widetilde{O}(kd/\alpha^2 + kd \log(1/\delta) / \alpha \varepsilon)$ samples are sufficient. To prove our results, we design a new technique for privately learning mixture distributions. A class of distributions $\mathcal{F}$ is said to be list-decodable if there is an algorithm that, given "heavily corrupted" samples from $f \in \mathcal{F}$, outputs a list of distributions, one of which approximates $f$. We show that if $\mathcal{F}$ is privately list-decodable, then we can learn mixtures of distributions in $\mathcal{F}$. Finally, we show that axis-aligned Gaussian distributions are privately list-decodable, thereby proving that mixtures of such distributions are privately learnable.
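As a hedged formalisation of the list-decodability notion sketched in prose above (the quantifiers and symbols here are illustrative and may differ from the paper's exact definition): $\mathcal{F}$ is list-decodable if there exist $\alpha, \beta \in (0,1)$, $L \in \mathbb{N}$, and an algorithm $A$ such that
$$ \text{for all } f \in \mathcal{F} \text{ and all } g = \gamma f + (1-\gamma) h \text{ with } \gamma \ge \alpha: \quad A(\text{samples from } g) \text{ outputs } f_1,\dots,f_L \text{ with } \min_{i \le L} d_{\mathrm{TV}}(f_i, f) \le \beta, $$
and it is privately list-decodable if, in addition, $A$ satisfies $(\varepsilon,\delta)$-differential privacy.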
null