title
stringlengths 15
153
| url
stringlengths 97
97
| authors
stringlengths 6
328
| detail_url
stringlengths 97
97
| tags
stringclasses 1
value | Bibtex
stringlengths 54
54
⌀ | Paper
stringlengths 93
93
⌀ | Reviews And Public Comment »
stringlengths 63
65
⌀ | Supplemental
stringlengths 100
100
⌀ | abstract
stringlengths 310
2.42k
⌀ | Supplemental Errata
stringclasses 1
value |
---|---|---|---|---|---|---|---|---|---|---|
Deep Self-Dissimilarities as Powerful Visual Fingerprints | https://papers.nips.cc/paper_files/paper/2021/hash/20479c788fb27378c2c99eadcf207e7f-Abstract.html | Idan Kligvasser, Tamar Shaham, Yuval Bahat, Tomer Michaeli | https://papers.nips.cc/paper_files/paper/2021/hash/20479c788fb27378c2c99eadcf207e7f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11924-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/20479c788fb27378c2c99eadcf207e7f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=R6nFQy2vwQq | https://papers.nips.cc/paper_files/paper/2021/file/20479c788fb27378c2c99eadcf207e7f-Supplemental.pdf | Features extracted from deep layers of classification networks are widely used as image descriptors. Here, we exploit an unexplored property of these features: their internal dissimilarity. While small image patches are known to have similar statistics across image scales, it turns out that the internal distribution of deep features varies distinctively between scales. We show how this deep self dissimilarity (DSD) property can be used as a powerful visual fingerprint. Particularly, we illustrate that full-reference and no-reference image quality measures derived from DSD are highly correlated with human preference. In addition, incorporating DSD as a loss function in training of image restoration networks, leads to results that are at least as photo-realistic as those obtained by GAN based methods, while not requiring adversarial training. | null |
Invariant Causal Imitation Learning for Generalizable Policies | https://papers.nips.cc/paper_files/paper/2021/hash/204904e461002b28511d5880e1c36a0f-Abstract.html | Ioana Bica, Daniel Jarrett, Mihaela van der Schaar | https://papers.nips.cc/paper_files/paper/2021/hash/204904e461002b28511d5880e1c36a0f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11925-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/204904e461002b28511d5880e1c36a0f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=715E7e6j4gU | https://papers.nips.cc/paper_files/paper/2021/file/204904e461002b28511d5880e1c36a0f-Supplemental.pdf | Consider learning an imitation policy on the basis of demonstrated behavior from multiple environments, with an eye towards deployment in an unseen environment. Since the observable features from each setting may be different, directly learning individual policies as mappings from features to actions is prone to spurious correlations---and may not generalize well. However, the expert’s policy is often a function of a shared latent structure underlying those observable features that is invariant across settings. By leveraging data from multiple environments, we propose Invariant Causal Imitation Learning (ICIL), a novel technique in which we learn a feature representation that is invariant across domains, on the basis of which we learn an imitation policy that matches expert behavior. To cope with transition dynamics mismatch, ICIL learns a shared representation of causal features (for all training environments), that is disentangled from the specific representations of noise variables (for each of those environments). Moreover, to ensure that the learned policy matches the observation distribution of the expert's policy, ICIL estimates the energy of the expert's observations and uses a regularization term that minimizes the imitator policy's next state energy. Experimentally, we compare our methods against several benchmarks in control and healthcare tasks and show its effectiveness in learning imitation policies capable of generalizing to unseen environments. | null |
CoAtNet: Marrying Convolution and Attention for All Data Sizes | https://papers.nips.cc/paper_files/paper/2021/hash/20568692db622456cc42a2e853ca21f8-Abstract.html | Zihang Dai, Hanxiao Liu, Quoc V Le, Mingxing Tan | https://papers.nips.cc/paper_files/paper/2021/hash/20568692db622456cc42a2e853ca21f8-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11926-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/20568692db622456cc42a2e853ca21f8-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=dUk5Foj5CLf | https://papers.nips.cc/paper_files/paper/2021/file/20568692db622456cc42a2e853ca21f8-Supplemental.pdf | Transformers have attracted increasing interests in computer vision, but they still fall behind state-of-the-art convolutional networks. In this work, we show that while Transformers tend to have larger model capacity, their generalization can be worse than convolutional networks due to the lack of the right inductive bias. To effectively combine the strengths from both architectures, we present CoAtNets(pronounced "coat" nets), a family of hybrid models built from two key insights: (1) depthwise Convolution and self-Attention can be naturally unified via simple relative attention; (2) vertically stacking convolution layers and attention layers in a principled way is surprisingly effective in improving generalization, capacity and efficiency. Experiments show that our CoAtNets achieve state-of-the-art performance under different resource constraints across various datasets: Without extra data, CoAtNet achieves 86.0% ImageNet top-1 accuracy; When pre-trained with 13M images from ImageNet-21K, our CoAtNet achieves 88.56% top-1 accuracy, matching ViT-huge pre-trained with 300M images from JFT-300M while using 23x less data; Notably, when we further scale up CoAtNet with JFT-3B, it achieves 90.88% top-1 accuracy on ImageNet, establishing a new state-of-the-art result. | null |
Mixed Supervised Object Detection by Transferring Mask Prior and Semantic Similarity | https://papers.nips.cc/paper_files/paper/2021/hash/20885c72ca35d75619d6a378edea9f76-Abstract.html | Yan Liu, Zhijie Zhang, Li Niu, Junjie Chen, Liqing Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/20885c72ca35d75619d6a378edea9f76-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11927-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/20885c72ca35d75619d6a378edea9f76-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=QXDePagJ1X3 | https://papers.nips.cc/paper_files/paper/2021/file/20885c72ca35d75619d6a378edea9f76-Supplemental.pdf | Object detection has achieved promising success, but requires large-scale fully-annotated data, which is time-consuming and labor-extensive. Therefore, we consider object detection with mixed supervision, which learns novel object categories using weak annotations with the help of full annotations of existing base object categories. Previous works using mixed supervision mainly learn the class-agnostic objectness from fully-annotated categories, which can be transferred to upgrade the weak annotations to pseudo full annotations for novel categories. In this paper, we further transfer mask prior and semantic similarity to bridge the gap between novel categories and base categories. Specifically, the ability of using mask prior to help detect objects is learned from base categories and transferred to novel categories. Moreover, the semantic similarity between objects learned from base categories is transferred to denoise the pseudo full annotations for novel categories. Experimental results on three benchmark datasets demonstrate the effectiveness of our method over existing methods. Codes are available at https://github.com/bcmi/TraMaS-Weak-Shot-Object-Detection. | null |
Celebrating Diversity in Shared Multi-Agent Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2021/hash/20aee3a5f4643755a79ee5f6a73050ac-Abstract.html | Chenghao Li, Tonghan Wang, Chengjie Wu, Qianchuan Zhao, Jun Yang, Chongjie Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/20aee3a5f4643755a79ee5f6a73050ac-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11928-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/20aee3a5f4643755a79ee5f6a73050ac-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=CO87OIEOGU8 | https://papers.nips.cc/paper_files/paper/2021/file/20aee3a5f4643755a79ee5f6a73050ac-Supplemental.pdf | Recently, deep multi-agent reinforcement learning (MARL) has shown the promise to solve complex cooperative tasks. Its success is partly because of parameter sharing among agents. However, such sharing may lead agents to behave similarly and limit their coordination capacity. In this paper, we aim to introduce diversity in both optimization and representation of shared multi-agent reinforcement learning. Specifically, we propose an information-theoretical regularization to maximize the mutual information between agents' identities and their trajectories, encouraging extensive exploration and diverse individualized behaviors. In representation, we incorporate agent-specific modules in the shared neural network architecture, which are regularized by L1-norm to promote learning sharing among agents while keeping necessary diversity. Empirical results show that our method achieves state-of-the-art performance on Google Research Football and super hard StarCraft II micromanagement tasks. | null |
Rebounding Bandits for Modeling Satiation Effects | https://papers.nips.cc/paper_files/paper/2021/hash/2109737282d2c2de4fc5534be26c9bb6-Abstract.html | Liu Leqi, Fatma Kilinc Karzan, Zachary Lipton, Alan Montgomery | https://papers.nips.cc/paper_files/paper/2021/hash/2109737282d2c2de4fc5534be26c9bb6-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11929-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2109737282d2c2de4fc5534be26c9bb6-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=pHCuidXEinv | https://papers.nips.cc/paper_files/paper/2021/file/2109737282d2c2de4fc5534be26c9bb6-Supplemental.pdf | Psychological research shows that enjoyment of many goods is subject to satiation, with short-term satisfaction declining after repeated exposures to the same item. Nevertheless, proposed algorithms for powering recommender systems seldom model these dynamics, instead proceeding as though user preferences were fixed in time. In this work, we introduce rebounding bandits, a multi-armed bandit setup, where satiation dynamics are modeled as time-invariant linear dynamical systems. Expected rewards for each arm decline monotonically with consecutive exposures and rebound towards the initial reward whenever that arm is not pulled. Unlike classical bandit algorithms, methods for tackling rebounding bandits must plan ahead and model-based methods rely on estimating the parameters of the satiation dynamics. We characterize the planning problem, showing that the greedy policy is optimal when the arms exhibit identical deterministic dynamics. To address stochastic satiation dynamics with unknown parameters, we propose Explore-Estimate-Plan, an algorithm that pulls arms methodically, estimates the system dynamics, and then plans accordingly. | null |
Sample Complexity of Tree Search Configuration: Cutting Planes and Beyond | https://papers.nips.cc/paper_files/paper/2021/hash/210b7ec74fc9cec6fb8388dbbdaf23f7-Abstract.html | Maria-Florina F. Balcan, Siddharth Prasad, Tuomas Sandholm, Ellen Vitercik | https://papers.nips.cc/paper_files/paper/2021/hash/210b7ec74fc9cec6fb8388dbbdaf23f7-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11930-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/210b7ec74fc9cec6fb8388dbbdaf23f7-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=_OPHJ7nkZoC | https://papers.nips.cc/paper_files/paper/2021/file/210b7ec74fc9cec6fb8388dbbdaf23f7-Supplemental.pdf | Cutting-plane methods have enabled remarkable successes in integer programming over the last few decades. State-of-the-art solvers integrate a myriad of cutting-plane techniques to speed up the underlying tree-search algorithm used to find optimal solutions. In this paper we provide sample complexity bounds for cut-selection in branch-and-cut (B&C). Given a training set of integer programs sampled from an application-specific input distribution and a family of cut selection policies, these guarantees bound the number of samples sufficient to ensure that using any policy in the family, the size of the tree B&C builds on average over the training set is close to the expected size of the tree B&C builds. We first bound the sample complexity of learning cutting planes from the canonical family of Chvátal-Gomory cuts. Our bounds handle any number of waves of any number of cuts and are fine tuned to the magnitudes of the constraint coefficients. Next, we prove sample complexity bounds for more sophisticated cut selection policies that use a combination of scoring rules to choose from a family of cuts. Finally, beyond the realm of cutting planes for integer programming, we develop a general abstraction of tree search that captures key components such as node selection and variable selection. For this abstraction, we bound the sample complexity of learning a good policy for building the search tree. | null |
IQ-Learn: Inverse soft-Q Learning for Imitation | https://papers.nips.cc/paper_files/paper/2021/hash/210f760a89db30aa72ca258a3483cc7f-Abstract.html | Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2021/hash/210f760a89db30aa72ca258a3483cc7f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11931-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/210f760a89db30aa72ca258a3483cc7f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Aeo-xqtb5p | https://papers.nips.cc/paper_files/paper/2021/file/210f760a89db30aa72ca258a3483cc7f-Supplemental.pdf | In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task. However, imitation learning (IL) from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics. Behavioral cloning is a simple method that is widely used due to its simplicity of implementation and stable convergence but doesn't utilize any information involving the environment’s dynamics. Many existing methods that exploit dynamics information are difficult to train in practice due to an adversarial optimization process over reward and policy approximators or biased, high variance gradient estimators. We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function, implicitly representing both reward and policy. On standard benchmarks, the implicitly learned rewards show a high positive correlation with the ground-truth rewards, illustrating our method can also be used for inverse reinforcement learning (IRL). Our method, Inverse soft-Q learning (IQ-Learn) obtains state-of-the-art results in offline and online imitation learning settings, significantly outperforming existing methods both in the number of required environment interactions and scalability in high-dimensional spaces, often by more than 3x. | null |
Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data | https://papers.nips.cc/paper_files/paper/2021/hash/21186d7b1482412ab14f0332b8aee119-Abstract.html | Dongmin Park, Hwanjun Song, Minseok Kim, Jae-Gil Lee | https://papers.nips.cc/paper_files/paper/2021/hash/21186d7b1482412ab14f0332b8aee119-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11932-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21186d7b1482412ab14f0332b8aee119-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=4orlVaC95Bo | https://papers.nips.cc/paper_files/paper/2021/file/21186d7b1482412ab14f0332b8aee119-Supplemental.pdf | A deep neural network (DNN) has achieved great success in many machine learning tasks by virtue of its high expressive power. However, its prediction can be easily biased to undesirable features, which are not essential for solving the target task and are even imperceptible to a human, thereby resulting in poor generalization. Leveraging plenty of undesirable features in out-of-distribution (OOD) examples has emerged as a potential solution for de-biasing such features, and a recent study shows that softmax-level calibration of OOD examples can successfully remove the contribution of undesirable features to the last fully-connected layer of a classifier. However, its applicability is confined to the classification task, and its impact on a DNN feature extractor is not properly investigated. In this paper, we propose Taufe, a novel regularizer that deactivates many undesirable features using OOD examples in the feature extraction layer and thus removes the dependency on the task-specific softmax layer. To show the task-agnostic nature of Taufe, we rigorously validate its performance on three tasks, classification, regression, and a mix of them, on CIFAR-10, CIFAR-100, ImageNet, CUB200, and CAR datasets. The results demonstrate that Taufe consistently outperforms the state-of-the-art method as well as the baselines without regularization. | null |
Private Non-smooth ERM and SCO in Subquadratic Steps | https://papers.nips.cc/paper_files/paper/2021/hash/211c1e0b83b9c69fa9c4bdede203c1e3-Abstract.html | Janardhan Kulkarni, Yin Tat Lee, Daogao Liu | https://papers.nips.cc/paper_files/paper/2021/hash/211c1e0b83b9c69fa9c4bdede203c1e3-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11933-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/211c1e0b83b9c69fa9c4bdede203c1e3-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=-16dlERMZkO | https://papers.nips.cc/paper_files/paper/2021/file/211c1e0b83b9c69fa9c4bdede203c1e3-Supplemental.pdf | We study the differentially private Empirical Risk Minimization (ERM) and Stochastic Convex Optimization (SCO) problems for non-smooth convex functions. We get a (nearly) optimal bound on the excess empirical risk for ERM with $O(\frac{N^{3/2}}{d^{1/8}}+ \frac{N^2}{d})$ gradient queries, which is achieved with the help of subsampling and smoothing the function via convolution. Combining this result with the iterative localization technique of Feldman et al. \cite{fkt20}, we achieve the optimal excess population loss for the SCO problem with $O(\min\{N^{5/4}d^{1/8},\frac{ N^{3/2}}{d^{1/8}}\})$ gradient queries.Our work makes progress towards resolving a question raised by Bassily et al. \cite{bfgt20}, giving first algorithms for private SCO with subquadratic steps. In a concurrent work, Asi et al. \cite{afkt21} gave other algorithms for private ERM and SCO with subquadratic steps. | null |
Towards Instance-Optimal Offline Reinforcement Learning with Pessimism | https://papers.nips.cc/paper_files/paper/2021/hash/212ab20dbdf4191cbcdcf015511783f4-Abstract.html | Ming Yin, Yu-Xiang Wang | https://papers.nips.cc/paper_files/paper/2021/hash/212ab20dbdf4191cbcdcf015511783f4-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11934-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/212ab20dbdf4191cbcdcf015511783f4-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=1XwPDFrJObw | https://papers.nips.cc/paper_files/paper/2021/file/212ab20dbdf4191cbcdcf015511783f4-Supplemental.pdf | We study the \emph{offline reinforcement learning} (offline RL) problem, where the goal is to learn a reward-maximizing policy in an unknown \emph{Markov Decision Process} (MDP) using the data coming from a policy $\mu$. In particular, we consider the sample complexity problems of offline RL for the finite horizon MDPs. Prior works derive the information-theoretical lower bounds based on different data-coverage assumptions and their upper bounds are expressed by the covering coefficients which lack the explicit characterization of system quantities. In this work, we analyze the \emph{Adaptive Pessimistic Value Iteration} (APVI) algorithm and derive the suboptimality upper bound that nearly matches\[O\left(\sum_{h=1}^H\sum_{s_h,a_h}d^{\pi^\star}_h(s_h,a_h)\sqrt{\frac{\mathrm{Var}_{P_{s_h,a_h}}{(V^\star_{h+1}+r_h)}}{d^\mu_h(s_h,a_h)}}\sqrt{\frac{1}{n}}\right).\]We also prove an information-theoretical lower bound to show this quantity is required under the weak assumption that $d^\mu_h(s_h,a_h)>0$ if $d^{\pi^\star}_h(s_h,a_h)>0$. Here $\pi^\star$ is a optimal policy, $\mu$ is the behavior policy and $d(s_h,a_h)$ is the marginal state-action probability. We call this adaptive bound the \emph{intrinsic offline reinforcement learning bound} since it directly implies all the existing optimal results: minimax rate under uniform data-coverage assumption, horizon-free setting, single policy concentrability, and the tight problem-dependent results. Later, we extend the result to the \emph{assumption-free} regime (where we make no assumption on $\mu$) and obtain the assumption-free intrinsic bound. Due to its generic form, we believe the intrinsic bound could help illuminate what makes a specific problem hard and reveal the fundamental challenges in offline RL. | null |
Speedy Performance Estimation for Neural Architecture Search | https://papers.nips.cc/paper_files/paper/2021/hash/2130eb640e0a272898a51da41363542d-Abstract.html | Robin Ru, Clare Lyle, Lisa Schut, Miroslav Fil, Mark van der Wilk, Yarin Gal | https://papers.nips.cc/paper_files/paper/2021/hash/2130eb640e0a272898a51da41363542d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11935-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2130eb640e0a272898a51da41363542d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=8V2hZW0d2aS | https://papers.nips.cc/paper_files/paper/2021/file/2130eb640e0a272898a51da41363542d-Supplemental.pdf | Reliable yet efficient evaluation of generalisation performance of a proposed architecture is crucial to the success of neural architecture search (NAS). Traditional approaches face a variety of limitations: training each architecture to completion is prohibitively expensive, early stopped validation accuracy may correlate poorly with fully trained performance, and model-based estimators require large training sets. We instead propose to estimate the final test performance based on a simple measure of training speed. Our estimator is theoretically motivated by the connection between generalisation and training speed, and is also inspired by the reformulation of a PAC-Bayes bound under the Bayesian setting. Our model-free estimator is simple, efficient, and cheap to implement, and does not require hyperparameter-tuning or surrogate training before deployment. We demonstrate on various NAS search spaces that our estimator consistently outperforms other alternatives in achieving better correlation with the true test performance rankings. We further show that our estimator can be easily incorporated into both query-based and one-shot NAS methods to improve the speed or quality of the search. | null |
How Tight Can PAC-Bayes be in the Small Data Regime? | https://papers.nips.cc/paper_files/paper/2021/hash/214cfbe603b7f9f9bc005d5f53f7a1d3-Abstract.html | Andrew Foong, Wessel Bruinsma, David Burt, Richard Turner | https://papers.nips.cc/paper_files/paper/2021/hash/214cfbe603b7f9f9bc005d5f53f7a1d3-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11936-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/214cfbe603b7f9f9bc005d5f53f7a1d3-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=jV5m8NAWb0E | https://papers.nips.cc/paper_files/paper/2021/file/214cfbe603b7f9f9bc005d5f53f7a1d3-Supplemental.pdf | In this paper, we investigate the question: _Given a small number of datapoints, for example $N = 30$, how tight can PAC-Bayes and test set bounds be made?_ For such small datasets, test set bounds adversely affect generalisation performance by withholding data from the training procedure. In this setting, PAC-Bayes bounds are especially attractive, due to their ability to use all the data to simultaneously learn a posterior and bound its generalisation risk. We focus on the case of i.i.d. data with a bounded loss and consider the generic PAC-Bayes theorem of Germain et al. While their theorem is known to recover many existing PAC-Bayes bounds, it is unclear what the tightest bound derivable from their framework is. For a fixed learning algorithm and dataset, we show that the tightest possible bound coincides with a bound considered by Catoni; and, in the more natural case of distributions over datasets, we establish a lower bound on the best bound achievable in expectation. Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior. Moreover, to illustrate how tight these bounds can be, we study synthetic one-dimensional classification tasks in which it is feasible to meta-learn both the prior and the form of the bound to numerically optimise for the tightest bounds possible. We find that in this simple, controlled scenario, PAC-Bayes bounds are competitive with comparable, commonly used Chernoff test set bounds. However, the sharpest test set bounds still lead to better guarantees on the generalisation error than the PAC-Bayes bounds we consider. | null |
Deep Synoptic Monte-Carlo Planning in Reconnaissance Blind Chess | https://papers.nips.cc/paper_files/paper/2021/hash/215a71a12769b056c3c32e7299f1c5ed-Abstract.html | Gregory Clark | https://papers.nips.cc/paper_files/paper/2021/hash/215a71a12769b056c3c32e7299f1c5ed-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11937-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/215a71a12769b056c3c32e7299f1c5ed-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Joy2imuk604 | https://papers.nips.cc/paper_files/paper/2021/file/215a71a12769b056c3c32e7299f1c5ed-Supplemental.pdf | This paper introduces deep synoptic Monte Carlo planning (DSMCP) for large imperfect information games. The algorithm constructs a belief state with an unweighted particle filter and plans via playouts that start at samples drawn from the belief state. The algorithm accounts for uncertainty by performing inference on "synopses," a novel stochastic abstraction of information states. DSMCP is the basis of the program Penumbra, which won the official 2020 reconnaissance blind chess competition versus 33 other programs. This paper also evaluates algorithm variants that incorporate caution, paranoia, and a novel bandit algorithm. Furthermore, it audits the synopsis features used in Penumbra with per-bit saliency statistics. | null |
Dynamic Analysis of Higher-Order Coordination in Neuronal Assemblies via De-Sparsified Orthogonal Matching Pursuit | https://papers.nips.cc/paper_files/paper/2021/hash/2172fde49301047270b2897085e4319d-Abstract.html | Shoutik Mukherjee, Behtash Babadi | https://papers.nips.cc/paper_files/paper/2021/hash/2172fde49301047270b2897085e4319d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11938-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2172fde49301047270b2897085e4319d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=u14Kuxl8fN | https://papers.nips.cc/paper_files/paper/2021/file/2172fde49301047270b2897085e4319d-Supplemental.zip | Coordinated ensemble spiking activity is widely observable in neural recordings and central in the study of population codes, with hypothesized roles including robust stimulus representation, interareal communication of neural information, and learning and memory formation. Model-free measures of synchrony characterize the coherence of pairwise activity, but not higher-order interactions; this limitation is transcended by statistical models of ensemble spiking activity. However, existing model-based analyses often impose assumptions about the relevance of higher-order interactions and require multiple repeated trials in order to characterize dynamics in the correlational structure of ensemble activity. To address these shortcomings, we propose an adaptive greedy filtering algorithm based on a discretized mark point-process model of ensemble spiking and a corresponding precise statistical inference framework to identify significant coordinated higher-order spiking activity. In the course of developing the statistical inference procedures, we also show that confidence intervals can be constructed for greedily estimated parameters. We demonstrate the utility of our proposed methods on simulated neuronal assemblies. Applied to multi-electrode recordings of human cortical ensembles, our proposed methods provide new insights into the dynamics underlying localized population activity during transitions between brain states. | null |
Efficient Training of Retrieval Models using Negative Cache | https://papers.nips.cc/paper_files/paper/2021/hash/2175f8c5cd9604f6b1e576b252d4c86e-Abstract.html | Erik Lindgren, Sashank Reddi, Ruiqi Guo, Sanjiv Kumar | https://papers.nips.cc/paper_files/paper/2021/hash/2175f8c5cd9604f6b1e576b252d4c86e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11939-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2175f8c5cd9604f6b1e576b252d4c86e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=824xC-SgWgU | https://papers.nips.cc/paper_files/paper/2021/file/2175f8c5cd9604f6b1e576b252d4c86e-Supplemental.pdf | Factorized models, such as two tower neural network models, are widely used for scoring (query, document) pairs in information retrieval tasks. These models are typically trained by optimizing the model parameters to score relevant positive" pairs higher than the irrelevantnegative" ones. While a large set of negatives typically improves the model performance, limited computation and memory budgets place constraints on the number of negatives used during training. In this paper, we develop a novel negative sampling technique for accelerating training with softmax cross-entropy loss. By using cached (possibly stale) item embeddings, our technique enables training with a large pool of negatives with reduced memory and computation. We also develop a streaming variant of our algorithm geared towards very large datasets. Furthermore, we establish a theoretical basis for our approach by showing that updating a very small fraction of the cache at each iteration can still ensure fast convergence. Finally, we experimentally validate our approach and show that it is efficient and compares favorably with more complex, state-of-the-art approaches. | null |
Understanding Partial Multi-Label Learning via Mutual Information | https://papers.nips.cc/paper_files/paper/2021/hash/217c0e01c1828e7279051f1b6675745d-Abstract.html | Xiuwen Gong, Dong Yuan, Wei Bao | https://papers.nips.cc/paper_files/paper/2021/hash/217c0e01c1828e7279051f1b6675745d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11940-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/217c0e01c1828e7279051f1b6675745d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=d9FjReQr-q- | https://papers.nips.cc/paper_files/paper/2021/file/217c0e01c1828e7279051f1b6675745d-Supplemental.pdf | To deal with ambiguities in partial multilabel learning (PML), state-of-the-art methods perform disambiguation by identifying ground-truth labels directly. However, there is an essential question:“Can the ground-truth labels be identified precisely?". If yes, “How can the ground-truth labels be found?". This paper provides affirmative answers to these questions. Instead of adopting hand-made heuristic strategy, we propose a novel Mutual Information Label Identification for Partial Multilabel Learning (MILI-PML), which is derived from a clear probabilistic formulation and could be easily interpreted theoretically from the mutual information perspective, as well as naturally incorporates the feature/label relevancy considerations. Extensive experiments on synthetic and real-world datasets clearly demonstrate the superiorities of the proposed MILI-PML. | null |
Environment Generation for Zero-Shot Compositional Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2021/hash/218344619d8fb95d504ccfa11804073f-Abstract.html | Izzeddin Gur, Natasha Jaques, Yingjie Miao, Jongwook Choi, Manoj Tiwari, Honglak Lee, Aleksandra Faust | https://papers.nips.cc/paper_files/paper/2021/hash/218344619d8fb95d504ccfa11804073f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11941-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/218344619d8fb95d504ccfa11804073f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=CeByDMy0YTL | https://papers.nips.cc/paper_files/paper/2021/file/218344619d8fb95d504ccfa11804073f-Supplemental.pdf | Many real-world problems are compositional – solving them requires completing interdependent sub-tasks, either in series or in parallel, that can be represented as a dependency graph. Deep reinforcement learning (RL) agents often struggle to learn such complex tasks due to the long time horizons and sparse rewards. To address this problem, we present Compositional Design of Environments (CoDE), which trains a Generator agent to automatically build a series of compositional tasks tailored to the RL agent’s current skill level. This automatic curriculum not only enables the agent to learn more complex tasks than it could have otherwise, but also selects tasks where the agent’s performance is weak, enhancing its robustness and ability to generalize zero-shot to unseen tasks at test-time. We analyze why current environment generation techniques are insufficient for the problem of generating compositional tasks, and propose a new algorithm that addresses these issues. Our results assess learning and generalization across multiple compositional tasks, including the real-world problem of learning to navigate and interact with web pages. We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing wide-range of complex tasks in those environments. We contribute two new benchmark frameworks for generating compositional tasks, compositional MiniGrid and gMiniWoB for web navigation. CoDE yields 4x higher success rate than the strongest baseline, and demonstrates strong performance of real websites learned on 3500 primitive tasks. | null |
Optimizing Conditional Value-At-Risk of Black-Box Functions | https://papers.nips.cc/paper_files/paper/2021/hash/219ece62fae865562d4510ea501cf349-Abstract.html | Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet | https://papers.nips.cc/paper_files/paper/2021/hash/219ece62fae865562d4510ea501cf349-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11942-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/219ece62fae865562d4510ea501cf349-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Tc6Uk03Te7g | https://papers.nips.cc/paper_files/paper/2021/file/219ece62fae865562d4510ea501cf349-Supplemental.pdf | This paper presents two Bayesian optimization (BO) algorithms with theoretical performance guarantee to maximize the conditional value-at-risk (CVaR) of a black-box function: CV-UCB and CV-TS which are based on the well-established principle of optimism in the face of uncertainty and Thompson sampling, respectively. To achieve this, we develop an upper confidence bound of CVaR and prove the no-regret guarantee of CV-UCB by utilizing an interesting connection between CVaR and value-at-risk (VaR). For CV-TS, though it is straightforwardly performed with Thompson sampling, bounding its Bayesian regret is non-trivial because it requires a tail expectation bound for the distribution of CVaR of a black-box function, which has not been shown in the literature. The performances of both CV-UCB and CV-TS are empirically evaluated in optimizing CVaR of synthetic benchmark functions and simulated real-world optimization problems. | null |
E(n) Equivariant Normalizing Flows | https://papers.nips.cc/paper_files/paper/2021/hash/21b5680d80f75a616096f2e791affac6-Abstract.html | Victor Garcia Satorras, Emiel Hoogeboom, Fabian Fuchs, Ingmar Posner, Max Welling | https://papers.nips.cc/paper_files/paper/2021/hash/21b5680d80f75a616096f2e791affac6-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11943-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21b5680d80f75a616096f2e791affac6-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=N5hQI_RowVA | https://papers.nips.cc/paper_files/paper/2021/file/21b5680d80f75a616096f2e791affac6-Supplemental.pdf | This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs). To construct E-NFs, we take the discriminative E(n) graph neural networks and integrate them as a differential equation to obtain an invertible equivariant function: a continuous-time normalizing flow. We demonstrate that E-NFs considerably outperform baselines and existing methods from the literature on particle systems such as DW4 and LJ13, and on molecules from QM9 in terms of log-likelihood. To the best of our knowledge, this is the first flow that jointly generates molecule features and positions in 3D. | null |
Revitalizing CNN Attention via Transformers in Self-Supervised Visual Representation Learning | https://papers.nips.cc/paper_files/paper/2021/hash/21be992eb8016e541a15953eee90760e-Abstract.html | Chongjian GE, Youwei Liang, YIBING SONG, Jianbo Jiao, Jue Wang, Ping Luo | https://papers.nips.cc/paper_files/paper/2021/hash/21be992eb8016e541a15953eee90760e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11944-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21be992eb8016e541a15953eee90760e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=sRojdWhXJx | https://papers.nips.cc/paper_files/paper/2021/file/21be992eb8016e541a15953eee90760e-Supplemental.zip | Studies on self-supervised visual representation learning (SSL) improve encoder backbones to discriminate training samples without labels. While CNN encoders via SSL achieve comparable recognition performance to those via supervised learning, their network attention is under-explored for further improvement. Motivated by the transformers that explore visual attention effectively in recognition scenarios, we propose a CNN Attention REvitalization (CARE) framework to train attentive CNN encoders guided by transformers in SSL. The proposed CARE framework consists of a CNN stream (C-stream) and a transformer stream (T-stream), where each stream contains two branches. C-stream follows an existing SSL framework with two CNN encoders, two projectors, and a predictor. T-stream contains two transformers, two projectors, and a predictor. T-stream connects to CNN encoders and is in parallel to the remaining C-Stream. During training, we perform SSL in both streams simultaneously and use the T-stream output to supervise C-stream. The features from CNN encoders are modulated in T-stream for visual attention enhancement and become suitable for the SSL scenario. We use these modulated features to supervise C-stream for learning attentive CNN encoders. To this end, we revitalize CNN attention by using transformers as guidance. Experiments on several standard visual recognition benchmarks, including image classification, object detection, and semantic segmentation, show that the proposed CARE framework improves CNN encoder backbones to the state-of-the-art performance. | null |
A Critical Look at the Consistency of Causal Estimation with Deep Latent Variable Models | https://papers.nips.cc/paper_files/paper/2021/hash/21c5bba1dd6aed9ab48c2b34c1a0adde-Abstract.html | Severi Rissanen, Pekka Marttinen | https://papers.nips.cc/paper_files/paper/2021/hash/21c5bba1dd6aed9ab48c2b34c1a0adde-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11945-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21c5bba1dd6aed9ab48c2b34c1a0adde-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=vU96vWPrWL | https://papers.nips.cc/paper_files/paper/2021/file/21c5bba1dd6aed9ab48c2b34c1a0adde-Supplemental.pdf | Using deep latent variable models in causal inference has attracted considerable interest recently, but an essential open question is their ability to yield consistent causal estimates. While they have demonstrated promising results and theory exists on some simple model formulations, we also know that causal effects are not even identifiable in general with latent variables. We investigate this gap between theory and empirical results with analytical considerations and extensive experiments under multiple synthetic and real-world data sets, using the causal effect variational autoencoder (CEVAE) as a case study. While CEVAE seems to work reliably under some simple scenarios, it does not estimate the causal effect correctly with a misspecified latent variable or a complex data distribution, as opposed to its original motivation. Hence, our results show that more attention should be paid to ensuring the correctness of causal estimates with deep latent variable models. | null |
Improving Robustness using Generated Data | https://papers.nips.cc/paper_files/paper/2021/hash/21ca6d0cf2f25c4dbb35d8dc0b679c3f-Abstract.html | Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, Timothy A Mann | https://papers.nips.cc/paper_files/paper/2021/hash/21ca6d0cf2f25c4dbb35d8dc0b679c3f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11946-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21ca6d0cf2f25c4dbb35d8dc0b679c3f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=0NXUSlb6oEu | https://papers.nips.cc/paper_files/paper/2021/file/21ca6d0cf2f25c4dbb35d8dc0b679c3f-Supplemental.pdf | Recent work argues that robust training requires substantially larger datasets than those required for standard classification. On CIFAR-10 and CIFAR-100, this translates into a sizable robust-accuracy gap between models trained solely on data from the original training set and those trained with additional data extracted from the "80 Million Tiny Images" dataset (TI-80M). In this paper, we explore how generative models trained solely on the original training set can be leveraged to artificially increase the size of the original training set and improve adversarial robustness to $\ell_p$ norm-bounded perturbations. We identify the sufficient conditions under which incorporating additional generated data can improve robustness, and demonstrate that it is possible to significantly reduce the robust-accuracy gap to models trained with additional real data. Surprisingly, we even show that even the addition of non-realistic random data (generated by Gaussian sampling) can improve robustness. We evaluate our approach on CIFAR-10, CIFAR-100, SVHN and TinyImageNet against $\ell_\infty$ and $\ell_2$ norm-bounded perturbations of size $\epsilon = 8/255$ and $\epsilon = 128/255$, respectively. We show large absolute improvements in robust accuracy compared to previous state-of-the-art methods. Against $\ell_\infty$ norm-bounded perturbations of size $\epsilon = 8/255$, our models achieve 66.10% and 33.49% robust accuracy on CIFAR-10 and CIFAR-100, respectively (improving upon the state-of-the-art by +8.96% and +3.29%). Against $\ell_2$ norm-bounded perturbations of size $\epsilon = 128/255$, our model achieves 78.31% on CIFAR-10 (+3.81%). These results beat most prior works that use external data. | null |
An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias | https://papers.nips.cc/paper_files/paper/2021/hash/21ce689121e39821d07d04faab328370-Abstract.html | Lu Yu, Krishnakumar Balasubramanian, Stanislav Volgushev, Murat A. Erdogdu | https://papers.nips.cc/paper_files/paper/2021/hash/21ce689121e39821d07d04faab328370-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11947-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21ce689121e39821d07d04faab328370-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=nZnYVf0k0yY | https://papers.nips.cc/paper_files/paper/2021/file/21ce689121e39821d07d04faab328370-Supplemental.pdf | Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning. Algorithmic convergence and statistical estimation rates are well-understood for such problems. However, quantifying the uncertainty associated with the underlying training algorithm is not well-studied in the non-convex setting. In order to address this shortcoming, in this work, we establish an asymptotic normality result for the constant step size stochastic gradient descent (SGD) algorithm---a widely used algorithm in practice. Specifically, based on the relationship between SGD and Markov Chains [DDB19], we show that the average of SGD iterates is asymptotically normally distributed around the expected value of their unique invariant distribution, as long as the non-convex and non-smooth objective function satisfies a dissipativity property. We also characterize the bias between this expected value and the critical points of the objective function under various local regularity conditions. Together, the above two results could be leveraged to construct confidence intervals for non-convex problems that are trained using the SGD algorithm. | null |
Learning to Learn Graph Topologies | https://papers.nips.cc/paper_files/paper/2021/hash/21e4ef94f2a6b23597efabaec584b504-Abstract.html | Xingyue Pu, Tianyue Cao, Xiaoyun Zhang, Xiaowen Dong, Siheng Chen | https://papers.nips.cc/paper_files/paper/2021/hash/21e4ef94f2a6b23597efabaec584b504-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11948-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/21e4ef94f2a6b23597efabaec584b504-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ZqabiikWeyt | https://papers.nips.cc/paper_files/paper/2021/file/21e4ef94f2a6b23597efabaec584b504-Supplemental.zip | Learning a graph topology to reveal the underlying relationship between data entities plays an important role in various machine learning and data analysis tasks. Under the assumption that structured data vary smoothly over a graph, the problem can be formulated as a regularised convex optimisation over a positive semidefinite cone and solved by iterative algorithms. Classic methods require an explicit convex function to reflect generic topological priors, e.g. the $\ell_1$ penalty for enforcing sparsity, which limits the flexibility and expressiveness in learning rich topological structures. We propose to learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O). Specifically, our model first unrolls an iterative primal-dual splitting algorithm into a neural network. The key structural proximal projection is replaced with a variational autoencoder that refines the estimated graph with enhanced topological properties. The model is trained in an end-to-end fashion with pairs of node data and graph samples. Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties. | null |
Invertible Tabular GANs: Killing Two Birds with One Stone for Tabular Data Synthesis | https://papers.nips.cc/paper_files/paper/2021/hash/22456f4b545572855c766df5eefc9832-Abstract.html | JAEHOON LEE, Jihyeon Hyeong, Jinsung Jeon, Noseong Park, Jihoon Cho | https://papers.nips.cc/paper_files/paper/2021/hash/22456f4b545572855c766df5eefc9832-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11949-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/22456f4b545572855c766df5eefc9832-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=tvDBe6K8L5o | https://papers.nips.cc/paper_files/paper/2021/file/22456f4b545572855c766df5eefc9832-Supplemental.pdf | Tabular data synthesis has received wide attention in the literature. This is because available data is often limited, incomplete, or cannot be obtained easily, and data privacy is becoming increasingly important. In this work, we present a generalized GAN framework for tabular synthesis, which combines the adversarial training of GANs and the negative log-density regularization of invertible neural networks. The proposed framework can be used for two distinctive objectives. First, we can further improve the synthesis quality, by decreasing the negative log-density of real records in the process of adversarial training. On the other hand, by increasing the negative log-density of real records, realistic fake records can be synthesized in a way that they are not too much close to real records and reduce the chance of potential information leakage. We conduct experiments with real-world datasets for classification, regression, and privacy attacks. In general, the proposed method demonstrates the best synthesis quality (in terms of task-oriented evaluation metrics, e.g., F1) when decreasing the negative log-density during the adversarial training. If increasing the negative log-density, our experimental results show that the distance between real and fake records increases, enhancing robustness against privacy attacks. | null |
Reducing Collision Checking for Sampling-Based Motion Planning Using Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2021/hash/224e5e49814ca908e58c02e28a0462c1-Abstract.html | Chenning Yu, Sicun Gao | https://papers.nips.cc/paper_files/paper/2021/hash/224e5e49814ca908e58c02e28a0462c1-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11950-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/224e5e49814ca908e58c02e28a0462c1-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=DYpstddnfN | https://papers.nips.cc/paper_files/paper/2021/file/224e5e49814ca908e58c02e28a0462c1-Supplemental.pdf | Sampling-based motion planning is a popular approach in robotics for finding paths in continuous configuration spaces. Checking collision with obstacles is the major computational bottleneck in this process. We propose new learning-based methods for reducing collision checking to accelerate motion planning by training graph neural networks (GNNs) that perform path exploration and path smoothing. Given random geometric graphs (RGGs) generated from batch sampling, the path exploration component iteratively predicts collision-free edges to prioritize their exploration. The path smoothing component then optimizes paths obtained from the exploration stage. The methods benefit from the ability of GNNs of capturing geometric patterns from RGGs through batch sampling and generalize better to unseen environments. Experimental results show that the learned components can significantly reduce collision checking and improve overall planning efficiency in challenging high-dimensional motion planning tasks. | null |
Sample Complexity Bounds for Active Ranking from Multi-wise Comparisons | https://papers.nips.cc/paper_files/paper/2021/hash/22508552d3fc22f867e33e6c56b30b16-Abstract.html | Wenbo Ren, Jia Liu, Ness Shroff | https://papers.nips.cc/paper_files/paper/2021/hash/22508552d3fc22f867e33e6c56b30b16-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11951-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/22508552d3fc22f867e33e6c56b30b16-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=_gG02Imo9jO | https://papers.nips.cc/paper_files/paper/2021/file/22508552d3fc22f867e33e6c56b30b16-Supplemental.pdf | We study the sample complexity (i.e., the number of comparisons needed) bounds for actively ranking a set of $n$ items from multi-wise comparisons. Here, a multi-wise comparison takes $m$ items as input and returns a (noisy) result about the best item (the winner feedback) or the order of these items (the full-ranking feedback). We consider two basic ranking problems: top-$k$ items selection and full ranking. Unlike previous works that study ranking from multi-wise comparisons, in this paper, we do not require any parametric model or assumption and work on the fundamental setting where each comparison returns the correct result with probability $1$ or a certain probability larger than $\frac{1}{2}$. This paper helps understand whether and to what degree utilizing multi-wise comparisons can reduce the sample complexity for the ranking problems compared to ranking from pairwise comparisons. Specifically, under the winner feedback setting, one can reduce the sample complexity for top-$k$ selection up to an $m$ factor and that for full ranking up to a $\log{m}$ factor. Under the full-ranking feedback setting, one can reduce the sample complexity for top-$k$ selection up to an $m$ factor and that for full ranking up to an $m\log{m}$ factor. We also conduct numerical simulations to confirm our theoretical results. | null |
Efficient Bayesian network structure learning via local Markov boundary search | https://papers.nips.cc/paper_files/paper/2021/hash/22722a343513ed45f14905eb07621686-Abstract.html | Ming Gao, Bryon Aragam | https://papers.nips.cc/paper_files/paper/2021/hash/22722a343513ed45f14905eb07621686-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11952-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/22722a343513ed45f14905eb07621686-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=fWLDGNIOhYU | null | We analyze the complexity of learning directed acyclic graphical models from observational data in general settings without specific distributional assumptions. Our approach is information-theoretic and uses a local Markov boundary search procedure in order to recursively construct ancestral sets in the underlying graphical model. Perhaps surprisingly, we show that for certain graph ensembles, a simple forward greedy search algorithm (i.e. without a backward pruning phase) suffices to learn the Markov boundary of each node. This substantially improves the sample complexity, which we show is at most polynomial in the number of nodes. This is then applied to learn the entire graph under a novel identifiability condition that generalizes existing conditions from the literature. As a matter of independent interest, we establish finite-sample guarantees for the problem of recovering Markov boundaries from data. Moreover, we apply our results to the special case of polytrees, for which the assumptions simplify, and provide explicit conditions under which polytrees are identifiable and learnable in polynomial time. We further illustrate the performance of the algorithm, which is easy to implement, in a simulation study. Our approach is general, works for discrete or continuous distributions without distributional assumptions, and as such sheds light on the minimal assumptions required to efficiently learn the structure of directed graphical models from data. | null |
Learning Dynamic Graph Representation of Brain Connectome with Spatio-Temporal Attention | https://papers.nips.cc/paper_files/paper/2021/hash/22785dd2577be2ce28ef79febe80db10-Abstract.html | Byung-Hoon Kim, Jong Chul Ye, Jae-Jin Kim | https://papers.nips.cc/paper_files/paper/2021/hash/22785dd2577be2ce28ef79febe80db10-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11953-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/22785dd2577be2ce28ef79febe80db10-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=X7GEA3KiJiH | https://papers.nips.cc/paper_files/paper/2021/file/22785dd2577be2ce28ef79febe80db10-Supplemental.pdf | Functional connectivity (FC) between regions of the brain can be assessed by the degree of temporal correlation measured with functional neuroimaging modalities. Based on the fact that these connectivities build a network, graph-based approaches for analyzing the brain connectome have provided insights into the functions of the human brain. The development of graph neural networks (GNNs) capable of learning representation from graph structured data has led to increased interest in learning the graph representation of the brain connectome. Although recent attempts to apply GNN to the FC network have shown promising results, there is still a common limitation that they usually do not incorporate the dynamic characteristics of the FC network which fluctuates over time. In addition, a few studies that have attempted to use dynamic FC as an input for the GNN reported a reduction in performance compared to static FC methods, and did not provide temporal explainability. Here, we propose STAGIN, a method for learning dynamic graph representation of the brain connectome with spatio-temporal attention. Specifically, a temporal sequence of brain graphs is input to the STAGIN to obtain the dynamic graph representation, while novel READOUT functions and the Transformer encoder provide spatial and temporal explainability with attention, respectively. Experiments on the HCP-Rest and the HCP-Task datasets demonstrate exceptional performance of our proposed method. Analysis of the spatio-temporal attention also provide concurrent interpretation with the neuroscientific knowledge, which further validates our method. Code is available at https://github.com/egyptdj/stagin | null |
Understanding the Generalization Benefit of Model Invariance from a Data Perspective | https://papers.nips.cc/paper_files/paper/2021/hash/2287c6b8641dd2d21ab050eb9ff795f3-Abstract.html | Sicheng Zhu, Bang An, Furong Huang | https://papers.nips.cc/paper_files/paper/2021/hash/2287c6b8641dd2d21ab050eb9ff795f3-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11954-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2287c6b8641dd2d21ab050eb9ff795f3-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=d87PBvj7LA7 | https://papers.nips.cc/paper_files/paper/2021/file/2287c6b8641dd2d21ab050eb9ff795f3-Supplemental.pdf | Machine learning models that are developed to be invariant under certain types of data transformations have shown improved generalization in practice. However, a principled understanding of why invariance benefits generalization is limited. Given a dataset, there is often no principled way to select "suitable" data transformations under which model invariance guarantees better generalization. This paper studies the generalization benefit of model invariance by introducing the sample cover induced by transformations, i.e., a representative subset of a dataset that can approximately recover the whole dataset using transformations. For any data transformations, we provide refined generalization bounds for invariant models based on the sample cover. We also characterize the "suitability" of a set of data transformations by the sample covering number induced by transformations, i.e., the smallest size of its induced sample covers. We show that we may tighten the generalization bounds for "suitable" transformations that have a small sample covering number. In addition, our proposed sample covering number can be empirically evaluated and thus provides a guidance for selecting transformations to develop model invariance for better generalization. In experiments on multiple datasets, we evaluate sample covering numbers for some commonly used transformations and show that the smaller sample covering number for a set of transformations (e.g., the 3D-view transformation) indicates a smaller gap between the test and training error for invariant models, which verifies our propositions. | null |
Improved Variance-Aware Confidence Sets for Linear Bandits and Linear Mixture MDP | https://papers.nips.cc/paper_files/paper/2021/hash/228bbc2f87caeb21bb7f6949fddcb91d-Abstract.html | Zihan Zhang, Jiaqi Yang, Xiangyang Ji, Simon S. Du | https://papers.nips.cc/paper_files/paper/2021/hash/228bbc2f87caeb21bb7f6949fddcb91d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11955-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/228bbc2f87caeb21bb7f6949fddcb91d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=DMkdzO--w24 | https://papers.nips.cc/paper_files/paper/2021/file/228bbc2f87caeb21bb7f6949fddcb91d-Supplemental.pdf | This paper presents new \emph{variance-aware} confidence sets for linear bandits and linear mixture Markov Decision Processes (MDPs). With the new confidence sets, we obtain the following regret bounds: For linear bandits, we obtain an $\widetilde{O}(\mathrm{poly}(d)\sqrt{1 + \sum_{k=1}^{K}\sigma_k^2})$ data-dependent regret bound, where $d$ is the feature dimension, $K$ is the number of rounds, and $\sigma_k^2$ is the \emph{unknown} variance of the reward at the $k$-th round. This is the first regret bound that only scales with the variance and the dimension but \emph{no explicit polynomial dependency on $K$}. When variances are small, this bound can be significantly smaller than the $\widetilde{\Theta}\left(d\sqrt{K}\right)$ worst-case regret bound. For linear mixture MDPs, we obtain an $\widetilde{O}(\mathrm{poly}(d, \log H)\sqrt{K})$ regret bound, where $d$ is the number of base models, $K$ is the number of episodes, and $H$ is the planning horizon. This is the first regret bound that only scales \emph{logarithmically} with $H$ in the reinforcement learning with linear function approximation setting, thus \emph{exponentially improving} existing results, and resolving an open problem in \citep{zhou2020nearly}. We develop three technical ideas that may be of independent interest: 1) applications of the peeling technique to both the input norm and the variance magnitude, 2) a recursion-based estimator for the variance, and 3) a new convex potential lemma that generalizes the seminal elliptical potential lemma. | null |
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? | https://papers.nips.cc/paper_files/paper/2021/hash/22b1f2e0983160db6f7bb9f62f4dbb39-Abstract.html | Xinshuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, Hanwang Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/22b1f2e0983160db6f7bb9f62f4dbb39-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11956-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/22b1f2e0983160db6f7bb9f62f4dbb39-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=RErCy8dT7u | null | The fine-tuning of pre-trained language models has a great success in many NLP fields. Yet, it is strikingly vulnerable to adversarial examples, e.g., word substitution attacks using only synonyms can easily fool a BERT-based sentiment analysis model. In this paper, we demonstrate that adversarial training, the prevalent defense technique, does not directly fit a conventional fine-tuning scenario, because it suffers severely from catastrophic forgetting: failing to retain the generic and robust linguistic features that have already been captured by the pre-trained model. In this light, we propose Robust Informative Fine-Tuning (RIFT), a novel adversarial fine-tuning method from an information-theoretical perspective. In particular, RIFT encourages an objective model to retain the features learned from the pre-trained model throughout the entire fine-tuning process, whereas a conventional one only uses the pre-trained weights for initialization. Experimental results show that RIFT consistently outperforms the state-of-the-arts on two popular NLP tasks: sentiment analysis and natural language inference, under different attacks across various pre-trained language models. | null |
Recursive Bayesian Networks: Generalising and Unifying Probabilistic Context-Free Grammars and Dynamic Bayesian Networks | https://papers.nips.cc/paper_files/paper/2021/hash/22fb0cee7e1f3bde58293de743871417-Abstract.html | Robert Lieck, Martin Rohrmeier | https://papers.nips.cc/paper_files/paper/2021/hash/22fb0cee7e1f3bde58293de743871417-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11957-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/22fb0cee7e1f3bde58293de743871417-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=qdphcA9jEbJ | null | Probabilistic context-free grammars (PCFGs) and dynamic Bayesian networks (DBNs) are widely used sequence models with complementary strengths and limitations. While PCFGs allow for nested hierarchical dependencies (tree structures), their latent variables (non-terminal symbols) have to be discrete. In contrast, DBNs allow for continuous latent variables, but the dependencies are strictly sequential (chain structure). Therefore, neither can be applied if the latent variables are assumed to be continuous and also to have a nested hierarchical dependency structure. In this paper, we present Recursive Bayesian Networks (RBNs), which generalise and unify PCFGs and DBNs, combining their strengths and containing both as special cases. RBNs define a joint distribution over tree-structured Bayesian networks with discrete or continuous latent variables. The main challenge lies in performing joint inference over the exponential number of possible structures and the continuous variables. We provide two solutions: 1) For arbitrary RBNs, we generalise inside and outside probabilities from PCFGs to the mixed discrete-continuous case, which allows for maximum posterior estimates of the continuous latent variables via gradient descent, while marginalising over network structures. 2) For Gaussian RBNs, we additionally derive an analytic approximation of the marginal data likelihood (evidence) and marginal posterior distribution, allowing for robust parameter optimisation and Bayesian inference. The capacity and diverse applications of RBNs are illustrated on two examples: In a quantitative evaluation on synthetic data, we demonstrate and discuss the advantage of RBNs for segmentation and tree induction from noisy sequences, compared to change point detection and hierarchical clustering. In an application to musical data, we approach the unsolved problem of hierarchical music analysis from the raw note level and compare our results to expert annotations. | null |
EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback | https://papers.nips.cc/paper_files/paper/2021/hash/231141b34c82aa95e48810a9d1b33a79-Abstract.html | Peter Richtarik, Igor Sokolov, Ilyas Fatkhullin | https://papers.nips.cc/paper_files/paper/2021/hash/231141b34c82aa95e48810a9d1b33a79-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11958-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/231141b34c82aa95e48810a9d1b33a79-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=8ygF02Zm51q | https://papers.nips.cc/paper_files/paper/2021/file/231141b34c82aa95e48810a9d1b33a79-Supplemental.pdf | Error feedback (EF), also known as error compensation, is an immensely popular convergence stabilization mechanism in the context of distributed training of supervised machine learning models enhanced by the use of contractive communication compression mechanisms, such as Top-$k$. First proposed by Seide et al [2014] as a heuristic, EF resisted any theoretical understanding until recently [Stich et al., 2018, Alistarh et al., 2018]. While these early breakthroughs were followed by a steady stream of works offering various improvements and generalizations, the current theoretical understanding of EF is still very limited. Indeed, to the best of our knowledge, all existing analyses either i) apply to the single node setting only, ii) rely on very strong and often unreasonable assumptions, such as global boundedness of the gradients, or iterate-dependent assumptions that cannot be checked a-priori and may not hold in practice, or iii) circumvent these issues via the introduction of additional unbiased compressors, which increase the communication cost. In this work we fix all these deficiencies by proposing and analyzing a new EF mechanism, which we call EF21, which consistently and substantially outperforms EF in practice. Moreover, our theoretical analysis relies on standard assumptions only, works in the distributed heterogeneous data setting, and leads to better and more meaningful rates. In particular, we prove that EF21 enjoys a fast $\mathcal{O}(1/T)$ convergence rate for smooth nonconvex problems, beating the previous bound of $\mathcal{O}(1/T^{2/3})$, which was shown under a strong bounded gradients assumption. We further improve this to a fast linear rate for Polyak-Lojasiewicz functions, which is the first linear convergence result for an error feedback method not relying on unbiased compressors. Since EF has a large number of applications where it reigns supreme, we believe that our 2021 variant, EF21, will have a large impact on the practice of communication efficient distributed learning. | null |
Mixture weights optimisation for Alpha-Divergence Variational Inference | https://papers.nips.cc/paper_files/paper/2021/hash/233f1dd0f3f537bcb7a338ea74d63483-Abstract.html | Kamélia Daudel, randal douc | https://papers.nips.cc/paper_files/paper/2021/hash/233f1dd0f3f537bcb7a338ea74d63483-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11959-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/233f1dd0f3f537bcb7a338ea74d63483-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=4lXCXb0Ru04 | https://papers.nips.cc/paper_files/paper/2021/file/233f1dd0f3f537bcb7a338ea74d63483-Supplemental.pdf | This paper focuses on $\alpha$-divergence minimisation methods for Variational Inference. More precisely, we are interested in algorithms optimising the mixture weights of any given mixture model, without any information on the underlying distribution of its mixture components parameters. The Power Descent, defined for all $\alpha \neq 1$, is one such algorithm and we establish in our work the full proof of its convergence towards the optimal mixture weights when $\alpha <1$. Since the $\alpha$-divergence recovers the widely-used forward Kullback-Leibler when $\alpha \to 1$, we then extend the Power Descent to the case $\alpha = 1$ and show that we obtain an Entropic Mirror Descent. This leads us to investigate the link between Power Descent and Entropic Mirror Descent: first-order approximations allow us to introduce the R\'{e}nyi Descent, a novel algorithm for which we prove an $O(1/N)$ convergence rate. Lastly, we compare numerically the behavior of the unbiased Power Descent and of the biased R\'{e}nyi Descent and we discuss the potential advantages of one algorithm over the other. | null |
Instance-dependent Label-noise Learning under a Structural Causal Model | https://papers.nips.cc/paper_files/paper/2021/hash/23451391cd1399019fa0421129066bc6-Abstract.html | Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/23451391cd1399019fa0421129066bc6-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11960-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/23451391cd1399019fa0421129066bc6-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=d20KTY2VrNk | https://papers.nips.cc/paper_files/paper/2021/file/23451391cd1399019fa0421129066bc6-Supplemental.pdf | Label noise generally degenerates the performance of deep learning algorithms because deep neural networks easily overfit label errors. Let $X$ and $Y$ denote the instance and clean label, respectively. When $Y$ is a cause of $X$, according to which many datasets have been constructed, e.g., \textit{SVHN} and \textit{CIFAR}, the distributions of $P(X)$ and $P(Y|X)$ are generally entangled. This means that the unsupervised instances are helpful to learn the classifier and thus reduce the side effect of label noise. However, it remains elusive on how to exploit the causal information to handle the label-noise problem. We propose to model and make use of the causal process in order to correct the label-noise effect. Empirically, the proposed method outperforms all state-of-the-art methods on both synthetic and real-world label-noise datasets. | null |
Combining Human Predictions with Model Probabilities via Confusion Matrices and Calibration | https://papers.nips.cc/paper_files/paper/2021/hash/234b941e88b755b7a72a1c1dd5022f30-Abstract.html | Gavin Kerrigan, Padhraic Smyth, Mark Steyvers | https://papers.nips.cc/paper_files/paper/2021/hash/234b941e88b755b7a72a1c1dd5022f30-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11961-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/234b941e88b755b7a72a1c1dd5022f30-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Pkzvd9ONEPr | https://papers.nips.cc/paper_files/paper/2021/file/234b941e88b755b7a72a1c1dd5022f30-Supplemental.pdf | An increasingly common use case for machine learning models is augmenting the abilities of human decision makers. For classification tasks where neither the human nor model are perfectly accurate, a key step in obtaining high performance is combining their individual predictions in a manner that leverages their relative strengths. In this work, we develop a set of algorithms that combine the probabilistic output of a model with the class-level output of a human. We show theoretically that the accuracy of our combination model is driven not only by the individual human and model accuracies, but also by the model's confidence. Empirical results on image classification with CIFAR-10 and a subset of ImageNet demonstrate that such human-model combinations consistently have higher accuracies than the model or human alone, and that the parameters of the combination method can be estimated effectively with as few as ten labeled datapoints. | null |
$\texttt{LeadCache}$: Regret-Optimal Caching in Networks | https://papers.nips.cc/paper_files/paper/2021/hash/2387337ba1e0b0249ba90f55b2ba2521-Abstract.html | Debjit Paria, Abhishek Sinha | https://papers.nips.cc/paper_files/paper/2021/hash/2387337ba1e0b0249ba90f55b2ba2521-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11962-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2387337ba1e0b0249ba90f55b2ba2521-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=yrmvFIh_e5o | https://papers.nips.cc/paper_files/paper/2021/file/2387337ba1e0b0249ba90f55b2ba2521-Supplemental.pdf | We consider an online prediction problem in the context of network caching. Assume that multiple users are connected to several caches via a bipartite network. At any time slot, each user may request an arbitrary file chosen from a large catalog. A user's request at a slot is met if the requested file is cached in at least one of the caches connected to the user. Our objective is to predict, prefetch, and optimally distribute the files on the caches at each slot to maximize the total number of cache hits. The problem is non-trivial due to the non-convex and non-smooth nature of the objective function. In this paper, we propose $\texttt{LeadCache}$ - an efficient online caching policy based on the Follow-the-Perturbed-Leader paradigm. We show that $\texttt{LeadCache}$ is regret-optimal up to a factor of $\tilde{O}(n^{3/8}),$ where $n$ is the number of users. We design two efficient implementations of the $\texttt{LeadCache}$ policy, one based on Pipage rounding and the other based on Madow's sampling, each of which makes precisely one call to an LP-solver per iteration. Furthermore, with a Strong-Law-type assumption, we show that the total number of file fetches under $\texttt{LeadCache}$ remains almost surely finite over an infinite horizon. Finally, we derive an approximately tight regret lower bound using results from graph coloring. We conclude that the learning-based $\texttt{LeadCache}$ policy decisively outperforms the state-of-the-art caching policies both theoretically and empirically. | null |
Probabilistic Attention for Interactive Segmentation | https://papers.nips.cc/paper_files/paper/2021/hash/23937b42f9273974570fb5a56a6652ee-Abstract.html | Prasad Gabbur, Manjot Bilkhu, Javier Movellan | https://papers.nips.cc/paper_files/paper/2021/hash/23937b42f9273974570fb5a56a6652ee-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11963-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/23937b42f9273974570fb5a56a6652ee-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=JpDlWGTBHB | https://papers.nips.cc/paper_files/paper/2021/file/23937b42f9273974570fb5a56a6652ee-Supplemental.pdf | We provide a probabilistic interpretation of attention and show that the standard dot-product attention in transformers is a special case of Maximum A Posteriori (MAP) inference. The proposed approach suggests the use of Expectation Maximization algorithms for on-line adaptation of key and value model parameters. This approach is useful for cases in which external agents, e.g., annotators, provide inference-time information about the correct values of some tokens, e.g., the semantic category of some pixels, and we need for this new information to propagate to other tokens in a principled manner. We illustrate the approach on an interactive semantic segmentation task in which annotators and models collaborate online to improve annotation efficiency. Using standard benchmarks, we observe that key adaptation boosts model performance ($\sim10\%$ mIoU) in the low feedback regime and value propagation improves model responsiveness in the high feedback regime. A PyTorch layer implementation of our probabilistic attention model is available here: https://github.com/apple/ml-probabilistic-attention. | null |
Influence Patterns for Explaining Information Flow in BERT | https://papers.nips.cc/paper_files/paper/2021/hash/239f914f30ea3c948fce2ea07a9efb33-Abstract.html | Kaiji Lu, Zifan Wang, Piotr Mardziel, Anupam Datta | https://papers.nips.cc/paper_files/paper/2021/hash/239f914f30ea3c948fce2ea07a9efb33-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11964-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/239f914f30ea3c948fce2ea07a9efb33-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=FYDE3I9fev0 | https://papers.nips.cc/paper_files/paper/2021/file/239f914f30ea3c948fce2ea07a9efb33-Supplemental.pdf | While attention is all you need may be proving true, we do not know why: attention-based transformer models such as BERT are superior but how information flows from input tokens to output predictions are unclear. We introduce influence patterns, abstractions of sets of paths through a transformer model. Patterns quantify and localize the flow of information to paths passing through a sequence of model nodes. Experimentally, we find that significant portion of information flow in BERT goes through skip connections instead of attention heads. We further show that consistency of patterns across instances is an indicator of BERT’s performance. Finally, we demonstrate that patterns account for far more model performance than previous attention-based and layer-based methods. | null |
Robust Regression Revisited: Acceleration and Improved Estimation Rates | https://papers.nips.cc/paper_files/paper/2021/hash/23b023b22d0bf47626029d5961328028-Abstract.html | Arun Jambulapati, Jerry Li, Tselil Schramm, Kevin Tian | https://papers.nips.cc/paper_files/paper/2021/hash/23b023b22d0bf47626029d5961328028-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11965-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/23b023b22d0bf47626029d5961328028-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=pu6loAVvBZb | https://papers.nips.cc/paper_files/paper/2021/file/23b023b22d0bf47626029d5961328028-Supplemental.pdf | We study fast algorithms for statistical regression problems under the strong contamination model, where the goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples. Prior works in this line of research were based on the \emph{robust gradient descent} framework of \cite{PrasadSBR20}, a first-order method using biased gradient queries, or the \emph{Sever} framework of \cite{DiakonikolasKK019}, an iterative outlier-removal method calling a stationary point finder. We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees compared to the state-of-the-art. For the general case of smooth GLMs (e.g.\ logistic regression), we show that the robust gradient descent framework of \cite{PrasadSBR20} can be \emph{accelerated}, and show our algorithm extends to optimizing the Moreau envelopes of Lipschitz GLMs (e.g.\ support vector machines), answering several open questions in the literature. For the well-studied case of robust linear regression, we present an alternative approach obtaining improved estimation rates over prior nearly-linear time algorithms. Interestingly, our algorithm starts with an identifiability proof introduced in the context of the sum-of-squares algorithm of \cite{BakshiP21}, which achieved optimal error rates while requiring large polynomial runtime and sample complexity. We reinterpret their proof within the Sever framework and obtain a dramatically faster and more sample-efficient algorithm under fewer distributional assumptions. | null |
Automatic Unsupervised Outlier Model Selection | https://papers.nips.cc/paper_files/paper/2021/hash/23c894276a2c5a16470e6a31f4618d73-Abstract.html | Yue Zhao, Ryan Rossi, Leman Akoglu | https://papers.nips.cc/paper_files/paper/2021/hash/23c894276a2c5a16470e6a31f4618d73-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11966-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=KCd-3Pz8VjM | https://papers.nips.cc/paper_files/paper/2021/file/23c894276a2c5a16470e6a31f4618d73-Supplemental.pdf | Given an unsupervised outlier detection task on a new dataset, how can we automatically select a good outlier detection algorithm and its hyperparameter(s) (collectively called a model)? In this work, we tackle the unsupervised outlier model selection (UOMS) problem, and propose MetaOD, a principled, data-driven approach to UOMS based on meta-learning. The UOMS problem is notoriously challenging, as compared to model selection for classification and clustering, since (i) model evaluation is infeasible due to the lack of hold-out data with labels, and (ii) model comparison is infeasible due to the lack of a universal objective function. MetaOD capitalizes on the performances of a large body of detection models on historical outlier detection benchmark datasets, and carries over this prior experience to automatically select an effective model to be employed on a new dataset without any labels, model evaluations or model comparisons. To capture task similarity within our meta-learning framework, we introduce specialized meta-features that quantify outlying characteristics of a dataset. Extensive experiments show that selecting a model by MetaOD significantly outperforms no model selection (e.g. always using the same popular model or the ensemble of many) as well as other meta-learning techniques that we tailored for UOMS. Moreover upon (meta-)training, MetaOD is extremely efficient at test time; selecting from a large pool of 300+ models takes less than 1 second for a new task. We open-source MetaOD and our meta-learning database for practical use and to foster further research on the UOMS problem. | null |
Pruning Randomly Initialized Neural Networks with Iterative Randomization | https://papers.nips.cc/paper_files/paper/2021/hash/23e582ad8087f2c03a5a31c125123f9a-Abstract.html | Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, Tomohiro INOUE | https://papers.nips.cc/paper_files/paper/2021/hash/23e582ad8087f2c03a5a31c125123f9a-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11967-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/23e582ad8087f2c03a5a31c125123f9a-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=QCPY2eMXYs | https://papers.nips.cc/paper_files/paper/2021/file/23e582ad8087f2c03a5a31c125123f9a-Supplemental.pdf | Pruning the weights of randomly initialized neural networks plays an important role in the context of lottery ticket hypothesis. Ramanujan et al. (2020) empirically showed that only pruning the weights can achieve remarkable performance instead of optimizing the weight values. However, to achieve the same level of performance as the weight optimization, the pruning approach requires more parameters in the networks before pruning and thus more memory space. To overcome this parameter inefficiency, we introduce a novel framework to prune randomly initialized neural networks with iteratively randomizing weight values (IteRand). Theoretically, we prove an approximation theorem in our framework, which indicates that the randomizing operations are provably effective to reduce the required number of the parameters. We also empirically demonstrate the parameter efficiency in multiple experiments on CIFAR-10 and ImageNet. | null |
Probing Inter-modality: Visual Parsing with Self-Attention for Vision-and-Language Pre-training | https://papers.nips.cc/paper_files/paper/2021/hash/23fa71cc32babb7b91130824466d25a5-Abstract.html | Hongwei Xue, Yupan Huang, Bei Liu, Houwen Peng, Jianlong Fu, Houqiang Li, Jiebo Luo | https://papers.nips.cc/paper_files/paper/2021/hash/23fa71cc32babb7b91130824466d25a5-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11968-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/23fa71cc32babb7b91130824466d25a5-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=e0nZIFEpmYh | https://papers.nips.cc/paper_files/paper/2021/file/23fa71cc32babb7b91130824466d25a5-Supplemental.pdf | Vision-Language Pre-training (VLP) aims to learn multi-modal representations from image-text pairs and serves for downstream vision-language tasks in a fine-tuning fashion. The dominant VLP models adopt a CNN-Transformer architecture, which embeds images with a CNN, and then aligns images and text with a Transformer. Visual relationship between visual contents plays an important role in image understanding and is the basis for inter-modal alignment learning. However, CNNs have limitations in visual relation learning due to local receptive field's weakness in modeling long-range dependencies. Thus the two objectives of learning visual relation and inter-modal alignment are encapsulated in the same Transformer network. Such design might restrict the inter-modal alignment learning in the Transformer by ignoring the specialized characteristic of each objective. To tackle this, we propose a fully Transformer visual embedding for VLP to better learn visual relation and further promote inter-modal alignment. Specifically, we propose a metric named Inter-Modality Flow (IMF) to measure the interaction between vision and language modalities (i.e., inter-modality). We also design a novel masking optimization mechanism named Masked Feature Regression (MFR) in Transformer to further promote the inter-modality learning. To the best of our knowledge, this is the first study to explore the benefit of Transformer for visual feature learning in VLP. We verify our method on a wide range of vision-language tasks, including Visual Question Answering (VQA), Visual Entailment and Visual Reasoning. Our approach not only outperforms the state-of-the-art VLP performance, but also shows benefits on the IMF metric. | null |
Stability and Generalization of Bilevel Programming in Hyperparameter Optimization | https://papers.nips.cc/paper_files/paper/2021/hash/2406a0a94c80406914ff2f6c9fdd67d5-Abstract.html | Fan Bao, Guoqiang Wu, Chongxuan LI, Jun Zhu, Bo Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/2406a0a94c80406914ff2f6c9fdd67d5-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11969-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2406a0a94c80406914ff2f6c9fdd67d5-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=PvWYUN7t4Tb | https://papers.nips.cc/paper_files/paper/2021/file/2406a0a94c80406914ff2f6c9fdd67d5-Supplemental.pdf | The (gradient-based) bilevel programming framework is widely used in hyperparameter optimization and has achieved excellent performance empirically. Previous theoretical work mainly focuses on its optimization properties, while leaving the analysis on generalization largely open. This paper attempts to address the issue by presenting an expectation bound w.r.t. the validation set based on uniform stability. Our results can explain some mysterious behaviours of the bilevel programming in practice, for instance, overfitting to the validation set. We also present an expectation bound for the classical cross-validation algorithm. Our results suggest that gradient-based algorithms can be better than cross-validation under certain conditions in a theoretical perspective. Furthermore, we prove that regularization terms in both the outer and inner levels can relieve the overfitting problem in gradient-based algorithms. In experiments on feature learning and data reweighting for noisy labels, we corroborate our theoretical findings. | null |
Regime Switching Bandits | https://papers.nips.cc/paper_files/paper/2021/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html | Xiang Zhou, Yi Xiong, Ningyuan Chen, Xuefeng GAO | https://papers.nips.cc/paper_files/paper/2021/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11970-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/240ac9371ec2671ae99847c3ae2e6384-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=3stG49d5VA | https://papers.nips.cc/paper_files/paper/2021/file/240ac9371ec2671ae99847c3ae2e6384-Supplemental.pdf | We study a multi-armed bandit problem where the rewards exhibit regime switching. Specifically, the distributions of the random rewards generated from all arms are modulated by a common underlying state modeled as a finite-state Markov chain. The agent does not observe the underlying state and has to learn the transition matrix and the reward distributions. We propose a learning algorithm for this problem, building on spectral method-of-moments estimations for hidden Markov models, belief error control in partially observable Markov decision processes and upper-confidence-bound methods for online learning. We also establish an upper bound $O(T^{2/3}\sqrt{\log T})$ for the proposed learning algorithm where $T$ is the learning horizon. Finally, we conduct proof-of-concept experiments to illustrate the performance of the learning algorithm. | null |
MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps | https://papers.nips.cc/paper_files/paper/2021/hash/240c945bb72980130446fc2b40fbb8e0-Abstract.html | Awais Muhammad, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li | https://papers.nips.cc/paper_files/paper/2021/hash/240c945bb72980130446fc2b40fbb8e0-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11971-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/240c945bb72980130446fc2b40fbb8e0-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=NrEwQwhPODl | https://papers.nips.cc/paper_files/paper/2021/file/240c945bb72980130446fc2b40fbb8e0-Supplemental.pdf | Deep neural networks are susceptible to adversarially crafted, small, and imperceptible changes in the natural inputs. The most effective defense mechanism against these examples is adversarial training which constructs adversarial examples during training by iterative maximization of loss. The model is then trained to minimize the loss on these constructed examples. This min-max optimization requires more data, larger capacity models, and additional computing resources. It also degrades the standard generalization performance of a model. Can we achieve robustness more efficiently? In this work, we explore this question from the perspective of knowledge transfer. First, we theoretically show the transferability of robustness from an adversarially trained teacher model to a student model with the help of mixup augmentation. Second, we propose a novel robustness transfer method called Mixup-Based Activated Channel Maps (MixACM) Transfer. MixACM transfers robustness from a robust teacher to a student by matching activated channel maps generated without expensive adversarial perturbations. Finally, extensive experiments on multiple datasets and different learning scenarios show our method can transfer robustness while also improving generalization on natural images. | null |
Localization, Convexity, and Star Aggregation | https://papers.nips.cc/paper_files/paper/2021/hash/2417dc8af8570f274e6775d4d60496da-Abstract.html | Suhas Vijaykumar | https://papers.nips.cc/paper_files/paper/2021/hash/2417dc8af8570f274e6775d4d60496da-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11972-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2417dc8af8570f274e6775d4d60496da-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=JRM0Umk6mdC | https://papers.nips.cc/paper_files/paper/2021/file/2417dc8af8570f274e6775d4d60496da-Supplemental.pdf | Offset Rademacher complexities have been shown to provide tight upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that the offset complexity can be generalized to any loss that satisfies a certain general convexity condition. Further, we show that this condition is closely related to both exponential concavity and self-concordance, unifying apparently disparate results. By a novel geometric argument, many of our bounds translate to improper learning in a non-convex class with Audibert's star algorithm. Thus, the offset complexity provides a versatile analytic tool that covers both convex empirical risk minimization and improper learning under entropy conditions. Applying the method, we recover the optimal rates for proper and improper learning with the $p$-loss for $1 < p < \infty$, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models. | null |
Aligning Silhouette Topology for Self-Adaptive 3D Human Pose Recovery | https://papers.nips.cc/paper_files/paper/2021/hash/242c100dc94f871b6d7215b868a875f8-Abstract.html | Ramesha Rakesh Mugaludi, Jogendra Nath Kundu, Varun Jampani, Venkatesh Babu R | https://papers.nips.cc/paper_files/paper/2021/hash/242c100dc94f871b6d7215b868a875f8-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11973-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/242c100dc94f871b6d7215b868a875f8-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=rsNBA9gtDf4 | https://papers.nips.cc/paper_files/paper/2021/file/242c100dc94f871b6d7215b868a875f8-Supplemental.pdf | Articulation-centric 2D/3D pose supervision forms the core training objective in most existing 3D human pose estimation techniques. Except for synthetic source environments, acquiring such rich supervision for each real target domain at deployment is highly inconvenient. However, we realize that standard foreground silhouette estimation techniques (on static camera feeds) remain unaffected by domain-shifts. Motivated by this, we propose a novel target adaptation framework that relies only on silhouette supervision to adapt a source-trained model-based regressor. However, in the absence of any auxiliary cue (multi-view, depth, or 2D pose), an isolated silhouette loss fails to provide a reliable pose-specific gradient and requires to be employed in tandem with a topology-centric loss. To this end, we develop a series of convolution-friendly spatial transformations in order to disentangle a topological-skeleton representation from the raw silhouette. Such a design paves the way to devise a Chamfer-inspired spatial topological-alignment loss via distance field computation, while effectively avoiding any gradient hindering spatial-to-pointset mapping. Experimental results demonstrate our superiority against prior-arts in self-adapting a source trained model to diverse unlabeled target domains, such as a) in-the-wild datasets, b) low-resolution image domains, and c) adversarially perturbed image domains (via UAP). | null |
Self-Adaptable Point Processes with Nonparametric Time Decays | https://papers.nips.cc/paper_files/paper/2021/hash/243facb29564e7b448834a7c9d901201-Abstract.html | Zhimeng Pan, Zheng Wang, Jeff M Phillips, Shandian Zhe | https://papers.nips.cc/paper_files/paper/2021/hash/243facb29564e7b448834a7c9d901201-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11974-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/243facb29564e7b448834a7c9d901201-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=swur4c3YSyF | https://papers.nips.cc/paper_files/paper/2021/file/243facb29564e7b448834a7c9d901201-Supplemental.pdf | Many applications involve multi-type event data. Understanding the complex influences of the events on each other is critical to discover useful knowledge and to predict future events and their types. Existing methods either ignore or partially account for these influences. Recent works use recurrent neural networks to model the event rate. While being highly expressive, they couple all the temporal dependencies in a black-box and can hardly extract meaningful knowledge. More important, most methods assume an exponential time decay of the influence strength, which is over-simplified and can miss many important strength varying patterns. To overcome these limitations, we propose SPRITE, a $\underline{S}$elf-adaptable $\underline{P}$oint p$\underline{R}$ocess w$\underline{I}$th nonparametric $\underline{T}$ime d$\underline{E}$cays, which can decouple the influences between every pair of the events and capture various time decays of the influence strengths. Specifically, we use an embedding to represent each event type and model the event influence as an unknown function of the embeddings and time span. We derive a general construction that can cover all possible time decaying functions. By placing Gaussian process (GP) priors over the latent functions and using Gauss-Legendre quadrature to obtain the integral in the construction, we can flexibly estimate all kinds of time-decaying influences, without restricting to any specific form or imposing derivative constraints that bring learning difficulties. We then use weight space augmentation of GPs to develop an efficient stochastic variational learning algorithm. We show the advantages of our approach in both the ablation study and real-world applications. | null |
Offline Meta Reinforcement Learning -- Identifiability Challenges and Effective Data Collection Strategies | https://papers.nips.cc/paper_files/paper/2021/hash/248024541dbda1d3fd75fe49d1a4df4d-Abstract.html | Ron Dorfman, Idan Shenfeld, Aviv Tamar | https://papers.nips.cc/paper_files/paper/2021/hash/248024541dbda1d3fd75fe49d1a4df4d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11975-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/248024541dbda1d3fd75fe49d1a4df4d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=IBdEfhLveS | https://papers.nips.cc/paper_files/paper/2021/file/248024541dbda1d3fd75fe49d1a4df4d-Supplemental.pdf | Consider the following instance of the Offline Meta Reinforcement Learning (OMRL) problem: given the complete training logs of $N$ conventional RL agents, trained on $N$ different tasks, design a meta-agent that can quickly maximize reward in a new, unseen task from the same task distribution. In particular, while each conventional RL agent explored and exploited its own different task, the meta-agent must identify regularities in the data that lead to effective exploration/exploitation in the unseen task. Here, we take a Bayesian RL (BRL) view, and seek to learn a Bayes-optimal policy from the offline data. Building on the recent VariBAD BRL approach, we develop an off-policy BRL method that learns to plan an exploration strategy based on an adaptive neural belief estimate. However, learning to infer such a belief from offline data brings a new identifiability issue we term MDP ambiguity. We characterize the problem, and suggest resolutions via data collection and modification procedures. Finally, we evaluate our framework on a diverse set of domains, including difficult sparse reward tasks, and demonstrate learning of effective exploration behavior that is qualitatively different from the exploration used by any RL agent in the data. Our code is available online at \url{https://github.com/Rondorf/BOReL}. | null |
RoMA: Robust Model Adaptation for Offline Model-based Optimization | https://papers.nips.cc/paper_files/paper/2021/hash/24b43fb034a10d78bec71274033b4096-Abstract.html | Sihyun Yu, Sungsoo Ahn, Le Song, Jinwoo Shin | https://papers.nips.cc/paper_files/paper/2021/hash/24b43fb034a10d78bec71274033b4096-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11976-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/24b43fb034a10d78bec71274033b4096-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=VH0TRmnqUc | https://papers.nips.cc/paper_files/paper/2021/file/24b43fb034a10d78bec71274033b4096-Supplemental.pdf | We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries. A popular approach to solving this problem is maintaining a proxy model, e.g., a deep neural network (DNN), that approximates the true objective function. Here, the main challenge is how to avoid adversarially optimized inputs during the search, i.e., the inputs where the DNN highly overestimates the true objective function. To handle the issue, we propose a new framework, coined robust model adaptation (RoMA), based on gradient-based optimization of inputs over the DNN. Specifically, it consists of two steps: (a) a pre-training strategy to robustly train the proxy model and (b) a novel adaptation procedure of the proxy model to have robust estimates for a specific set of candidate solutions. At a high level, our scheme utilizes the local smoothness prior to overcome the brittleness of the DNN. Experiments under various tasks show the effectiveness of RoMA compared with previous methods, obtaining state-of-the-art results, e.g., RoMA outperforms all at 4 out of 6 tasks and achieves runner-up results at the remaining tasks. | null |
Flexible Option Learning | https://papers.nips.cc/paper_files/paper/2021/hash/24cceab7ffc1118f5daaace13c670885-Abstract.html | Martin Klissarov, Doina Precup | https://papers.nips.cc/paper_files/paper/2021/hash/24cceab7ffc1118f5daaace13c670885-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11977-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/24cceab7ffc1118f5daaace13c670885-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=L5vbEVIePyb | https://papers.nips.cc/paper_files/paper/2021/file/24cceab7ffc1118f5daaace13c670885-Supplemental.pdf | Temporal abstraction in reinforcement learning (RL) offers the promise of improving generalization and knowledge transfer in complex environments, by propagating information more efficiently over time. Although option learning was initially formulated in a way that allows updating many options simultaneously, using off-policy, intra-option learning (Sutton, Precup & Singh, 1999), many of the recent hierarchical reinforcement learning approaches only update a single option at a time: the option currently executing. We revisit and extend intra-option learning in the context of deep reinforcement learning, in order to enable updating all options consistent with current primitive action choices, without introducing any additional estimates. Our method can therefore be naturally adopted in most hierarchical RL frameworks. When we combine our approach with the option-critic algorithm for option discovery, we obtain significant improvements in performance and data-efficiency across a wide variety of domains. | null |
Faster Directional Convergence of Linear Neural Networks under Spherically Symmetric Data | https://papers.nips.cc/paper_files/paper/2021/hash/24ec8468b67314c2013d215b77034476-Abstract.html | Dachao Lin, Ruoyu Sun, Zhihua Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/24ec8468b67314c2013d215b77034476-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11978-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/24ec8468b67314c2013d215b77034476-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Q9hZdUBTC9S | https://papers.nips.cc/paper_files/paper/2021/file/24ec8468b67314c2013d215b77034476-Supplemental.pdf | In this paper, we study gradient methods for training deep linear neural networks with binary cross-entropy loss. In particular, we show global directional convergence guarantees from a polynomial rate to a linear rate for (deep) linear networks with spherically symmetric data distribution, which can be viewed as a specific zero-margin dataset. Our results do not require the assumptions in other works such as small initial loss, presumed convergence of weight direction, or overparameterization. We also characterize our findings in experiments. | null |
Online Facility Location with Multiple Advice | https://papers.nips.cc/paper_files/paper/2021/hash/250473494b245120a7eaf8b2e6b1f17c-Abstract.html | Matteo Almanza, Flavio Chierichetti, Silvio Lattanzi, Alessandro Panconesi, Giuseppe Re | https://papers.nips.cc/paper_files/paper/2021/hash/250473494b245120a7eaf8b2e6b1f17c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11979-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/250473494b245120a7eaf8b2e6b1f17c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=A9HVNx1J8Pc | https://papers.nips.cc/paper_files/paper/2021/file/250473494b245120a7eaf8b2e6b1f17c-Supplemental.pdf | Clustering is a central topic in unsupervised learning and its online formulation has received a lot of attention in recent years. In this paper, we study the classic facility location problem in the presence of multiple machine-learned advice. We design an algorithm with provable performance guarantees such that, if the advice is good, it outperforms the best-known online algorithms for the problem, and if it is bad it still matches their performance. We complement our theoretical analysis with an in-depth study of the performance of our algorithm, showing its effectiveness on synthetic and real-world data sets. | null |
Credit Assignment in Neural Networks through Deep Feedback Control | https://papers.nips.cc/paper_files/paper/2021/hash/25048eb6a33209cb5a815bff0cf6887c-Abstract.html | Alexander Meulemans, Matilde Tristany Farinha, Javier Garcia Ordonez, Pau Vilimelis Aceituno, João Sacramento, Benjamin F. Grewe | https://papers.nips.cc/paper_files/paper/2021/hash/25048eb6a33209cb5a815bff0cf6887c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11980-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/25048eb6a33209cb5a815bff0cf6887c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=REXvo_lsQS9 | https://papers.nips.cc/paper_files/paper/2021/file/25048eb6a33209cb5a815bff0cf6887c-Supplemental.pdf | The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically-plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical system theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC that we corroborate with detailed results on toy experiments and standard computer-vision benchmarks. | null |
Robust Online Correlation Clustering | https://papers.nips.cc/paper_files/paper/2021/hash/250dd56814ad7c50971ee4020519c6f5-Abstract.html | Silvio Lattanzi, Benjamin Moseley, Sergei Vassilvitskii, Yuyan Wang, Rudy Zhou | https://papers.nips.cc/paper_files/paper/2021/hash/250dd56814ad7c50971ee4020519c6f5-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11981-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/250dd56814ad7c50971ee4020519c6f5-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=6Gn6-oMRvu3 | https://papers.nips.cc/paper_files/paper/2021/file/250dd56814ad7c50971ee4020519c6f5-Supplemental.pdf | In correlation clustering we are given a set of points along with recommendations whether each pair of points should be placed in the same cluster or into separate clusters. The goal is to cluster the points to minimize disagreements from the recommendations. We study the correlation clustering problem in the online setting, where points arrive one at a time, and upon arrival the algorithm must make an irrevocable cluster assignment decision. While the online version is natural, there is a simple lower bound that rules out any algorithm with a non-trivial competitive ratio. In this work we go beyond worst case analysis, and show that the celebrated Pivot algorithm performs well when given access to a small number of random samples from the input. Moreover, we prove that Pivot is robust to additional adversarial perturbations of the sample set in this setting. We conclude with an empirical analysis validating our theoretical findings. | null |
Neural Additive Models: Interpretable Machine Learning with Neural Nets | https://papers.nips.cc/paper_files/paper/2021/hash/251bd0442dfcc53b5a761e050f8022b8-Abstract.html | Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, Rich Caruana, Geoffrey E. Hinton | https://papers.nips.cc/paper_files/paper/2021/hash/251bd0442dfcc53b5a761e050f8022b8-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11982-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/251bd0442dfcc53b5a761e050f8022b8-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=wHkKTW2wrmm | https://papers.nips.cc/paper_files/paper/2021/file/251bd0442dfcc53b5a761e050f8022b8-Supplemental.pdf | Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks. However, their accuracy comes at the cost of intelligibility: it is usually unclear how they make their decisions. This hinders their applicability to high stakes decision-making domains such as healthcare. We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models. NAMs learn a linear combination of neural networks that each attend to a single input feature. These networks are trained jointly and can learn arbitrarily complex relationships between their input feature and the output. Our experiments on regression and classification datasets show that NAMs are more accurate than widely used intelligible models such as logistic regression and shallow decision trees. They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees. To demonstrate this, we show how NAMs can be used for multitask learning on synthetic data and on the COMPAS recidivism data due to their composability, and demonstrate that the differentiability of NAMs allows them to train more complex interpretable models for COVID-19. | null |
Representation Learning for Event-based Visuomotor Policies | https://papers.nips.cc/paper_files/paper/2021/hash/251c5ffd6b62cc21c446c963c76cf214-Abstract.html | Sai Vemprala, Sami Mian, Ashish Kapoor | https://papers.nips.cc/paper_files/paper/2021/hash/251c5ffd6b62cc21c446c963c76cf214-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11983-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/251c5ffd6b62cc21c446c963c76cf214-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=LAwuz_L9U9j | https://papers.nips.cc/paper_files/paper/2021/file/251c5ffd6b62cc21c446c963c76cf214-Supplemental.pdf | Event-based cameras are dynamic vision sensors that provide asynchronous measurements of changes in per-pixel brightness at a microsecond level. This makes them significantly faster than conventional frame-based cameras, and an appealing choice for high-speed robot navigation. While an interesting sensor modality, this asynchronously streamed event data poses a challenge for machine learning based computer vision techniques that are more suited for synchronous, frame-based data. In this paper, we present an event variational autoencoder through which compact representations can be learnt directly from asynchronous spatiotemporal event data. Furthermore, we show that such pretrained representations can be used for event-based reinforcement learning instead of end-to-end reward driven perception. We validate this framework of learning event-based visuomotor policies by applying it to an obstacle avoidance scenario in simulation. Compared to techniques that treat event data as images, we show that representations learnt from event streams result in faster policy training, adapt to different control capacities, and demonstrate a higher degree of robustness to environmental changes and sensor noise. | null |
Kernel Functional Optimisation | https://papers.nips.cc/paper_files/paper/2021/hash/251e16a2aac0ca4847adf561483381bf-Abstract.html | Arun Kumar Anjanapura Venkatesh, Alistair Shilton, Santu Rana, Sunil Gupta, Svetha Venkatesh | https://papers.nips.cc/paper_files/paper/2021/hash/251e16a2aac0ca4847adf561483381bf-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11984-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/251e16a2aac0ca4847adf561483381bf-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=zDtFO9vohmF | https://papers.nips.cc/paper_files/paper/2021/file/251e16a2aac0ca4847adf561483381bf-Supplemental.pdf | Traditional methods for kernel selection rely on parametric kernel functions or a combination thereof and although the kernel hyperparameters are tuned, these methods often provide sub-optimal results due to the limitations induced by the parametric forms. In this paper, we propose a novel formulation for kernel selection using efficient Bayesian optimisation to find the best fitting non-parametric kernel. The kernel is expressed using a linear combination of functions sampled from a prior Gaussian Process (GP) defined by a hyperkernel. We also provide a mechanism to ensure the positive definiteness of the Gram matrix constructed using the resultant kernels. Our experimental results on GP regression and Support Vector Machine (SVM) classification tasks involving both synthetic functions and several real-world datasets show the superiority of our approach over the state-of-the-art. | null |
Generalized Shape Metrics on Neural Representations | https://papers.nips.cc/paper_files/paper/2021/hash/252a3dbaeb32e7690242ad3b556e626b-Abstract.html | Alex H Williams, Erin Kunz, Simon Kornblith, Scott Linderman | https://papers.nips.cc/paper_files/paper/2021/hash/252a3dbaeb32e7690242ad3b556e626b-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11985-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/252a3dbaeb32e7690242ad3b556e626b-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=L9JM-pxQOl | https://papers.nips.cc/paper_files/paper/2021/file/252a3dbaeb32e7690242ad3b556e626b-Supplemental.pdf | Understanding the operation of biological and artificial networks remains a difficult and important challenge. To identify general principles, researchers are increasingly interested in surveying large collections of networks that are trained on, or biologically adapted to, similar tasks. A standardized set of analysis tools is now needed to identify how network-level covariates---such as architecture, anatomical brain region, and model organism---impact neural representations (hidden layer activations). Here, we provide a rigorous foundation for these analyses by defining a broad family of metric spaces that quantify representational dissimilarity. Using this framework, we modify existing representational similarity measures based on canonical correlation analysis and centered kernel alignment to satisfy the triangle inequality, formulate a novel metric that respects the inductive biases in convolutional layers, and identify approximate Euclidean embeddings that enable network representations to be incorporated into essentially any off-the-shelf machine learning method. We demonstrate these methods on large-scale datasets from biology (Allen Institute Brain Observatory) and deep learning (NAS-Bench-101). In doing so, we identify relationships between neural representations that are interpretable in terms of anatomical features and model performance. | null |
Diverse Message Passing for Attribute with Heterophily | https://papers.nips.cc/paper_files/paper/2021/hash/253614bbac999b38b5b60cae531c4969-Abstract.html | Liang Yang, Mengzhe Li, Liyang Liu, bingxin niu, Chuan Wang, Xiaochun Cao, Yuanfang Guo | https://papers.nips.cc/paper_files/paper/2021/hash/253614bbac999b38b5b60cae531c4969-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11986-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/253614bbac999b38b5b60cae531c4969-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=4jPVcKEYpSZ | https://papers.nips.cc/paper_files/paper/2021/file/253614bbac999b38b5b60cae531c4969-Supplemental.zip | Most of the existing GNNs can be modeled via the Uniform Message Passing framework. This framework considers all the attributes of each node in its entirety, shares the uniform propagation weights along each edge, and focuses on the uniform weight learning. The design of this framework possesses two prerequisites, the simplification of homophily and heterophily to the node-level property and the ignorance of attribute differences. Unfortunately, different attributes possess diverse characteristics. In this paper, the network homophily rate defined with respect to the node labels is extended to attribute homophily rate by taking the attributes as weak labels. Based on this attribute homophily rate, we propose a Diverse Message Passing (DMP) framework, which specifies every attribute propagation weight on each edge. Besides, we propose two specific strategies to significantly reduce the computational complexity of DMP to prevent the overfitting issue. By investigating the spectral characteristics, existing spectral GNNs are actually equivalent to a degenerated version of DMP. From the perspective of numerical optimization, we provide a theoretical analysis to demonstrate DMP's powerful representation ability and the ability of alleviating the over-smoothing issue. Evaluations on various real networks demonstrate the superiority of our DMP on handling the networks with heterophily and alleviating the over-smoothing issue, compared to the existing state-of-the-arts. | null |
Towards Robust Bisimulation Metric Learning | https://papers.nips.cc/paper_files/paper/2021/hash/256bf8e6923a52fda8ddf7dc050a1148-Abstract.html | Mete Kemertas, Tristan Aumentado-Armstrong | https://papers.nips.cc/paper_files/paper/2021/hash/256bf8e6923a52fda8ddf7dc050a1148-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11987-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/256bf8e6923a52fda8ddf7dc050a1148-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ySFGlFjgIfN | https://papers.nips.cc/paper_files/paper/2021/file/256bf8e6923a52fda8ddf7dc050a1148-Supplemental.zip | Learned representations in deep reinforcement learning (DRL) have to extract task-relevant information from complex observations, balancing between robustness to distraction and informativeness to the policy. Such stable and rich representations, often learned via modern function approximation techniques, can enable practical application of the policy improvement theorem, even in high-dimensional continuous state-action spaces. Bisimulation metrics offer one solution to this representation learning problem, by collapsing functionally similar states together in representation space, which promotes invariance to noise and distractors. In this work, we generalize value function approximation bounds for on-policy bisimulation metrics to non-optimal policies and approximate environment dynamics. Our theoretical results help us identify embedding pathologies that may occur in practical use. In particular, we find that these issues stem from an underconstrained dynamics model and an unstable dependence of the embedding norm on the reward signal in environments with sparse rewards. Further, we propose a set of practical remedies: (i) a norm constraint on the representation space, and (ii) an extension of prior approaches with intrinsic rewards and latent space regularization. Finally, we provide evidence that the resulting method is not only more robust to sparse reward functions, but also able to solve challenging continuous control tasks with observational distractions, where prior methods fail. | null |
Beyond BatchNorm: Towards a Unified Understanding of Normalization in Deep Learning | https://papers.nips.cc/paper_files/paper/2021/hash/2578eb9cdf020730f77793e8b58e165a-Abstract.html | Ekdeep S Lubana, Robert Dick, Hidenori Tanaka | https://papers.nips.cc/paper_files/paper/2021/hash/2578eb9cdf020730f77793e8b58e165a-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11988-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2578eb9cdf020730f77793e8b58e165a-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=DbxKZvfOIhu | https://papers.nips.cc/paper_files/paper/2021/file/2578eb9cdf020730f77793e8b58e165a-Supplemental.pdf | Inspired by BatchNorm, there has been an explosion of normalization layers in deep learning. Recent works have identified a multitude of beneficial properties in BatchNorm to explain its success. However, given the pursuit of alternative normalization layers, these properties need to be generalized so that any given layer's success/failure can be accurately predicted. In this work, we take a first step towards this goal by extending known properties of BatchNorm in randomly initialized deep neural networks (DNNs) to several recently proposed normalization layers. Our primary findings follow: (i) similar to BatchNorm, activations-based normalization layers can prevent exponential growth of activations in ResNets, but parametric techniques require explicit remedies; (ii) use of GroupNorm can ensure an informative forward propagation, with different samples being assigned dissimilar activations, but increasing group size results in increasingly indistinguishable activations for different samples, explaining slow convergence speed in models with LayerNorm; and (iii) small group sizes result in large gradient norm in earlier layers, hence explaining training instability issues in Instance Normalization and illustrating a speed-stability tradeoff in GroupNorm. Overall, our analysis reveals a unified set of mechanisms that underpin the success of normalization methods in deep learning, providing us with a compass to systematically explore the vast design space of DNN normalization layers. | null |
Representation Learning Beyond Linear Prediction Functions | https://papers.nips.cc/paper_files/paper/2021/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html | Ziping Xu, Ambuj Tewari | https://papers.nips.cc/paper_files/paper/2021/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11989-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/258be18e31c8188555c2ff05b4d542c3-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=bc-f0ZBNker | https://papers.nips.cc/paper_files/paper/2021/file/258be18e31c8188555c2ff05b4d542c3-Supplemental.pdf | Recent papers on the theory of representation learning have shown the importance of a quantity called diversity when generalizing from a set of source tasks to a target task. Most of these papers assume that the function mapping shared representations to predictions is linear, for both source and target tasks. In practice, researchers in deep learning use different numbers of extra layers following the pretrained model based on the difficulty of the new task. This motivates us to ask whether diversity can be achieved when source tasks and the target task use different prediction function spaces beyond linear functions. We show that diversity holds even if the target task uses a neural network with multiple layers, as long as source tasks use linear functions. If source tasks use nonlinear prediction functions, we provide a negative result by showing that depth-1 neural networks with ReLU activation function need exponentially many source tasks to achieve diversity. For a general function class, we find that eluder dimension gives a lower bound on the number of tasks required for diversity. Our theoretical results imply that simpler tasks generalize better. Though our theoretical results are shown for the global minimizer of empirical risks, their qualitative predictions still hold true for gradient-based optimization algorithms as verified by our simulations on deep neural networks. | null
Volume Rendering of Neural Implicit Surfaces | https://papers.nips.cc/paper_files/paper/2021/hash/25e2a30f44898b9f3e978b1786dcd85c-Abstract.html | Lior Yariv, Jiatao Gu, Yoni Kasten, Yaron Lipman | https://papers.nips.cc/paper_files/paper/2021/hash/25e2a30f44898b9f3e978b1786dcd85c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11990-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/25e2a30f44898b9f3e978b1786dcd85c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=GlEWs-V9boR | https://papers.nips.cc/paper_files/paper/2021/file/25e2a30f44898b9f3e978b1786dcd85c-Supplemental.zip | Neural volume rendering became increasingly popular recently due to its success in synthesizing novel views of a scene from a sparse set of input images. So far, the geometry learned by neural volume rendering techniques was modeled using a generic density function. Furthermore, the geometry itself was extracted using an arbitrary level set of the density function leading to a noisy, often low fidelity reconstruction. The goal of this paper is to improve geometry representation and reconstruction in neural volume rendering. We achieve that by modeling the volume density as a function of the geometry. This is in contrast to previous work modeling the geometry as a function of the volume density. In more detail, we define the volume density function as Laplace's cumulative distribution function (CDF) applied to a signed distance function (SDF) representation. This simple density representation has three benefits: (i) it provides a useful inductive bias to the geometry learned in the neural volume rendering process; (ii) it facilitates a bound on the opacity approximation error, leading to an accurate sampling of the viewing ray. Accurate sampling is important to provide a precise coupling of geometry and radiance; and (iii) it allows efficient unsupervised disentanglement of shape and appearance in volume rendering. Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions, outperforming relevant baselines. Furthermore, switching shape and appearance between scenes is possible due to the disentanglement of the two. | null
MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers | https://papers.nips.cc/paper_files/paper/2021/hash/260c2432a0eecc28ce03c10dadc078a4-Abstract.html | Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, Zaid Harchaoui | https://papers.nips.cc/paper_files/paper/2021/hash/260c2432a0eecc28ce03c10dadc078a4-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11991-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/260c2432a0eecc28ce03c10dadc078a4-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Tqx7nJp7PR | https://papers.nips.cc/paper_files/paper/2021/file/260c2432a0eecc28ce03c10dadc078a4-Supplemental.pdf | As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical open problem. We introduce Mauve, a comparison measure for open-ended text generation, which directly compares the learnt distribution from a text generation model to the distribution of human-written text using divergence frontiers. Mauve scales up to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that Mauve identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics. | null |
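As a rough illustration of the divergence-frontier idea above, the sketch below compares two discrete histograms (standing in for cluster assignments of quantized embeddings) by sweeping mixture distributions and summarizing the resulting frontier by an area. The scaling constant, endpoint handling, and the absence of the embedding/clustering step are simplifications of ours, not the official MAUVE procedure.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def frontier_score(p, q, scale=5.0, num_weights=99):
    """Sweep mixtures r = w*p + (1-w)*q and trace the divergence frontier
    (exp(-scale*KL(q||r)), exp(-scale*KL(p||r))); summarize by the area under it."""
    pts = [(0.0, 1.0), (1.0, 0.0)]  # simplified endpoint handling
    for w in np.linspace(0.01, 0.99, num_weights):
        r = w * np.asarray(p, dtype=float) + (1 - w) * np.asarray(q, dtype=float)
        pts.append((np.exp(-scale * kl(q, r)), np.exp(-scale * kl(p, r))))
    pts = np.array(sorted(pts))  # sort by x before integrating
    return float(np.trapz(pts[:, 1], pts[:, 0]))

# Toy usage: histograms over 8 hypothetical embedding clusters.
human = np.array([0.20, 0.15, 0.15, 0.10, 0.10, 0.10, 0.10, 0.10])
close = np.array([0.22, 0.16, 0.14, 0.11, 0.09, 0.09, 0.10, 0.09])
far   = np.array([0.55, 0.20, 0.10, 0.05, 0.04, 0.03, 0.02, 0.01])
print(frontier_score(close, human), frontier_score(far, human))  # higher = closer to human
```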
Accurately Solving Rod Dynamics with Graph Learning | https://papers.nips.cc/paper_files/paper/2021/hash/26337353b7962f533d78c762373b3318-Abstract.html | Han Shao, Tassilo Kugelstadt, Torsten Hädrich, Wojtek Palubicki, Jan Bender, Soeren Pirk, Dominik L Michels | https://papers.nips.cc/paper_files/paper/2021/hash/26337353b7962f533d78c762373b3318-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11992-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26337353b7962f533d78c762373b3318-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=r2uzPR4AYo | https://papers.nips.cc/paper_files/paper/2021/file/26337353b7962f533d78c762373b3318-Supplemental.pdf | Iterative solvers are widely used to accurately simulate physical systems. These solvers require initial guesses to generate a sequence of improving approximate solutions. In this contribution, we introduce a novel method to accelerate iterative solvers for rod dynamics with graph networks (GNs) by predicting the initial guesses to reduce the number of iterations. Unlike existing methods that aim to learn physical systems in an end-to-end manner, our approach guarantees long-term stability and therefore leads to more accurate solutions. Furthermore, our method improves the run time performance of traditional iterative solvers for rod dynamics. To explore our method we make use of position-based dynamics (PBD) as a common solver for physical systems and evaluate it by simulating the dynamics of elastic rods. Our approach is able to generalize across different initial conditions, discretizations, and realistic material properties. We demonstrate that it also performs well when taking discontinuous effects into account such as collisions between individual rods. Finally, to illustrate the scalability of our approach, we simulate complex 3D tree models composed of over a thousand individual branch segments swaying in wind fields. | null |
Limiting fluctuation and trajectorial stability of multilayer neural networks with mean field training | https://papers.nips.cc/paper_files/paper/2021/hash/2639ba2137371773aa1e64e7735cdb30-Abstract.html | Huy Tuan Pham, Phan-Minh Nguyen | https://papers.nips.cc/paper_files/paper/2021/hash/2639ba2137371773aa1e64e7735cdb30-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11993-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2639ba2137371773aa1e64e7735cdb30-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=jg9LM8QItms | https://papers.nips.cc/paper_files/paper/2021/file/2639ba2137371773aa1e64e7735cdb30-Supplemental.pdf | The mean field theory of multilayer neural networks centers around a particular infinite-width scaling, in which the learning dynamics is shown to be closely tracked by the mean field limit. A random fluctuation around this infinite-width limit is expected from a large-width expansion to the next order. This fluctuation has been studied only in the case of shallow networks, where previous works employ heavily technical notions or additional formulation ideas amenable only to that case. Treatment of the multilayer case has been missing, with the chief difficulty in finding a formulation that must capture the stochastic dependency across not only time but also depth. In this work, we initiate the study of the fluctuation in the case of multilayer networks, at any network depth. Leveraging on the neuronal embedding framework recently introduced by Nguyen and Pham, we systematically derive a system of dynamical equations, called the second-order mean field limit, that captures the limiting fluctuation distribution. We demonstrate through the framework the complex interaction among neurons in this second-order mean field limit, the stochasticity with cross-layer dependency and the nonlinear time evolution inherent in the limiting fluctuation. A limit theorem is proven to relate quantitatively this limit to the fluctuation realized by large-width networks. We apply the result to show a stability property of gradient descent mean field training: in the large-width regime, along the training trajectory, it progressively biases towards a solution with "minimal fluctuation" (in fact, vanishing fluctuation) in the learned output function, even after the network has been initialized at or has converged (sufficiently fast) to a global optimum. This extends a similar phenomenon previously shown only for shallow networks with a squared loss in the empirical risk minimization setting, to multilayer networks with a loss function that is not necessarily convex in a more general setting. | null
Medical Dead-ends and Learning to Identify High-Risk States and Treatments | https://papers.nips.cc/paper_files/paper/2021/hash/26405399c51ad7b13b504e74eb7c696c-Abstract.html | Mehdi Fatemi, Taylor W. Killian, Jayakumar Subramanian, Marzyeh Ghassemi | https://papers.nips.cc/paper_files/paper/2021/hash/26405399c51ad7b13b504e74eb7c696c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11994-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26405399c51ad7b13b504e74eb7c696c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=4CRpaV4pYp | https://papers.nips.cc/paper_files/paper/2021/file/26405399c51ad7b13b504e74eb7c696c-Supplemental.pdf | Machine learning has successfully framed many sequential decision making problems as either supervised prediction, or optimal decision-making policy identification via reinforcement learning. In data-constrained offline settings, both approaches may fail as they assume fully optimal behavior or rely on exploring alternatives that may not exist. We introduce an inherently different approach that identifies "dead-ends" of a state space. We focus on patient condition in the intensive care unit, where a "medical dead-end" indicates that a patient will expire, regardless of all potential future treatment sequences. We postulate "treatment security" as avoiding treatments with probability proportional to their chance of leading to dead-ends, present a formal proof, and frame discovery as an RL problem. We then train three independent deep neural models for automated state construction, dead-end discovery and confirmation. Our empirical results discover that dead-ends exist in real clinical data among septic patients, and further reveal gaps between secure treatments and those administered. | null |
Overcoming the Convex Barrier for Simplex Inputs | https://papers.nips.cc/paper_files/paper/2021/hash/26657d5ff9020d2abefe558796b99584-Abstract.html | Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham | https://papers.nips.cc/paper_files/paper/2021/hash/26657d5ff9020d2abefe558796b99584-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11995-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26657d5ff9020d2abefe558796b99584-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=JXREUkyHi7u | https://papers.nips.cc/paper_files/paper/2021/file/26657d5ff9020d2abefe558796b99584-Supplemental.pdf | Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network. Specifically, there now exists a tight relaxation for verifying the robustness of a neural network to $\ell_\infty$ input perturbations, as well as efficient primal and dual solvers for the relaxation. Buoyed by this success, we consider the problem of developing similar techniques for verifying robustness to input perturbations within the probability simplex. We prove a somewhat surprising result that, in this case, not only can one design a tight relaxation that overcomes the convex barrier, but the size of the relaxation remains linear in the number of neurons, thereby leading to simpler and more efficient algorithms. We establish the scalability of our overall approach via the specification of $\ell_1$ robustness for CIFAR-10 and MNIST classification, where our approach improves the state of the art verified accuracy by up to $14.4\%$. Furthermore, we establish its accuracy on a novel and highly challenging task of verifying the robustness of a multi-modal (text and image) classifier to arbitrary changes in its textual input. | null |
High-probability Bounds for Non-Convex Stochastic Optimization with Heavy Tails | https://papers.nips.cc/paper_files/paper/2021/hash/26901debb30ea03f0aa833c9de6b81e9-Abstract.html | Ashok Cutkosky, Harsh Mehta | https://papers.nips.cc/paper_files/paper/2021/hash/26901debb30ea03f0aa833c9de6b81e9-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11996-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26901debb30ea03f0aa833c9de6b81e9-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XeeTWJvAQl | https://papers.nips.cc/paper_files/paper/2021/file/26901debb30ea03f0aa833c9de6b81e9-Supplemental.pdf | We consider non-convex stochastic optimization using first-order algorithms for which the gradient estimates may have heavy tails. We show that a combination of gradient clipping, momentum, and normalized gradient descent yields convergence to critical points in high-probability with best-known rates for smooth losses when the gradients only have bounded $\mathfrak{p}$th moments for some $\mathfrak{p}\in(1,2]$. We then consider the case of second-order smooth losses, which to our knowledge have not been studied in this setting, and again obtain high-probability bounds for any $\mathfrak{p}$. Moreover, our results hold for arbitrary smooth norms, in contrast to the typical SGD analysis which requires a Hilbert space norm. Further, we show that after a suitable "burn-in" period, the objective value will monotonically decrease for every iteration until a critical point is identified, which provides intuition behind the popular practice of learning rate "warm-up'' and also yields a last-iterate guarantee. | null |
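The combination analyzed above (gradient clipping, momentum, and a normalized update) is easy to express as an optimizer loop. The following NumPy sketch runs it on a toy quadratic with heavy-tailed gradient noise; the clipping threshold, momentum coefficient, and step size are arbitrary illustrative choices of ours, not the schedules from the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(w):
    """Gradient of f(w) = 0.5*||w||^2 corrupted by heavy-tailed (Student-t) noise."""
    return w + rng.standard_t(df=1.5, size=w.shape)

def clipped_normalized_momentum_sgd(w0, lr=0.05, beta=0.9, clip=5.0, steps=500):
    w, m = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        g = stochastic_grad(w)
        norm = np.linalg.norm(g)
        if norm > clip:                        # clipping guards against heavy-tailed spikes
            g = g * (clip / norm)
        m = beta * m + (1 - beta) * g          # momentum averages the clipped gradients
        w = w - lr * m / (np.linalg.norm(m) + 1e-12)   # normalized descent step
    return w

w = clipped_normalized_momentum_sgd(np.full(10, 5.0))
print(np.linalg.norm(w))   # should end up much closer to the minimum at 0 than the start
```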
Batch Normalization Orthogonalizes Representations in Deep Random Networks | https://papers.nips.cc/paper_files/paper/2021/hash/26cd8ecadce0d4efd6cc8a8725cbd1f8-Abstract.html | Hadi Daneshmand, Amir Joudaki, Francis Bach | https://papers.nips.cc/paper_files/paper/2021/hash/26cd8ecadce0d4efd6cc8a8725cbd1f8-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11997-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26cd8ecadce0d4efd6cc8a8725cbd1f8-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=_RSgXL8gNnx | https://papers.nips.cc/paper_files/paper/2021/file/26cd8ecadce0d4efd6cc8a8725cbd1f8-Supplemental.pdf | This paper underlines an elegant property of batch-normalization (BN): Successive batch normalizations with random linear updates make samples increasingly orthogonal. We establish a non-asymptotic characterization of the interplay between depth, width, and the orthogonality of deep representations. More precisely, we prove, under a mild assumption, the deviation of the representations from orthogonality rapidly decays with depth up to a term inversely proportional to the network width. This result has two main theoretical and practical implications: 1) Theoretically, as the depth grows, the distribution of the outputs contracts to a Wasserstein-2 ball around an isotropic normal distribution. Furthermore, the radius of this Wasserstein ball shrinks with the width of the network. 2) Practically, the orthogonality of the representations directly influences the performance of stochastic gradient descent (SGD). When representations are initially aligned, we observe SGD wastes many iterations to disentangle representations before the classification. Nevertheless, we experimentally show that starting optimization from orthogonal representations is sufficient to accelerate SGD, with no need for BN. | null |
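The effect described above can be checked numerically: push a batch of nearly parallel inputs through many random linear layers with batch normalization after each, and watch the samples' cosine-similarity (Gram) matrix approach the identity up to a width-dependent floor. The NumPy sketch below is an illustrative experiment of ours; the width, depth, and batch size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm(h, eps=1e-5):
    """Normalize each feature over the batch dimension (no learned scale/shift)."""
    return (h - h.mean(axis=0)) / (h.std(axis=0) + eps)

def gram_deviation(h):
    """Distance of the samples' cosine-similarity matrix from the identity."""
    hn = h / np.linalg.norm(h, axis=1, keepdims=True)
    return np.linalg.norm(hn @ hn.T - np.eye(len(h))) / len(h)

batch, width, depth = 32, 512, 50
base = rng.standard_normal(width)
h = base + 0.01 * rng.standard_normal((batch, width))   # nearly parallel inputs
print(f"input deviation from orthogonality: {gram_deviation(h):.3f}")

for layer in range(depth):
    W = rng.standard_normal((width, width)) / np.sqrt(width)  # random linear update
    h = batch_norm(h @ W.T)
    if layer % 10 == 0:
        print(f"depth {layer:2d}: deviation from orthogonality = {gram_deviation(h):.3f}")
```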
Support vector machines and linear regression coincide with very high-dimensional features | https://papers.nips.cc/paper_files/paper/2021/hash/26d4b4313a7e5828856bc0791fca39a2-Abstract.html | Navid Ardeshir, Clayton Sanford, Daniel J. Hsu | https://papers.nips.cc/paper_files/paper/2021/hash/26d4b4313a7e5828856bc0791fca39a2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11998-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26d4b4313a7e5828856bc0791fca39a2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=9bqxRuRwBlu | https://papers.nips.cc/paper_files/paper/2021/file/26d4b4313a7e5828856bc0791fca39a2-Supplemental.pdf | The support vector machine (SVM) and minimum Euclidean norm least squares regression are two fundamentally different approaches to fitting linear models, but they have recently been connected in models for very high-dimensional data through a phenomenon of support vector proliferation, where every training example used to fit an SVM becomes a support vector. In this paper, we explore the generality of this phenomenon and make the following contributions. First, we prove a super-linear lower bound on the dimension (in terms of sample size) required for support vector proliferation in independent feature models, matching the upper bounds from previous works. We further identify a sharp phase transition in Gaussian feature models, bound the width of this transition, and give experimental support for its universality. Finally, we hypothesize that this phase transition occurs only in much higher-dimensional settings in the $\ell_1$ variant of the SVM, and we present a new geometric characterization of the problem that may elucidate this phenomenon for the general $\ell_p$ case. | null |
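The coincidence studied above is easy to observe empirically. The sketch below, with hypothetical sizes chosen by us, fits a (nearly) hard-margin linear SVM and the minimum-norm least-squares interpolator on the same very high-dimensional Gaussian data, then compares the two directions and counts support vectors.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 30, 20000                               # far more features than samples
X = rng.standard_normal((n, d))
y = rng.choice([-1, 1], size=n)

# Minimum Euclidean-norm least-squares interpolator of the +/-1 labels.
w_ls = np.linalg.pinv(X) @ y

# (Nearly) hard-margin linear SVM via a very large regularization constant C.
svm = SVC(kernel="linear", C=1e8).fit(X, y)
w_svm = svm.coef_.ravel()

cos = w_ls @ w_svm / (np.linalg.norm(w_ls) * np.linalg.norm(w_svm))
print(f"support vectors: {len(svm.support_)} / {n}")   # expect all n in this regime
print(f"cosine(w_svm, w_ls) = {cos:.4f}")              # expect a value close to 1
```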
Coupled Segmentation and Edge Learning via Dynamic Graph Propagation | https://papers.nips.cc/paper_files/paper/2021/hash/26ddd45b02859e836d13d4b9fde34281-Abstract.html | Zhiding Yu, Rui Huang, Wonmin Byeon, Sifei Liu, Guilin Liu, Thomas Breuel, Anima Anandkumar, Jan Kautz | https://papers.nips.cc/paper_files/paper/2021/hash/26ddd45b02859e836d13d4b9fde34281-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11999-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/26ddd45b02859e836d13d4b9fde34281-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=vRwnHlAgK5x | https://papers.nips.cc/paper_files/paper/2021/file/26ddd45b02859e836d13d4b9fde34281-Supplemental.pdf | Image segmentation and edge detection are both central problems in perceptual grouping. It is therefore interesting to study how these two tasks can be coupled to benefit each other. Indeed, segmentation can be easily transformed into contour edges to guide edge learning. However, the converse is nontrivial since general edges may not always form closed contours. In this paper, we propose a principled end-to-end framework for coupled edge and segmentation learning, where edges are leveraged as pairwise similarity cues to guide segmentation. At the core of our framework is a recurrent module termed as dynamic graph propagation (DGP) layer that performs message passing on dynamically constructed graphs. The layer uses learned gating to dynamically select neighbors for message passing using max-pooling. The output from message passing is further gated with an edge signal to refine segmentation. Experiments demonstrate that the proposed framework is able to let both tasks mutually improve each other. On Cityscapes validation, our best model achieves 83.7% mIoU in semantic segmentation and 78.7% maximum F-score in semantic edge detection. Our method also leads to improved zero-shot robustness on Cityscapes with natural corruptions (Cityscapes-C). | null |
Offline RL Without Off-Policy Evaluation | https://papers.nips.cc/paper_files/paper/2021/hash/274a10ffa06e434f2a94df765cac6bf4-Abstract.html | David Brandfonbrener, Will Whitney, Rajesh Ranganath, Joan Bruna | https://papers.nips.cc/paper_files/paper/2021/hash/274a10ffa06e434f2a94df765cac6bf4-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12000-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/274a10ffa06e434f2a94df765cac6bf4-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=LU687itn08w | https://papers.nips.cc/paper_files/paper/2021/file/274a10ffa06e434f2a94df765cac6bf4-Supplemental.pdf | Most prior approaches to offline reinforcement learning (RL) have taken an iterative actor-critic approach involving off-policy evaluation. In this paper we show that simply doing one step of constrained/regularized policy improvement using an on-policy Q estimate of the behavior policy performs surprisingly well. This one-step algorithm beats the previously reported results of iterative algorithms on a large portion of the D4RL benchmark. The one-step baseline achieves this strong performance while being notably simpler and more robust to hyperparameters than previously proposed iterative algorithms. We argue that the relatively poor performance of iterative approaches is a result of the high variance inherent in doing off-policy evaluation and magnified by the repeated optimization of policies against those estimates. In addition, we hypothesize that the strong performance of the one-step algorithm is due to a combination of favorable structure in the environment and behavior policy. | null |
Continuous vs. Discrete Optimization of Deep Neural Networks | https://papers.nips.cc/paper_files/paper/2021/hash/274ad4786c3abca69fa097b85867d9a4-Abstract.html | Omer Elkabetz, Nadav Cohen | https://papers.nips.cc/paper_files/paper/2021/hash/274ad4786c3abca69fa097b85867d9a4-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12001-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/274ad4786c3abca69fa097b85867d9a4-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=iX0TSH45eOd | null | Existing analyses of optimization in deep learning are either continuous, focusing on (variants of) gradient flow, or discrete, directly treating (variants of) gradient descent. Gradient flow is amenable to theoretical analysis, but is stylized and disregards computational efficiency. The extent to which it represents gradient descent is an open question in the theory of deep learning. The current paper studies this question. Viewing gradient descent as an approximate numerical solution to the initial value problem of gradient flow, we find that the degree of approximation depends on the curvature around the gradient flow trajectory. We then show that over deep neural networks with homogeneous activations, gradient flow trajectories enjoy favorable curvature, suggesting they are well approximated by gradient descent. This finding allows us to translate an analysis of gradient flow over deep linear neural networks into a guarantee that gradient descent efficiently converges to global minimum almost surely under random initialization. Experiments suggest that over simple deep neural networks, gradient descent with conventional step size is indeed close to gradient flow. We hypothesize that the theory of gradient flows will unravel mysteries behind deep learning. | null |
CrypTen: Secure Multi-Party Computation Meets Machine Learning | https://papers.nips.cc/paper_files/paper/2021/hash/2754518221cfbc8d25c13a06a4cb8421-Abstract.html | Brian Knott, Shobha Venkataraman, Awni Hannun, Shubho Sengupta, Mark Ibrahim, Laurens van der Maaten | https://papers.nips.cc/paper_files/paper/2021/hash/2754518221cfbc8d25c13a06a4cb8421-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12002-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2754518221cfbc8d25c13a06a4cb8421-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=dwJyEMPZ04I | https://papers.nips.cc/paper_files/paper/2021/file/2754518221cfbc8d25c13a06a4cb8421-Supplemental.pdf | Secure multi-party computation (MPC) allows parties to perform computations on data while keeping that data private. This capability has great potential for machine-learning applications: it facilitates training of machine-learning models on private data sets owned by different parties, evaluation of one party's private model using another party's private data, etc. Although a range of studies implement machine-learning models via secure MPC, such implementations are not yet mainstream. Adoption of secure MPC is hampered by the absence of flexible software frameworks that "speak the language" of machine-learning researchers and engineers. To foster adoption of secure MPC in machine learning, we present CrypTen: a software framework that exposes popular secure MPC primitives via abstractions that are common in modern machine-learning frameworks, such as tensor computations, automatic differentiation, and modular neural networks. This paper describes the design of CrypTen and measures its performance on state-of-the-art models for text classification, speech recognition, and image classification. Our benchmarks show that CrypTen's GPU support and high-performance communication between (an arbitrary number of) parties allows it to perform efficient private evaluation of modern machine-learning models under a semi-honest threat model. For example, two parties using CrypTen can securely predict phonemes in speech recordings using Wav2Letter faster than real-time. We hope that CrypTen will spur adoption of secure MPC in the machine-learning community. | null
Can contrastive learning avoid shortcut solutions? | https://papers.nips.cc/paper_files/paper/2021/hash/27934a1f19d678a1377c257b9a780e80-Abstract.html | Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, Suvrit Sra | https://papers.nips.cc/paper_files/paper/2021/hash/27934a1f19d678a1377c257b9a780e80-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12003-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/27934a1f19d678a1377c257b9a780e80-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ud-WYSo9JSL | https://papers.nips.cc/paper_files/paper/2021/file/27934a1f19d678a1377c257b9a780e80-Supplemental.pdf | The generalization of representations learned via contrastive learning depends crucially on what features of the data are extracted. However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact the performance on downstream tasks via “shortcuts", i.e., by inadvertently suppressing important predictive features. We find that feature extraction is influenced by the difficulty of the so-called instance discrimination task (i.e., the task of discriminating pairs of similar points from pairs of dissimilar ones). Although harder pairs improve the representation of some features, the improvement comes at the cost of suppressing previously well represented features. In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features. Empirically, we observe that IFM reduces feature suppression, and as a result improves performance on vision and medical imaging tasks. | null |
See More for Scene: Pairwise Consistency Learning for Scene Classification | https://papers.nips.cc/paper_files/paper/2021/hash/27d52bcb3580724eb4cbe9f2718a9365-Abstract.html | Gongwei Chen, Xinhang Song, Bohan Wang, Shuqiang Jiang | https://papers.nips.cc/paper_files/paper/2021/hash/27d52bcb3580724eb4cbe9f2718a9365-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12004-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/27d52bcb3580724eb4cbe9f2718a9365-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=NWYlZ5z8Q-R | https://papers.nips.cc/paper_files/paper/2021/file/27d52bcb3580724eb4cbe9f2718a9365-Supplemental.pdf | Scene classification is a valuable classification subtask and has its own characteristics which still need more in-depth studies. Basically, scene characteristics are distributed over the whole image, which creates the need to “see” comprehensive and informative regions. Previous works mainly focus on region discovery and aggregation, while rarely involving the inherent properties of CNN along with its potential ability to satisfy the requirements of scene classification. In this paper, we propose to understand scene images and the scene classification CNN models in terms of the focus area. From this new perspective, we find that large focus area is preferred in scene classification CNN models as a consequence of learning scene characteristics. Meanwhile, the analysis about existing training schemes helps us to understand the effects of focus area, and also raises the question about optimal training method for scene classification. Pursuing the better usage of scene characteristics, we propose a new learning scheme with a tailored loss with the goal of activating a larger focus area on scene images. Since the supervision of the target regions to be enlarged is usually lacking, our alternative learning scheme is to erase already activated area, and allow the CNN models to activate more area during training. The proposed scheme is implemented by keeping the pairwise consistency between the output of the erased image and its original one. In particular, a tailored loss is proposed to keep such pairwise consistency by leveraging category-relevance information. Experiments on Places365 show the significant improvements of our method with various CNNs. Our method shows an inferior result on the object-centric dataset, ImageNet, which experimentally indicates that it captures the unique characteristics of scenes. | null
Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss | https://papers.nips.cc/paper_files/paper/2021/hash/27debb435021eb68b3965290b5e24c49-Abstract.html | Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, Tengyu Ma | https://papers.nips.cc/paper_files/paper/2021/hash/27debb435021eb68b3965290b5e24c49-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12005-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/27debb435021eb68b3965290b5e24c49-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=mjyMGFL8N2 | https://papers.nips.cc/paper_files/paper/2021/file/27debb435021eb68b3965290b5e24c49-Supplemental.zip | Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm, which learns representations by pushing positive pairs, or similar examples from the same class, closer together while keeping negative pairs far apart. Despite the empirical successes, theoretical foundations are limited -- prior analyses assume conditional independence of the positive pairs given the same class label, but recent empirical applications use heavily correlated positive pairs (i.e., data augmentations of the same image). Our work analyzes contrastive learning without assuming conditional independence of positive pairs using a novel concept of the augmentation graph on data. Edges in this graph connect augmentations of the same data, and ground-truth classes naturally form connected sub-graphs. We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective on neural net representations. Minimizing this objective leads to features with provable accuracy guarantees under linear probe evaluation. By standard generalization bounds, these accuracy guarantees also hold when minimizing the training contrastive loss. In all, this work provides the first provable analysis for contrastive learning where the guarantees can apply to realistic empirical settings. | null |
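The loss proposed above has a compact form, roughly L(f) = -2 E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2]. Below is a hypothetical PyTorch estimator on a batch of two augmented views; treating the off-diagonal cross-view pairs as the independent negatives is a simplification of ours and may differ from the authors' exact estimator.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same images.
    Estimates -2 * E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2], with the second
    expectation approximated by off-diagonal cross-view pairs in the batch."""
    batch = z1.shape[0]
    pos = (z1 * z2).sum(dim=1).mean()                 # positive-pair alignment
    sim = z1 @ z2.T                                   # all cross-view inner products
    off_diag = ~torch.eye(batch, device=sim.device).bool()
    neg = (sim[off_diag] ** 2).mean()                 # squared "negative" similarities
    return -2.0 * pos + neg

# Toy usage with random embeddings standing in for an encoder's output.
z1 = torch.randn(64, 128, requires_grad=True)
z2 = torch.randn(64, 128)
loss = spectral_contrastive_loss(z1, z2)
loss.backward()
print(loss.item())
```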
Greedy Approximation Algorithms for Active Sequential Hypothesis Testing | https://papers.nips.cc/paper_files/paper/2021/hash/27e9661e033a73a6ad8cefcde965c54d-Abstract.html | Kyra Gan, Su Jia, Andrew Li | https://papers.nips.cc/paper_files/paper/2021/hash/27e9661e033a73a6ad8cefcde965c54d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12006-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/27e9661e033a73a6ad8cefcde965c54d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XOSrNXGp_qJ | https://papers.nips.cc/paper_files/paper/2021/file/27e9661e033a73a6ad8cefcde965c54d-Supplemental.pdf | In the problem of active sequential hypothesis testing (ASHT), a learner seeks to identify the true hypothesis from among a known set of hypotheses. The learner is given a set of actions and knows the random distribution of the outcome of any action under any true hypothesis. Given a target error $\delta>0$, the goal is to sequentially select the fewest number of actions so as to identify the true hypothesis with probability at least $1 - \delta$. Motivated by applications in which the number of hypotheses or actions is massive (e.g., genomics-based cancer detection), we propose efficient (greedy, in fact) algorithms and provide the first approximation guarantees for ASHT, under two types of adaptivity. Both of our guarantees are independent of the number of actions and logarithmic in the number of hypotheses. We numerically evaluate the performance of our algorithms using both synthetic and real-world DNA mutation data, demonstrating that our algorithms outperform previously proposed heuristic policies by large margins. | null
When False Positive is Intolerant: End-to-End Optimization with Low FPR for Multipartite Ranking | https://papers.nips.cc/paper_files/paper/2021/hash/28267ab848bcf807b2ed53c3a8f8fc8a-Abstract.html | Peisong Wen, Qianqian Xu, Zhiyong Yang, Yuan He, Qingming Huang | https://papers.nips.cc/paper_files/paper/2021/hash/28267ab848bcf807b2ed53c3a8f8fc8a-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12007-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/28267ab848bcf807b2ed53c3a8f8fc8a-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=K_8bE0OQ9vC | https://papers.nips.cc/paper_files/paper/2021/file/28267ab848bcf807b2ed53c3a8f8fc8a-Supplemental.pdf | Multipartite ranking is a basic task in machine learning, where the Area Under the receiver operating characteristics Curve (AUC) is generally applied as the evaluation metric. Despite that AUC reflects the overall performance of the model, it is inconsistent with the expected performance in some application scenarios, where only a low False Positive Rate (FPR) is meaningful. To leverage high performance under low FPRs, we consider an alternative metric for multipartite ranking evaluating the True Positive Rate (TPR) at a given FPR, denoted as TPR@FPR. Unfortunately, the key challenge of direct TPR@FPR optimization is two-fold: a) the original objective function is not differentiable, making gradient backpropagation impossible; b) the loss function could not be written as a sum of independent instance-wise terms, making mini-batch based optimization infeasible. To address these issues, we propose a novel framework on top of the deep learning framework named Cross-Batch Approximation for Multipartite Ranking (CBA-MR). In face of a), we propose a differentiable surrogate optimization problem where the instances having a short-time effect on FPR are rendered with different weights based on the random walk hypothesis. To tackle b), we propose a fast ranking estimation method, where the full-batch loss evaluation is replaced by a delayed update scheme with the help of an embedding cache. Finally, experimental results on four real-world benchmarks are provided to demonstrate the effectiveness of the proposed method. | null
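Evaluating the TPR@FPR metric discussed above is simple, even though optimizing it end-to-end is the hard part the paper addresses. A small sketch of the evaluation (binary case for clarity) using scikit-learn's ROC utilities; the toy data and target FPR values are our own choices.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    """True positive rate achievable at a given false positive rate (TPR@FPR),
    read off an empirical ROC curve by interpolation."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return float(np.interp(target_fpr, fpr, tpr))   # fpr is sorted ascending

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=2000)
scores = y * 1.5 + rng.standard_normal(2000)        # informative but noisy scores
print(tpr_at_fpr(y, scores, target_fpr=0.01), tpr_at_fpr(y, scores, target_fpr=0.10))
```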
Convex Polytope Trees | https://papers.nips.cc/paper_files/paper/2021/hash/285a25c17f351708754cdb6d56f3962e-Abstract.html | Mohammadreza Armandpour, Ali Sadeghian, Mingyuan Zhou | https://papers.nips.cc/paper_files/paper/2021/hash/285a25c17f351708754cdb6d56f3962e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12008-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/285a25c17f351708754cdb6d56f3962e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=MvGKpmPsN7c | https://papers.nips.cc/paper_files/paper/2021/file/285a25c17f351708754cdb6d56f3962e-Supplemental.pdf | A decision tree is commonly restricted to use a single hyperplane to split the covariate space at each of its internal nodes. It often requires a large number of nodes to achieve high accuracy. In this paper, we propose convex polytope trees (CPT) to expand the family of decision trees by an interpretable generalization of their decision boundary. The splitting function at each node of CPT is based on the logical disjunction of a community of differently weighted probabilistic linear decision-makers, which also geometrically corresponds to a convex polytope in the covariate space. We use a nonparametric Bayesian prior at each node to infer the community's size, encouraging simpler decision boundaries by shrinking the number of polytope facets. We develop a greedy method to efficiently construct CPT and scalable end-to-end training algorithms for the tree parameters when the tree structure is given. We empirically demonstrate the efficiency of CPT over existing state-of-the-art decision trees in several real-world classification and regression tasks from diverse domains. | null |
The Skellam Mechanism for Differentially Private Federated Learning | https://papers.nips.cc/paper_files/paper/2021/hash/285baacbdf8fda1de94b19282acd23e2-Abstract.html | Naman Agarwal, Peter Kairouz, Ziyu Liu | https://papers.nips.cc/paper_files/paper/2021/hash/285baacbdf8fda1de94b19282acd23e2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12009-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/285baacbdf8fda1de94b19282acd23e2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=dvyUaK4neD0 | https://papers.nips.cc/paper_files/paper/2021/file/285baacbdf8fda1de94b19282acd23e2-Supplemental.pdf | We introduce the multi-dimensional Skellam mechanism, a discrete differential privacy mechanism based on the difference of two independent Poisson random variables. To quantify its privacy guarantees, we analyze the privacy loss distribution via a numerical evaluation and provide a sharp bound on the Rényi divergence between two shifted Skellam distributions. While useful in both centralized and distributed privacy applications, we investigate how it can be applied in the context of federated learning with secure aggregation under communication constraints. Our theoretical findings and extensive experimental evaluations demonstrate that the Skellam mechanism provides the same privacy-accuracy trade-offs as the continuous Gaussian mechanism, even when the precision is low. More importantly, Skellam is closed under summation and sampling from it only requires sampling from a Poisson distribution -- an efficient routine that ships with all machine learning and data analysis software packages. These features, along with its discrete nature and competitive privacy-accuracy trade-offs, make it an attractive practical alternative to the newly introduced discrete Gaussian mechanism. | null |
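The noise distribution at the heart of the abstract above is, as stated, just the difference of two independent Poisson draws, so a toy central-aggregation version is easy to write down. The sketch below clips and quantizes client vectors, sums them, and adds Skellam noise; the scale and noise parameters are illustrative and not a calibrated privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def skellam_noise(mu, shape):
    """Skellam(mu, mu) noise: difference of two independent Poisson(mu) draws."""
    return rng.poisson(mu, size=shape) - rng.poisson(mu, size=shape)

def private_sum(client_vectors, clip_norm=1.0, scale=1000, mu=1e4):
    """Toy central aggregator: clip, quantize to integers, sum, add Skellam noise."""
    quantized = []
    for v in client_vectors:
        v = np.asarray(v, dtype=float)
        v = v * min(1.0, clip_norm / (np.linalg.norm(v) + 1e-12))   # L2 clipping
        quantized.append(np.round(v * scale).astype(np.int64))      # fixed-point quantization
    noisy = np.sum(quantized, axis=0) + skellam_noise(mu, quantized[0].shape)
    return noisy / scale                                            # back to float estimate

clients = [rng.standard_normal(8) for _ in range(100)]
true_sum = np.sum([c * min(1.0, 1.0 / np.linalg.norm(c)) for c in clients], axis=0)
# Largest coordinate-wise deviation of the noisy estimate from the exact clipped sum.
print(np.max(np.abs(private_sum(clients) - true_sum)))
```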
Stability and Deviation Optimal Risk Bounds with Convergence Rate $O(1/n)$ | https://papers.nips.cc/paper_files/paper/2021/hash/286674e3082feb7e5afb92777e48821f-Abstract.html | Yegor Klochkov, Nikita Zhivotovskiy | https://papers.nips.cc/paper_files/paper/2021/hash/286674e3082feb7e5afb92777e48821f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12010-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/286674e3082feb7e5afb92777e48821f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=yaxePRTOhqk | null | The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman, Vondrak, NeurIPS 2018, COLT, 2019), (Bousquet, Klochkov, Zhivotovskiy, COLT, 2020) contain a generally inevitable sampling error term of order $\Theta(1/\sqrt{n})$. When applied to excess risk bounds, this leads to suboptimal results in several standard stochastic convex optimization problems. We show that if the so-called Bernstein condition is satisfied, the term $\Theta(1/\sqrt{n})$ can be avoided, and high probability excess risk bounds of order up to $O(1/n)$ are possible via uniform stability. Using this result, we show a high probability excess risk bound with the rate $O(\log n/n)$ for strongly convex and Lipschitz losses valid for \emph{any} empirical risk minimization method. This resolves a question of Shalev-Shwartz, Shamir, Srebro, and Sridharan (COLT, 2009). We discuss how $O(\log n/n)$ high probability excess risk bounds are possible for projected gradient descent in the case of strongly convex and Lipschitz losses without the usual smoothness assumption. | null |
SketchGen: Generating Constrained CAD Sketches | https://papers.nips.cc/paper_files/paper/2021/hash/28891cb4ab421830acc36b1f5fd6c91e-Abstract.html | Wamiq Para, Shariq Bhat, Paul Guerrero, Tom Kelly, Niloy Mitra, Leonidas J. Guibas, Peter Wonka | https://papers.nips.cc/paper_files/paper/2021/hash/28891cb4ab421830acc36b1f5fd6c91e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12011-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/28891cb4ab421830acc36b1f5fd6c91e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Oeb2LbHAfJ4 | https://papers.nips.cc/paper_files/paper/2021/file/28891cb4ab421830acc36b1f5fd6c91e-Supplemental.pdf | Computer-aided design (CAD) is the most widely used modeling approach for technical design. The typical starting point in these designs is 2D sketches which can later be extruded and combined to obtain complex three-dimensional assemblies. Such sketches are typically composed of parametric primitives, such as points, lines, and circular arcs, augmented with geometric constraints linking the primitives, such as coincidence, parallelism, or orthogonality. Sketches can be represented as graphs, with the primitives as nodes and the constraints as edges. Training a model to automatically generate CAD sketches can enable several novel workflows, but is challenging due to the complexity of the graphs and the heterogeneity of the primitives and constraints. In particular, each type of primitive and constraint may require a record of different size and parameter types. We propose SketchGen as a generative model based on a transformer architecture to address the heterogeneity problem by carefully designing a sequential language for the primitives and constraints that allows distinguishing between different primitive or constraint types and their parameters, while encouraging our model to re-use information across related parameters, encoding shared structure. A particular highlight of our work is the ability to produce primitives linked via constraints that enables the final output to be further regularized via a constraint solver. We evaluate our model by demonstrating constraint prediction for given sets of primitives and full sketch generation from scratch, showing that our approach significantly outperforms the state-of-the-art in CAD sketch generation. | null
CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation | https://papers.nips.cc/paper_files/paper/2021/hash/288cd2567953f06e460a33951f55daaf-Abstract.html | Ankit Singh | https://papers.nips.cc/paper_files/paper/2021/hash/288cd2567953f06e460a33951f55daaf-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12012-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/288cd2567953f06e460a33951f55daaf-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=1ODSsnoMBav | https://papers.nips.cc/paper_files/paper/2021/file/288cd2567953f06e460a33951f55daaf-Supplemental.pdf | Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models. However, the application of well-known UDA approaches does not generalize well in Semi-Supervised Domain Adaptation (SSDA) scenarios where few labeled samples from the target domain are available. This paper proposes a simple Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap between the labeled and unlabeled target distributions and the inter-domain gap between source and unlabeled target distribution in SSDA. We suggest employing class-wise contrastive learning to reduce the inter-domain gap and instance-level contrastive alignment between the original (input image) and strongly augmented unlabeled target images to minimize the intra-domain discrepancy. We have empirically shown that both of these modules complement each other to achieve superior performance. Experiments on three well-known domain adaptation benchmark datasets, namely DomainNet, Office-Home, and Office31, demonstrate the effectiveness of our approach. CLDA achieves state-of-the-art results on all the above datasets. | null
Differentially Private n-gram Extraction | https://papers.nips.cc/paper_files/paper/2021/hash/28ce9bc954876829eeb56ff46da8e1ab-Abstract.html | Kunho Kim, Sivakanth Gopi, Janardhan Kulkarni, Sergey Yekhanin | https://papers.nips.cc/paper_files/paper/2021/hash/28ce9bc954876829eeb56ff46da8e1ab-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12013-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/28ce9bc954876829eeb56ff46da8e1ab-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=IQOawME4sqW | https://papers.nips.cc/paper_files/paper/2021/file/28ce9bc954876829eeb56ff46da8e1ab-Supplemental.pdf | We revisit the problem of $n$-gram extraction in the differential privacy setting. In this problem, given a corpus of private text data, the goal is to release as many $n$-grams as possible while preserving user level privacy. Extracting $n$-grams is a fundamental subroutine in many NLP applications such as sentence completion, auto response generation for emails, etc. The problem also arises in other applications such as sequence mining, trajectory analysis, etc., and is a generalization of recently studied differentially private set union (DPSU) by Gopi et al. (2020). In this paper, we develop a new differentially private algorithm for this problem which, in our experiments, significantly outperforms the state-of-the-art. Our improvements stem from combining recent advances in DPSU, privacy accounting, and new heuristics for pruning in the tree-based approach initiated by Chen et al. (2012). | null |
Capturing implicit hierarchical structure in 3D biomedical images with self-supervised hyperbolic representations | https://papers.nips.cc/paper_files/paper/2021/hash/291d43c696d8c3704cdbe0a72ade5f6c-Abstract.html | Joy Hsu, Jeffrey Gu, Gong Wu, Wah Chiu, Serena Yeung | https://papers.nips.cc/paper_files/paper/2021/hash/291d43c696d8c3704cdbe0a72ade5f6c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12014-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/291d43c696d8c3704cdbe0a72ade5f6c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=mqWkNXJBX4h | https://papers.nips.cc/paper_files/paper/2021/file/291d43c696d8c3704cdbe0a72ade5f6c-Supplemental.pdf | We consider the task of representation learning for unsupervised segmentation of 3D voxel-grid biomedical images. We show that models that capture implicit hierarchical relationships between subvolumes are better suited for this task. To that end, we consider encoder-decoder architectures with a hyperbolic latent space, to explicitly capture hierarchical relationships present in subvolumes of the data. We propose utilizing a 3D hyperbolic variational autoencoder with a novel gyroplane convolutional layer to map from the embedding space back to 3D images. To capture these relationships, we introduce an essential self-supervised loss---in addition to the standard VAE loss---which infers approximate hierarchies and encourages implicitly related subvolumes to be mapped closer in the embedding space. We present experiments on synthetic datasets along with a dataset from the medical domain to validate our hypothesis. | null |
Noisy Recurrent Neural Networks | https://papers.nips.cc/paper_files/paper/2021/hash/29301521774ff3cbd26652b2d5c95996-Abstract.html | Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney | https://papers.nips.cc/paper_files/paper/2021/hash/29301521774ff3cbd26652b2d5c95996-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12015-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/29301521774ff3cbd26652b2d5c95996-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=mf9XiRCEgZu | https://papers.nips.cc/paper_files/paper/2021/file/29301521774ff3cbd26652b2d5c95996-Supplemental.pdf | We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results which demonstrate that the RNNs have improved robustness with respect to various input perturbations. | null |
Matrix encoding networks for neural combinatorial optimization | https://papers.nips.cc/paper_files/paper/2021/hash/29539ed932d32f1c56324cded92c07c2-Abstract.html | Yeong-Dae Kwon, Jinho Choo, Iljoo Yoon, Minah Park, Duwon Park, Youngjune Gwon | https://papers.nips.cc/paper_files/paper/2021/hash/29539ed932d32f1c56324cded92c07c2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12016-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/29539ed932d32f1c56324cded92c07c2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=C__ChZs8WjU | https://papers.nips.cc/paper_files/paper/2021/file/29539ed932d32f1c56324cded92c07c2-Supplemental.pdf | Machine Learning (ML) can help solve combinatorial optimization (CO) problems better. A popular approach is to use a neural net to compute on the parameters of a given CO problem and extract useful information that guides the search for good solutions. Many CO problems of practical importance can be specified in a matrix form of parameters quantifying the relationship between two groups of items. There is currently no neural net model, however, that takes in such matrix-style relationship data as an input. Consequently, these types of CO problems have been out of reach for ML engineers. In this paper, we introduce Matrix Encoding Network (MatNet) and show how conveniently it takes in and processes parameters of such complex CO problems. Using an end-to-end model based on MatNet, we solve asymmetric traveling salesman (ATSP) and flexible flow shop (FFSP) problems as the earliest neural approach. In particular, for a class of FFSP we have tested MatNet on, we demonstrate a far superior empirical performance to any methods (neural or not) known to date. | null |
When Is Unsupervised Disentanglement Possible? | https://papers.nips.cc/paper_files/paper/2021/hash/29586cb449c90e249f1f09a0a4ee245a-Abstract.html | Daniella Horan, Eitan Richardson, Yair Weiss | https://papers.nips.cc/paper_files/paper/2021/hash/29586cb449c90e249f1f09a0a4ee245a-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12017-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/29586cb449c90e249f1f09a0a4ee245a-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XqEF9riB93S | https://papers.nips.cc/paper_files/paper/2021/file/29586cb449c90e249f1f09a0a4ee245a-Supplemental.pdf | A common assumption in many domains is that high dimensional data are a smooth nonlinear function of a small number of independent factors. When is it possible to recover the factors from unlabeled data? In the context of deep models this problem is called “disentanglement” and was recently shown to be impossible without additional strong assumptions [17, 19]. In this paper, we show that the assumption of local isometry together with non-Gaussianity of the factors, is sufficient to provably recover disentangled representations from data. We leverage recent advances in deep generative models to construct manifolds of highly realistic images for which the ground truth latent representation is known, and test whether modern and classical methods succeed in recovering the latent factors. For many different manifolds, we find that a spectral method that explicitly optimizes local isometry and non-Gaussianity consistently finds the correct latent factors, while baseline deep autoencoders do not. We propose how to encourage deep autoencoders to find encodings that satisfy local isometry and show that this helps them discover disentangled representations. Overall, our results suggest that in some realistic settings, unsupervised disentanglement is provably possible, without any domain-specific assumptions. | null |
Continuous Latent Process Flows | https://papers.nips.cc/paper_files/paper/2021/hash/2983e3047c0c730d3b7c022584717f3f-Abstract.html | Ruizhi Deng, Marcus A. Brubaker, Greg Mori, Andreas Lehrmann | https://papers.nips.cc/paper_files/paper/2021/hash/2983e3047c0c730d3b7c022584717f3f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12018-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/2983e3047c0c730d3b7c022584717f3f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=KzYIEQ_B1BX | https://papers.nips.cc/paper_files/paper/2021/file/2983e3047c0c730d3b7c022584717f3f-Supplemental.pdf | Partial observations of continuous time-series dynamics at arbitrary time stamps exist in many disciplines. Fitting this type of data using statistical models with continuous dynamics is not only promising at an intuitive level but also has practical benefits, including the ability to generate continuous trajectories and to perform inference on previously unseen time stamps. Despite exciting progress in this area, the existing models still face challenges in terms of their representational power and the quality of their variational approximations. We tackle these challenges with continuous latent process flows (CLPF), a principled architecture decoding continuous latent processes into continuous observable processes using a time-dependent normalizing flow driven by a stochastic differential equation. To optimize our model using maximum likelihood, we propose a novel piecewise construction of a variational posterior process and derive the corresponding variational lower bound using trajectory re-weighting. Our ablation studies demonstrate the effectiveness of our contributions in various inference tasks on irregular time grids. Comparisons to state-of-the-art baselines show our model's favourable performance on both synthetic and real-world time-series data. | null |
Perturbation-based Regret Analysis of Predictive Control in Linear Time Varying Systems | https://papers.nips.cc/paper_files/paper/2021/hash/298f587406c914fad5373bb689300433-Abstract.html | Yiheng Lin, Yang Hu, Guanya Shi, Haoyuan Sun, Guannan Qu, Adam Wierman | https://papers.nips.cc/paper_files/paper/2021/hash/298f587406c914fad5373bb689300433-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12019-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/298f587406c914fad5373bb689300433-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=xwGeq7I4Opv | https://papers.nips.cc/paper_files/paper/2021/file/298f587406c914fad5373bb689300433-Supplemental.pdf | We study predictive control in a setting where the dynamics are time-varying and linear, and the costs are time-varying and well-conditioned. At each time step, the controller receives the exact predictions of costs, dynamics, and disturbances for the future $k$ time steps. We show that when the prediction window $k$ is sufficiently large, predictive control is input-to-state stable and achieves a dynamic regret of $O(\lambda^k T)$, where $\lambda < 1$ is a positive constant. This is the first dynamic regret bound on the predictive control of linear time-varying systems. We also show a variation of predictive control obtains the first competitive bound for the control of linear time-varying systems: $1 + O(\lambda^k)$. Our results are derived using a novel proof framework based on a perturbation bound that characterizes how a small change to the system parameters impacts the optimal trajectory. | null |
Dataset Distillation with Infinitely Wide Convolutional Networks | https://papers.nips.cc/paper_files/paper/2021/hash/299a23a2291e2126b91d54f3601ec162-Abstract.html | Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee | https://papers.nips.cc/paper_files/paper/2021/hash/299a23a2291e2126b91d54f3601ec162-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12020-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/299a23a2291e2126b91d54f3601ec162-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=dBE8OI8_ZOa | null | The effectiveness of machine learning algorithms arises from being able to extract useful features from large amounts of data. As model and dataset sizes increase, dataset distillation methods that compress large datasets into significantly smaller yet highly performant ones will become valuable in terms of training efficiency and useful feature extraction. To that end, we apply a novel distributed kernel-based meta-learning framework to achieve state-of-the-art results for dataset distillation using infinitely wide convolutional neural networks. For instance, using only 10 datapoints (0.02% of original dataset), we obtain over 65% test accuracy on CIFAR-10 image classification task, a dramatic improvement over the previous best test accuracy of 40%. Our state-of-the-art results extend across many other settings for MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and SVHN. Furthermore, we perform some preliminary analyses of our distilled datasets to shed light on how they differ from naturally occurring data. | null |
SPANN: Highly-efficient Billion-scale Approximate Nearest Neighborhood Search | https://papers.nips.cc/paper_files/paper/2021/hash/299dc35e747eb77177d9cea10a802da2-Abstract.html | Qi Chen, Bing Zhao, Haidong Wang, Mingqin Li, Chuanjie Liu, Zengzhong Li, Mao Yang, Jingdong Wang | https://papers.nips.cc/paper_files/paper/2021/hash/299dc35e747eb77177d9cea10a802da2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12021-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/299dc35e747eb77177d9cea10a802da2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=-1rrzmJCp4 | null | The in-memory algorithms for approximate nearest neighbor search (ANNS) have achieved great success for fast high-recall search, but are extremely expensive when handling very large scale databases. Thus, there is an increasing demand for hybrid ANNS solutions with small memory and inexpensive solid-state drives (SSDs). In this paper, we present a simple but efficient memory-disk hybrid indexing and search system, named SPANN, that follows the inverted index methodology. It stores the centroid points of the posting lists in memory and the large posting lists on disk. We guarantee both disk-access efficiency (low latency) and high recall by effectively reducing the number of disk accesses and retrieving high-quality posting lists. In the index-building stage, we adopt a hierarchical balanced clustering algorithm to balance the length of the posting lists and augment each posting list by adding the points in the closure of the corresponding clusters. In the search stage, we use a query-aware scheme to dynamically prune the access of unnecessary posting lists. Experiment results demonstrate that SPANN is 2X faster than the state-of-the-art ANNS solution DiskANN in reaching the same recall quality of 90% with the same memory cost on three billion-scale datasets. It can reach 90% recall@1 and recall@10 in just around one millisecond with only about 10% of the original memory cost. Code is available at: https://github.com/microsoft/SPTAG. | null |
Distilling Object Detectors with Feature Richness | https://papers.nips.cc/paper_files/paper/2021/hash/29c0c0ee223856f336d7ea8052057753-Abstract.html | Du Zhixing, Rui Zhang, Ming Chang, xishan zhang, Shaoli Liu, Tianshi Chen, Yunji Chen | https://papers.nips.cc/paper_files/paper/2021/hash/29c0c0ee223856f336d7ea8052057753-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12022-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/29c0c0ee223856f336d7ea8052057753-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=_bOfK2k_7R | https://papers.nips.cc/paper_files/paper/2021/file/29c0c0ee223856f336d7ea8052057753-Supplemental.pdf | In recent years, large-scale deep models have achieved great success, but the huge computational complexity and massive storage requirements make it a great challenge to deploy them in resource-limited devices. As a model compression and acceleration method, knowledge distillation effectively improves the performance of small models by transferring the dark knowledge from the teacher detector. However, most of the existing distillation-based detection methods mainly imitate features near bounding boxes, which suffer from two limitations. First, they ignore the beneficial features outside the bounding boxes. Second, these methods imitate some features which are mistakenly regarded as the background by the teacher detector. To address the above issues, we propose a novel Feature-Richness Score (FRS) method to choose important features that improve generalized detectability during distilling. The proposed method effectively retrieves the important features outside the bounding boxes and removes the detrimental features within the bounding boxes. Extensive experiments show that our methods achieve excellent performance on both anchor-based and anchor-free detectors. For example, RetinaNet with ResNet-50 achieves 39.7% in mAP on the COCO2017 dataset, which even surpasses the ResNet-101 based teacher detector (38.9%) by 0.8%. Our implementation is available at https://github.com/duzhixing/FRS. | null |
Analysis of one-hidden-layer neural networks via the resolvent method | https://papers.nips.cc/paper_files/paper/2021/hash/29d74915e1b323676bfc28f91b3c4802-Abstract.html | Vanessa Piccolo, Dominik Schröder | https://papers.nips.cc/paper_files/paper/2021/hash/29d74915e1b323676bfc28f91b3c4802-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/12023-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/29d74915e1b323676bfc28f91b3c4802-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=wLsA3nurh9W | https://papers.nips.cc/paper_files/paper/2021/file/29d74915e1b323676bfc28f91b3c4802-Supplemental.pdf | In this work, we investigate the asymptotic spectral density of the random feature matrix $M = Y Y^*$ with $Y = f(WX)$ generated by a single-hidden-layer neural network, where $W$ and $X$ are random rectangular matrices with i.i.d. centred entries and $f$ is a non-linear smooth function which is applied entry-wise. We prove that the Stieltjes transform of the limiting spectral distribution approximately satisfies a quartic self-consistent equation, which is exactly the equation obtained by [Pennington, Worah 2017] and [Benigni, Péché 2019] with the moment method. We extend the previous results to the case of additive bias $Y=f(WX+B)$ with $B$ being an independent rank-one Gaussian random matrix, closer modelling the neural network infrastructures encountered in practice. Our key finding is that in the case of additive bias it is impossible to choose an activation function preserving the layer-to-layer singular value distribution, in sharp contrast to the bias-free case where a simple integral constraint is sufficient to achieve isospectrality. To obtain the asymptotics for the empirical spectral density we follow the resolvent method from random matrix theory via the cumulant expansion. We find that this approach is more robust and less combinatorial than the moment method and expect that it will apply also for models where the combinatorics of the former become intractable. The resolvent method has been widely employed, but compared to previous works, it is applied here to non-linear random matrices. | null |