title | url | authors | detail_url | tags | Bibtex | Paper | Reviews And Public Comment » | Supplemental | abstract | Supplemental Errata |
---|---|---|---|---|---|---|---|---|---|---|
Privately Learning Subspaces | https://papers.nips.cc/paper_files/paper/2021/hash/09b69adcd7cbae914c6204984097d2da-Abstract.html | Vikrant Singhal, Thomas Steinke | https://papers.nips.cc/paper_files/paper/2021/hash/09b69adcd7cbae914c6204984097d2da-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11724-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/09b69adcd7cbae914c6204984097d2da-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=YBanVDVEbVe | https://papers.nips.cc/paper_files/paper/2021/file/09b69adcd7cbae914c6204984097d2da-Supplemental.pdf | Private data analysis suffers a costly curse of dimensionality. However, the data often has an underlying low-dimensional structure. For example, when optimizing via gradient descent, the gradients often lie in or near a low-dimensional subspace. If that low-dimensional structure can be identified, then we can avoid paying (in terms of privacy or accuracy) for the high ambient dimension. We present differentially private algorithms that take input data sampled from a low-dimensional linear subspace (possibly with a small amount of error) and output that subspace (or an approximation to it). These algorithms can serve as a pre-processing step for other procedures. | null |
On the Value of Interaction and Function Approximation in Imitation Learning | https://papers.nips.cc/paper_files/paper/2021/hash/09dbc1177211571ef3e1ca961cc39363-Abstract.html | Nived Rajaraman, Yanjun Han, Lin Yang, Jingbo Liu, Jiantao Jiao, Kannan Ramchandran | https://papers.nips.cc/paper_files/paper/2021/hash/09dbc1177211571ef3e1ca961cc39363-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11725-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/09dbc1177211571ef3e1ca961cc39363-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Xa9Ba6NsJ6 | https://papers.nips.cc/paper_files/paper/2021/file/09dbc1177211571ef3e1ca961cc39363-Supplemental.pdf | We study the statistical guarantees for the Imitation Learning (IL) problem in episodic MDPs. Rajaraman et al. (2020) show an information theoretic lower bound that in the worst case, a learner which can even actively query the expert policy suffers from a suboptimality growing quadratically in the length of the horizon, $H$. We study imitation learning under the $\mu$-recoverability assumption of Ross et al. (2011) which assumes that the difference in the $Q$-value under the expert policy across different actions in a state does not deviate beyond $\mu$ from the maximum. We show that the reduction proposed by Ross et al. (2010) is statistically optimal: the resulting algorithm upon interacting with the MDP for $N$ episodes results in a suboptimality bound of $\widetilde{\mathcal{O}} \left( \mu |\mathcal{S}| H / N \right)$ which we show is optimal up to log-factors. In contrast, we show that any algorithm which does not interact with the MDP and uses an offline dataset of $N$ expert trajectories must incur suboptimality growing as $\gtrsim |\mathcal{S}| H^2/N$ even under the $\mu$-recoverability assumption. This establishes a clear and provable separation of the minimax rates between the active setting and the no-interaction setting. We also study IL with linear function approximation. When the expert plays actions according to a linear classifier of known state-action features, we use the reduction to multi-class classification to show that with high probability, the suboptimality of behavior cloning is $\widetilde{O}(dH^2/N)$ given $N$ rollouts from the optimal policy. This is optimal up to log-factors but can be improved to $\widetilde{O}(dH/N)$ if we have a linear expert with parameter-sharing across time steps. In contrast, when the MDP transition structure is known to the learner such as in the case of simulators, we demonstrate fundamental differences compared to the tabular setting in terms of the performance of an optimal algorithm, Mimic-MD (Rajaraman et al. (2020)) when extended to the function approximation setting. Here, we introduce a new problem called confidence set linear classification, that can be used to construct sample-efficient IL algorithms. | null |
Shapeshifter: a Parameter-efficient Transformer using Factorized Reshaped Matrices | https://papers.nips.cc/paper_files/paper/2021/hash/09def3ebbc44ff3426b28fcd88c83554-Abstract.html | Aliakbar Panahi, Seyran Saeedi, Tom Arodz | https://papers.nips.cc/paper_files/paper/2021/hash/09def3ebbc44ff3426b28fcd88c83554-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11726-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/09def3ebbc44ff3426b28fcd88c83554-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ZjGr1tMVbjw | null | Language models employ a very large number of trainable parameters. Despite being highly overparameterized, these networks often achieve good out-of-sample test performance on the original task and easily fine-tune to related tasks. Recent observations involving, for example, intrinsic dimension of the objective landscape and the lottery ticket hypothesis, indicate that often training actively involves only a small fraction of the parameter space. Thus, a question remains how large a parameter space needs to be in the first place -- the evidence from recent work on model compression, parameter sharing, factorized representations, and knowledge distillation increasingly shows that models can be made much smaller and still perform well. Here, we focus on factorized representations of matrices that underpin dense, embedding, and self-attention layers. We use low-rank factorized representation of a reshaped and rearranged original matrix to achieve space efficient and expressive linear layers. We prove that stacking such low-rank layers increases their expressiveness, providing theoretical understanding for their effectiveness in deep networks. In Transformer models, our approach leads to more than ten-fold reduction in the number of total trainable parameters, including embedding, attention, and feed-forward layers, with little degradation in on-task performance. The approach operates out-of-the-box, replacing each parameter matrix with its compact equivalent while maintaining the architecture of the network. | null |
The Adaptive Doubly Robust Estimator and a Paradox Concerning Logging Policy | https://papers.nips.cc/paper_files/paper/2021/hash/09e7655fc1dc8fa7c9d6c4478313d5e6-Abstract.html | Masahiro Kato, Kenichiro McAlinn, Shota Yasui | https://papers.nips.cc/paper_files/paper/2021/hash/09e7655fc1dc8fa7c9d6c4478313d5e6-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11727-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/09e7655fc1dc8fa7c9d6c4478313d5e6-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=zyD5AiyLuzG | https://papers.nips.cc/paper_files/paper/2021/file/09e7655fc1dc8fa7c9d6c4478313d5e6-Supplemental.pdf | The doubly robust (DR) estimator, which consists of two nuisance parameters, the conditional mean outcome and the logging policy (the probability of choosing an action), is crucial in causal inference. This paper proposes a DR estimator for dependent samples obtained from adaptive experiments. To obtain an asymptotically normal semiparametric estimator from dependent samples without non-Donsker nuisance estimators, we propose adaptive-fitting as a variant of sample-splitting. We also report an empirical paradox that our proposed DR estimator tends to show better performances compared to other estimators utilizing the true logging policy. While a similar phenomenon is known for estimators with i.i.d. samples, traditional explanations based on asymptotic efficiency cannot elucidate our case with dependent samples. We confirm this hypothesis through simulation studies. | null |
Regularized Softmax Deep Multi-Agent Q-Learning | https://papers.nips.cc/paper_files/paper/2021/hash/0a113ef6b61820daa5611c870ed8d5ee-Abstract.html | Ling Pan, Tabish Rashid, Bei Peng, Longbo Huang, Shimon Whiteson | https://papers.nips.cc/paper_files/paper/2021/hash/0a113ef6b61820daa5611c870ed8d5ee-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11728-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0a113ef6b61820daa5611c870ed8d5ee-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=BGS3o8SpjI3 | https://papers.nips.cc/paper_files/paper/2021/file/0a113ef6b61820daa5611c870ed8d5ee-Supplemental.pdf | Tackling overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning, but has received comparatively little attention in the multi-agent setting. In this work, we empirically demonstrate that QMIX, a popular $Q$-learning algorithm for cooperative multi-agent reinforcement learning (MARL), suffers from a more severe overestimation in practice than previously acknowledged, and is not mitigated by existing approaches. We rectify this with a novel regularization-based update scheme that penalizes large joint action-values that deviate from a baseline and demonstrate its effectiveness in stabilizing learning. Furthermore, we propose to employ a softmax operator, which we efficiently approximate in a novel way in the multi-agent setting, to further reduce the potential overestimation bias. Our approach, Regularized Softmax (RES) Deep Multi-Agent $Q$-Learning, is general and can be applied to any $Q$-learning based MARL algorithm. We demonstrate that, when applied to QMIX, RES avoids severe overestimation and significantly improves performance, yielding state-of-the-art results in a variety of cooperative multi-agent tasks, including the challenging StarCraft II micromanagement benchmarks. | null |
Physics-Aware Downsampling with Deep Learning for Scalable Flood Modeling | https://papers.nips.cc/paper_files/paper/2021/hash/0a3b5a7a477d359746061d41c3a04fd6-Abstract.html | Niv Giladi, Zvika Ben-Haim, Sella Nevo, Yossi Matias, Daniel Soudry | https://papers.nips.cc/paper_files/paper/2021/hash/0a3b5a7a477d359746061d41c3a04fd6-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11729-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0a3b5a7a477d359746061d41c3a04fd6-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=A_TVp2HtxPS | https://papers.nips.cc/paper_files/paper/2021/file/0a3b5a7a477d359746061d41c3a04fd6-Supplemental.pdf | Background. Floods are the most common natural disaster in the world, affecting the lives of hundreds of millions. Flood forecasting is therefore a vitally important endeavor, typically achieved using physical water flow simulations, which rely on accurate terrain elevation maps. However, such simulations, based on solving partial differential equations, are computationally prohibitive on a large scale. This scalability issue is commonly alleviated using a coarse grid representation of the elevation map, though this representation may distort crucial terrain details, leading to significant inaccuracies in the simulation. Contributions. We train a deep neural network to perform physics-informed downsampling of the terrain map: we optimize the coarse grid representation of the terrain maps, so that the flood prediction will match the fine grid solution. For the learning process to succeed, we configure a dataset specifically for this task. We demonstrate that with this method, it is possible to achieve a significant reduction in computational cost, while maintaining an accurate solution. A reference implementation accompanies the paper as well as documentation and code for dataset reproduction. | null |
Systematic Generalization with Edge Transformers | https://papers.nips.cc/paper_files/paper/2021/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html | Leon Bergen, Timothy O'Donnell, Dzmitry Bahdanau | https://papers.nips.cc/paper_files/paper/2021/hash/0a4dc6dae338c9cb08947c07581f77a2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11730-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0a4dc6dae338c9cb08947c07581f77a2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=UUds0Jr_XWk | https://papers.nips.cc/paper_files/paper/2021/file/0a4dc6dae338c9cb08947c07581f77a2-Supplemental.pdf | Recent research suggests that systematic generalization in natural language understanding remains a challenge for state-of-the-art neural models such as Transformers and Graph Neural Networks. To tackle this challenge, we propose Edge Transformer, a new model that combines inspiration from Transformers and rule-based symbolic AI. The first key idea in Edge Transformers is to associate vector states with every edge, that is, with every pair of input nodes---as opposed to just every node, as it is done in the Transformer model. The second major innovation is a triangular attention mechanism that updates edge representations in a way that is inspired by unification from logic programming. We evaluate Edge Transformer on compositional generalization benchmarks in relational reasoning, semantic parsing, and dependency parsing. In all three settings, the Edge Transformer outperforms Relation-aware, Universal and classical Transformer baselines. | null |
TransformerFusion: Monocular RGB Scene Reconstruction using Transformers | https://papers.nips.cc/paper_files/paper/2021/hash/0a87257e5308197df43230edf4ad1dae-Abstract.html | Aljaz Bozic, Pablo Palafox, Justus Thies, Angela Dai, Matthias Niessner | https://papers.nips.cc/paper_files/paper/2021/hash/0a87257e5308197df43230edf4ad1dae-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11731-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0a87257e5308197df43230edf4ad1dae-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ZEoMBPtvqey | https://papers.nips.cc/paper_files/paper/2021/file/0a87257e5308197df43230edf4ad1dae-Supplemental.zip | We introduce TransformerFusion, a transformer-based 3D scene reconstruction approach. From an input monocular RGB video, the video frames are processed by a transformer network that fuses the observations into a volumetric feature grid representing the scene; this feature grid is then decoded into an implicit 3D scene representation. Key to our approach is the transformer architecture that enables the network to learn to attend to the most relevant image frames for each 3D location in the scene, supervised only by the scene reconstruction task. Features are fused in a coarse-to-fine fashion, storing fine-level features only where needed, requiring lower memory storage and enabling fusion at interactive rates. The feature grid is then decoded to a higher-resolution scene reconstruction, using an MLP-based surface occupancy prediction from interpolated coarse-to-fine 3D features. Our approach results in an accurate surface reconstruction, outperforming state-of-the-art multi-view stereo depth estimation methods, fully-convolutional 3D reconstruction approaches, and approaches using LSTM- or GRU-based recurrent networks for video sequence fusion. | null |
Maximum Likelihood Training of Score-Based Diffusion Models | https://papers.nips.cc/paper_files/paper/2021/hash/0a9fdbb17feb6ccb7ec405cfb85222c4-Abstract.html | Yang Song, Conor Durkan, Iain Murray, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2021/hash/0a9fdbb17feb6ccb7ec405cfb85222c4-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11732-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0a9fdbb17feb6ccb7ec405cfb85222c4-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=AklttWFnxS9 | https://papers.nips.cc/paper_files/paper/2021/file/0a9fdbb17feb6ccb7ec405cfb85222c4-Supplemental.pdf | Score-based diffusion models synthesize samples by reversing a stochastic process that diffuses data to noise, and are trained by minimizing a weighted combination of score matching losses. The log-likelihood of score-based diffusion models can be tractably computed through a connection to continuous normalizing flows, but log-likelihood is not directly optimized by the weighted combination of score matching losses. We show that for a specific weighting scheme, the objective upper bounds the negative log-likelihood, thus enabling approximate maximum likelihood training of score-based diffusion models. We empirically observe that maximum likelihood training consistently improves the likelihood of score-based diffusion models across multiple datasets, stochastic processes, and model architectures. Our best models achieve negative log-likelihoods of 2.83 and 3.76 bits/dim on CIFAR-10 and ImageNet $32\times 32$ without any data augmentation, on a par with state-of-the-art autoregressive models on these tasks. | null |
Global Convergence of Gradient Descent for Asymmetric Low-Rank Matrix Factorization | https://papers.nips.cc/paper_files/paper/2021/hash/0af854284f4ab0cfea8fcfd889cbb41a-Abstract.html | Tian Ye, Simon S. Du | https://papers.nips.cc/paper_files/paper/2021/hash/0af854284f4ab0cfea8fcfd889cbb41a-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11733-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0af854284f4ab0cfea8fcfd889cbb41a-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=sMIMAXqiqj3 | https://papers.nips.cc/paper_files/paper/2021/file/0af854284f4ab0cfea8fcfd889cbb41a-Supplemental.pdf | We study the asymmetric low-rank factorization problem: \[\min_{\mathbf{U} \in \mathbb{R}^{m \times d}, \mathbf{V} \in \mathbb{R}^{n \times d}} \frac{1}{2}\|\mathbf{U}\mathbf{V}^\top -\mathbf{\Sigma}\|_F^2\] where $\mathbf{\Sigma}$ is a given matrix of size $m \times n$ and rank $d$. This is a canonical problem that admits two difficulties in optimization: 1) non-convexity and 2) non-smoothness (due to unbalancedness of $\mathbf{U}$ and $\mathbf{V}$). This is also a prototype for more complex problems such as asymmetric matrix sensing and matrix completion. Despite being non-convex and non-smooth, it has been observed empirically that the randomly initialized gradient descent algorithm can solve this problem in polynomial time. Existing theories to explain this phenomenon all require artificial modifications of the algorithm, such as adding noise in each iteration and adding a balancing regularizer to balance the $\mathbf{U}$ and $\mathbf{V}$. This paper presents the first proof that shows randomly initialized gradient descent converges to a global minimum of the asymmetric low-rank factorization problem with a polynomial rate. For the proof, we develop 1) a new symmetrization technique to capture the magnitudes of the symmetry and asymmetry, and 2) a quantitative perturbation analysis to approximate matrix derivatives. We believe both are useful for other related non-convex problems. | null |
Adaptive Data Augmentation on Temporal Graphs | https://papers.nips.cc/paper_files/paper/2021/hash/0b0b0994d12ad343511adfbfc364256e-Abstract.html | Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Siddharth Bhatia, Bryan Hooi | https://papers.nips.cc/paper_files/paper/2021/hash/0b0b0994d12ad343511adfbfc364256e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11734-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b0b0994d12ad343511adfbfc364256e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=G5l8qucT8A | https://papers.nips.cc/paper_files/paper/2021/file/0b0b0994d12ad343511adfbfc364256e-Supplemental.pdf | Temporal Graph Networks (TGNs) are powerful at modeling temporal graph data thanks to their increased complexity. Higher complexity carries with it a higher risk of overfitting, which makes TGNs capture random noise instead of essential semantic information. To address this issue, our idea is to transform the temporal graphs using data augmentation (DA) with adaptive magnitudes, so as to effectively augment the input features and preserve the essential semantic information. Based on this idea, we present the MeTA (Memory Tower Augmentation) module: a multi-level module that processes the augmented graphs of different magnitudes on separate levels, and performs message passing across levels to provide adaptively augmented inputs for every prediction. MeTA can be flexibly applied to the training of popular TGNs to improve their effectiveness without increasing their time complexity. To complement MeTA, we propose three DA strategies to realistically model noise by modifying both the temporal and topological features. Empirical results on standard datasets show that MeTA yields significant gains for the popular TGN models on edge prediction and node classification in an efficient manner. | null |
Regularized Frank-Wolfe for Dense CRFs: Generalizing Mean Field and Beyond | https://papers.nips.cc/paper_files/paper/2021/hash/0b0d29e5d5c8a7a25dced6405bd022a9-Abstract.html | Đ.Khuê Lê-Huu, Karteek Alahari | https://papers.nips.cc/paper_files/paper/2021/hash/0b0d29e5d5c8a7a25dced6405bd022a9-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11735-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b0d29e5d5c8a7a25dced6405bd022a9-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=lI2To0NGe3Q | https://papers.nips.cc/paper_files/paper/2021/file/0b0d29e5d5c8a7a25dced6405bd022a9-Supplemental.pdf | We introduce regularized Frank-Wolfe, a general and effective algorithm for inference and learning of dense conditional random fields (CRFs). The algorithm optimizes a nonconvex continuous relaxation of the CRF inference problem using vanilla Frank-Wolfe with approximate updates, which are equivalent to minimizing a regularized energy function. Our proposed method is a generalization of existing algorithms such as mean field or concave-convex procedure. This perspective not only offers a unified analysis of these algorithms, but also allows an easy way of exploring different variants that potentially yield better performance. We illustrate this in our empirical results on standard semantic segmentation datasets, where several instantiations of our regularized Frank-Wolfe outperform mean field inference, both as a standalone component and as an end-to-end trainable layer in a neural network. We also show that dense CRFs, coupled with our new algorithms, produce significant improvements over strong CNN baselines. | null |
Terra: Imperative-Symbolic Co-Execution of Imperative Deep Learning Programs | https://papers.nips.cc/paper_files/paper/2021/hash/0b32f1a9efe5edf3dd2f38b0c0052bfe-Abstract.html | Taebum Kim, Eunji Jeong, Geon-Woo Kim, Yunmo Koo, Sehoon Kim, Gyeongin Yu, Byung-Gon Chun | https://papers.nips.cc/paper_files/paper/2021/hash/0b32f1a9efe5edf3dd2f38b0c0052bfe-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11736-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b32f1a9efe5edf3dd2f38b0c0052bfe-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=um7zVEeyVH1 | https://papers.nips.cc/paper_files/paper/2021/file/0b32f1a9efe5edf3dd2f38b0c0052bfe-Supplemental.pdf | Imperative programming allows users to implement their deep neural networks (DNNs) easily and has become an essential part of recent deep learning (DL) frameworks. Recently, several systems have been proposed to combine the usability of imperative programming with the optimized performance of symbolic graph execution. Such systems convert imperative Python DL programs to optimized symbolic graphs and execute them. However, they cannot fully support the usability of imperative programming. For example, if an imperative DL program contains a Python feature with no corresponding symbolic representation (e.g., third-party library calls or unsupported dynamic control flows) they fail to execute the program. To overcome this limitation, we propose Terra, an imperative-symbolic co-execution system that can handle any imperative DL programs while achieving the optimized performance of symbolic graph execution. To achieve this, Terra builds a symbolic graph by decoupling DL operations from Python features. Then, Terra conducts the imperative execution to support all Python features, while delegating the decoupled operations to the symbolic execution. We evaluated Terra’s performance improvement and coverage with ten imperative DL programs for several DNN architectures. The results show that Terra can speed up the execution of all ten imperative DL programs, whereas AutoGraph, one of the state-of-the-art systems, fails to execute five of them. | null |
Uniform Sampling over Episode Difficulty | https://papers.nips.cc/paper_files/paper/2021/hash/0b3f44d9054402de39441e165a4bdfe0-Abstract.html | Sébastien Arnold, Guneet Dhillon, Avinash Ravichandran, Stefano Soatto | https://papers.nips.cc/paper_files/paper/2021/hash/0b3f44d9054402de39441e165a4bdfe0-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11737-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b3f44d9054402de39441e165a4bdfe0-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=3GpcwM1slH8 | https://papers.nips.cc/paper_files/paper/2021/file/0b3f44d9054402de39441e165a4bdfe0-Supplemental.pdf | Episodic training is a core ingredient of few-shot learning to train models on tasks with limited labelled data. Despite its success, episodic training remains largely understudied, prompting us to ask the question: what is the best way to sample episodes? In this paper, we first propose a method to approximate episode sampling distributions based on their difficulty. Building on this method, we perform an extensive analysis and find that sampling uniformly over episode difficulty outperforms other sampling schemes, including curriculum and easy-/hard-mining. As the proposed sampling method is algorithm agnostic, we can leverage these insights to improve few-shot learning accuracies across many episodic training algorithms. We demonstrate the efficacy of our method across popular few-shot learning datasets, algorithms, network architectures, and protocols. | null |
Scalable Intervention Target Estimation in Linear Models | https://papers.nips.cc/paper_files/paper/2021/hash/0b94ce08688c6389ce7b68c52ce3f8c7-Abstract.html | Burak Varici, Karthikeyan Shanmugam, Prasanna Sattigeri, Ali Tajer | https://papers.nips.cc/paper_files/paper/2021/hash/0b94ce08688c6389ce7b68c52ce3f8c7-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11738-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b94ce08688c6389ce7b68c52ce3f8c7-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=sLVJXf-BkIt | https://papers.nips.cc/paper_files/paper/2021/file/0b94ce08688c6389ce7b68c52ce3f8c7-Supplemental.zip | This paper considers the problem of estimating the unknown intervention targets in a causal directed acyclic graph from observational and interventional data. The focus is on soft interventions in linear structural equation models (SEMs). Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets even for linear SEMs. This severely limits their scalability and sample complexity. This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets. The pivotal idea is to estimate the intervention sites from the difference between the precision matrices associated with the observational and interventional datasets. It involves repeatedly estimating such sites in different subsets of variables. The proposed algorithm can be used to also update a given observational Markov equivalence class into the interventional Markov equivalence class. Consistency, Markov equivalency, and sample complexity are established analytically. Finally, simulation results on both real and synthetic data demonstrate the gains of the proposed approach for scalable causal structure recovery. Implementation of the algorithm and the code to reproduce the simulation results are available at \url{https://github.com/bvarici/intervention-estimation}. | null |
Play to Grade: Testing Coding Games as Classifying Markov Decision Process | https://papers.nips.cc/paper_files/paper/2021/hash/0b9b6d6d154e98ce34b3f2e4ef76eae9-Abstract.html | Allen Nie, Emma Brunskill, Chris Piech | https://papers.nips.cc/paper_files/paper/2021/hash/0b9b6d6d154e98ce34b3f2e4ef76eae9-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11739-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b9b6d6d154e98ce34b3f2e4ef76eae9-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=4pciaBbRL4B | null | Contemporary coding education often presents students with the task of developing programs that have user interaction and complex dynamic systems, such as mouse based games. While pedagogically compelling, there are no contemporary autonomous methods for providing feedback. Notably, interactive programs are impossible to grade by traditional unit tests. In this paper we formalize the challenge of providing feedback to interactive programs as a task of classifying Markov Decision Processes (MDPs). Each student's program fully specifies an MDP where the agent needs to operate and decide, under reasonable generalization, if the dynamics and reward model of the input MDP should be categorized as correct or broken. We demonstrate that by designing a cooperative objective between an agent and an autoregressive model, we can use the agent to sample differential trajectories from the input MDP that allows a classifier to determine membership: Play to Grade. Our method enables an automatic feedback system for interactive code assignments. We release a dataset of 711,274 anonymized student submissions to a single assignment with hand-coded bug labels to support future research. | null |
Distributional Reinforcement Learning for Multi-Dimensional Reward Functions | https://papers.nips.cc/paper_files/paper/2021/hash/0b9e57c46de934cee33b0e8d1839bfc2-Abstract.html | Pushi Zhang, Xiaoyu Chen, Li Zhao, Wei Xiong, Tao Qin, Tie-Yan Liu | https://papers.nips.cc/paper_files/paper/2021/hash/0b9e57c46de934cee33b0e8d1839bfc2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11740-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0b9e57c46de934cee33b0e8d1839bfc2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=u7oKU1iXTa9 | https://papers.nips.cc/paper_files/paper/2021/file/0b9e57c46de934cee33b0e8d1839bfc2-Supplemental.pdf | A growing trend for value-based reinforcement learning (RL) algorithms is to capture more information than scalar value functions in the value network. One of the most well-known methods in this branch is distributional RL, which models return distribution instead of scalar value. In another line of work, hybrid reward architectures (HRA) in RL have been studied to model source-specific value functions for each source of reward, which is also shown to be beneficial in performance. To fully inherit the benefits of distributional RL and hybrid reward architectures, we introduce Multi-Dimensional Distributional DQN (MD3QN), which extends distributional RL to model the joint return distribution from multiple reward sources. As a by-product of joint distribution modeling, MD3QN can capture not only the randomness in returns for each source of reward, but also the rich reward correlation between the randomness of different sources. We prove the convergence for the joint distributional Bellman operator and build our empirical algorithm by minimizing the Maximum Mean Discrepancy between joint return distribution and its Bellman target. In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions, and outperforms previous RL methods utilizing multi-dimensional reward functions in the control setting. | null |
Differentiable Unsupervised Feature Selection based on a Gated Laplacian | https://papers.nips.cc/paper_files/paper/2021/hash/0bc10d8a74dbafbf242e30433e83aa56-Abstract.html | Ofir Lindenbaum, Uri Shaham, Erez Peterfreund, Jonathan Svirsky, Nicolas Casey, Yuval Kluger | https://papers.nips.cc/paper_files/paper/2021/hash/0bc10d8a74dbafbf242e30433e83aa56-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11741-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0bc10d8a74dbafbf242e30433e83aa56-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=OUH25e12YyH | https://papers.nips.cc/paper_files/paper/2021/file/0bc10d8a74dbafbf242e30433e83aa56-Supplemental.pdf | Scientific observations may consist of a large number of variables (features). Selecting a subset of meaningful features is often crucial for identifying patterns hidden in the ambient space. In this paper, we present a method for unsupervised feature selection, and we demonstrate its advantage in clustering, a common unsupervised task. We propose a differentiable loss that combines a graph Laplacian-based score that favors low-frequency features with a gating mechanism for removing nuisance features. Our method improves upon the naive graph Laplacian score by replacing it with a gated variant computed on a subset of low-frequency features. We identify this subset by learning the parameters of continuously relaxed Bernoulli variables, which gate the entire feature space. We mathematically motivate the proposed approach and demonstrate that it is crucial to compute the graph Laplacian on the gated inputs rather than on the full feature set in the high noise regime. Using several real-world examples, we demonstrate the efficacy and advantage of the proposed approach over leading baselines. | null |
Smooth Bilevel Programming for Sparse Regularization | https://papers.nips.cc/paper_files/paper/2021/hash/0bed45bd5774ffddc95ffe500024f628-Abstract.html | Clarice Poon, Gabriel Peyré | https://papers.nips.cc/paper_files/paper/2021/hash/0bed45bd5774ffddc95ffe500024f628-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11742-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0bed45bd5774ffddc95ffe500024f628-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=vnHjsF7NSMw | https://papers.nips.cc/paper_files/paper/2021/file/0bed45bd5774ffddc95ffe500024f628-Supplemental.pdf | Iteratively reweighted least squares (IRLS) is a popular approach to solve sparsity-enforcing regression problems in machine learning. State of the art approaches are more efficient but typically rely on specific coordinate pruning schemes. In this work, we show how a surprisingly simple re-parametrization of IRLS, coupled with a bilevel resolution (instead of an alternating scheme) is able to achieve top performances on a wide range of sparsity (such as Lasso, group Lasso and trace norm regularizations), regularization strength (including hard constraints), and design matrices (ranging from correlated designs to differential operators). Similarly to IRLS, our method only involves linear systems resolutions, but in sharp contrast, corresponds to the minimization of a smooth function. Despite being non-convex, we show that there are no spurious minima and that saddle points are "ridable", so that there always exists a descent direction. We thus advocate for the use of a BFGS quasi-Newton solver, which makes our approach simple, robust and efficient. We perform a numerical benchmark of the convergence speed of our algorithm against state of the art solvers for Lasso, group Lasso, trace norm and linearly constrained problems. These results highlight the versatility of our approach, removing the need to use different solvers depending on the specificity of the ML problem under study. | null |
Grounding Representation Similarity Through Statistical Testing | https://papers.nips.cc/paper_files/paper/2021/hash/0c0bf917c7942b5a08df71f9da626f97-Abstract.html | Frances Ding, Jean-Stanislas Denain, Jacob Steinhardt | https://papers.nips.cc/paper_files/paper/2021/hash/0c0bf917c7942b5a08df71f9da626f97-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11743-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0c0bf917c7942b5a08df71f9da626f97-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=_kwj6V53ZqB | https://papers.nips.cc/paper_files/paper/2021/file/0c0bf917c7942b5a08df71f9da626f97-Supplemental.pdf | To understand neural network behavior, recent works quantitatively compare different networks' learned representations using canonical correlation analysis (CCA), centered kernel alignment (CKA), and other dissimilarity measures. Unfortunately, these widely used measures often disagree on fundamental observations, such as whether deep networks differing only in random initialization learn similar representations. These disagreements raise the question: which, if any, of these dissimilarity measures should we believe? We provide a framework to ground this question through a concrete test: measures should have \emph{sensitivity} to changes that affect functional behavior, and \emph{specificity} against changes that do not. We quantify this through a variety of functional behaviors including probing accuracy and robustness to distribution shift, and examine changes such as varying random initialization and deleting principal components. We find that current metrics exhibit different weaknesses, note that a classical baseline performs surprisingly well, and highlight settings where all metrics appear to fail, thus providing a challenge set for further improvement. | null |
A Consciousness-Inspired Planning Agent for Model-Based Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2021/hash/0c215f194276000be6a6df6528067151-Abstract.html | Mingde Zhao, Zhen Liu, Sitao Luan, Shuyuan Zhang, Doina Precup, Yoshua Bengio | https://papers.nips.cc/paper_files/paper/2021/hash/0c215f194276000be6a6df6528067151-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11744-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0c215f194276000be6a6df6528067151-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=jh1lAmTMOJp | https://papers.nips.cc/paper_files/paper/2021/file/0c215f194276000be6a6df6528067151-Supplemental.pdf | We present an end-to-end, model-based deep reinforcement learning agent which dynamically attends to relevant parts of its state during planning. The agent uses a bottleneck mechanism over a set-based representation to force the number of entities to which the agent attends at each planning step to be small. In experiments, we investigate the bottleneck mechanism with several sets of customized environments featuring different challenges. We consistently observe that the design allows the planning agents to generalize their learned task-solving abilities in compatible unseen environments by attending to the relevant objects, leading to better out-of-distribution generalization performance. | null |
Reward-Free Model-Based Reinforcement Learning with Linear Function Approximation | https://papers.nips.cc/paper_files/paper/2021/hash/0cb929eae7a499e50248a3a78f7acfc7-Abstract.html | Weitong ZHANG, Dongruo Zhou, Quanquan Gu | https://papers.nips.cc/paper_files/paper/2021/hash/0cb929eae7a499e50248a3a78f7acfc7-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11745-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0cb929eae7a499e50248a3a78f7acfc7-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=IoEnnwAP7aP | null | We study the model-based reward-free reinforcement learning with linear function approximation for episodic Markov decision processes (MDPs). In this setting, the agent works in two phases. In the exploration phase, the agent interacts with the environment and collects samples without the reward. In the planning phase, the agent is given a specific reward function and uses samples collected from the exploration phase to learn a good policy. We propose a new provably efficient algorithm, called UCRL-RFE under the Linear Mixture MDP assumption, where the transition probability kernel of the MDP can be parameterized by a linear function over certain feature mappings defined on the triplet of state, action, and next state. We show that to obtain an $\epsilon$-optimal policy for arbitrary reward function, UCRL-RFE needs to sample at most $\tilde O(H^5d^2\epsilon^{-2})$ episodes during the exploration phase. Here, $H$ is the length of the episode, $d$ is the dimension of the feature mapping. We also propose a variant of UCRL-RFE using Bernstein-type bonus and show that it needs to sample at most $\tilde O(H^4d(H + d)\epsilon^{-2})$ to achieve an $\epsilon$-optimal policy. By constructing a special class of linear Mixture MDPs, we also prove that for any reward-free algorithm, it needs to sample at least $\tilde \Omega(H^2d\epsilon^{-2})$ episodes to obtain an $\epsilon$-optimal policy. Our upper bound matches the lower bound in terms of the dependence on $\epsilon$ and the dependence on $d$ if $H \ge d$. | null |
Beltrami Flow and Neural Diffusion on Graphs | https://papers.nips.cc/paper_files/paper/2021/hash/0cbed40c0d920b94126eaf5e707be1f5-Abstract.html | Benjamin Chamberlain, James Rowbottom, Davide Eynard, Francesco Di Giovanni, Xiaowen Dong, Michael Bronstein | https://papers.nips.cc/paper_files/paper/2021/hash/0cbed40c0d920b94126eaf5e707be1f5-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11746-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0cbed40c0d920b94126eaf5e707be1f5-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=4YlE2huxEsl | https://papers.nips.cc/paper_files/paper/2021/file/0cbed40c0d920b94126eaf5e707be1f5-Supplemental.pdf | We propose a novel class of graph neural networks based on the discretized Beltrami flow, a non-Euclidean diffusion PDE. In our model, node features are supplemented with positional encodings derived from the graph topology and jointly evolved by the Beltrami flow, producing simultaneously continuous feature learning and topology evolution. The resulting model generalizes many popular graph neural networks and achieves state-of-the-art results on several benchmarks. | null |
Think Big, Teach Small: Do Language Models Distil Occam’s Razor? | https://papers.nips.cc/paper_files/paper/2021/hash/0cd6a652ed1f7811192db1f700c8f0e7-Abstract.html | Gonzalo Jaimovitch-Lopez, David Castellano Falcón, Cesar Ferri, José Hernández-Orallo | https://papers.nips.cc/paper_files/paper/2021/hash/0cd6a652ed1f7811192db1f700c8f0e7-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11747-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0cd6a652ed1f7811192db1f700c8f0e7-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=F6gvhOgTM-4 | https://papers.nips.cc/paper_files/paper/2021/file/0cd6a652ed1f7811192db1f700c8f0e7-Supplemental.pdf | Large language models have recently shown a remarkable ability for few-shot learning, including patterns of algorithmic nature. However, it is still an open question to determine what kind of patterns these models can capture and how many examples they need in their prompts. We frame this question as a teaching problem with strong priors, and study whether language models can identify simple algorithmic concepts from small witness sets. In particular, we explore how several GPT architectures, program induction systems and humans perform in terms of the complexity of the concept and the number of additional examples, and how much their behaviour differs. This first joint analysis of language models and machine teaching can address key questions for artificial intelligence and machine learning, such as whether some strong priors, and Occam’s razor in particular, can be distilled from data, making learning from a few examples possible. | null |
Disentangling Identifiable Features from Noisy Data with Structured Nonlinear ICA | https://papers.nips.cc/paper_files/paper/2021/hash/0cdbb4e65815fbaf79689b15482e7575-Abstract.html | Hermanni Hälvä, Sylvain Le Corff, Luc Lehéricy, Jonathan So, Yongjie Zhu, Elisabeth Gassiat, Aapo Hyvarinen | https://papers.nips.cc/paper_files/paper/2021/hash/0cdbb4e65815fbaf79689b15482e7575-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11748-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0cdbb4e65815fbaf79689b15482e7575-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=52XXcK8jY0J | https://papers.nips.cc/paper_files/paper/2021/file/0cdbb4e65815fbaf79689b15482e7575-Supplemental.pdf | We introduce a new general identifiable framework for principled disentanglement referred to as Structured Nonlinear Independent Component Analysis (SNICA). Our contribution is to extend the identifiability theory of deep generative models for a very broad class of structured models. While previous works have shown identifiability for specific classes of time-series models, our theorems extend this to more general temporal structures as well as to models with more complex structures such as spatial dependencies. In particular, we establish the major result that identifiability for this framework holds even in the presence of noise of unknown distribution. Finally, as an example of our framework's flexibility, we introduce the first nonlinear ICA model for time-series that combines the following very useful properties: it accounts for both nonstationarity and autocorrelation in a fully unsupervised setting; performs dimensionality reduction; models hidden states; and enables principled estimation and inference by variational maximum-likelihood. | null |
Conditionally Parameterized, Discretization-Aware Neural Networks for Mesh-Based Modeling of Physical Systems | https://papers.nips.cc/paper_files/paper/2021/hash/0cddb7c06f1cd518e1efdc0e20b70c31-Abstract.html | Jiayang Xu, Aniruddhe Pradhan, Karthikeyan Duraisamy | https://papers.nips.cc/paper_files/paper/2021/hash/0cddb7c06f1cd518e1efdc0e20b70c31-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11749-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0cddb7c06f1cd518e1efdc0e20b70c31-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=0yMGEUQKd2D | https://papers.nips.cc/paper_files/paper/2021/file/0cddb7c06f1cd518e1efdc0e20b70c31-Supplemental.pdf | Simulations of complex physical systems are typically realized by discretizing partial differential equations (PDEs) on unstructured meshes. While neural networks have recently been explored for the surrogate and reduced order modeling of PDE solutions, they often ignore interactions or hierarchical relations between input features, and process them as concatenated mixtures. We generalize the idea of conditional parameterization -- using trainable functions of input parameters to generate the weights of a neural network, and extend them in a flexible way to encode critical information. Inspired by discretized numerical methods, choices of the parameters include physical quantities and mesh topology features. The functional relation between the modeled features and the parameters is built into the network architecture. The method is implemented on different networks and applied to frontier scientific machine learning tasks including the discovery of unmodeled physics, super-resolution of coarse fields, and the simulation of unsteady flows with chemical reactions. The results show that the conditionally-parameterized networks provide superior performance compared to their traditional counterparts. The CP-GNet - an architecture that can be trained on very few data snapshots - is proposed as the first deep learning model capable of standalone prediction of reacting flows on irregular meshes. | null |
USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization Problems | https://papers.nips.cc/paper_files/paper/2021/hash/0d3180d672e08b4c5312dcdafdf6ef36-Abstract.html | Guangmo Tong | https://papers.nips.cc/paper_files/paper/2021/hash/0d3180d672e08b4c5312dcdafdf6ef36-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11750-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d3180d672e08b4c5312dcdafdf6ef36-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=P85jauwfNCV | https://papers.nips.cc/paper_files/paper/2021/file/0d3180d672e08b4c5312dcdafdf6ef36-Supplemental.zip | Real-world decision-making systems are often subject to uncertainties that have to be resolved through observational data. Therefore, we are frequently confronted with combinatorial optimization problems of which the objective function is unknown and thus has to be debunked using empirical evidence. In contrast to the common practice that relies on a learning-and-optimization strategy, we consider the regression between combinatorial spaces, aiming to infer high-quality optimization solutions from samples of input-solution pairs -- without the need to learn the objective function. Our main deliverable is a universal solver that is able to handle abstract undetermined stochastic combinatorial optimization problems. For learning foundations, we present learning-error analysis under the PAC-Bayesian framework using a new margin-based analysis. In empirical studies, we demonstrate our design using proof-of-concept experiments, and compare it with other methods that are potentially applicable. Overall, we obtain highly encouraging experimental results for several classic combinatorial problems on both synthetic and real-world datasets. | null |
Adaptive Conformal Inference Under Distribution Shift | https://papers.nips.cc/paper_files/paper/2021/hash/0d441de75945e5acbc865406fc9a2559-Abstract.html | Isaac Gibbs, Emmanuel Candes | https://papers.nips.cc/paper_files/paper/2021/hash/0d441de75945e5acbc865406fc9a2559-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11751-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d441de75945e5acbc865406fc9a2559-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=6vaActvpcp3 | https://papers.nips.cc/paper_files/paper/2021/file/0d441de75945e5acbc865406fc9a2559-Supplemental.pdf | We develop methods for forming prediction sets in an online setting where the data generating distribution is allowed to vary over time in an unknown fashion. Our framework builds on ideas from conformal inference to provide a general wrapper that can be combined with any black box method that produces point predictions of the unseen label or estimated quantiles of its distribution. While previous conformal inference methods rely on the assumption that the data are exchangeable, our adaptive approach provably achieves the desired coverage frequency over long-time intervals irrespective of the true data generating process. We accomplish this by modelling the distribution shift as a learning problem in a single parameter whose optimal value is varying over time and must be continuously re-estimated. We test our method, adaptive conformal inference, on two real world datasets and find that its predictions are robust to visible and significant distribution shifts. | null |
Periodic Activation Functions Induce Stationarity | https://papers.nips.cc/paper_files/paper/2021/hash/0d5a4a5a748611231b945d28436b8ece-Abstract.html | Lassi Meronen, Martin Trapp, Arno Solin | https://papers.nips.cc/paper_files/paper/2021/hash/0d5a4a5a748611231b945d28436b8ece-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11752-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d5a4a5a748611231b945d28436b8ece-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=gRwh5HkdaTm | https://papers.nips.cc/paper_files/paper/2021/file/0d5a4a5a748611231b945d28436b8ece-Supplemental.pdf | Neural network models are known to reinforce hidden data biases, making them unreliable and difficult to interpret. We seek to build models that `know what they do not know' by introducing inductive biases in the function space. We show that periodic activation functions in Bayesian neural networks establish a connection between the prior on the network weights and translation-invariant, stationary Gaussian process priors. Furthermore, we show that this link goes beyond sinusoidal (Fourier) activations by also covering triangular wave and periodic ReLU activation functions. In a series of experiments, we show that periodic activation functions obtain comparable performance for in-domain data and capture sensitivity to perturbed inputs in deep neural networks for out-of-domain detection. | null |
Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation | https://papers.nips.cc/paper_files/paper/2021/hash/0d5bd023a3ee11c7abca5b42a93c4866-Abstract.html | David Acuna, Jonah Philion, Sanja Fidler | https://papers.nips.cc/paper_files/paper/2021/hash/0d5bd023a3ee11c7abca5b42a93c4866-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11753-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d5bd023a3ee11c7abca5b42a93c4866-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ZfIO21FYv4 | https://papers.nips.cc/paper_files/paper/2021/file/0d5bd023a3ee11c7abca5b42a93c4866-Supplemental.pdf | Autonomous driving relies on a huge volume of real-world data to be labeled to high precision. Alternative solutions seek to exploit driving simulators that can generate large amounts of labeled data with a plethora of content variations. However, the domain gap between the synthetic and real data remains, raising the following important question: what is the best way to utilize a self-driving simulator for perception tasks? In this work, we build on top of recent advances in domain-adaptation theory, and from this perspective, propose ways to minimize the reality gap. We primarily focus on the use of labels in the synthetic domain alone. Our approach introduces both a principled way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator. Our method is easy to implement in practice as it is agnostic of the network architecture and the choice of the simulator. We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data (cameras, lidar) using an open-source simulator (CARLA), and evaluate the entire framework on a real-world dataset (nuScenes). Last but not least, we show what types of variations (e.g. weather conditions, number of assets, map design and color diversity) matter to perception networks when trained with driving simulators, and which ones can be compensated for with our domain adaptation technique. | null |
KS-GNN: Keywords Search over Incomplete Graphs via Graphs Neural Network | https://papers.nips.cc/paper_files/paper/2021/hash/0d7363894acdee742caf7fe4e97c4d49-Abstract.html | YU HAO, Xin Cao, Yufan Sheng, Yixiang Fang, Wei Wang | https://papers.nips.cc/paper_files/paper/2021/hash/0d7363894acdee742caf7fe4e97c4d49-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11754-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d7363894acdee742caf7fe4e97c4d49-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Sh_MDcDUD5e | https://papers.nips.cc/paper_files/paper/2021/file/0d7363894acdee742caf7fe4e97c4d49-Supplemental.pdf | Keyword search is a fundamental task to retrieve information that is the most relevant to the query keywords. Keyword search over graphs aims to find subtrees or subgraphs containing all query keywords ranked according to some criteria. Existing studies all assume that the graphs have complete information. However, real-world graphs may contain some missing information (such as edges or keywords), thus making the problem much more challenging. To solve the problem of keyword search over incomplete graphs, we propose a novel model named KS-GNN based on the graph neural network and the auto-encoder. By considering the latent relationships and the frequency of different keywords, the proposed KS-GNN aims to alleviate the effect of missing information and is able to learn low-dimensional representative node embeddings that preserve both graph structure and keyword features. Our model can effectively answer keyword search queries with linear time complexity over incomplete graphs. The experiments on four real-world datasets show that our model consistently achieves better performance than state-of-the-art baseline methods in graphs having missing information. | null |
Reconstruction for Powerful Graph Representations | https://papers.nips.cc/paper_files/paper/2021/hash/0d8080853a54f8985276b0130266a657-Abstract.html | Leonardo Cotta, Christopher Morris, Bruno Ribeiro | https://papers.nips.cc/paper_files/paper/2021/hash/0d8080853a54f8985276b0130266a657-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11755-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d8080853a54f8985276b0130266a657-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ZKbZ4mebI9l | https://papers.nips.cc/paper_files/paper/2021/file/0d8080853a54f8985276b0130266a657-Supplemental.pdf | Graph neural networks (GNNs) have limited expressive power, failing to represent many graph classes correctly. While more expressive graph representation learning (GRL) alternatives can distinguish some of these classes, they are significantly harder to implement, may not scale well, and have not been shown to outperform well-tuned GNNs in real-world tasks. Thus, devising simple, scalable, and expressive GRL architectures that also achieve real-world improvements remains an open challenge. In this work, we show the extent to which graph reconstruction---reconstructing a graph from its subgraphs---can mitigate the theoretical and practical problems currently faced by GRL architectures. First, we leverage graph reconstruction to build two new classes of expressive graph representations. Secondly, we show how graph reconstruction boosts the expressive power of any GNN architecture while being a (provably) powerful inductive bias for invariances to vertex removals. Empirically, we show how reconstruction can boost GNN's expressive power---while maintaining its invariance to permutations of the vertices---by solving seven graph property tasks not solvable by the original GNN. Further, we demonstrate how it boosts state-of-the-art GNN's performance across nine real-world benchmark datasets. | null |
Revealing and Protecting Labels in Distributed Training | https://papers.nips.cc/paper_files/paper/2021/hash/0d924f0e6b3fd0d91074c22727a53966-Abstract.html | Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays | https://papers.nips.cc/paper_files/paper/2021/hash/0d924f0e6b3fd0d91074c22727a53966-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11756-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0d924f0e6b3fd0d91074c22727a53966-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=WBuLBaoEKNK | null | Distributed learning paradigms such as federated learning often involve transmission of model updates, or gradients, over a network, thereby avoiding transmission of private data. However, it is possible for sensitive information about the training data to be revealed from such gradients. Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or they can be reconstructed jointly with model inputs by using Gradients Matching [Zhu et al.] with additional knowledge about the current state of the model. In this work, we propose a method to discover the set of labels of training samples from only the gradient of the last layer and the id to label mapping. Our method is applicable to a wide variety of model architectures across multiple domains. We demonstrate the effectiveness of our method for model training in two domains - image classification, and automatic speech recognition. Furthermore, we show that existing reconstruction techniques improve their efficacy when used in conjunction with our method. Conversely, we demonstrate that gradient quantization and sparsification can significantly reduce the success of the attack. | null |
Solving Graph-based Public Goods Games with Tree Search and Imitation Learning | https://papers.nips.cc/paper_files/paper/2021/hash/0db2e204010400f5c506620adcd1ae68-Abstract.html | Victor-Alexandru Darvariu, Stephen Hailes, Mirco Musolesi | https://papers.nips.cc/paper_files/paper/2021/hash/0db2e204010400f5c506620adcd1ae68-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11757-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0db2e204010400f5c506620adcd1ae68-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XIuDe2A0jDL | https://papers.nips.cc/paper_files/paper/2021/file/0db2e204010400f5c506620adcd1ae68-Supplemental.zip | Public goods games represent insightful settings for studying incentives for individual agents to make contributions that, while costly for each of them, benefit the wider society. In this work, we adopt the perspective of a central planner with a global view of a network of self-interested agents and the goal of maximizing some desired property in the context of a best-shot public goods game. Existing algorithms for this known NP-complete problem find solutions that are sub-optimal and cannot optimize for criteria other than social welfare.In order to efficiently solve public goods games, our proposed method directly exploits the correspondence between equilibria and the Maximal Independent Set (mIS) structural property of graphs. In particular, we define a Markov Decision Process which incrementally generates an mIS, and adopt a planning method to search for equilibria, outperforming existing methods. Furthermore, we devise a graph imitation learning technique that uses demonstrations of the search to obtain a graph neural network parametrized policy which quickly generalizes to unseen game instances. Our evaluation results show that this policy is able to reach 99.5\% of the performance of the planning method while being three orders of magnitude faster to evaluate on the largest graphs tested. The methods presented in this work can be applied to a large class of public goods games of potentially high societal impact and more broadly to other graph combinatorial optimization problems. | null |
Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence | https://papers.nips.cc/paper_files/paper/2021/hash/0dd1bc593a91620daecf7723d2235624-Abstract.html | Qi Qi, Youzhi Luo, Zhao Xu, Shuiwang Ji, Tianbao Yang | https://papers.nips.cc/paper_files/paper/2021/hash/0dd1bc593a91620daecf7723d2235624-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11758-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0dd1bc593a91620daecf7723d2235624-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Q_64PF6XNut | https://papers.nips.cc/paper_files/paper/2021/file/0dd1bc593a91620daecf7723d2235624-Supplemental.pdf | Areas under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems. Compared with AUROC, AUPRC is a more appropriate metric for highly imbalanced datasets. While stochastic optimization of AUROC has been studied extensively, principled stochastic optimization of AUPRC has been rarely explored. In this work, we propose a principled technical method to optimize AUPRC for deep learning. Our approach is based on maximizing the averaged precision (AP), which is an unbiased point estimator of AUPRC. We cast the objective into a sum of dependent compositional functions with inner functions dependent on random variables of the outer level. We propose efficient adaptive and non-adaptive stochastic algorithms named SOAP with provable convergence guarantee under mild conditions by leveraging recent advances in stochastic compositional optimization. Extensive experimental results on image and graph datasets demonstrate that our proposed method outperforms prior methods on imbalanced problems in terms of AUPRC. To the best of our knowledge, our work represents the first attempt to optimize AUPRC with provable convergence. The SOAP has been implemented in the libAUC library at https://libauc.org/. | null |
Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization | https://papers.nips.cc/paper_files/paper/2021/hash/0dd6049f5fa537d41753be6d37859430-Abstract.html | Qi Zhu, Carl Yang, Yidan Xu, Haonan Wang, Chao Zhang, Jiawei Han | https://papers.nips.cc/paper_files/paper/2021/hash/0dd6049f5fa537d41753be6d37859430-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11759-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0dd6049f5fa537d41753be6d37859430-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=CzVPfeqPOBu | https://papers.nips.cc/paper_files/paper/2021/file/0dd6049f5fa537d41753be6d37859430-Supplemental.zip | Graph neural networks (GNNs) have achieved superior performance in various applications, but training dedicated GNNs can be costly for large-scale graphs. Some recent work started to study the pre-training of GNNs. However, none of them provide theoretical insights into the design of their frameworks, or clear requirements and guarantees towards their transferability. In this work, we establish a theoretically grounded and practically useful framework for the transfer learning of GNNs. Firstly, we propose a novel view towards the essential graph information and advocate the capturing of it as the goal of transferable GNN training, which motivates the design of EGI (Ego-Graph Information maximization) to analytically achieve this goal. Secondly, when node features are structure-relevant, we conduct an analysis of EGI transferability regarding the difference between the local graph Laplacians of the source and target graphs. We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct-transferring, while those on large-scale knowledge graphs show promising results in the more practical setting of transferring with fine-tuning. | null |
You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership | https://papers.nips.cc/paper_files/paper/2021/hash/0dfd8a39e2a5dd536c185e19a804a73b-Abstract.html | Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang | https://papers.nips.cc/paper_files/paper/2021/hash/0dfd8a39e2a5dd536c185e19a804a73b-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11760-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0dfd8a39e2a5dd536c185e19a804a73b-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=-Z7FuZGUzv | https://papers.nips.cc/paper_files/paper/2021/file/0dfd8a39e2a5dd536c185e19a804a73b-Supplemental.pdf | Despite tremendous success in many application scenarios, the training and inference costs of using deep learning are also rapidly increasing over time. The lottery ticket hypothesis (LTH) emerges as a promising framework to leverage a special sparse subnetwork (i.e., $\textit{winning ticket}$) instead of a full model for both training and inference, that can lower both costs without sacrificing the performance. The main resource bottleneck of LTH is however the extraordinary cost to find the sparse mask of the winning ticket. That makes the found winning ticket become a valuable asset to the owners, highlighting the necessity of protecting its copyright. Our setting adds a new dimension to the recently soaring interest in protecting against the intellectual property (IP) infringement of deep models and verifying their ownerships, since they take owners' massive/unique resources to develop or train. While existing methods explored encrypted weights or predictions, we investigate a unique way to leverage sparse topological information to perform $\textit{lottery verification}$, by developing several graph-based signatures that can be embedded as credentials. By further combining trigger set-based methods, our proposal can work in both white-box and black-box verification scenarios. Through extensive experiments, we demonstrate the effectiveness of lottery verification in diverse models (ResNet-20, ResNet-18, ResNet-50) on CIFAR-10 and CIFAR-100. Specifically, our verification is shown to be robust to removal attacks such as model fine-tuning and pruning, as well as several ambiguity attacks. Our codes are available at https://github.com/VITA-Group/NO-stealing-LTH. | null |
Complexity Lower Bounds for Nonconvex-Strongly-Concave Min-Max Optimization | https://papers.nips.cc/paper_files/paper/2021/hash/0e105949d99a32ca1751703e94ece601-Abstract.html | Haochuan Li, Yi Tian, Jingzhao Zhang, Ali Jadbabaie | https://papers.nips.cc/paper_files/paper/2021/hash/0e105949d99a32ca1751703e94ece601-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11761-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e105949d99a32ca1751703e94ece601-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Kug2s3rHiG3 | https://papers.nips.cc/paper_files/paper/2021/file/0e105949d99a32ca1751703e94ece601-Supplemental.pdf | We provide a first-order oracle complexity lower bound for finding stationary points of min-max optimization problems where the objective function is smooth, nonconvex in the minimization variable, and strongly concave in the maximization variable. We establish a lower bound of $\Omega\left(\sqrt{\kappa}\epsilon^{-2}\right)$ for deterministic oracles, where $\epsilon$ defines the level of approximate stationarity and $\kappa$ is the condition number. Our lower bound matches the best existing upper bound in the $\epsilon$ and $\kappa$ dependence up to logarithmic factors. For stochastic oracles, we provide a lower bound of $\Omega\left(\sqrt{\kappa}\epsilon^{-2} + \kappa^{1/3}\epsilon^{-4}\right)$. It suggests that there is a gap between the best existing upper bound $\mathcal{O}(\kappa^3 \epsilon^{-4})$ and our lower bound in the condition number dependence. | null |
Early-stopped neural networks are consistent | https://papers.nips.cc/paper_files/paper/2021/hash/0e1ebad68af7f0ae4830b7ac92bc3c6f-Abstract.html | Ziwei Ji, Justin Li, Matus Telgarsky | https://papers.nips.cc/paper_files/paper/2021/hash/0e1ebad68af7f0ae4830b7ac92bc3c6f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11762-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e1ebad68af7f0ae4830b7ac92bc3c6f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=vPVTsuJtGky | null | This work studies the behavior of shallow ReLU networks trained with the logistic loss via gradient descent on binary classification data where the underlying data distribution is general, and the (optimal) Bayes risk is not necessarily zero. In this setting, it is shown that gradient descent with early stopping achieves population risk arbitrarily close to optimal in terms of not just logistic and misclassification losses, but also in terms of calibration, meaning the sigmoid mapping of its outputs approximates the true underlying conditional distribution arbitrarily finely. Moreover, the necessary iteration, sample, and architectural complexities of this analysis all scale naturally with a certain complexity measure of the true conditional model. Lastly, while it is not shown that early stopping is necessary, it is shown that any classifier satisfying a basic local interpolation property is inconsistent. | null |
NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM | https://papers.nips.cc/paper_files/paper/2021/hash/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Abstract.html | Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu | https://papers.nips.cc/paper_files/paper/2021/hash/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11763-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=bSgieZ8-be | https://papers.nips.cc/paper_files/paper/2021/file/0e4f5cc9f4f3f7f1651a6b9f9214e5b1-Supplemental.pdf | Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained Transformer networks. However, these models often contain hundreds of millions or even billions of parameters, bringing challenges to online deployment due to latency constraints. Recently, hardware manufacturers have introduced dedicated hardware for NxM sparsity to provide the flexibility of unstructured pruning with the runtime efficiency of structured approaches. NxM sparsity permits arbitrarily selecting M parameters to retain from a contiguous group of N in the dense representation. However, due to the extremely high complexity of pre-trained models, the standard sparse fine-tuning techniques often fail to generalize well on downstream tasks, which have limited data resources. To address such an issue in a principled manner, we introduce a new learning framework, called NxMTransformer, to induce NxM semi-structured sparsity on pretrained language models for natural language understanding to obtain better performance. In particular, we propose to formulate the NxM sparsity as a constrained optimization problem and use Alternating Direction Method of Multipliers (ADMM) to optimize the downstream tasks while taking the underlying hardware constraints into consideration. ADMM decomposes the NxM sparsification problem into two sub-problems that can be solved sequentially, generating sparsified Transformer networks that achieve high accuracy while being able to effectively execute on newly released hardware. We apply our approach to a wide range of NLP tasks, and our proposed method is able to achieve 1.7 points higher accuracy in GLUE score than current best practices. Moreover, we perform detailed analysis on our approach and shed light on how ADMM affects fine-tuning accuracy for downstream tasks. Finally, we illustrate how NxMTransformer achieves additional performance improvement with knowledge distillation based methods. | null |
Reliable Decisions with Threshold Calibration | https://papers.nips.cc/paper_files/paper/2021/hash/0e65972dce68dad4d52d063967f0a705-Abstract.html | Roshni Sahoo, Shengjia Zhao, Alyssa Chen, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2021/hash/0e65972dce68dad4d52d063967f0a705-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11764-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e65972dce68dad4d52d063967f0a705-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Mx-iNoxLU4t | null | Decision makers rely on probabilistic forecasts to predict the loss of different decision rules before deployment. When the forecasted probabilities match the true frequencies, predicted losses will be accurate. Although perfect forecasts are typically impossible, probabilities can be calibrated to match the true frequencies on average. However, we find that this \textit{average} notion of calibration, which is typically used in practice, does not necessarily guarantee accurate decision loss prediction. Specifically in the regression setting, the loss of threshold decisions, which are decisions based on whether the forecasted outcome falls above or below a cutoff, might not be predicted accurately. We propose a stronger notion of calibration called threshold calibration, which is exactly the condition required to ensure that decision loss is predicted accurately for threshold decisions. We provide an efficient algorithm which takes an uncalibrated forecaster as input and provably outputs a threshold-calibrated forecaster. Our procedure allows downstream decision makers to confidently estimate the loss of any threshold decision under any threshold loss function. Empirically, threshold calibration improves decision loss prediction without compromising on the quality of the decisions in two real-world settings: hospital scheduling decisions and resource allocation decisions. | null |
End-to-End Weak Supervision | https://papers.nips.cc/paper_files/paper/2021/hash/0e674a918ebca3f78bfe02e2f387689d-Abstract.html | Salva Rühling Cachay, Benedikt Boecking, Artur Dubrawski | https://papers.nips.cc/paper_files/paper/2021/hash/0e674a918ebca3f78bfe02e2f387689d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11765-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e674a918ebca3f78bfe02e2f387689d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=gbcsmD3Iznu | https://papers.nips.cc/paper_files/paper/2021/file/0e674a918ebca3f78bfe02e2f387689d-Supplemental.pdf | Aggregating multiple sources of weak supervision (WS) can ease the data-labeling bottleneck prevalent in many machine learning applications, by replacing the tedious manual collection of ground truth labels. Current state of the art approaches that do not use any labeled training data, however, require two separate modeling steps: Learning a probabilistic latent variable model based on the WS sources -- making assumptions that rarely hold in practice -- followed by downstream model training. Importantly, the first step of modeling does not consider the performance of the downstream model.To address these caveats we propose an end-to-end approach for directly learning the downstream model by maximizing its agreement with probabilistic labels generated by reparameterizing previous probabilistic posteriors with a neural network. Our results show improved performance over prior work in terms of end model performance on downstream test sets, as well as in terms of improved robustness to dependencies among weak supervision sources. | null |
Shift Invariance Can Reduce Adversarial Robustness | https://papers.nips.cc/paper_files/paper/2021/hash/0e7c7d6c41c76b9ee6445ae01cc0181d-Abstract.html | Vasu Singla, Songwei Ge, Basri Ronen, David Jacobs | https://papers.nips.cc/paper_files/paper/2021/hash/0e7c7d6c41c76b9ee6445ae01cc0181d-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11766-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e7c7d6c41c76b9ee6445ae01cc0181d-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=tqi_45ApQzF | https://papers.nips.cc/paper_files/paper/2021/file/0e7c7d6c41c76b9ee6445ae01cc0181d-Supplemental.zip | Shift invariance is a critical property of CNNs that improves performance on classification. However, we show that invariance to circular shifts can also lead to greater sensitivity to adversarial attacks. We first characterize the margin between classes when a shift-invariant {\em linear} classifier is used. We show that the margin can only depend on the DC component of the signals. Then, using results about infinitely wide networks, we show that in some simple cases, fully connected and shift-invariant neural networks produce linear decision boundaries. Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background. This is more than a curiosity; we show empirically that with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection. | null |
Wisdom of the Crowd Voting: Truthful Aggregation of Voter Information and Preferences | https://papers.nips.cc/paper_files/paper/2021/hash/0e900ad84f63618452210ab8baae0218-Abstract.html | Grant Schoenebeck, Biaoshuai Tao | https://papers.nips.cc/paper_files/paper/2021/hash/0e900ad84f63618452210ab8baae0218-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11767-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e900ad84f63618452210ab8baae0218-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=C5jDWzrZak | https://papers.nips.cc/paper_files/paper/2021/file/0e900ad84f63618452210ab8baae0218-Supplemental.pdf | We consider two-alternative elections where voters' preferences depend on a state variable that is not directly observable. Each voter receives a private signal that is correlated to the state variable. As a special case, our model captures the common scenario where voters can be categorized into three types: those who always prefer one alternative, those who always prefer the other, and those contingent voters whose preferences depend on the state. In this setting, even if every voter is a contingent voter, agents voting according to their private information need not result in the adoption of the universally preferred alternative, because the signals can be systematically biased. We present a mechanism that elicits and aggregates the private signals from the voters, and outputs the alternative that is favored by the majority. In particular, voters truthfully reporting their signals forms a strong Bayes Nash equilibrium (where no coalition of voters can deviate and receive a better outcome). | null |
Replay-Guided Adversarial Environment Design | https://papers.nips.cc/paper_files/paper/2021/hash/0e915db6326b6fb6a3c56546980a8c93-Abstract.html | Minqi Jiang, Michael Dennis, Jack Parker-Holder, Jakob Foerster, Edward Grefenstette, Tim Rocktäschel | https://papers.nips.cc/paper_files/paper/2021/hash/0e915db6326b6fb6a3c56546980a8c93-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11768-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e915db6326b6fb6a3c56546980a8c93-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=5UZ-AcwFDKJ | https://papers.nips.cc/paper_files/paper/2021/file/0e915db6326b6fb6a3c56546980a8c93-Supplemental.pdf | Deep reinforcement learning (RL) agents may successfully generalize to new settings if trained on an appropriately diverse set of environment and task configurations. Unsupervised Environment Design (UED) is a promising self-supervised RL paradigm, wherein the free parameters of an underspecified environment are automatically adapted during training to the agent's capabilities, leading to the emergence of diverse training environments. Here, we cast Prioritized Level Replay (PLR), an empirically successful but theoretically unmotivated method that selectively samples randomly-generated training levels, as UED. We argue that by curating completely random levels, PLR, too, can generate novel and complex levels for effective training. This insight reveals a natural class of UED methods we call Dual Curriculum Design (DCD). Crucially, DCD includes both PLR and a popular UED algorithm, PAIRED, as special cases and inherits similar theoretical guarantees. This connection allows us to develop novel theory for PLR, providing a version with a robustness guarantee at Nash equilibria. Furthermore, our theory suggests a highly counterintuitive improvement to PLR: by stopping the agent from updating its policy on uncurated levels (training on less data), we can improve the convergence to Nash equilibria. Indeed, our experiments confirm that our new method, PLR$^{\perp}$, obtains better results on a suite of out-of-distribution, zero-shot transfer tasks, in addition to demonstrating that PLR$^{\perp}$ improves the performance of PAIRED, from which it inherited its theoretical framework. | null |
There Is No Turning Back: A Self-Supervised Approach for Reversibility-Aware Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2021/hash/0e98aeeb54acf612b9eb4e48a269814c-Abstract.html | Nathan Grinsztajn, Johan Ferret, Olivier Pietquin, philippe preux, Matthieu Geist | https://papers.nips.cc/paper_files/paper/2021/hash/0e98aeeb54acf612b9eb4e48a269814c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11769-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e98aeeb54acf612b9eb4e48a269814c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=3X65eaS4PtP | https://papers.nips.cc/paper_files/paper/2021/file/0e98aeeb54acf612b9eb4e48a269814c-Supplemental.zip | We propose to learn to distinguish reversible from irreversible actions for better informed decision-making in Reinforcement Learning (RL). From theoretical considerations, we show that approximate reversibility can be learned through a simple surrogate task: ranking randomly sampled trajectory events in chronological order. Intuitively, pairs of events that are always observed in the same order are likely to be separated by an irreversible sequence of actions. Conveniently, learning the temporal order of events can be done in a fully self-supervised way, which we use to estimate the reversibility of actions from experience, without any priors.We propose two different strategies that incorporate reversibility in RL agents, one strategy for exploration (RAE) and one strategy for control (RAC). We demonstrate the potential of reversibility-aware agents in several environments, including the challenging Sokoban game. In synthetic tasks, we show that we can learn control policies that never fail and reduce to zero the side-effects of interactions, even without access to the reward function. | null |
Learning to Execute: Efficient Learning of Universal Plan-Conditioned Policies in Robotics | https://papers.nips.cc/paper_files/paper/2021/hash/0e9b734aa25ca8096cb7b56dc0dd8929-Abstract.html | Ingmar Schubert, Danny Driess, Ozgur S. Oguz, Marc Toussaint | https://papers.nips.cc/paper_files/paper/2021/hash/0e9b734aa25ca8096cb7b56dc0dd8929-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11770-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0e9b734aa25ca8096cb7b56dc0dd8929-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=lEkPb2Rhm7 | https://papers.nips.cc/paper_files/paper/2021/file/0e9b734aa25ca8096cb7b56dc0dd8929-Supplemental.zip | Applications of Reinforcement Learning (RL) in robotics are often limited by high data demand. On the other hand, approximate models are readily available in many robotics scenarios, making model-based approaches like planning a data-efficient alternative. Still, the performance of these methods suffers if the model is imprecise or wrong. In this sense, the respective strengths and weaknesses of RL and model-based planners are complementary. In the present work, we investigate how both approaches can be integrated into one framework that combines their strengths. We introduce Learning to Execute (L2E), which leverages information contained in approximate plans to learn universal policies that are conditioned on plans. In our robotic manipulation experiments, L2E exhibits increased performance when compared to pure RL, pure planning, or baseline methods combining learning and planning. | null |
Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks | https://papers.nips.cc/paper_files/paper/2021/hash/0ebcc77dc72360d0eb8e9504c78d38bd-Abstract.html | Jinhee Lee, Haeri Kim, Youngkyu Hong, Hye Won Chung | https://papers.nips.cc/paper_files/paper/2021/hash/0ebcc77dc72360d0eb8e9504c78d38bd-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11771-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0ebcc77dc72360d0eb8e9504c78d38bd-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=SGZn06ZXcG | https://papers.nips.cc/paper_files/paper/2021/file/0ebcc77dc72360d0eb8e9504c78d38bd-Supplemental.pdf | Despite remarkable performance in producing realistic samples, Generative Adversarial Networks (GANs) often produce low-quality samples near low-density regions of the data manifold, e.g., samples of minor groups. Many techniques have been developed to improve the quality of generated samples, either by post-processing generated samples or by pre-processing the empirical data distribution, but at the cost of reduced diversity. To promote diversity in sample generation without degrading the overall quality, we propose a simple yet effective method to diagnose and emphasize underrepresented samples during training of a GAN. The main idea is to use the statistics of the discrepancy between the data distribution and the model distribution at each data instance. Based on the observation that the underrepresented samples have a high average discrepancy or high variability in discrepancy, we propose a method to emphasize those samples during training of a GAN. Our experimental results demonstrate that the proposed method improves GAN performance on various datasets, and it is especially effective in improving the quality and diversity of sample generation for minor groups. | null |
Online Multi-Armed Bandits with Adaptive Inference | https://papers.nips.cc/paper_files/paper/2021/hash/0ec04cb3912c4f08874dd03716f80df1-Abstract.html | Maria Dimakopoulou, Zhimei Ren, Zhengyuan Zhou | https://papers.nips.cc/paper_files/paper/2021/hash/0ec04cb3912c4f08874dd03716f80df1-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11772-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0ec04cb3912c4f08874dd03716f80df1-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=kVHxBqPcn_ | https://papers.nips.cc/paper_files/paper/2021/file/0ec04cb3912c4f08874dd03716f80df1-Supplemental.pdf | During online decision making in Multi-Armed Bandits (MAB), one needs to conduct inference on the true mean reward of each arm based on data collected so far at each step. However, since the arms are adaptively selected--thereby yielding non-iid data--conducting inference accurately is not straightforward. In particular, sample averaging, which is used in the family of UCB and Thompson sampling (TS) algorithms, does not provide a good choice as it suffers from bias and a lack of good statistical properties (e.g. asymptotic normality). Our thesis in this paper is that more sophisticated inference schemes that take into account the adaptive nature of the sequentially collected data can unlock further performance gains, even though both UCB and TS type algorithms are optimal in the worst case. In particular, we propose a variant of TS-style algorithms--which we call doubly adaptive TS--that leverages recent advances in causal inference and adaptively reweights the terms of a doubly robust estimator on the true mean reward of each arm. Through 20 synthetic domain experiments and a semi-synthetic experiment based on data from an A/B test of a web service, we demonstrate that using an adaptive inferential scheme (while still retaining the exploration efficacy of TS) provides clear benefits in online decision making: the proposed DATS algorithm has superior empirical performance to existing baselines (UCB and TS) in terms of regret and sample complexity in identifying the best arm. In addition, we also provide a finite-time regret bound of doubly adaptive TS that matches (up to log factors) those of UCB and TS algorithms, thereby establishing that its improved practical benefits do not come at the expense of worst-case suboptimality. | null |
Efficient Truncated Linear Regression with Unknown Noise Variance | https://papers.nips.cc/paper_files/paper/2021/hash/0ed8861dc36bee580d100f91283d0559-Abstract.html | Constantinos Daskalakis, Patroklos Stefanou, Rui Yao, Emmanouil Zampetakis | https://papers.nips.cc/paper_files/paper/2021/hash/0ed8861dc36bee580d100f91283d0559-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11773-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0ed8861dc36bee580d100f91283d0559-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=oyHWvdvkZDv | https://papers.nips.cc/paper_files/paper/2021/file/0ed8861dc36bee580d100f91283d0559-Supplemental.pdf | Truncated linear regression is a classical challenge in Statistics, wherein a label, $y = w^T x + \varepsilon$, and its corresponding feature vector, $x \in \mathbb{R}^k$, are only observed if the label falls in some subset $S \subseteq \mathbb{R}$; otherwise the existence of the pair $(x, y)$ is hidden from observation. Linear regression with truncated observations has remained a challenge, in its general form, since the early works of [Tobin'58, Amemiya '73]. When the distribution of the error is normal with known variance, recent work of [Daskalakis et al. '19] provides computationally and statistically efficient estimators of the linear model, $w$. In this paper, we provide the first computationally and statistically efficient estimators for truncated linear regression when the noise variance is unknown, estimating both the linear model and the variance of the noise. Our estimator is based on an efficient implementation of Projected Stochastic Gradient Descent on the negative log-likelihood of the truncated sample. Importantly, we show that the error of our estimates is asymptotically normal, and we use this to provide explicit confidence regions for our estimates. | null |
Breaking the Dilemma of Medical Image-to-image Translation | https://papers.nips.cc/paper_files/paper/2021/hash/0f2818101a7ac4b96ceeba38de4b934c-Abstract.html | Lingke Kong, Chenyu Lian, Detian Huang, zhenjiang li, Yanle Hu, Qichao Zhou | https://papers.nips.cc/paper_files/paper/2021/hash/0f2818101a7ac4b96ceeba38de4b934c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11774-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0f2818101a7ac4b96ceeba38de4b934c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=C0GmZH2RnVR | https://papers.nips.cc/paper_files/paper/2021/file/0f2818101a7ac4b96ceeba38de4b934c-Supplemental.pdf | Supervised Pix2Pix and unsupervised Cycle-consistency are two modes that dominate the field of medical image-to-image translation. However, neither mode is ideal. The Pix2Pix mode has excellent performance, but it requires paired and well pixel-wise aligned images, which may not always be achievable due to respiratory motion or anatomy change between times that paired images are acquired. The Cycle-consistency mode is less stringent with training data and works well on unpaired or misaligned images, but its performance may not be optimal. In order to break the dilemma of the existing modes, we propose a new unsupervised mode called RegGAN for medical image-to-image translation. It is based on the theory of "loss-correction". In RegGAN, the misaligned target images are considered as noisy labels and the generator is trained with an additional registration network to fit the misaligned noise distribution adaptively. The goal is to search for the common optimal solution to both image-to-image translation and registration tasks. We incorporated RegGAN into a few state-of-the-art image-to-image translation methods and demonstrated that RegGAN could be easily combined with these methods to improve their performances. For example, a simple CycleGAN in our mode surpasses the latest NICEGAN while using fewer network parameters. Based on our results, RegGAN outperformed both Pix2Pix on aligned data and Cycle-consistency on misaligned or unpaired data. RegGAN is insensitive to noise, which makes it a better choice for a wide range of scenarios, especially for medical image-to-image translation tasks in which well pixel-wise aligned data are not available. Code and dataset are available at https://github.com/Kid-Liet/Reg-GAN. | null |
Temporally Abstract Partial Models | https://papers.nips.cc/paper_files/paper/2021/hash/0f3d014eead934bbdbacb62a01dc4831-Abstract.html | Khimya Khetarpal, Zafarali Ahmed, Gheorghe Comanici, Doina Precup | https://papers.nips.cc/paper_files/paper/2021/hash/0f3d014eead934bbdbacb62a01dc4831-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11775-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0f3d014eead934bbdbacb62a01dc4831-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=LGvlCcMgWqb | https://papers.nips.cc/paper_files/paper/2021/file/0f3d014eead934bbdbacb62a01dc4831-Supplemental.pdf | Humans and animals have the ability to reason and make predictions about different courses of action at many time scales. In reinforcement learning, option models (Sutton, Precup \& Singh, 1999; Precup, 2000) provide the framework for this kind of temporally abstract prediction and reasoning. Natural intelligent agents are also able to focus their attention on courses of action that are relevant or feasible in a given situation, sometimes termed affordable actions. In this paper, we define a notion of affordances for options, and develop temporally abstract partial option models, that take into account the fact that an option might be affordable only in certain situations. We analyze the trade-offs between estimation and approximation error in planning and learning when using such models, and identify some interesting special cases. Additionally, we empirically demonstrate the ability to learn both affordances and partial option models online resulting in improved sample efficiency and planning time in the Taxi domain. | null |
TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification | https://papers.nips.cc/paper_files/paper/2021/hash/0f49c89d1e7298bb9930789c8ed59d48-Abstract.html | Shengcai Liao, Ling Shao | https://papers.nips.cc/paper_files/paper/2021/hash/0f49c89d1e7298bb9930789c8ed59d48-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11776-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0f49c89d1e7298bb9930789c8ed59d48-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=I3yGrFoH8DF | https://papers.nips.cc/paper_files/paper/2021/file/0f49c89d1e7298bb9930789c8ed59d48-Supplemental.pdf | Transformers have recently gained increasing attention in computer vision. However, existing studies mostly use Transformers for feature representation learning, e.g. for image classification and dense predictions, and the generalizability of Transformers is unknown. In this work, we further investigate the possibility of applying Transformers for image matching and metric learning given pairs of images. We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention. Thus, we further design two naive solutions, i.e. query-gallery concatenation in ViT, and query-gallery cross-attention in the vanilla Transformer. The latter improves the performance, but it is still limited. This implies that the attention mechanism in Transformers is primarily designed for global feature aggregation, which is not naturally suitable for image matching. Accordingly, we propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity computation. Additionally, global max pooling and a multilayer perceptron (MLP) head are applied to decode the matching result. This way, the simplified decoder is computationally more efficient, while at the same time more effective for image matching. The proposed method, called TransMatcher, achieves state-of-the-art performance in generalizable person re-identification, with up to 6.1% and 5.7% performance gains in Rank-1 and mAP, respectively, on several popular datasets. Code is available at https://github.com/ShengcaiLiao/QAConv. | null |
Multi-Objective SPIBB: Seldonian Offline Policy Improvement with Safety Constraints in Finite MDPs | https://papers.nips.cc/paper_files/paper/2021/hash/0f65caf0a7d00afd2b87c028e88fe931-Abstract.html | harsh satija, Philip S. Thomas, Joelle Pineau, Romain Laroche | https://papers.nips.cc/paper_files/paper/2021/hash/0f65caf0a7d00afd2b87c028e88fe931-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11777-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0f65caf0a7d00afd2b87c028e88fe931-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XzH3QMBKIJ | https://papers.nips.cc/paper_files/paper/2021/file/0f65caf0a7d00afd2b87c028e88fe931-Supplemental.zip | We study the problem of Safe Policy Improvement (SPI) under constraints in the offline Reinforcement Learning (RL) setting. We consider the scenario where: (i) we have a dataset collected under a known baseline policy, (ii) multiple reward signals are received from the environment inducing as many objectives to optimize. We present an SPI formulation for this RL setting that takes into account the preferences of the algorithm’s user for handling the trade-offs for different reward signals while ensuring that the new policy performs at least as well as the baseline policy along each individual objective. We build on traditional SPI algorithms and propose a novel method based on Safe Policy Iteration with Baseline Bootstrapping (SPIBB, Laroche et al., 2019) that provides high probability guarantees on the performance of the agent in the true environment. We show the effectiveness of our method on a synthetic grid-world safety task as well as in a real-world critical care context to learn a policy for the administration of IV fluids and vasopressors to treat sepsis. | null |
Is Automated Topic Model Evaluation Broken? The Incoherence of Coherence | https://papers.nips.cc/paper_files/paper/2021/hash/0f83556a305d789b1d71815e8ea4f4b0-Abstract.html | Alexander Hoyle, Pranav Goel, Andrew Hian-Cheong, Denis Peskov, Jordan Boyd-Graber, Philip Resnik | https://papers.nips.cc/paper_files/paper/2021/hash/0f83556a305d789b1d71815e8ea4f4b0-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11778-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0f83556a305d789b1d71815e8ea4f4b0-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=tjdHCnPqoo | https://papers.nips.cc/paper_files/paper/2021/file/0f83556a305d789b1d71815e8ea4f4b0-Supplemental.pdf | Topic model evaluation, like evaluation of other unsupervised methods, can be contentious. However, the field has coalesced around automated estimates of topic coherence, which rely on the frequency of word co-occurrences in a reference corpus. Contemporary neural topic models surpass classical ones according to these metrics. At the same time, topic model evaluation suffers from a validation gap: automated coherence, developed for classical models, has not been validated using human experimentation for neural models. In addition, a meta-analysis of topic modeling literature reveals a substantial standardization gap in automated topic modeling benchmarks. To address the validation gap, we compare automated coherence with the two most widely accepted human judgment tasks: topic rating and word intrusion. To address the standardization gap, we systematically evaluate a dominant classical model and two state-of-the-art neural models on two commonly used datasets. Automated evaluations declare a winning model when corresponding human evaluations do not, calling into question the validity of fully automatic evaluations independent of human judgments. | null |
INDIGO: GNN-Based Inductive Knowledge Graph Completion Using Pair-Wise Encoding | https://papers.nips.cc/paper_files/paper/2021/hash/0fd600c953cde8121262e322ef09f70e-Abstract.html | Shuwen Liu, Bernardo Grau, Ian Horrocks, Egor Kostylev | https://papers.nips.cc/paper_files/paper/2021/hash/0fd600c953cde8121262e322ef09f70e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11779-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0fd600c953cde8121262e322ef09f70e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=m4k66oJFK9P | https://papers.nips.cc/paper_files/paper/2021/file/0fd600c953cde8121262e322ef09f70e-Supplemental.zip | The aim of knowledge graph (KG) completion is to extend an incomplete KG with missing triples. Popular approaches based on graph embeddings typically work by first representing the KG in a vector space, and then applying a predefined scoring function to the resulting vectors to complete the KG. These approaches work well in transductive settings, where predicted triples involve only constants seen during training; however, they are not applicable in inductive settings, where the KG on which the model was trained is extended with new constants or merged with other KGs. The use of Graph Neural Networks (GNNs) has recently been proposed as a way to overcome these limitations; however, existing approaches do not fully exploit the capabilities of GNNs and still rely on heuristics and ad-hoc scoring functions. In this paper, we propose a novel approach, where the KG is fully encoded into a GNN in a transparent way, and where the predicted triples can be read out directly from the last layer of the GNN without the need for additional components or scoring functions. Our experiments show that our model outperforms state-of-the-art approaches on inductive KG completion benchmarks. | null |
Do Input Gradients Highlight Discriminative Features? | https://papers.nips.cc/paper_files/paper/2021/hash/0fe6a94848e5c68a54010b61b3e94b0e-Abstract.html | Harshay Shah, Prateek Jain, Praneeth Netrapalli | https://papers.nips.cc/paper_files/paper/2021/hash/0fe6a94848e5c68a54010b61b3e94b0e-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11780-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/0fe6a94848e5c68a54010b61b3e94b0e-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=pR3dPOHrbfy | https://papers.nips.cc/paper_files/paper/2021/file/0fe6a94848e5c68a54010b61b3e94b0e-Supplemental.pdf | Post-hoc gradient-based interpretability methods [Simonyan et al., 2013, Smilkov et al., 2017] that provide instance-specific explanations of model predictions are often based on assumption (A): magnitude of input gradients—gradients of logits with respect to input—noisily highlight discriminative task-relevant features. In this work, we test the validity of assumption (A) using a three-pronged approach:1. We develop an evaluation framework, DiffROAR, to test assumption (A) on four image classification benchmarks. Our results suggest that (i) input gradients of standard models (i.e., trained on original data) may grossly violate (A), whereas (ii) input gradients of adversarially robust models satisfy (A).2. We then introduce BlockMNIST, an MNIST-based semi-real dataset, that by design encodes a priori knowledge of discriminative features. Our analysis on BlockMNIST leverages this information to validate as well as characterize differences between input gradient attributions of standard and robust models.3. Finally, we theoretically prove that our empirical findings hold on a simplified version of the BlockMNIST dataset. Specifically, we prove that input gradients of standard one-hidden-layer MLPs trained on this dataset do not highlight instance-specific signal coordinates, thus grossly violating assumption (A).Our findings motivate the need to formalize and test common assumptions in interpretability in a falsifiable manner [Leavitt and Morcos, 2020]. We believe that the DiffROAR evaluation framework and BlockMNIST-based datasets can serve as sanity checks to audit instance-specific interpretability methods; code and data available at https://github.com/harshays/inputgradients. | null |
Improving Conditional Coverage via Orthogonal Quantile Regression | https://papers.nips.cc/paper_files/paper/2021/hash/1006ff12c465532f8c574aeaa4461b16-Abstract.html | Shai Feldman, Stephen Bates, Yaniv Romano | https://papers.nips.cc/paper_files/paper/2021/hash/1006ff12c465532f8c574aeaa4461b16-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11781-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1006ff12c465532f8c574aeaa4461b16-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=pTe-8qCdDqy | https://papers.nips.cc/paper_files/paper/2021/file/1006ff12c465532f8c574aeaa4461b16-Supplemental.zip | We develop a method to generate prediction intervals that have a user-specified coverage level across all regions of feature-space, a property called conditional coverage. A typical approach to this task is to estimate the conditional quantiles with quantile regression---it is well-known that this leads to correct coverage in the large-sample limit, although it may not be accurate in finite samples. We find in experiments that traditional quantile regression can have poor conditional coverage. To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event. For the true conditional quantiles, these two quantities are independent (orthogonal), so the modified loss function continues to be valid. Moreover, we empirically show that the modified loss function leads to improved conditional coverage, as evaluated by several metrics. We also introduce two new metrics that check conditional coverage by looking at the strength of the dependence between the interval size and the indicator of miscoverage. | null |
Minimizing Polarization and Disagreement in Social Networks via Link Recommendation | https://papers.nips.cc/paper_files/paper/2021/hash/101951fe7ebe7bd8c77d14f75746b4bc-Abstract.html | Liwang Zhu, Qi Bao, Zhongzhi Zhang | https://papers.nips.cc/paper_files/paper/2021/hash/101951fe7ebe7bd8c77d14f75746b4bc-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11782-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/101951fe7ebe7bd8c77d14f75746b4bc-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ye-NP0VZtLC | https://papers.nips.cc/paper_files/paper/2021/file/101951fe7ebe7bd8c77d14f75746b4bc-Supplemental.pdf | Individuals' opinions are fundamentally shaped and evolved by their interactions with other people, and social phenomena such as disagreement and polarization are now tightly woven into daily life. The quantification and optimization of these concepts have been the subject of much recent research behind a wealth of high-impact data mining applications. In particular, researchers have addressed the question of how such concepts can be optimized by influencing the opinion of a small number of individuals or by designing the network from scratch. Here, rather than a “design-from-scratch” approach or altering the initial opinion, we study the optimization problem of recommending $k$ new links to minimize the sum of polarization and disagreement in a social network with $n$ nodes and $m$ edges. We show that our objective function of this combinatorial optimization problem is not submodular, although it is monotone. We propose a simple greedy algorithm with a constant-factor approximation that solves the problem in cubic running time, and we provide theoretical analysis of the approximation guarantee for the algorithm. To overcome the computation challenge for large networks, we also provide a fast algorithm with computation complexity $\tilde{O}(mk\epsilon^{-2})$ for any $\epsilon>0$, where the $\tilde{O}(\cdot)$ notation suppresses the ${\rm poly}(\log n)$ factors. Extensive experiments on real datasets demonstrate both the efficiency and effectiveness of our algorithms. | null |
Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations | https://papers.nips.cc/paper_files/paper/2021/hash/103303dd56a731e377d01f6a37badae3-Abstract.html | Shasha Li, Abhishek Aich, Shitong Zhu, Salman Asif, Chengyu Song, Amit Roy-Chowdhury, Srikanth Krishnamurthy | https://papers.nips.cc/paper_files/paper/2021/hash/103303dd56a731e377d01f6a37badae3-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11783-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/103303dd56a731e377d01f6a37badae3-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=-7EhrbfbK31 | https://papers.nips.cc/paper_files/paper/2021/file/103303dd56a731e377d01f6a37badae3-Supplemental.pdf | When compared to the image classification models, black-box adversarial attacks against video classification models have been largely understudied. This could be possible because, with video, the temporal dimension poses significant additional challenges in gradient estimation. Query-efficient black-box attacks rely on effectively estimated gradients towards maximizing the probability of misclassifying the target video. In this work, we demonstrate that such effective gradients can be searched for by parameterizing the temporal structure of the search space with geometric transformations. Specifically, we design a novel iterative algorithm GEOmetric TRAnsformed Perturbations (GEO-TRAP), for attacking video classification models. GEO-TRAP employs standard geometric transformation operations to reduce the search space for effective gradients into searching for a small group of parameters that define these operations. This group of parameters describes the geometric progression of gradients, resulting in a reduced and structured search space. Our algorithm inherently leads to successful perturbations with surprisingly few queries. For example, adversarial examples generated from GEO-TRAP have better attack success rates with ~73.55% fewer queries compared to the state-of-the-art method for video adversarial attacks on the widely used Jester dataset. Overall, our algorithm exposes vulnerabilities of diverse video classification models and achieves new state-of-the-art results under black-box settings on two large datasets. | null |
Optimal Rates for Random Order Online Optimization | https://papers.nips.cc/paper_files/paper/2021/hash/107030ca685076c0ed5e054e2c3ed940-Abstract.html | Uri Sherman, Tomer Koren, Yishay Mansour | https://papers.nips.cc/paper_files/paper/2021/hash/107030ca685076c0ed5e054e2c3ed940-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11784-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/107030ca685076c0ed5e054e2c3ed940-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=dfyjet3BMKA | null | We study online convex optimization in the random order model, recently proposed by Garber et al. (2020), where the loss functions may be chosen by an adversary, but are then presented to the online algorithm in a uniformly random order. Focusing on the scenario where the cumulative loss function is (strongly) convex, yet individual loss functions are smooth but might be non-convex, we give algorithms that achieve the optimal bounds and significantly outperform the results of Garber et al. (2020), completely removing the dimension dependence and improving their scaling with respect to the strong convexity parameter. Our analysis relies on novel connections between algorithmic stability and generalization for sampling without-replacement analogous to those studied in the with-replacement i.i.d. setting, as well as on a refined average stability analysis of stochastic gradient descent. | null |
Discrete-Valued Neural Communication | https://papers.nips.cc/paper_files/paper/2021/hash/10907813b97e249163587e6246612e21-Abstract.html | Dianbo Liu, Alex M. Lamb, Kenji Kawaguchi, Anirudh Goyal ALIAS PARTH GOYAL, Chen Sun, Michael C. Mozer, Yoshua Bengio | https://papers.nips.cc/paper_files/paper/2021/hash/10907813b97e249163587e6246612e21-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11785-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/10907813b97e249163587e6246612e21-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=YSYXmOzlrou | https://papers.nips.cc/paper_files/paper/2021/file/10907813b97e249163587e6246612e21-Supplemental.pdf | Deep learning has advanced from fully connected architectures to structured models organized into components, e.g., the transformer composed of positional elements, modular architectures divided into slots, and graph neural nets made up of nodes. The nature of structured models is that communication among the components has a bottleneck, typically achieved by restricted connectivity and attention. In this work, we further tighten the bottleneck via discreteness of the representations transmitted between components. We hypothesize that this constraint serves as a useful form of inductive bias. Our hypothesis is motivated by past empirical work showing the benefits of discretization in non-structured architectures as well as our own theoretical results showing that discretization increases noise robustness and reduces the underlying dimensionality of the model. Building on an existing technique for discretization from the VQ-VAE, we consider multi-headed discretization with shared codebooks as the output of each architectural component. One motivating intuition is human language in which communication occurs through multiple discrete symbols. This form of communication is hypothesized to facilitate transmission of information between functional components of the brain by providing a common interlingua, just as it does for human-to-human communication. Our experiments show that discrete-valued neural communication (DVNC) substantially improves systematic generalization in a variety of architectures—transformers, modular architectures, and graph neural networks. We also show that the DVNC is robust to the choice of hyperparameters, making the method useful in practice. | null |
Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method | https://papers.nips.cc/paper_files/paper/2021/hash/10a7cdd970fe135cf4f7bb55c0e3b59f-Abstract.html | Yifan Chen, Qi Zeng, Heng Ji, Yun Yang | https://papers.nips.cc/paper_files/paper/2021/hash/10a7cdd970fe135cf4f7bb55c0e3b59f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11786-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/10a7cdd970fe135cf4f7bb55c0e3b59f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=pZCYG7gjkKz | https://papers.nips.cc/paper_files/paper/2021/file/10a7cdd970fe135cf4f7bb55c0e3b59f-Supplemental.pdf | Transformers are expensive to train due to the quadratic time and space complexity in the self-attention mechanism. On the other hand, although kernel machines suffer from the same computation bottleneck in pairwise dot products, several approximation schemes have been successfully incorporated to considerably reduce their computational cost without sacrificing too much accuracy. In this work, we leverage the computation methods for kernel machines to alleviate the high computational cost and introduce Skyformer, which replaces the softmax structure with a Gaussian kernel to stabilize the model training and adapts the Nyström method to a non-positive semidefinite matrix to accelerate the computation. We further conduct a theoretical analysis by showing that the matrix approximation error of our proposed method is small in the spectral norm. Experiments on the Long Range Arena benchmark show that the proposed method is sufficient to obtain comparable or even better performance than full self-attention while requiring fewer computational resources. | null |
TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification | https://papers.nips.cc/paper_files/paper/2021/hash/10c272d06794d3e5785d5e7c5356e9ff-Abstract.html | Zhuchen Shao, Hao Bian, Yang Chen, Yifeng Wang, Jian Zhang, Xiangyang Ji, yongbing zhang | https://papers.nips.cc/paper_files/paper/2021/hash/10c272d06794d3e5785d5e7c5356e9ff-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11787-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/10c272d06794d3e5785d5e7c5356e9ff-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=LKUfuWxajHc | https://papers.nips.cc/paper_files/paper/2021/file/10c272d06794d3e5785d5e7c5356e9ff-Supplemental.pdf | Multiple instance learning (MIL) is a powerful tool to solve weakly supervised classification in whole slide image (WSI) based pathology diagnosis. However, current MIL methods are usually based on the independent and identically distributed (i.i.d.) hypothesis, and thus neglect the correlation among different instances. To address this problem, we propose a new framework, called correlated MIL, and provide a proof of convergence. Based on this framework, we devise a Transformer based MIL (TransMIL), which explores both morphological and spatial information. The proposed TransMIL can effectively deal with unbalanced/balanced and binary/multiple classification with great visualization and interpretability. We conducted various experiments for three different computational pathology problems and achieved better performance and faster convergence compared with state-of-the-art methods. The test AUC for binary tumor classification can be up to 93.09% on the CAMELYON16 dataset, and the AUC for cancer subtype classification can be up to 96.03% and 98.82% on the TCGA-NSCLC and TCGA-RCC datasets, respectively. Implementation is available at: https://github.com/szc19990412/TransMIL. | null |
Multi-view Contrastive Graph Clustering | https://papers.nips.cc/paper_files/paper/2021/hash/10c66082c124f8afe3df4886f5e516e0-Abstract.html | ErLin Pan, Zhao Kang | https://papers.nips.cc/paper_files/paper/2021/hash/10c66082c124f8afe3df4886f5e516e0-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11788-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/10c66082c124f8afe3df4886f5e516e0-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=NlB8_hXkbby | null | With the explosive growth of information technology, multi-view graph data have become increasingly prevalent and valuable. Most existing multi-view clustering techniques focus on either the scenario of multiple graphs or that of multi-view attributes. In this paper, we propose a generic framework to cluster multi-view attributed graph data. Specifically, inspired by the success of contrastive learning, we propose a multi-view contrastive graph clustering (MCGC) method to learn a consensus graph since the original graph could be noisy or incomplete and is not directly applicable. Our method consists of two key steps: we first filter out the undesirable high-frequency noise while preserving the graph geometric features via graph filtering and obtain a smooth representation of nodes; we then learn a consensus graph regularized by a graph contrastive loss. Results on several benchmark datasets show the superiority of our method with respect to state-of-the-art approaches. In particular, our simple approach outperforms existing deep learning-based methods. | null |
Inverse-Weighted Survival Games | https://papers.nips.cc/paper_files/paper/2021/hash/10fb6cfa4c990d2bad5ddef4f70e8ba2-Abstract.html | Xintian Han, Mark Goldstein, Aahlad Puli, Thomas Wies, Adler Perotte, Rajesh Ranganath | https://papers.nips.cc/paper_files/paper/2021/hash/10fb6cfa4c990d2bad5ddef4f70e8ba2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11789-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=j4oYd8SGop | https://papers.nips.cc/paper_files/paper/2021/file/10fb6cfa4c990d2bad5ddef4f70e8ba2-Supplemental.pdf | Deep models trained through maximum likelihood have achieved state-of-the-art results for survival analysis. Despite this training scheme, practitioners evaluate models under other criteria, such as binary classification losses at a chosen set of time horizons, e.g. Brier score (BS) and Bernoulli log likelihood (BLL). Models trained with maximum likelihood may have poor BS or BLL since maximum likelihood does not directly optimize these criteria. Directly optimizing criteria like BS requires inverse-weighting by the censoring distribution. However, estimating the censoring model under these metrics requires inverse-weighting by the failure distribution. The objective for each model requires the other, but neither are known. To resolve this dilemma, we introduce Inverse-Weighted Survival Games. In these games, objectives for each model are built from re-weighted estimates featuring the other model, where the latter is held fixed during training. When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point. This means models in the game do not leave the correct distributions once reached. We construct one case where this stationary point is unique. We show that these games optimize BS on simulations and then apply these principles on real world cancer and critically-ill patient data. | null |
Generalization Bounds for Meta-Learning via PAC-Bayes and Uniform Stability | https://papers.nips.cc/paper_files/paper/2021/hash/1102a326d5f7c9e04fc3c89d0ede88c9-Abstract.html | Alec Farid, Anirudha Majumdar | https://papers.nips.cc/paper_files/paper/2021/hash/1102a326d5f7c9e04fc3c89d0ede88c9-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11790-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1102a326d5f7c9e04fc3c89d0ede88c9-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=RloMRU3keo3 | https://papers.nips.cc/paper_files/paper/2021/file/1102a326d5f7c9e04fc3c89d0ede88c9-Supplemental.pdf | We are motivated by the problem of providing strong generalization guarantees in the context of meta-learning. Existing generalization bounds are either challenging to evaluate or provide vacuous guarantees in even relatively simple settings. We derive a probably approximately correct (PAC) bound for gradient-based meta-learning using two different generalization frameworks in order to deal with the qualitatively different challenges of generalization at the "base" and "meta" levels. We employ bounds for uniformly stable algorithms at the base level and bounds from the PAC-Bayes framework at the meta level. The result of this approach is a novel PAC bound that is tighter when the base learner adapts quickly, which is precisely the goal of meta-learning. We show that our bound provides a tighter guarantee than other bounds on a toy non-convex problem on the unit sphere and a text-based classification example. We also present a practical regularization scheme motivated by the bound in settings where the bound is loose and demonstrate improved performance over baseline techniques. | null |
Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement | https://papers.nips.cc/paper_files/paper/2021/hash/11704817e347269b7254e744b5e22dac-Abstract.html | Samuel Daulton, Maximilian Balandat, Eytan Bakshy | https://papers.nips.cc/paper_files/paper/2021/hash/11704817e347269b7254e744b5e22dac-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11791-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/11704817e347269b7254e744b5e22dac-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=A7pvvrlv68 | https://papers.nips.cc/paper_files/paper/2021/file/11704817e347269b7254e744b5e22dac-Supplemental.pdf | Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, qNEHVI, that reduces computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size. qNEHVI is one-step Bayes-optimal for hypervolume maximization in both noisy and noiseless environments, and we show that it can be optimized effectively with gradient-based methods via sample average approximation. Empirically, we demonstrate not only that qNEHVI is substantially more robust to observation noise than existing MOBO approaches, but also that it achieves state-of-the-art optimization performance and competitive wall-times in large-batch environments. | null |
Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots | https://papers.nips.cc/paper_files/paper/2021/hash/118921efba23fc329e6560b27861f0c2-Abstract.html | Jagdeep Bhatia, Holly Jackson, Yunsheng Tian, Jie Xu, Wojciech Matusik | https://papers.nips.cc/paper_files/paper/2021/hash/118921efba23fc329e6560b27861f0c2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11792-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/118921efba23fc329e6560b27861f0c2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=lM2971LAwV | null | Both the design and control of a robot play equally important roles in its task performance. However, while optimal control is well studied in the machine learning and robotics community, less attention is placed on finding the optimal robot design. This is mainly because co-optimizing design and control in robotics is characterized as a challenging problem, and more importantly, a comprehensive evaluation benchmark for co-optimization does not exist. In this paper, we propose Evolution Gym, the first large-scale benchmark for co-optimizing the design and control of soft robots. In our benchmark, each robot is composed of different types of voxels (e.g., soft, rigid, actuators), resulting in a modular and expressive robot design space. Our benchmark environments span a wide range of tasks, including locomotion on various types of terrains and manipulation. Furthermore, we develop several robot co-evolution algorithms by combining state-of-the-art design optimization methods and deep reinforcement learning techniques. Evaluating the algorithms on our benchmark platform, we observe robots exhibiting increasingly complex behaviors as evolution progresses, with the best evolved designs solving many of our proposed tasks. Additionally, even though robot designs are evolved autonomously from scratch without prior knowledge, they often grow to resemble existing natural creatures while outperforming hand-designed robots. Nevertheless, all tested algorithms fail to find robots that succeed in our hardest environments. This suggests that more advanced algorithms are required to explore the high-dimensional design space and evolve increasingly intelligent robots -- an area of research in which we hope Evolution Gym will accelerate progress. Our website with code, environments, documentation, and tutorials is available at http://evogym.csail.mit.edu/. | null |
On Calibration and Out-of-Domain Generalization | https://papers.nips.cc/paper_files/paper/2021/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html | Yoav Wald, Amir Feder, Daniel Greenfeld, Uri Shalit | https://papers.nips.cc/paper_files/paper/2021/hash/118bd558033a1016fcc82560c65cca5f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11793-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/118bd558033a1016fcc82560c65cca5f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XWYJ25-yTRS | https://papers.nips.cc/paper_files/paper/2021/file/118bd558033a1016fcc82560c65cca5f-Supplemental.pdf | Out-of-domain (OOD) generalization is a significant challenge for machine learning models. Many techniques have been proposed to overcome this challenge, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization. Specifically, we show that under certain conditions, models which achieve \emph{multi-domain calibration} are provably free of spurious correlations. This leads us to propose multi-domain calibration as a measurable and trainable surrogate for the OOD performance of a classifier. We therefore introduce methods that are easy to apply and allow practitioners to improve multi-domain calibration by training or modifying an existing model, leading to better performance on unseen domains. Using four datasets from the recently proposed WILDS OOD benchmark, as well as the Colored MNIST, we demonstrate that training or tuning models so they are calibrated across multiple domains leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from both a practical and theoretical point of view. | null |
On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method | https://papers.nips.cc/paper_files/paper/2021/hash/11c484ea9305ea4c7bb6b2e6d570d466-Abstract.html | Junyu Zhang, Chengzhuo Ni, zheng Yu, Csaba Szepesvari, Mengdi Wang | https://papers.nips.cc/paper_files/paper/2021/hash/11c484ea9305ea4c7bb6b2e6d570d466-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11794-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/11c484ea9305ea4c7bb6b2e6d570d466-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Re_VXFOyyO | https://papers.nips.cc/paper_files/paper/2021/file/11c484ea9305ea4c7bb6b2e6d570d466-Supplemental.pdf | Policy gradient (PG) gives rise to a rich class of reinforcement learning (RL) methods. Recently, there has been an emerging trend to augment the existing PG methods such as REINFORCE by the \emph{variance reduction} techniques. However, all existing variance-reduced PG methods heavily rely on an uncheckable importance weight assumption made for every single iteration of the algorithms. In this paper, a simple gradient truncation mechanism is proposed to address this issue. Moreover, we design a Truncated Stochastic Incremental Variance-Reduced Policy Gradient (TSIVR-PG) method, which is able to maximize not only a cumulative sum of rewards but also a general utility function over a policy's long-term visiting distribution. We show an $\tilde{\mathcal{O}}(\epsilon^{-3})$ sample complexity for TSIVR-PG to find an $\epsilon$-stationary policy. By assuming the \emph{overparameterization} of policy and exploiting the \emph{hidden convexity} of the problem, we further show that TSIVR-PG converges to global $\epsilon$-optimal policy with $\tilde{\mathcal{O}}(\epsilon^{-2})$ samples. | null |
Circa: Stochastic ReLUs for Private Deep Learning | https://papers.nips.cc/paper_files/paper/2021/hash/11eba2991cc62daa4a85be5c0cfdae97-Abstract.html | Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, Siddharth Garg | https://papers.nips.cc/paper_files/paper/2021/hash/11eba2991cc62daa4a85be5c0cfdae97-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11795-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/11eba2991cc62daa4a85be5c0cfdae97-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=_n59kgzSFef | https://papers.nips.cc/paper_files/paper/2021/file/11eba2991cc62daa4a85be5c0cfdae97-Supplemental.zip | The simultaneous rise of machine learning as a service and concerns over user privacy have increasingly motivated the need for private inference (PI). While recent work demonstrates PI is possible using cryptographic primitives, the computational overheads render it impractical. State-of-the-art deep networks are inadequate in this context because the source of slowdown in PI stems from the ReLU operations whereas optimizations for plaintext inference focus on reducing FLOPs. In this paper we re-think ReLU computations and propose optimizations for PI tailored to properties of neural networks. Specifically, we reformulate ReLU as an approximate sign test and introduce a novel truncation method for the sign test that significantly reduces the cost per ReLU. These optimizations result in a specific type of stochastic ReLU. The key observation is that the stochastic fault behavior is well suited for the fault-tolerant properties of neural network inference. Thus, we provide significant savings without impacting accuracy. We collectively call the optimizations Circa and demonstrate improvements of up to 4.7$\times$ storage and 3$\times$ runtime over baseline implementations; we further show that Circa can be used on top of recent PI optimizations to obtain 1.8$\times$ additional speedup. | null |
Reinforcement Learning in Reward-Mixing MDPs | https://papers.nips.cc/paper_files/paper/2021/hash/11f9e78e4899a78dedd439fc583b6693-Abstract.html | Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor | https://papers.nips.cc/paper_files/paper/2021/hash/11f9e78e4899a78dedd439fc583b6693-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11796-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/11f9e78e4899a78dedd439fc583b6693-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=XHHxE-KOK7 | https://papers.nips.cc/paper_files/paper/2021/file/11f9e78e4899a78dedd439fc583b6693-Supplemental.pdf | Learning a near optimal policy in a partially observable system remains an elusive challenge in contemporary reinforcement learning. In this work, we consider episodic reinforcement learning in a reward-mixing Markov decision process (MDP). There, a reward function is drawn from one of $M$ possible reward models at the beginning of every episode, but the identity of the chosen reward model is not revealed to the agent. Hence, the latent state space, for which the dynamics are Markovian, is not given to the agent. We study the problem of learning a near optimal policy for two reward-mixing MDPs. Unlike existing approaches that rely on strong assumptions on the dynamics, we make no assumptions and study the problem in full generality. Indeed, with no further assumptions, even for two switching reward-models, the problem requires several new ideas beyond existing algorithmic and analysis techniques for efficient exploration. We provide the first polynomial-time algorithm that finds an $\epsilon$-optimal policy after exploring $\tilde{O}(poly(H,\epsilon^{-1}) \cdot S^2 A^2)$ episodes, where $H$ is time-horizon and $S, A$ are the number of states and actions respectively. This is the first efficient algorithm that does not require any assumptions in partially observed environments where the observation space is smaller than the latent state space. | null |
A Gang of Adversarial Bandits | https://papers.nips.cc/paper_files/paper/2021/hash/124461dcd3571e6674ec4e0e140cc298-Abstract.html | Mark Herbster, Stephen Pasteris, Fabio Vitale, Massimiliano Pontil | https://papers.nips.cc/paper_files/paper/2021/hash/124461dcd3571e6674ec4e0e140cc298-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11797-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/124461dcd3571e6674ec4e0e140cc298-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=S9NmGEMkn29 | https://papers.nips.cc/paper_files/paper/2021/file/124461dcd3571e6674ec4e0e140cc298-Supplemental.pdf | We consider running multiple instances of multi-armed bandit (MAB) problems in parallel. A main motivation for this study are online recommendation systems, in which each of $N$ users is associated with a MAB problem and the goal is to exploit users' similarity in order to learn users' preferences to $K$ items more efficiently. We consider the adversarial MAB setting, whereby an adversary is free to choose which user and which loss to present to the learner during the learning process. Users are in a social network and the learner is aided by a-priori knowledge of the strengths of the social links between all pairs of users. It is assumed that if the social link between two users is strong then they tend to share the same action. The regret is measured relative to an arbitrary function which maps users to actions. The smoothness of the function is captured by a resistance-based dispersion measure $\Psi$. We present two learning algorithms, GABA-I and GABA-II, which exploit the network structure to bias towards functions of low $\Psi$ values. We show that GABA-I has an expected regret bound of $\mathcal{O}(\sqrt{\ln(NK/\Psi)\Psi KT})$ and per-trial time complexity of $\mathcal{O}(K\ln(N))$, whilst GABA-II has a weaker $\mathcal{O}(\sqrt{\ln(N/\Psi)\ln(NK/\Psi)\Psi KT})$ regret, but a better $\mathcal{O}(\ln(K)\ln(N))$ per-trial time complexity. We highlight improvements of both algorithms over running independent standard MABs across users. | null |
Explaining Hyperparameter Optimization via Partial Dependence Plots | https://papers.nips.cc/paper_files/paper/2021/hash/12ced2db6f0193dda91ba86224ea1cd8-Abstract.html | Julia Moosbauer, Julia Herbinger, Giuseppe Casalicchio, Marius Lindauer, Bernd Bischl | https://papers.nips.cc/paper_files/paper/2021/hash/12ced2db6f0193dda91ba86224ea1cd8-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11798-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/12ced2db6f0193dda91ba86224ea1cd8-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=k8KDqVbIS2l | https://papers.nips.cc/paper_files/paper/2021/file/12ced2db6f0193dda91ba86224ea1cd8-Supplemental.pdf | Automated hyperparameter optimization (HPO) can support practitioners in obtaining peak performance in machine learning models. However, there is often a lack of valuable insights into the effects of different hyperparameters on the final model performance. This lack of explainability makes it difficult to trust and understand the automated HPO process and its results. We suggest using interpretable machine learning (IML) to gain insights from the experimental data obtained during HPO with Bayesian optimization (BO). BO tends to focus on promising regions with potential high-performance configurations and thus induces a sampling bias. Hence, many IML techniques, such as the partial dependence plot (PDP), carry the risk of generating biased interpretations. By leveraging the posterior uncertainty of the BO surrogate model, we introduce a variant of the PDP with estimated confidence bands. We propose to partition the hyperparameter space to obtain more confident and reliable PDPs in relevant sub-regions. In an experimental study, we provide quantitative evidence for the increased quality of the PDPs within sub-regions. | null |
Robustifying Algorithms of Learning Latent Trees with Vector Variables | https://papers.nips.cc/paper_files/paper/2021/hash/12e086066892a311b752673a28583d3f-Abstract.html | Fengzhuo Zhang, Vincent Tan | https://papers.nips.cc/paper_files/paper/2021/hash/12e086066892a311b752673a28583d3f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11799-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/12e086066892a311b752673a28583d3f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=pUZBQd-yFk7 | https://papers.nips.cc/paper_files/paper/2021/file/12e086066892a311b752673a28583d3f-Supplemental.pdf | We consider learning the structures of Gaussian latent tree models with vector observations when a subset of them are arbitrarily corrupted. First, we present the sample complexities of Recursive Grouping (RG) and Chow-Liu Recursive Grouping (CLRG) without the assumption that the effective depth is bounded in the number of observed nodes, significantly generalizing the results in Choi et al. (2011). We show that Chow-Liu initialization in CLRG greatly reduces the sample complexity of RG from being exponential in the diameter of the tree to only logarithmic in the diameter for the hidden Markov model (HMM). Second, we robustify RG, CLRG, Neighbor Joining (NJ) and Spectral NJ (SNJ) by using the truncated inner product. These robustified algorithms can tolerate a number of corruptions up to the square root of the number of clean samples. Finally, we derive the first known instance-dependent impossibility result for structure learning of latent trees. The optimalities of the robust version of CLRG and NJ are verified by comparing their sample complexities and the impossibility result. | null |
Representation Learning on Spatial Networks | https://papers.nips.cc/paper_files/paper/2021/hash/12e35d9186dd72fe62fd039385890b9c-Abstract.html | Zheng Zhang, Liang Zhao | https://papers.nips.cc/paper_files/paper/2021/hash/12e35d9186dd72fe62fd039385890b9c-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11800-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/12e35d9186dd72fe62fd039385890b9c-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=1LCtHgPC-l4 | https://papers.nips.cc/paper_files/paper/2021/file/12e35d9186dd72fe62fd039385890b9c-Supplemental.zip | Spatial networks are networks for which the nodes and edges are constrained by geometry and embedded in real space, which has crucial effects on their topological properties. Although tremendous success has been achieved in spatial and network representation separately in recent years, there exists very little work on the representation of spatial networks. Extracting powerful representations from spatial networks requires the development of appropriate tools to uncover the pairing of spatial and network information while remaining invariant to node permutation as well as to rotation and translation. Hence, spatial networks cannot be modeled merely with either spatial or network models individually. To address these challenges, this paper proposes a generic framework for spatial network representation learning. Specifically, a provably information-lossless and roto-translation invariant representation of spatial information on networks is presented. Then a higher-order spatial network convolution operation that adapts to our proposed representation is introduced. To ensure efficiency, we also propose a new approach that relies on sampling random spanning trees to reduce the time and memory complexity from $O(N^3)$ to $O(N)$. We demonstrate the strength of our proposed framework through extensive experiments on both synthetic and real-world datasets. The code for the proposed model is available at \url{https://github.com/rollingstonezz/SGMP_code}. | null |
Continuous-time edge modelling using non-parametric point processes | https://papers.nips.cc/paper_files/paper/2021/hash/1301962d8b7bd03fffaa27119aa7fc2b-Abstract.html | Xuhui Fan, Bin Li, Feng Zhou, Scott SIsson | https://papers.nips.cc/paper_files/paper/2021/hash/1301962d8b7bd03fffaa27119aa7fc2b-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11801-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1301962d8b7bd03fffaa27119aa7fc2b-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=8bbevt2MKPX | https://papers.nips.cc/paper_files/paper/2021/file/1301962d8b7bd03fffaa27119aa7fc2b-Supplemental.pdf | The mutually-exciting Hawkes process (ME-HP) is a natural choice to model reciprocity, which is an important attribute of continuous-time edge (dyadic) data. However, existing ways of implementing the ME-HP for such data are either inflexible, as the exogenous (background) rate functions are typically constant and the endogenous (excitation) rate functions are specified parametrically, or inefficient, as inference usually relies on Markov chain Monte Carlo methods with high computational costs. To address these limitations, we discuss various approaches to model design, and develop three variants of non-parametric point processes for continuous-time edge modelling (CTEM). The resulting models are highly adaptable as they generate intensity functions through sigmoidal Gaussian processes, and so provide greater modelling flexibility than parametric forms. The models are implemented via a fast variational inference method enabled by a novel edge modelling construction. The superior performance of the proposed CTEM models is demonstrated through extensive experimental evaluations on four real-world continuous-time edge data sets. | null |
Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time | https://papers.nips.cc/paper_files/paper/2021/hash/1325cdae3b6f0f91a1b629307bf2d498-Abstract.html | Feng Zhu, Andrew Sedler, Harrison A Grier, Nauman Ahad, Mark Davenport, Matthew Kaufman, Andrea Giovannucci, Chethan Pandarinath | https://papers.nips.cc/paper_files/paper/2021/hash/1325cdae3b6f0f91a1b629307bf2d498-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11802-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1325cdae3b6f0f91a1b629307bf2d498-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=9pt6F8w1Jgs | https://papers.nips.cc/paper_files/paper/2021/file/1325cdae3b6f0f91a1b629307bf2d498-Supplemental.pdf | Modern neural interfaces allow access to the activity of up to a million neurons within brain circuits. However, bandwidth limits often create a trade-off between greater spatial sampling (more channels or pixels) and the temporal frequency of sampling. Here we demonstrate that it is possible to obtain spatio-temporal super-resolution in neuronal time series by exploiting relationships among neurons, embedded in latent low-dimensional population dynamics. Our novel neural network training strategy, selective backpropagation through time (SBTT), enables learning of deep generative models of latent dynamics from data in which the set of observed variables changes at each time step. The resulting models are able to infer activity for missing samples by combining observations with learned latent dynamics. We test SBTT applied to sequential autoencoders and demonstrate more efficient and higher-fidelity characterization of neural population dynamics in electrophysiological and calcium imaging data. In electrophysiology, SBTT enables accurate inference of neuronal population dynamics with lower interface bandwidths, providing an avenue to significant power savings for implanted neuroelectronic interfaces. In applications to two-photon calcium imaging, SBTT accurately uncovers high-frequency temporal structure underlying neural population activity, substantially outperforming the current state-of-the-art. Finally, we demonstrate that performance could be further improved by using limited, high-bandwidth sampling to pretrain dynamics models, and then using SBTT to adapt these models for sparsely-sampled data. | null |
Memory-efficient Patch-based Inference for Tiny Deep Learning | https://papers.nips.cc/paper_files/paper/2021/hash/1371bccec2447b5aa6d96d2a540fb401-Abstract.html | Ji Lin, Wei-Ming Chen, Han Cai, Chuang Gan, Song Han | https://papers.nips.cc/paper_files/paper/2021/hash/1371bccec2447b5aa6d96d2a540fb401-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11803-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1371bccec2447b5aa6d96d2a540fb401-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=C1mPUP7uKNp | https://papers.nips.cc/paper_files/paper/2021/file/1371bccec2447b5aa6d96d2a540fb401-Supplemental.pdf | Tiny deep learning on microcontroller units (MCUs) is challenging due to the limited memory size. We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs: the first several blocks have an order of magnitude larger memory usage than the rest of the network. To alleviate this issue, we propose a generic patch-by-patch inference scheduling, which operates only on a small spatial region of the feature map and significantly cuts down the peak memory. However, naive implementation brings overlapping patches and computation overhead. We further propose receptive field redistribution to shift the receptive field and FLOPs to the later stage and reduce the computation overhead. Manually redistributing the receptive field is difficult. We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2. Patch-based inference effectively reduces the peak memory usage of existing networks by 4-8×. Co-designed with neural networks, MCUNetV2 sets a record ImageNet accuracy on MCU (71.8%) and achieves >90% accuracy on the visual wake words dataset under only 32kB SRAM. MCUNetV2 also unblocks object detection on tiny devices, achieving 16.9% higher mAP on Pascal VOC compared to the state-of-the-art result. Our study largely addressed the memory bottleneck in tinyML and paved the way for various vision applications beyond image classification. | null |
Self-Interpretable Model with Transformation Equivariant Interpretation | https://papers.nips.cc/paper_files/paper/2021/hash/1387a00f03b4b423e63127b08c261bdc-Abstract.html | Yipei Wang, Xiaoqian Wang | https://papers.nips.cc/paper_files/paper/2021/hash/1387a00f03b4b423e63127b08c261bdc-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11804-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1387a00f03b4b423e63127b08c261bdc-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=YlM3tey8Z5I | https://papers.nips.cc/paper_files/paper/2021/file/1387a00f03b4b423e63127b08c261bdc-Supplemental.pdf | With the proliferation of machine learning applications in the real world, the demand for explaining machine learning predictions continues to grow especially in high-stakes fields. Recent studies have found that interpretation methods can be sensitive and unreliable, where the interpretations can be disturbed by perturbations or transformations of input data. To address this issue, we propose to learn robust interpretation through transformation equivariant regularization in a self-interpretable model. The resulting model is capable of capturing valid interpretation that is equivariant to geometric transformations. Moreover, since our model is self-interpretable, it enables faithful interpretations that reflect the true predictive mechanism. Unlike existing self-interpretable models, which usually sacrifice expressive power for the sake of interpretation quality, our model preserves the high expressive capability comparable to the state-of-the-art deep learning models in complex tasks, while providing visualizable and faithful high-quality interpretation. We compare with various related methods and validate the interpretation quality and consistency of our model. | null |
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent | https://papers.nips.cc/paper_files/paper/2021/hash/13bf4a96378f3854bcd9792d132eff9f-Abstract.html | Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Georgios Piliouras | https://papers.nips.cc/paper_files/paper/2021/hash/13bf4a96378f3854bcd9792d132eff9f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11805-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/13bf4a96378f3854bcd9792d132eff9f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Efqe8E4Bww | https://papers.nips.cc/paper_files/paper/2021/file/13bf4a96378f3854bcd9792d132eff9f-Supplemental.pdf | Many recent AI architectures are inspired by zero-sum games, however, the behavior of their dynamics is still not well understood. Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games, that we call hidden zero-sum games. In this class, players control the inputs of smooth but possibly non-linear functions whose outputs are being applied as inputs to a convex-concave game. Unlike general zero-sum games, these games have a well-defined notion of solution; outcomes that implement the von-Neumann equilibrium of the ``hidden" convex-concave game. We provide conditions under which vanilla GDA provably converges not merely to local Nash, but the actual von-Neumann solution. If the hidden game lacks strict convexity properties, GDA may fail to converge to any equilibrium, however, by applying standard regularization techniques we can prove convergence to a von-Neumann solution of a slightly perturbed zero-sum game. Our convergence results are non-local despite working in the setting of non-convex non-concave games. Critically, under proper assumptions we combine the Center-Stable Manifold Theorem along with novel type of initialization dependent Lyapunov functions to prove that almost all initial conditions converge to the solution. Finally, we discuss diverse applications of our framework ranging from generative adversarial networks to evolutionary biology. | null |
Preserved central model for faster bidirectional compression in distributed settings | https://papers.nips.cc/paper_files/paper/2021/hash/13d63838ef1fb6f34ca2dc6821c60e49-Abstract.html | Constantin Philippenko, Aymeric Dieuleveut | https://papers.nips.cc/paper_files/paper/2021/hash/13d63838ef1fb6f34ca2dc6821c60e49-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11806-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/13d63838ef1fb6f34ca2dc6821c60e49-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=q6h7jVe0wE3 | https://papers.nips.cc/paper_files/paper/2021/file/13d63838ef1fb6f34ca2dc6821c60e49-Supplemental.pdf | We develop a new approach to tackle communication constraints in a distributed learning problem with a central server. We propose and analyze a new algorithm that performs bidirectional compression and achieves the same convergence rate as algorithms using only uplink (from the local workers to the central server) compression. To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved. As a result, and contrary to previous works, the gradients on local servers are computed on perturbed models. Consequently, convergence proofs are more challenging and require a precise control of this perturbation. To ensure it, MCM additionally combines model compression with a memory mechanism. This analysis opens new doors, e.g. incorporating worker dependent randomized-models and partial participation. | null |
Understanding Instance-based Interpretability of Variational Auto-Encoders | https://papers.nips.cc/paper_files/paper/2021/hash/13d7dc096493e1f77fb4ccf3eaf79df1-Abstract.html | Zhifeng Kong, Kamalika Chaudhuri | https://papers.nips.cc/paper_files/paper/2021/hash/13d7dc096493e1f77fb4ccf3eaf79df1-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11807-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/13d7dc096493e1f77fb4ccf3eaf79df1-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=a5-37ER8qTI | null | Instance-based interpretation methods have been widely studied for supervised learning methods as they help explain how black box neural networks predict. However, instance-based interpretations remain ill-understood in the context of unsupervised learning. In this paper, we investigate influence functions [Koh and Liang, 2017], a popular instance-based interpretation method, for a class of deep generative models called variational auto-encoders (VAE). We formally frame the counter-factual question answered by influence functions in this setting, and through theoretical analysis, examine what they reveal about the impact of training samples on classical unsupervised learning methods. We then introduce VAE- TracIn, a computationally efficient and theoretically sound solution based on Pruthi et al. [2020], for VAEs. Finally, we evaluate VAE-TracIn on several real world datasets with extensive quantitative and qualitative analysis. | null |
Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image | https://papers.nips.cc/paper_files/paper/2021/hash/1415db70fe9ddb119e23e9b2808cde38-Abstract.html | Feng Liu, Xiaoming Liu | https://papers.nips.cc/paper_files/paper/2021/hash/1415db70fe9ddb119e23e9b2808cde38-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11808-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1415db70fe9ddb119e23e9b2808cde38-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ZdV8fv_7fPt | https://papers.nips.cc/paper_files/paper/2021/file/1415db70fe9ddb119e23e9b2808cde38-Supplemental.zip | Inferring 3D locations and shapes of multiple objects from a single 2D image is a long-standing objective of computer vision. Most of the existing works either predict one of these 3D properties or focus on solving both for a single object. One fundamental challenge lies in how to learn an effective representation of the image that is well-suited for 3D detection and reconstruction. In this work, we propose to learn a regular grid of 3D voxel features from the input image which is aligned with 3D scene space via a 3D feature lifting operator. Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space. Moreover, we devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation, which enables fine detail reconstruction and two orders of magnitude faster inference than prior methods. With complementary supervision from both 3D detection and reconstruction, one enables the 3D voxel features to be geometry and context preserving, benefiting both tasks. The effectiveness of our approach is demonstrated through 3D detection and reconstruction on single-object and multiple-object scenarios. | null |
Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization | https://papers.nips.cc/paper_files/paper/2021/hash/1415fe9fea0fa1e45dddcff5682239a0-Abstract.html | Yusuke Iwasawa, Yutaka Matsuo | https://papers.nips.cc/paper_files/paper/2021/hash/1415fe9fea0fa1e45dddcff5682239a0-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11809-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1415fe9fea0fa1e45dddcff5682239a0-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=e_yvNqkJKAW | https://papers.nips.cc/paper_files/paper/2021/file/1415fe9fea0fa1e45dddcff5682239a0-Supplemental.pdf | This paper presents a new algorithm for domain generalization (DG), \textit{test-time template adjuster (T3A)}, aiming to robustify a model to unknown distribution shift. Unlike existing methods that focus on the \textit{training phase}, our method focuses on the \textit{test phase}, i.e., correcting its prediction by itself during test time. Specifically, T3A adjusts a trained linear classifier (the last layer of deep neural networks) with the following procedure: (1) compute a pseudo-prototype representation for each class using online unlabeled data augmented by the base classifier trained in the source domains, (2) and then classify each sample based on its distance to the pseudo-prototypes. T3A is back-propagation-free and modifies only the linear layer; therefore, the increase in computational cost during inference is negligible, and it avoids the catastrophic failure that might be caused by stochastic optimization. Despite its simplicity, T3A can leverage knowledge about the target domain by using off-the-shelf test-time data and improve performance. We tested our method on four domain generalization benchmarks, namely PACS, VLCS, OfficeHome, and TerraIncognita, along with various backbone networks including ResNet18, ResNet50, Big Transfer (BiT), Vision Transformers (ViT), and MLP-Mixer. The results show T3A stably improves performance on unseen domains across choices of backbone networks, and outperforms existing domain generalization methods. | null |
Luna: Linear Unified Nested Attention | https://papers.nips.cc/paper_files/paper/2021/hash/14319d9cfc6123106878dc20b94fbaf3-Abstract.html | Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer | https://papers.nips.cc/paper_files/paper/2021/hash/14319d9cfc6123106878dc20b94fbaf3-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11810-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/14319d9cfc6123106878dc20b94fbaf3-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=GWRkOYr4jxQ | https://papers.nips.cc/paper_files/paper/2021/file/14319d9cfc6123106878dc20b94fbaf3-Supplemental.pdf | The quadratic computational and memory complexities of the Transformer's attention mechanism have limited its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Specifically, with the first attention function, Luna packs the input sequence into a sequence of fixed length. Then, the packed sequence is unpacked using the second attention function. As compared to a more traditional attention mechanism, Luna introduces an additional sequence with a fixed length as input and an additional corresponding output, which allows Luna to perform attention operation linearly, while also storing adequate contextual information. We perform extensive evaluations on three benchmarks of sequence modeling tasks: long-context sequence modelling, neural machine translation and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baseline methods including the full-rank attention and other efficient sparse and dense attention methods. | null |
Iterative Causal Discovery in the Possible Presence of Latent Confounders and Selection Bias | https://papers.nips.cc/paper_files/paper/2021/hash/144a3f71a03ab7c4f46f9656608efdb2-Abstract.html | Raanan Y. Rohekar, Shami Nisimov, Yaniv Gurwicz, Gal Novik | https://papers.nips.cc/paper_files/paper/2021/hash/144a3f71a03ab7c4f46f9656608efdb2-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11811-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/144a3f71a03ab7c4f46f9656608efdb2-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=Y2OaOLYQYA | https://papers.nips.cc/paper_files/paper/2021/file/144a3f71a03ab7c4f46f9656608efdb2-Supplemental.pdf | We present a sound and complete algorithm, called iterative causal discovery (ICD), for recovering causal graphs in the presence of latent confounders and selection bias. ICD relies on the causal Markov and faithfulness assumptions and recovers the equivalence class of the underlying causal graph. It starts with a complete graph, and consists of a single iterative stage that gradually refines this graph by identifying conditional independence (CI) between connected nodes. Independence and causal relations entailed after any iteration are correct, rendering ICD anytime. Essentially, we tie the size of the CI conditioning set to its distance on the graph from the tested nodes, and increase this value in the successive iteration. Thus, each iteration refines a graph that was recovered by previous iterations having smaller conditioning sets---a higher statistical power---which contributes to stability. We demonstrate empirically that ICD requires significantly fewer CI tests and learns more accurate causal graphs compared to FCI, FCI+, and RFCI algorithms. | null |
Hindsight Task Relabelling: Experience Replay for Sparse Reward Meta-RL | https://papers.nips.cc/paper_files/paper/2021/hash/1454ca2270599546dfcd2a3700e4d2f1-Abstract.html | Charles Packer, Pieter Abbeel, Joseph E. Gonzalez | https://papers.nips.cc/paper_files/paper/2021/hash/1454ca2270599546dfcd2a3700e4d2f1-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11812-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1454ca2270599546dfcd2a3700e4d2f1-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=FeFIzwifdoL | https://papers.nips.cc/paper_files/paper/2021/file/1454ca2270599546dfcd2a3700e4d2f1-Supplemental.pdf | Meta-reinforcement learning (meta-RL) has proven to be a successful framework for leveraging experience from prior tasks to rapidly learn new related tasks, however, current meta-RL approaches struggle to learn in sparse reward environments. Although existing meta-RL algorithms can learn strategies for adapting to new sparse reward tasks, the actual adaptation strategies are learned using hand-shaped reward functions, or require simple environments where random exploration is sufficient to encounter sparse reward. In this paper we present a formulation of hindsight relabelling for meta-RL, which relabels experience during meta-training to enable learning to learn entirely using sparse reward. We demonstrate the effectiveness of our approach on a suite of challenging sparse reward environments that previously required dense reward during meta-training to solve. Our approach solves these environments using the true sparse reward function, with performance comparable to training with a proxy dense reward function. | null |
A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics | https://papers.nips.cc/paper_files/paper/2021/hash/147540e129e096fa91700e9db6588354-Abstract.html | Kai Xu, Akash Srivastava, Dan Gutfreund, Felix Sosa, Tomer Ullman, Josh Tenenbaum, Charles Sutton | https://papers.nips.cc/paper_files/paper/2021/hash/147540e129e096fa91700e9db6588354-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11813-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/147540e129e096fa91700e9db6588354-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=WN8ChCARq2 | https://papers.nips.cc/paper_files/paper/2021/file/147540e129e096fa91700e9db6588354-Supplemental.pdf | Humans can reason about intuitive physics in fully or partially observed environments even after being exposed to a very limited set of observations. This sample-efficient intuitive physical reasoning is considered a core domain of human common sense knowledge. One hypothesis to explain this remarkable capacity posits that humans quickly learn approximations to the laws of physics that govern the dynamics of the environment. In this paper, we propose a Bayesian-symbolic framework (BSP) for physical reasoning and learning that is close to human-level sample-efficiency and accuracy. In BSP, the environment is represented by a top-down generative model of entities, which are assumed to interact with each other under unknown force laws over their latent and observed properties. BSP models each of these entities as random variables, and uses Bayesian inference to estimate their unknown properties. For learning the unknown forces, BSP leverages symbolic regression on a novel grammar of Newtonian physics in a bilevel optimization setup. These inference and regression steps are performed in an iterative manner using expectation-maximization, allowing BSP to simultaneously learn force laws while maintaining uncertainty over entity properties. We show that BSP is more sample-efficient compared to neural alternatives on controlled synthetic datasets, demonstrate BSP's applicability to real-world common sense scenes and study BSP's performance on tasks previously used to study human physical reasoning. | null |
Associating Objects with Transformers for Video Object Segmentation | https://papers.nips.cc/paper_files/paper/2021/hash/147702db07145348245dc5a2f2fe5683-Abstract.html | Zongxin Yang, Yunchao Wei, Yi Yang | https://papers.nips.cc/paper_files/paper/2021/hash/147702db07145348245dc5a2f2fe5683-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11814-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/147702db07145348245dc5a2f2fe5683-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=hl3v8io3ZYt | https://papers.nips.cc/paper_files/paper/2021/file/147702db07145348245dc5a2f2fe5683-Supplemental.pdf | This paper investigates how to realize better and more efficient embedding learning to tackle the semi-supervised video object segmentation under challenging multi-object scenarios. The state-of-the-art methods learn to decode features with a single positive object and thus have to match and segment each target separately under multi-object scenarios, consuming multiple times computing resources. To solve the problem, we propose an Associating Objects with Transformers (AOT) approach to match and decode multiple objects uniformly. In detail, AOT employs an identification mechanism to associate multiple targets into the same high-dimensional embedding space. Thus, we can simultaneously process multiple objects' matching and segmentation decoding as efficiently as processing a single object. For sufficiently modeling multi-object association, a Long Short-Term Transformer is designed for constructing hierarchical matching and propagation. We conduct extensive experiments on both multi-object and single-object benchmarks to examine AOT variant networks with different complexities. Particularly, our R50-AOT-L outperforms all the state-of-the-art competitors on three popular benchmarks, i.e., YouTube-VOS (84.1% J&F), DAVIS 2017 (84.9%), and DAVIS 2016 (91.1%), while keeping more than 3X faster multi-object run-time. Meanwhile, our AOT-T can maintain real-time multi-object speed on the above benchmarks. Based on AOT, we ranked 1st in the 3rd Large-scale VOS Challenge. | null |
Automatic Symmetry Discovery with Lie Algebra Convolutional Network | https://papers.nips.cc/paper_files/paper/2021/hash/148148d62be67e0916a833931bd32b26-Abstract.html | Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, Rose Yu | https://papers.nips.cc/paper_files/paper/2021/hash/148148d62be67e0916a833931bd32b26-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11815-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/148148d62be67e0916a833931bd32b26-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=NPOWF_ZLfC5 | https://papers.nips.cc/paper_files/paper/2021/file/148148d62be67e0916a833931bd32b26-Supplemental.zip | Existing equivariant neural networks require prior knowledge of the symmetry group and discretization for continuous groups. We propose to work with Lie algebras (infinitesimal generators) instead of Lie groups. Our model, the Lie algebra convolutional network (L-conv) can automatically discover symmetries and does not require discretization of the group. We show that L-conv can serve as a building block to construct any group equivariant feedforward architecture. Both CNNs and Graph Convolutional Networks can be expressed as L-conv with appropriate groups. We discover direct connections between L-conv and physics: (1) group invariant loss generalizes field theory (2) Euler-Lagrange equation measures the robustness, and (3) equivariance leads to conservation laws and Noether current. These connections open up new avenues for designing more general equivariant networks and applying them to important problems in physical sciences. | null |
Zero Time Waste: Recycling Predictions in Early Exit Neural Networks | https://papers.nips.cc/paper_files/paper/2021/hash/149ef6419512be56a93169cd5e6fa8fd-Abstract.html | Maciej Wołczyk, Bartosz Wójcik, Klaudia Bałazy, Igor T Podolak, Jacek Tabor, Marek Śmieja, Tomasz Trzcinski | https://papers.nips.cc/paper_files/paper/2021/hash/149ef6419512be56a93169cd5e6fa8fd-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11816-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/149ef6419512be56a93169cd5e6fa8fd-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=14-dXLRn4fE | https://papers.nips.cc/paper_files/paper/2021/file/149ef6419512be56a93169cd5e6fa8fd-Supplemental.pdf | The problem of reducing processing time of large deep learning models is a fundamental challenge in many real-world applications. Early exit methods strive towards this goal by attaching additional Internal Classifiers (ICs) to intermediate layers of a neural network. ICs can quickly return predictions for easy examples and, as a result, reduce the average inference time of the whole model. However, if a particular IC does not decide to return an answer early, its predictions are discarded, with its computations effectively being wasted. To solve this issue, we introduce Zero Time Waste (ZTW), a novel approach in which each IC reuses predictions returned by its predecessors by (1) adding direct connections between ICs and (2) combining previous outputs in an ensemble-like manner. We conduct extensive experiments across various datasets and architectures to demonstrate that ZTW achieves a significantly better accuracy vs. inference time trade-off than other recently proposed early exit methods. | null |
On Model Calibration for Long-Tailed Object Detection and Instance Segmentation | https://papers.nips.cc/paper_files/paper/2021/hash/14ad095ecc1c3e1b87f3c522836e9158-Abstract.html | Tai-Yu Pan, Cheng Zhang, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao | https://papers.nips.cc/paper_files/paper/2021/hash/14ad095ecc1c3e1b87f3c522836e9158-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11817-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/14ad095ecc1c3e1b87f3c522836e9158-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=t9gKUW9T8fX | https://papers.nips.cc/paper_files/paper/2021/file/14ad095ecc1c3e1b87f3c522836e9158-Supplemental.pdf | Vanilla models for object detection and instance segmentation suffer from the heavy bias toward detecting frequent objects in the long-tailed setting. Existing methods address this issue mostly during training, e.g., by re-sampling or re-weighting. In this paper, we investigate a largely overlooked approach --- post-processing calibration of confidence scores. We propose NorCal, Normalized Calibration for long-tailed object detection and instance segmentation, a simple and straightforward recipe that reweighs the predicted scores of each class by its training sample size. We show that separately handling the background class and normalizing the scores over classes for each proposal are keys to achieving superior performance. On the LVIS dataset, NorCal can effectively improve nearly all the baseline models not only on rare classes but also on common and frequent classes. Finally, we conduct extensive analysis and ablation studies to offer insights into various modeling choices and mechanisms of our approach. Our code is publicly available at https://github.com/tydpan/NorCal. | null |
ReSSL: Relational Self-Supervised Learning with Weak Augmentation | https://papers.nips.cc/paper_files/paper/2021/hash/14c4f36143b4b09cbc320d7c95a50ee7-Abstract.html | Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Changshui Zhang, Xiaogang Wang, Chang Xu | https://papers.nips.cc/paper_files/paper/2021/hash/14c4f36143b4b09cbc320d7c95a50ee7-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11818-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/14c4f36143b4b09cbc320d7c95a50ee7-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=ErivP29kYnx | https://papers.nips.cc/paper_files/paper/2021/file/14c4f36143b4b09cbc320d7c95a50ee7-Supplemental.pdf | Self-supervised Learning (SSL) including the mainstream contrastive learning has achieved great success in learning visual representations without data annotations. However, most of methods mainly focus on the instance level information (i.e., the different augmented images of the same instance should have the same feature or cluster into the same class), but there is a lack of attention on the relationships between different instances. In this paper, we introduced a novel SSL paradigm, which we term as relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances. Specifically, our proposed method employs sharpened distribution of pairwise similarities among different instances as relation metric, which is thus utilized to match the feature embeddings of different augmentations. Moreover, to boost the performance, we argue that weak augmentations matter to represent a more reliable relation, and leverage momentum strategy for practical efficiency. Experimental results show that our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency. | null
Learning to See by Looking at Noise | https://papers.nips.cc/paper_files/paper/2021/hash/14f2ebeab937ca128186e7ba876faef9-Abstract.html | Manel Baradad Jurjo, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio Torralba | https://papers.nips.cc/paper_files/paper/2021/hash/14f2ebeab937ca128186e7ba876faef9-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11819-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/14f2ebeab937ca128186e7ba876faef9-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=RQUl8gZnN7O | https://papers.nips.cc/paper_files/paper/2021/file/14f2ebeab937ca128186e7ba876faef9-Supplemental.pdf | Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper we go a step further and ask if we can do away with real image datasets entirely, instead learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property to learn good representations. | null
Explicit loss asymptotics in the gradient descent training of neural networks | https://papers.nips.cc/paper_files/paper/2021/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html | Maksim Velikanov, Dmitry Yarotsky | https://papers.nips.cc/paper_files/paper/2021/hash/14faf969228fc18fcd4fcf59437b0c97-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11820-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/14faf969228fc18fcd4fcf59437b0c97-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=EHUsTBGIP17 | https://papers.nips.cc/paper_files/paper/2021/file/14faf969228fc18fcd4fcf59437b0c97-Supplemental.pdf | Current theoretical results on optimization trajectories of neural networks trained by gradient descent typically have the form of rigorous but potentially loose bounds on the loss values. In the present work we take a different approach and show that the learning trajectory of a wide network in a lazy training regime can be characterized by an explicit asymptotic at large training times. Specifically, the leading term in the asymptotic expansion of the loss behaves as a power law $L(t) \sim C t^{-\xi}$ with exponent $\xi$ expressed only through the data dimension, the smoothness of the activation function, and the class of function being approximated. Our results are based on spectral analysis of the integral operator representing the linearized evolution of a large network trained on the expected loss. Importantly, the techniques we employ do not require a specific form of the data distribution, for example Gaussian, thus making our findings sufficiently universal. | null |
Test-Time Personalization with a Transformer for Human Pose Estimation | https://papers.nips.cc/paper_files/paper/2021/hash/1517c8664be296f0d87d9e5fc54fdd60-Abstract.html | Yizhuo Li, Miao Hao, Zonglin Di, Nitesh Bharadwaj Gundavarapu, Xiaolong Wang | https://papers.nips.cc/paper_files/paper/2021/hash/1517c8664be296f0d87d9e5fc54fdd60-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11821-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1517c8664be296f0d87d9e5fc54fdd60-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=cwSkaedP-wz | https://papers.nips.cc/paper_files/paper/2021/file/1517c8664be296f0d87d9e5fc54fdd60-Supplemental.zip | We propose to personalize a 2D human pose estimator given a set of test images of a person without using any manual annotations. While there is a significant advancement in human pose estimation, it is still very challenging for a model to generalize to different unknown environments and unseen persons. Instead of using a fixed model for every test case, we adapt our pose estimator during test time to exploit person-specific information. We first train our model on diverse data with both a supervised and a self-supervised pose estimation objectives jointly. We use a Transformer model to build a transformation between the self-supervised keypoints and the supervised keypoints. During test time, we personalize and adapt our model by fine-tuning with the self-supervised objective. The pose is then improved by transforming the updated self-supervised keypoints. We experiment with multiple datasets and show significant improvements on pose estimations with our self-supervised personalization. Project page with code is available at https://liyz15.github.io/TTP/. | null |
Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN | https://papers.nips.cc/paper_files/paper/2021/hash/151de84cca69258b17375e2f44239191-Abstract.html | Zhenyu Xie, Zaiyu Huang, Fuwei Zhao, Haoye Dong, Michael Kampffmeyer, Xiaodan Liang | https://papers.nips.cc/paper_files/paper/2021/hash/151de84cca69258b17375e2f44239191-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11822-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/151de84cca69258b17375e2f44239191-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=X8SLExrO2Lp | https://papers.nips.cc/paper_files/paper/2021/file/151de84cca69258b17375e2f44239191-Supplemental.pdf | Image-based virtual try-on is one of the most promising applications of human-centric image generation due to its tremendous real-world potential. Yet, as most try-on approaches fit in-shop garments onto a target person, they require the laborious and restrictive construction of a paired training dataset, severely limiting their scalability. While a few recent works attempt to transfer garments directly from one person to another, alleviating the need to collect paired datasets, their performance is impacted by the lack of paired (supervised) information. In particular, disentangling style and spatial information of the garment becomes a challenge, which existing methods either address by requiring auxiliary data or extensive online optimization procedures, thereby still inhibiting their scalability. To achieve a scalable virtual try-on system that can transfer arbitrary garments between a source and a target person in an unsupervised manner, we thus propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on. Specifically, to disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module for successfully retaining garment texture and shape characteristics. Guided by the source person's keypoints, the patch-routed disentanglement module first decouples garments into normalized patches, thus eliminating the inherent spatial information of the garment, and then reconstructs the normalized patches to the warped garment complying with the target person pose. Given the warped garment, PASTA-GAN further introduces novel spatially-adaptive residual blocks that guide the generator to synthesize more realistic garment details. Extensive comparisons with paired and unpaired approaches demonstrate the superiority of PASTA-GAN, highlighting its ability to generate high-quality try-on images when faced with a large variety of garments (e.g. vests, shirts, pants), taking a crucial step towards real-world scalable try-on. | null
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models | https://papers.nips.cc/paper_files/paper/2021/hash/1531beb762df4029513ebf9295e0d34f-Abstract.html | Hannah Rose Kirk, Yennie Jun, Filippo Volpin, Haider Iqbal, Elias Benussi, Frederic Dreyer, Aleksandar Shtedritski, Yuki Asano | https://papers.nips.cc/paper_files/paper/2021/hash/1531beb762df4029513ebf9295e0d34f-Abstract.html | NIPS 2021 | https://papers.nips.cc/paper_files/paper/11823-/bibtex | https://papers.nips.cc/paper_files/paper/2021/file/1531beb762df4029513ebf9295e0d34f-Paper.pdf | https://papers.nips.cchttps://openreview.net/forum?id=DsWYWm6ozxx | https://papers.nips.cc/paper_files/paper/2021/file/1531beb762df4029513ebf9295e0d34f-Supplemental.pdf | The capabilities of natural language models trained on large-scale data have increased immensely over the past few years. Open source libraries such as HuggingFace have made these models easily available and accessible. While prior research has identified biases in large language models, this paper considers biases contained in the most popular versions of these models when applied `out-of-the-box' for downstream tasks. We focus on generative language models as they are well-suited for extracting biases inherited from training data. Specifically, we conduct an in-depth analysis of GPT-2, which is the most downloaded text generation model on HuggingFace, with over half a million downloads per month. We assess biases related to occupational associations for different protected categories by intersecting gender with religion, sexuality, ethnicity, political affiliation, and continental name origin. Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) The machine-predicted jobs are less diverse and more stereotypical for women than for men, especially for intersections; (ii) Intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) For most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and even pulls the societally-skewed distribution towards gender parity in cases where its predictions deviate from real labor market observations. This raises the normative question of what language models should learn - whether they should reflect or correct for existing inequalities. | null