Fields per record: title, url, authors, detail_url, tags (single value: NIPS 2020), AuthorFeedback, Bibtex, MetaReview, Paper, Review, Supplemental, abstract
Model Agnostic Multilevel Explanations
https://papers.nips.cc/paper_files/paper/2020/hash/426f990b332ef8193a61cc90516c1245-Abstract.html
Karthikeyan Natesan Ramamurthy, Bhanukiran Vinzamuri, Yunfeng Zhang, Amit Dhurandhar
https://papers.nips.cc/paper_files/paper/2020/hash/426f990b332ef8193a61cc90516c1245-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/426f990b332ef8193a61cc90516c1245-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10225-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/426f990b332ef8193a61cc90516c1245-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/426f990b332ef8193a61cc90516c1245-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/426f990b332ef8193a61cc90516c1245-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/426f990b332ef8193a61cc90516c1245-Supplemental.pdf
In recent years, post-hoc local instance-level and global dataset-level explainability of black-box models has received a lot of attention. Less attention has been given to obtaining insights at intermediate or group levels, a need outlined in recent works that study the challenges in realizing the guidelines of the General Data Protection Regulation (GDPR). In this paper, we propose a meta-method that, given a typical local explainability method, can build a multilevel explanation tree. The leaves of this tree correspond to local explanations, the root corresponds to the global explanation, and intermediate levels correspond to explanations for groups of data points that the method clusters automatically. The method can also leverage side information, where users can specify points for which they may want the explanations to be similar. We argue that such a multilevel structure can also be an effective form of communication, where one could obtain a few explanations that characterize the entire dataset by considering an appropriate level of our explanation tree. Explanations for novel test points can be cost-efficiently obtained by associating them with the closest training points. When the local explainability technique is generalized additive (viz. LIME, GAMs), we develop a fast approximate algorithm for building the multilevel tree and study its convergence behavior. We show that we produce high-fidelity sparse explanations on several public datasets, and we also validate the effectiveness of the proposed technique based on two human studies -- one with experts and the other with non-expert users -- on real-world datasets.
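A minimal sketch of one way to read the multilevel idea in this abstract, not the authors' algorithm: take per-instance explanation vectors (e.g., LIME coefficients), build a hierarchy over them, and average within each cluster to obtain group-level and global explanations. All names, shapes, and the clustering choice here are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
local_expl = rng.normal(size=(100, 8))      # 100 instances x 8 feature attributions

Z = linkage(local_expl, method="ward")      # hierarchy over the local explanations

def level_explanations(Z, X, n_groups):
    """Average the local explanations within each cluster at a chosen tree level."""
    labels = fcluster(Z, t=n_groups, criterion="maxclust")
    return {g: X[labels == g].mean(axis=0) for g in np.unique(labels)}

groups = level_explanations(Z, local_expl, n_groups=5)  # intermediate (group) level
root = local_expl.mean(axis=0)                          # global level
print(len(groups), root.round(2))                       # leaves are the rows of local_expl
```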
NeuMiss networks: differentiable programming for supervised learning with missing values
https://papers.nips.cc/paper_files/paper/2020/hash/42ae1544956fbe6e09242e6cd752444c-Abstract.html
Marine Le Morvan, Julie Josse, Thomas Moreau, Erwan Scornet, Gael Varoquaux
https://papers.nips.cc/paper_files/paper/2020/hash/42ae1544956fbe6e09242e6cd752444c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/42ae1544956fbe6e09242e6cd752444c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10226-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/42ae1544956fbe6e09242e6cd752444c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/42ae1544956fbe6e09242e6cd752444c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/42ae1544956fbe6e09242e6cd752444c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/42ae1544956fbe6e09242e6cd752444c-Supplemental.pdf
The presence of missing values makes supervised learning much more challenging. Indeed, previous work has shown that even when the response is a linear function of the complete data, the optimal predictor is a complex function of the observed entries and the missingness indicator. As a result, the computational or sample complexities of consistent approaches depend on the number of missing patterns, which can be exponential in the number of dimensions. In this work, we derive the analytical form of the optimal predictor under a linearity assumption and various missing data mechanisms including Missing at Random (MAR) and self-masking (Missing Not At Random). Based on a Neumann-series approximation of the optimal predictor, we propose a new principled architecture, named NeuMiss networks. Their originality and strength come from the use of a new type of non-linearity: the multiplication by the missingness indicator. We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns. As a result they scale well to problems with many features, and remain statistically efficient for medium-sized samples. Moreover, we show that, contrary to procedures using EM or imputation, they are robust to the missing data mechanism, including difficult MNAR settings such as self-masking.
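A minimal numpy sketch of the core iteration as I read it from this abstract: a linear map applied repeatedly, with the output multiplied elementwise by the missingness indicator at every step (the "multiplication by the mask" nonlinearity), truncated Neumann-series style. The weights, depth, and prediction head are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

def neumiss_forward(x, mask, W, w_out, b_out, depth=5):
    """x: inputs (missing entries ignored); mask: 1 where observed, 0 where missing."""
    h0 = x * mask                # keep only the observed entries
    h = h0
    for _ in range(depth):       # truncated Neumann-series-style recursion
        h = mask * (h @ W) + h0  # key nonlinearity: elementwise mask multiplication
    return h @ w_out + b_out     # linear prediction head

rng = np.random.default_rng(0)
d = 6
x = rng.normal(size=(4, d))
mask = (rng.random((4, d)) > 0.3).astype(float)   # roughly 30% entries missing
W = rng.normal(scale=0.1, size=(d, d))
w_out = rng.normal(size=(d,))
print(neumiss_forward(x, mask, W, w_out, b_out=0.0))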
Revisiting Parameter Sharing for Automatic Neural Channel Number Search
https://papers.nips.cc/paper_files/paper/2020/hash/42cd63cb189c30ed03e42ce2c069566c-Abstract.html
Jiaxing Wang, Haoli Bai, Jiaxiang Wu, Xupeng Shi, Junzhou Huang, Irwin King, Michael Lyu, Jian Cheng
https://papers.nips.cc/paper_files/paper/2020/hash/42cd63cb189c30ed03e42ce2c069566c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/42cd63cb189c30ed03e42ce2c069566c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10227-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/42cd63cb189c30ed03e42ce2c069566c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/42cd63cb189c30ed03e42ce2c069566c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/42cd63cb189c30ed03e42ce2c069566c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/42cd63cb189c30ed03e42ce2c069566c-Supplemental.pdf
Recent advances in neural architecture search inspire many channel number search (CNS) algorithms for convolutional neural networks. To improve searching efficiency, parameter sharing is widely applied, which reuses parameters among different channel configurations. Nevertheless, it is unclear how parameter sharing affects the searching process. In this paper, we aim at providing a better understanding and exploitation of parameter sharing for CNS. Specifically, we propose affine parameter sharing (APS) as a general formulation to unify and quantitatively analyze existing channel search algorithms. It is found that with parameter sharing, weight updates of one architecture can simultaneously benefit other candidates. However, it also results in less confidence in choosing good architectures. We thus propose a new strategy of parameter sharing towards a better balance between training efficiency and architecture discrimination. Extensive analysis and experiments demonstrate the superiority of the proposed strategy in channel configuration against many state-of-the-art counterparts on benchmark datasets.
Differentially-Private Federated Linear Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/4311359ed4969e8401880e3c1836fbe1-Abstract.html
Abhimanyu Dubey, Alex 'Sandy' Pentland
https://papers.nips.cc/paper_files/paper/2020/hash/4311359ed4969e8401880e3c1836fbe1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4311359ed4969e8401880e3c1836fbe1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10228-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4311359ed4969e8401880e3c1836fbe1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4311359ed4969e8401880e3c1836fbe1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4311359ed4969e8401880e3c1836fbe1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4311359ed4969e8401880e3c1836fbe1-Supplemental.pdf
The rapid proliferation of decentralized learning systems creates a pressing need for differentially-private cooperative learning. In this paper, we study this in the context of the contextual linear bandit: we consider a collection of agents cooperating to solve a common contextual bandit, while ensuring that their communication remains private. For this problem, we devise FedUCB, a multiagent private algorithm for both centralized and decentralized (peer-to-peer) federated learning. We provide a rigorous technical analysis of its utility in terms of regret, improving several results in cooperative bandit learning, and provide rigorous privacy guarantees as well. Our algorithms provide competitive performance both in terms of pseudoregret bounds and empirical benchmark performance in various multi-agent settings.
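A hedged background sketch, not the FedUCB mechanism itself: one standard way to privatize linear-bandit sufficient statistics is to add Gaussian noise to the Gram matrix and the reward-weighted feature sums before they are communicated. The noise scale, clipping, and privacy accounting are all simplified assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A_local = np.eye(d)              # local Gram matrix: sum of x x^T plus a regularizer
b_local = np.zeros(d)            # local reward-weighted feature sum: sum of r * x

for _ in range(200):             # pretend local interactions
    x = rng.normal(size=d); x /= np.linalg.norm(x)
    r = 0.1 * x.sum() + rng.normal(scale=0.05)
    A_local += np.outer(x, x)
    b_local += r * x

sigma = 1.0                      # noise scale; in practice derived from (epsilon, delta)
N = rng.normal(scale=sigma, size=(d, d))
A_shared = A_local + (N + N.T) / 2                 # symmetric Gaussian noise on the Gram matrix
b_shared = b_local + rng.normal(scale=sigma, size=d)

theta_hat = np.linalg.solve(A_shared, b_shared)    # ridge-style estimate from noisy statistics
print(theta_hat.round(3))
```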
Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning?
https://papers.nips.cc/paper_files/paper/2020/hash/43207fd5e34f87c48d584fc5c11befb8-Abstract.html
Qiwen Cui, Lin Yang
https://papers.nips.cc/paper_files/paper/2020/hash/43207fd5e34f87c48d584fc5c11befb8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/43207fd5e34f87c48d584fc5c11befb8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10229-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/43207fd5e34f87c48d584fc5c11befb8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/43207fd5e34f87c48d584fc5c11befb8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/43207fd5e34f87c48d584fc5c11befb8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/43207fd5e34f87c48d584fc5c11befb8-Supplemental.pdf
It is believed that a model-based approach for reinforcement learning (RL) is the key to reducing sample complexity. However, the understanding of the sample optimality of model-based RL is still largely missing, even for the linear case. This work considers the sample complexity of finding an $\epsilon$-optimal policy in a Markov decision process (MDP) that admits a linear additive feature representation, given only access to a generative model. We solve this problem via a plug-in solver approach, which builds an empirical model and plans in this empirical model via an arbitrary plug-in solver. We prove that under the anchor-state assumption, which implies implicit non-negativity in the feature space, the minimax sample complexity of finding an $\epsilon$-optimal policy in a $\gamma$-discounted MDP is $O(K/(1-\gamma)^3\epsilon^2)$, which depends only on the dimensionality $K$ of the feature space and has no dependence on the state or action space. We further extend our results to a relaxed setting where anchor-states may not exist and show that a plug-in approach can be sample efficient as well, providing a flexible approach to design model-based algorithms for RL.
Learning Physical Graph Representations from Visual Scenes
https://papers.nips.cc/paper_files/paper/2020/hash/4324e8d0d37b110ee1a4f1633ac52df5-Abstract.html
Daniel Bear, Chaofei Fan, Damian Mrowca, Yunzhu Li, Seth Alter, Aran Nayebi, Jeremy Schwartz, Li F. Fei-Fei, Jiajun Wu, Josh Tenenbaum, Daniel L. Yamins
https://papers.nips.cc/paper_files/paper/2020/hash/4324e8d0d37b110ee1a4f1633ac52df5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4324e8d0d37b110ee1a4f1633ac52df5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10230-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4324e8d0d37b110ee1a4f1633ac52df5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4324e8d0d37b110ee1a4f1633ac52df5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4324e8d0d37b110ee1a4f1633ac52df5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4324e8d0d37b110ee1a4f1633ac52df5-Supplemental.pdf
Convolutional Neural Networks (CNNs) have proved exceptional at learning representations for visual object categorization. However, CNNs do not explicitly encode objects, parts, and their physical properties, which has limited CNNs' success on tasks that require structured understanding of visual scenes. To overcome these limitations, we introduce the idea of "Physical Scene Graphs" (PSGs), which represent scenes as hierarchical graphs, with nodes in the hierarchy corresponding intuitively to object parts at different scales, and edges to physical connections between parts. Bound to each node is a vector of latent attributes that intuitively represent object properties such as surface shape and texture. We also describe PSGNet, a network architecture that learns to extract PSGs by reconstructing scenes through a PSG-structured bottleneck. PSGNet augments standard CNNs by including: recurrent feedback connections to combine low- and high-level image information; graph pooling and vectorization operations that convert spatially-uniform feature maps into object-centric graph structures; and perceptual grouping principles to encourage the identification of meaningful scene elements. We show that PSGNet outperforms alternative self-supervised scene representation algorithms at scene segmentation tasks, especially on complex real-world images, and generalizes well to unseen object types and scene arrangements. PSGNet is also able to learn from physical motion, enhancing scene estimates even for static images. We present a series of ablation studies illustrating the importance of each component of the PSGNet architecture, analyses showing that learned latent attributes capture intuitive scene properties, and an illustration of the use of PSGs for compositional scene inference.
Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking
https://papers.nips.cc/paper_files/paper/2020/hash/4379cf00e1a95a97a33dac10ce454ca4-Abstract.html
Anqi Wu, Estefany Kelly Buchanan, Matthew Whiteway, Michael Schartner, Guido Meijer, Jean-Paul Noel, Erica Rodriguez, Claire Everett, Amy Norovich, Evan Schaffer, Neeli Mishra, C. Daniel Salzman, Dora Angelaki, Andrés Bendesky, The International Brain Laboratory, John P. Cunningham, Liam Paninski
https://papers.nips.cc/paper_files/paper/2020/hash/4379cf00e1a95a97a33dac10ce454ca4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4379cf00e1a95a97a33dac10ce454ca4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10231-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4379cf00e1a95a97a33dac10ce454ca4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4379cf00e1a95a97a33dac10ce454ca4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4379cf00e1a95a97a33dac10ce454ca4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4379cf00e1a95a97a33dac10ce454ca4-Supplemental.pdf
Noninvasive behavioral tracking of animals is crucial for many scientific investigations. Recent transfer learning approaches for behavioral tracking have considerably advanced the state of the art. Typically these methods treat each video frame and each object to be tracked independently. In this work, we improve on these methods (particularly in the regime of few training labels) by leveraging the rich spatiotemporal structures pervasive in behavioral video --- specifically, the spatial statistics imposed by physical constraints (e.g., paw to elbow distance), and the temporal statistics imposed by smoothness from frame to frame. We propose a probabilistic graphical model built on top of deep neural networks, Deep Graph Pose (DGP), to leverage these useful spatial and temporal constraints, and develop an efficient structured variational approach to perform inference in this model. The resulting semi-supervised model exploits both labeled and unlabeled frames to achieve significantly more accurate and robust tracking while requiring users to label fewer training frames. In turn, these tracking improvements enhance performance on downstream applications, including robust unsupervised segmentation of "behavioral syllables," and estimation of interpretable "disentangled" low-dimensional representations of the full behavioral video. Open source code is available at https://github.com/paninski-lab/deepgraphpose.
Meta-learning from Tasks with Heterogeneous Attribute Spaces
https://papers.nips.cc/paper_files/paper/2020/hash/438124b4c06f3a5caffab2c07863b617-Abstract.html
Tomoharu Iwata, Atsutoshi Kumagai
https://papers.nips.cc/paper_files/paper/2020/hash/438124b4c06f3a5caffab2c07863b617-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/438124b4c06f3a5caffab2c07863b617-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10232-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/438124b4c06f3a5caffab2c07863b617-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/438124b4c06f3a5caffab2c07863b617-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/438124b4c06f3a5caffab2c07863b617-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/438124b4c06f3a5caffab2c07863b617-Supplemental.pdf
We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces are different from the training tasks given a few labeled instances. Although many meta-learning methods have been proposed, they assume that all training and target tasks share the same attribute space, and they are inapplicable when attribute sizes are different across tasks. Our model infers latent representations of each attribute and each response from a few labeled instances using an inference network. Then, responses of unlabeled instances are predicted with the inferred representations using a prediction network. The attribute and response representations enable us to make predictions based on the task-specific properties of attributes and responses even when attribute and response sizes are different across tasks. In our experiments with synthetic datasets and 59 datasets in OpenML, we demonstrate that our proposed method can predict the responses given a few labeled instances in new tasks after being trained with tasks with heterogeneous attribute spaces.
Estimating decision tree learnability with polylogarithmic sample complexity
https://papers.nips.cc/paper_files/paper/2020/hash/439d8c975f26e5005dcdbf41b0d84161-Abstract.html
Guy Blanc, Neha Gupta, Jane Lange, Li-Yang Tan
https://papers.nips.cc/paper_files/paper/2020/hash/439d8c975f26e5005dcdbf41b0d84161-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/439d8c975f26e5005dcdbf41b0d84161-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10233-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/439d8c975f26e5005dcdbf41b0d84161-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/439d8c975f26e5005dcdbf41b0d84161-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/439d8c975f26e5005dcdbf41b0d84161-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/439d8c975f26e5005dcdbf41b0d84161-Supplemental.pdf
We show that top-down decision tree learning heuristics (such as ID3, C4.5, and CART) are amenable to highly efficient learnability estimation: for monotone target functions, the error of the decision tree hypothesis constructed by these heuristics can be estimated with polylogarithmically many labeled examples, exponentially smaller than the number necessary to run these heuristics, and indeed, exponentially smaller than the information-theoretic minimum required to learn a good decision tree. This adds to a small but growing list of fundamental learning algorithms that have been shown to be amenable to learnability estimation. En route to this result, we design and analyze sample-efficient minibatch versions of top-down decision tree learning heuristics and show that they achieve the same provable guarantees as the full-batch versions. We further give "active local" versions of these heuristics: given a test point $x^\star$, we show how the label $T(x^\star)$ of the decision tree hypothesis $T$ can be computed with polylogarithmically many labeled examples, exponentially smaller than the number necessary to learn $T$.
Sparse Symplectically Integrated Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/439fca360bc99c315c5882c4432ae7a4-Abstract.html
Daniel DiPietro, Shiying Xiong, Bo Zhu
https://papers.nips.cc/paper_files/paper/2020/hash/439fca360bc99c315c5882c4432ae7a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/439fca360bc99c315c5882c4432ae7a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10234-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/439fca360bc99c315c5882c4432ae7a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/439fca360bc99c315c5882c4432ae7a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/439fca360bc99c315c5882c4432ae7a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/439fca360bc99c315c5882c4432ae7a4-Supplemental.zip
We introduce Sparse Symplectically Integrated Neural Networks (SSINNs), a novel model for learning Hamiltonian dynamical systems from data. SSINNs combine fourth-order symplectic integration with a learned parameterization of the Hamiltonian obtained using sparse regression through a mathematically elegant function space. This allows for interpretable models that incorporate symplectic inductive biases and have low memory requirements. We evaluate SSINNs on four classical Hamiltonian dynamical problems: the Hénon-Heiles system, nonlinearly coupled oscillators, a multi-particle mass-spring system, and a pendulum system. Our results demonstrate promise in both system prediction and conservation of energy, often outperforming the current state-of-the-art black-box prediction techniques by an order of magnitude. Further, SSINNs successfully converge to true governing equations from highly limited and noisy data, demonstrating potential applicability in the discovery of new physical governing equations.
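A minimal sketch of the fourth-order symplectic (Yoshida) integration that this abstract refers to, applied to a separable Hamiltonian H(q, p) = p^2/2 + V(q). In SSINNs the Hamiltonian itself is a learned sparse expansion; here V is a fixed pendulum potential purely to illustrate the integrator, and the step size and coefficients follow the standard Yoshida composition.

```python
import numpy as np

w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
w0 = -(2.0 ** (1.0 / 3.0)) * w1
c = [w1 / 2, (w0 + w1) / 2, (w0 + w1) / 2, w1 / 2]   # drift coefficients
d = [w1, w0, w1, 0.0]                                # kick coefficients

def yoshida4_step(q, p, grad_V, dt):
    for ci, di in zip(c, d):
        q = q + ci * dt * p          # drift: dq/dt = dH/dp = p
        p = p - di * dt * grad_V(q)  # kick:  dp/dt = -dH/dq = -V'(q)
    return q, p

grad_V = np.sin                       # pendulum potential V(q) = 1 - cos(q)
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = yoshida4_step(q, p, grad_V, dt=0.05)
energy = 0.5 * p**2 + (1 - np.cos(q))
print(round(energy, 6))               # stays near the initial energy 1 - cos(1)
```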
Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision
https://papers.nips.cc/paper_files/paper/2020/hash/43a7c24e2d1fe375ce60d84ac901819f-Abstract.html
Nicolai Hani, Selim Engin, Jun-Jee Chao, Volkan Isler
https://papers.nips.cc/paper_files/paper/2020/hash/43a7c24e2d1fe375ce60d84ac901819f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/43a7c24e2d1fe375ce60d84ac901819f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10235-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/43a7c24e2d1fe375ce60d84ac901819f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/43a7c24e2d1fe375ce60d84ac901819f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/43a7c24e2d1fe375ce60d84ac901819f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/43a7c24e2d1fe375ce60d84ac901819f-Supplemental.zip
Novel View Synthesis (NVS) is concerned with synthesizing views under camera viewpoint transformations from one or multiple input images. NVS requires explicit reasoning about 3D object structure and unseen parts of the scene to synthesize convincing results. As a result, current approaches typically rely on supervised training with either ground truth 3D models or multiple target images. We propose Continuous Object Representation Networks (CORN), a conditional architecture that encodes an input image's geometry and appearance into a 3D-consistent scene representation. We can train CORN with only two source images per object by combining our model with a neural renderer. A key feature of CORN is that it requires no ground truth 3D models or target view supervision. Nevertheless, CORN performs well on challenging tasks such as novel view synthesis and single-view 3D reconstruction and achieves performance comparable to state-of-the-art approaches that use direct supervision. For up-to-date information, data, and code, please see our project page: https://nicolaihaeni.github.io/corn/.
Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence
https://papers.nips.cc/paper_files/paper/2020/hash/43bb733c1b62a5e374c63cb22fa457b4-Abstract.html
Thomas Sutter, Imant Daunhawer, Julia Vogt
https://papers.nips.cc/paper_files/paper/2020/hash/43bb733c1b62a5e374c63cb22fa457b4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/43bb733c1b62a5e374c63cb22fa457b4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10236-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/43bb733c1b62a5e374c63cb22fa457b4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/43bb733c1b62a5e374c63cb22fa457b4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/43bb733c1b62a5e374c63cb22fa457b4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/43bb733c1b62a5e374c63cb22fa457b4-Supplemental.pdf
Learning from different data types is a long-standing goal in machine learning research, as multiple information sources co-occur when describing natural phenomena. However, existing generative models that approximate a multimodal ELBO rely on difficult or inefficient training schemes to learn a joint distribution and the dependencies between modalities. In this work, we propose a novel, efficient objective function that utilizes the Jensen-Shannon divergence for multiple distributions. It simultaneously approximates the unimodal and joint multimodal posteriors directly via a dynamic prior. In addition, we theoretically prove that the new multimodal JS-divergence (mmJSD) objective optimizes an ELBO. In extensive experiments, we demonstrate the advantage of the proposed mmJSD model compared to previous work in unsupervised, generative learning tasks.
Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers
https://papers.nips.cc/paper_files/paper/2020/hash/43e4e6a6f341e00671e123714de019a8-Abstract.html
Kiwon Um, Robert Brand, Yun (Raymond) Fei, Philipp Holl, Nils Thuerey
https://papers.nips.cc/paper_files/paper/2020/hash/43e4e6a6f341e00671e123714de019a8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/43e4e6a6f341e00671e123714de019a8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10237-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/43e4e6a6f341e00671e123714de019a8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/43e4e6a6f341e00671e123714de019a8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/43e4e6a6f341e00671e123714de019a8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/43e4e6a6f341e00671e123714de019a8-Supplemental.zip
Finding accurate solutions to partial differential equations (PDEs) is a crucial task in all scientific and engineering disciplines. It has recently been shown that machine learning methods can improve the solution accuracy by correcting for effects not captured by the discretized PDE. We target the problem of reducing numerical errors of iterative PDE solvers and compare different learning approaches for finding complex correction functions. We find that previously used learning approaches are significantly outperformed by methods that integrate the solver into the training loop and thereby allow the model to interact with the PDE during training. This provides the model with realistic input distributions that take previous corrections into account, yielding improvements in accuracy with stable rollouts of several hundred recurrent evaluation steps and surpassing even tailored supervised variants. We highlight the performance of the differentiable physics networks for a wide variety of PDEs, from non-linear advection-diffusion systems to three-dimensional Navier-Stokes flows.
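A hedged toy sketch of the "solver in the training loop" idea described above: a coarse solver step is followed by a learned correction, the rollout is unrolled for several steps, and the correction parameter is fit against a reference trajectory. The "PDE" (a scalar decay ODE), the one-parameter correction, and the finite-difference training are drastic simplifications of the differentiable-physics setup.

```python
import numpy as np

def reference(u0, t):                 # exact solution of du/dt = -2u
    return u0 * np.exp(-2.0 * t)

def coarse_step(u, dt):               # low-accuracy explicit Euler step
    return u + dt * (-2.0 * u)

def rollout_loss(theta, u0=1.0, dt=0.4, steps=8):
    u, loss = u0, 0.0
    for k in range(1, steps + 1):
        u = coarse_step(u, dt)
        u = u + theta * u             # learned correction applied inside the rollout
        loss += (u - reference(u0, k * dt)) ** 2
    return loss

theta, lr, eps = 0.0, 0.2, 1e-5
for _ in range(400):                  # finite-difference "training"; autodiff in practice
    g = (rollout_loss(theta + eps) - rollout_loss(theta - eps)) / (2 * eps)
    theta -= lr * g
print(round(theta, 3), round(rollout_loss(theta), 6))  # corrected rollout tracks the reference
```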
Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension
https://papers.nips.cc/paper_files/paper/2020/hash/440924c5948e05070663f88e69e8242b-Abstract.html
Ruosong Wang, Russ R. Salakhutdinov, Lin Yang
https://papers.nips.cc/paper_files/paper/2020/hash/440924c5948e05070663f88e69e8242b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/440924c5948e05070663f88e69e8242b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10238-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/440924c5948e05070663f88e69e8242b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/440924c5948e05070663f88e69e8242b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/440924c5948e05070663f88e69e8242b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/440924c5948e05070663f88e69e8242b-Supplemental.pdf
Value function approximation has demonstrated phenomenal empirical success in reinforcement learning (RL). Nevertheless, despite recent progress on developing theory for RL with linear function approximation, the understanding of general function approximation schemes largely remains missing. In this paper, we establish the first provably efficient RL algorithm with general value function approximation. We show that if the value functions admit an approximation with a function class $\mathcal{F}$, our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$ where $d$ is a complexity measure of $\mathcal{F}$ that depends on the eluder dimension [Russo and Van Roy, 2013] and log-covering numbers, $H$ is the planning horizon, and $T$ is the number of interactions with the environment. Our theory generalizes the linear MDP assumption to general function classes. Moreover, our algorithm is model-free and provides a framework to justify the effectiveness of algorithms used in practice.
Predicting Training Time Without Training
https://papers.nips.cc/paper_files/paper/2020/hash/440e7c3eb9bbcd4c33c3535354a51605-Abstract.html
Luca Zancato, Alessandro Achille, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto
https://papers.nips.cc/paper_files/paper/2020/hash/440e7c3eb9bbcd4c33c3535354a51605-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/440e7c3eb9bbcd4c33c3535354a51605-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10239-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/440e7c3eb9bbcd4c33c3535354a51605-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/440e7c3eb9bbcd4c33c3535354a51605-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/440e7c3eb9bbcd4c33c3535354a51605-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/440e7c3eb9bbcd4c33c3535354a51605-Supplemental.pdf
We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function. To do so, we leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model. This allows us to approximate the training loss and accuracy at any point during training by solving a low-dimensional Stochastic Differential Equation (SDE) in function space. Using this result, we are able to predict the time it takes for Stochastic Gradient Descent (SGD) to fine-tune a model to a given loss without having to perform any training. In our experiments, we are able to predict training time of a ResNet within a 20\% error margin on a variety of datasets and hyper-parameters, at a 30 to 45-fold reduction in cost compared to actual training. We also discuss how to further reduce the computational and memory cost of our method, and in particular we show that by exploiting the spectral properties of the gradients' matrix it is possible to predict training time on a large dataset while processing only a subset of the samples.
How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions
https://papers.nips.cc/paper_files/paper/2020/hash/443dec3062d0286986e21dc0631734c9-Abstract.html
Michael Tsang, Sirisha Rambhatla, Yan Liu
https://papers.nips.cc/paper_files/paper/2020/hash/443dec3062d0286986e21dc0631734c9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/443dec3062d0286986e21dc0631734c9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10240-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/443dec3062d0286986e21dc0631734c9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/443dec3062d0286986e21dc0631734c9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/443dec3062d0286986e21dc0631734c9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/443dec3062d0286986e21dc0631734c9-Supplemental.pdf
Machine learning transparency calls for interpretable explanations of how inputs relate to predictions. Feature attribution is a way to analyze the impact of features on predictions. Feature interactions are the contextual dependence between features that jointly impact predictions. There are a number of methods that extract feature interactions in prediction models; however, the methods that assign attributions to interactions are either uninterpretable, model-specific, or non-axiomatic. We propose an interaction attribution and detection framework called Archipelago which addresses these problems and is also scalable in real-world settings. Our experiments on standard annotation labels indicate our approach provides significantly more interpretable explanations than comparable methods, which is important for analyzing the impact of interactions on predictions. We also provide accompanying visualizations of our approach that give new insights into deep neural networks.
Optimal Adaptive Electrode Selection to Maximize Simultaneously Recorded Neuron Yield
https://papers.nips.cc/paper_files/paper/2020/hash/445e1050156c6ae8c082a8422bb7dfc0-Abstract.html
John Choi, Krishan Kumar, Mohammad Khazali, Katie Wingel, Mahdi Choudhury, Adam S. Charles, Bijan Pesaran
https://papers.nips.cc/paper_files/paper/2020/hash/445e1050156c6ae8c082a8422bb7dfc0-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/445e1050156c6ae8c082a8422bb7dfc0-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10241-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/445e1050156c6ae8c082a8422bb7dfc0-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/445e1050156c6ae8c082a8422bb7dfc0-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/445e1050156c6ae8c082a8422bb7dfc0-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/445e1050156c6ae8c082a8422bb7dfc0-Supplemental.pdf
Neural-Matrix style, high-density electrode arrays for brain-machine interfaces (BMIs) and neuroscientific research require the use of multiplexing: Each recording channel can be routed to one of several electrode sites on the array. This capability allows the user to flexibly distribute recording channels to the locations where the most desirable neural signals can be resolved. For example, in the Neuropixel probe, 960 electrodes can be addressed by 384 recording channels. However, currently no adaptive methods exist to use recorded neural data to optimize/customize the electrode selections per recording context. Here, we present an algorithm called classification-based selection (CBS) that optimizes the joint electrode selections for all recording channels so as to maximize isolation quality of detected neurons. We show, in experiments using Neuropixels in non-human primates, that this algorithm yields a similar number of isolated neurons as would be obtained if all electrodes were recorded simultaneously. Neuron counts were 41-85% improved over previously published electrode selection strategies. The neurons isolated from electrodes selected by CBS were a 73% match, by spike timing, to the complete set of recordable neurons around the probe. The electrodes selected by CBS exhibited higher average per-recording-channel signal-to-noise ratio. CBS, and selection optimization in general, could play an important role in development of neurotechnologies for BMI, as signal bandwidth becomes an increasingly limiting factor. Code and experimental data have been made available.
Neurosymbolic Reinforcement Learning with Formally Verified Exploration
https://papers.nips.cc/paper_files/paper/2020/hash/448d5eda79895153938a8431919f4c9f-Abstract.html
Greg Anderson, Abhinav Verma, Isil Dillig, Swarat Chaudhuri
https://papers.nips.cc/paper_files/paper/2020/hash/448d5eda79895153938a8431919f4c9f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/448d5eda79895153938a8431919f4c9f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10242-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/448d5eda79895153938a8431919f4c9f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/448d5eda79895153938a8431919f4c9f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/448d5eda79895153938a8431919f4c9f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/448d5eda79895153938a8431919f4c9f-Supplemental.pdf
We present REVEL, a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces. A key challenge for provably safe deep RL is that repeatedly verifying neural networks within a learning loop is computationally infeasible. We address this challenge using two policy classes: a general, neurosymbolic class with approximate gradients and a more restricted class of symbolic policies that allows efficient verification. Our learning algorithm is a mirror descent over policies: in each iteration, it safely lifts a symbolic policy into the neurosymbolic space, performs safe gradient updates to the resulting policy, and projects the updated policy into the safe symbolic subset, all without requiring explicit verification of neural networks. Our empirical results show that REVEL enforces safe exploration in many scenarios in which Constrained Policy Optimization does not, and that it can discover policies that outperform those learned through prior approaches to verified exploration.
Wavelet Flow: Fast Training of High Resolution Normalizing Flows
https://papers.nips.cc/paper_files/paper/2020/hash/4491777b1aa8b5b32c2e8666dbe1a495-Abstract.html
Jason J. Yu, Konstantinos G. Derpanis, Marcus A. Brubaker
https://papers.nips.cc/paper_files/paper/2020/hash/4491777b1aa8b5b32c2e8666dbe1a495-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4491777b1aa8b5b32c2e8666dbe1a495-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10243-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4491777b1aa8b5b32c2e8666dbe1a495-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4491777b1aa8b5b32c2e8666dbe1a495-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4491777b1aa8b5b32c2e8666dbe1a495-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4491777b1aa8b5b32c2e8666dbe1a495-Supplemental.pdf
Normalizing flows are a class of probabilistic generative models which allow for both fast density computation and efficient sampling and are effective at modelling complex distributions like images. A drawback among current methods is their significant training cost, sometimes requiring months of GPU training time to achieve state-of-the-art results. This paper introduces Wavelet Flow, a multi-scale, normalizing flow architecture based on wavelets. A Wavelet Flow has an explicit representation of signal scale that inherently includes models of lower resolution signals and conditional generation of higher resolution signals, i.e., super resolution. A major advantage of Wavelet Flow is the ability to construct generative models for high resolution data (e.g., 1024 × 1024 images) that are impractical with previous models. Furthermore, Wavelet Flow is competitive with previous normalizing flows in terms of bits per dimension on standard (low resolution) benchmarks while being up to 15× faster to train.
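A minimal numpy sketch of the single-level Haar-style decomposition that a wavelet-based multi-scale flow builds on: an image splits into a half-resolution average band plus three detail bands, and the coarser band can be decomposed recursively. The flow itself is omitted; this only illustrates the signal representation the abstract describes, and the particular normalization is an assumption.

```python
import numpy as np

def haar_decompose(img):
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    low = (a + b + c + d) / 4.0          # half-resolution average band
    dh  = (a - b + c - d) / 4.0          # horizontal detail
    dv  = (a + b - c - d) / 4.0          # vertical detail
    dd  = (a - b - c + d) / 4.0          # diagonal detail
    return low, (dh, dv, dd)

def haar_reconstruct(low, details):
    dh, dv, dd = details
    out = np.empty((low.shape[0] * 2, low.shape[1] * 2))
    out[0::2, 0::2] = low + dh + dv + dd
    out[0::2, 1::2] = low - dh + dv - dd
    out[1::2, 0::2] = low + dh - dv - dd
    out[1::2, 1::2] = low - dh - dv + dd
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
low, details = haar_decompose(img)
assert np.allclose(haar_reconstruct(low, details), img)   # lossless round trip
print(low.shape)   # (4, 4): the coarser band that can be modelled recursively
```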
Multi-task Batch Reinforcement Learning with Metric Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4496bf24afe7fab6f046bf4923da8de6-Abstract.html
Jiachen Li, Quan Vuong, Shuang Liu, Minghua Liu, Kamil Ciosek, Henrik Christensen, Hao Su
https://papers.nips.cc/paper_files/paper/2020/hash/4496bf24afe7fab6f046bf4923da8de6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4496bf24afe7fab6f046bf4923da8de6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10244-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4496bf24afe7fab6f046bf4923da8de6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4496bf24afe7fab6f046bf4923da8de6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4496bf24afe7fab6f046bf4923da8de6-Supplemental.pdf
We tackle the Multi-task Batch Reinforcement Learning problem. Given multiple datasets collected from different tasks, we train a multi-task policy to perform well in unseen tasks sampled from the same distribution. The task identities of the unseen tasks are not provided. To perform well, the policy must infer the task identity from collected transitions by modelling its dependency on states, actions and rewards. Because the different datasets may have state-action distributions with large divergence, the task inference module can learn to ignore the rewards and spuriously correlate only state-action pairs to the task identity, leading to poor test time performance. To robustify task inference, we propose a novel application of the triplet loss. To mine hard negative examples, we relabel the transitions from the training tasks by approximating their reward functions. When we allow further training on the unseen tasks, using the trained policy as an initialization leads to significantly faster convergence compared to randomly initialized policies (up to 80% improvement and across 5 different Mujoco task distributions). We name our method MBML (Multi-task Batch RL with Metric Learning).
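A hedged sketch of the triplet loss used as the robustification device above: embeddings of transitions from the same task (anchor, positive) are pulled together and pushed away from those of a different task (negative). The embedding network, the hard-negative mining, and the reward relabelling are omitted; the vectors below are synthetic placeholders.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)    # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)    # anchor-negative distance
    return np.mean(np.maximum(0.0, d_pos - d_neg + margin))

rng = np.random.default_rng(0)
z_anchor   = rng.normal(size=(32, 16))                   # task embeddings from task A
z_positive = z_anchor + 0.1 * rng.normal(size=(32, 16))  # other transitions from task A
z_negative = rng.normal(loc=2.0, size=(32, 16))          # embeddings from a different task
print(triplet_loss(z_anchor, z_positive, z_negative))
```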
On 1/n neural representation and robustness
https://papers.nips.cc/paper_files/paper/2020/hash/44bf89b63173d40fb39f9842e308b3f9-Abstract.html
Josue Nassar, Piotr Sokol, Sueyeon Chung, Kenneth D. Harris, Il Memming Park
https://papers.nips.cc/paper_files/paper/2020/hash/44bf89b63173d40fb39f9842e308b3f9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/44bf89b63173d40fb39f9842e308b3f9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10245-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/44bf89b63173d40fb39f9842e308b3f9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/44bf89b63173d40fb39f9842e308b3f9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/44bf89b63173d40fb39f9842e308b3f9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/44bf89b63173d40fb39f9842e308b3f9-Supplemental.pdf
Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning. It is therefore exciting that both fields converge not only on shared questions but also on similar approaches. A pressing question in these areas is understanding how the structure of the representation used by neural networks affects both their generalization and robustness to perturbations. In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in mouse V1 (Stringer et al.) with artificial neural networks. We use adversarial robustness to probe Stringer et al.'s theory regarding the causal role of a 1/n covariance spectrum. We empirically investigate the benefits such a neural code confers in neural networks, and illuminate its role in multi-layer architectures. Our results show that imposing the experimentally observed structure on artificial neural networks makes them more robust to adversarial attacks. Moreover, our findings complement the existing theory relating wide neural networks to kernel methods by showing the role of intermediate representations.
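A small numpy sketch of the measurement this abstract is about: take a matrix of unit activations, compute the covariance eigenspectrum, and fit its log-log slope; a slope near -1 corresponds to the 1/n spectrum of Stringer et al. The synthetic activations and the rank range used for the fit are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_stimuli = 500, 5000
# build activations whose covariance eigenvalues decay roughly as 1/n
basis, _ = np.linalg.qr(rng.normal(size=(n_units, n_units)))
scales = 1.0 / np.sqrt(np.arange(1, n_units + 1))
acts = (basis * scales) @ rng.normal(size=(n_units, n_stimuli))

cov = np.cov(acts)                                   # units x units covariance
eigs = np.sort(np.linalg.eigvalsh(cov))[::-1]
ranks = np.arange(1, len(eigs) + 1)
k = slice(9, 300)                                    # fit over intermediate ranks
slope = np.polyfit(np.log(ranks[k]), np.log(eigs[k]), 1)[0]
print(round(slope, 2))                               # close to -1 for a 1/n spectrum
```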
Boundary thickness and robustness in learning models
https://papers.nips.cc/paper_files/paper/2020/hash/44e76e99b5e194377e955b13fb12f630-Abstract.html
Yaoqing Yang, Rajiv Khanna, Yaodong Yu, Amir Gholami, Kurt Keutzer, Joseph E. Gonzalez, Kannan Ramchandran, Michael W. Mahoney
https://papers.nips.cc/paper_files/paper/2020/hash/44e76e99b5e194377e955b13fb12f630-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/44e76e99b5e194377e955b13fb12f630-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10246-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/44e76e99b5e194377e955b13fb12f630-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/44e76e99b5e194377e955b13fb12f630-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/44e76e99b5e194377e955b13fb12f630-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/44e76e99b5e194377e955b13fb12f630-Supplemental.pdf
Robustness of machine learning models to various adversarial and non-adversarial corruptions continues to be of interest. In this paper, we introduce the notion of the boundary thickness of a classifier, and we describe its connection with and usefulness for model robustness. Thick decision boundaries lead to improved performance, while thin decision boundaries lead to overfitting (e.g., measured by the robust generalization gap between training and testing) and lower robustness. We show that a thicker boundary helps improve robustness against adversarial examples (e.g., improving the robust test accuracy of adversarial training), as well as so-called out-of-distribution (OOD) transforms, and we show that many commonly-used regularization and data augmentation procedures can increase boundary thickness. On the theoretical side, we establish that maximizing boundary thickness is akin to minimizing the so-called mixup loss. Using these observations, we can show that noise-augmentation on mixup training further increases boundary thickness, thereby combating vulnerability to various forms of adversarial attacks and OOD transforms. We can also show that the performance improvement in several recent lines of work happens in conjunction with a thicker boundary.
Demixed shared component analysis of neural population data from multiple brain areas
https://papers.nips.cc/paper_files/paper/2020/hash/44ece762ae7e41e3a0b1301488907eaa-Abstract.html
Yu Takagi, Steven Kennerley, Jun-ichiro Hirayama, Laurence Hunt
https://papers.nips.cc/paper_files/paper/2020/hash/44ece762ae7e41e3a0b1301488907eaa-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/44ece762ae7e41e3a0b1301488907eaa-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10247-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/44ece762ae7e41e3a0b1301488907eaa-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/44ece762ae7e41e3a0b1301488907eaa-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/44ece762ae7e41e3a0b1301488907eaa-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/44ece762ae7e41e3a0b1301488907eaa-Supplemental.pdf
Recent advances in neuroscience data acquisition allow for the simultaneous recording of large populations of neurons across multiple brain areas while subjects perform complex cognitive tasks. Interpreting these data requires us to index how task-relevant information is shared across brain regions, but this is often confounded by the mixing of different task parameters at the single neuron level. Here, inspired by a method developed for a single brain area, we introduce a new technique for demixing variables across multiple brain areas, called demixed shared component analysis (dSCA). dSCA decomposes population activity into a few components, such that the shared components capture the maximum amount of shared information across brain regions while also depending on relevant task parameters. This yields interpretable components that express which variables are shared between different brain regions and when this information is shared across time. To illustrate our method, we reanalyze two datasets recorded during decision-making tasks in rodents and macaques. We find that dSCA provides new insights into the shared computation between different brain areas in these datasets, relating to several different aspects of decision formation.
Learning Kernel Tests Without Data Splitting
https://papers.nips.cc/paper_files/paper/2020/hash/44f683a84163b3523afe57c2e008bc8c-Abstract.html
Jonas Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet
https://papers.nips.cc/paper_files/paper/2020/hash/44f683a84163b3523afe57c2e008bc8c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/44f683a84163b3523afe57c2e008bc8c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10248-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/44f683a84163b3523afe57c2e008bc8c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/44f683a84163b3523afe57c2e008bc8c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/44f683a84163b3523afe57c2e008bc8c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/44f683a84163b3523afe57c2e008bc8c-Supplemental.pdf
Modern large-scale kernel-based tests such as maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD) optimize kernel hyperparameters on a held-out sample via data splitting to obtain the most powerful test statistics. While data splitting results in a tractable null distribution, it suffers from a reduction in test power due to the smaller test sample size. Inspired by the selective inference framework, we propose an approach that enables learning the hyperparameters and testing on the full sample without data splitting. Our approach correctly calibrates the test in the presence of the dependency introduced by learning on the same data, and yields a test threshold in closed form. At the same significance level, our approach's test power is empirically larger than that of the data-splitting approach, regardless of its split proportion.
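Background sketch only: the quadratic-time unbiased MMD^2 estimate with a Gaussian kernel that such kernel two-sample tests are built on. The paper's actual contribution, selecting the kernel on the full sample and correcting the null distribution via selective inference, is not reproduced here; the bandwidth and data are placeholders.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    Kxx = gaussian_kernel(X, X, bandwidth); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Y, Y, bandwidth); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    n, m = len(X), len(Y)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = rng.normal(loc=0.5, size=(200, 3))
print(mmd2_unbiased(X, Y))        # clearly positive for these shifted samples
```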
Unsupervised Data Augmentation for Consistency Training
https://papers.nips.cc/paper_files/paper/2020/hash/44feb0096faa8326192570788b38c1d1-Abstract.html
Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, Quoc Le
https://papers.nips.cc/paper_files/paper/2020/hash/44feb0096faa8326192570788b38c1d1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/44feb0096faa8326192570788b38c1d1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10249-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/44feb0096faa8326192570788b38c1d1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/44feb0096faa8326192570788b38c1d1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/44feb0096faa8326192570788b38c1d1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/44feb0096faa8326192570788b38c1d1-Supplemental.pdf
Semi-supervised learning has lately shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when fine-tuning from BERT, and yields improvements in the high-data regime, such as on ImageNet, whether there is only 10% labeled data or a full labeled set with 1.3M extra unlabeled examples. Code is available at https://github.com/google-research/uda.
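A hedged sketch of the consistency-training objective this abstract builds on: cross-entropy on labeled data plus a divergence term that makes predictions on an unlabeled example and its augmented version agree. Gaussian noise stands in for RandAugment or back-translation, and the linear softmax "model" is a toy assumption.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def uda_loss(W, x_lab, y_lab, x_unlab, augment, lam=1.0):
    p_lab = softmax(x_lab @ W)
    ce = -np.mean(np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12))
    p = softmax(x_unlab @ W)                 # prediction on the clean unlabeled example
    q = softmax(augment(x_unlab) @ W)        # prediction on its augmented version
    kl = np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1))
    return ce + lam * kl                     # supervised loss + consistency loss

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 3))
x_lab, y_lab = rng.normal(size=(20, 10)), rng.integers(0, 3, size=20)
x_unlab = rng.normal(size=(200, 10))
augment = lambda x: x + 0.1 * rng.normal(size=x.shape)   # stand-in for real augmentation
print(uda_loss(W, x_lab, y_lab, x_unlab, augment))
```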
Subgroup-based Rank-1 Lattice Quasi-Monte Carlo
https://papers.nips.cc/paper_files/paper/2020/hash/456048afb7253926e1fbb7486e699180-Abstract.html
Yueming LYU, Yuan Yuan, Ivor Tsang
https://papers.nips.cc/paper_files/paper/2020/hash/456048afb7253926e1fbb7486e699180-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/456048afb7253926e1fbb7486e699180-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10250-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/456048afb7253926e1fbb7486e699180-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/456048afb7253926e1fbb7486e699180-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/456048afb7253926e1fbb7486e699180-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/456048afb7253926e1fbb7486e699180-Supplemental.zip
Quasi-Monte Carlo (QMC) is an essential tool for integral approximation, Bayesian inference, and sampling for simulation in science. In the QMC area, the rank-1 lattice is important due to its simple operation and nice properties for point set construction. However, the construction of the generating vector of the rank-1 lattice is usually time-consuming because of an exhaustive computer search. To address this issue, we propose a simple closed-form rank-1 lattice construction method based on group theory. Our method reduces the number of distinct pairwise distance values to generate a more regular lattice. We theoretically prove a lower and an upper bound on the minimum pairwise distance of any non-degenerate rank-1 lattice. Empirically, our method can generate near-optimal rank-1 lattices compared with the Korobov exhaustive search with respect to the $l_1$-norm and $l_2$-norm minimum distance. Moreover, experimental results show that our method achieves superior approximation performance on benchmark integration test problems and kernel approximation problems.
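Background sketch of what a rank-1 lattice is (standard QMC material, not the paper's subgroup-based construction): given a generating vector z, the n points are the fractional parts of i*z/n. The paper's contribution is a closed-form choice of z; here z is just a classical Korobov-style vector with an arbitrary base a.

```python
import numpy as np

def rank1_lattice(n, z):
    i = np.arange(n)[:, None]
    return (i * z[None, :] / n) % 1.0           # n points in the unit cube [0, 1)^d

n, d, a = 101, 4, 12
z = np.array([pow(a, j, n) for j in range(d)])  # Korobov generating vector (1, a, a^2, ...)
points = rank1_lattice(n, z)
print(points.shape, points.min(), points.max())
```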
Minibatch vs Local SGD for Heterogeneous Distributed Learning
https://papers.nips.cc/paper_files/paper/2020/hash/45713f6ff2041d3fdfae927b82488db8-Abstract.html
Blake E. Woodworth, Kumar Kshitij Patel, Nati Srebro
https://papers.nips.cc/paper_files/paper/2020/hash/45713f6ff2041d3fdfae927b82488db8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/45713f6ff2041d3fdfae927b82488db8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10251-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/45713f6ff2041d3fdfae927b82488db8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/45713f6ff2041d3fdfae927b82488db8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/45713f6ff2041d3fdfae927b82488db8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/45713f6ff2041d3fdfae927b82488db8-Supplemental.zip
We analyze Local SGD (aka parallel or federated SGD) and Minibatch SGD in the heterogeneous distributed setting, where each machine has access to stochastic gradient estimates for a different, machine-specific, convex objective; the goal is to optimize w.r.t. the average objective; and machines can only communicate intermittently. We argue that (i) Minibatch SGD (even without acceleration) dominates all existing analyses of Local SGD in this setting and (ii) accelerated Minibatch SGD is optimal when the heterogeneity is high, and (iii) we present the first upper bound for Local SGD that improves over Minibatch SGD in a non-homogeneous regime.
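A toy numpy sketch of the two protocols compared above, on heterogeneous quadratics f_m(w) = 0.5 * a_m * ||w - c_m||^2 with noisy gradients: Minibatch SGD takes one step per communication round using a size-H minibatch per machine, while Local SGD takes H local steps between averagings. All constants are arbitrary assumptions; on this toy the local updates drift toward each machine's own optimum, which illustrates the heterogeneity effect.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, H, rounds, noise = 8, 10, 10, 100, 0.1
a = rng.uniform(0.5, 2.0, size=M)                 # machine-specific curvatures
c = rng.normal(size=(M, d))                       # machine-specific optima
w_star = (a[:, None] * c).sum(0) / a.sum()        # minimizer of the average objective

def grad(w, m):
    return a[m] * (w - c[m]) + noise * rng.normal(size=d)

w_mb, eta_mb = np.zeros(d), 0.5
for _ in range(rounds):                           # Minibatch SGD: H gradients at the same point
    g = np.mean([grad(w_mb, m) for m in range(M) for _ in range(H)], axis=0)
    w_mb -= eta_mb * g

w_loc, eta_loc = np.zeros(d), 0.1
for _ in range(rounds):                           # Local SGD: H sequential local steps
    updated = []
    for m in range(M):
        w = w_loc.copy()
        for _ in range(H):
            w -= eta_loc * grad(w, m)
        updated.append(w)
    w_loc = np.mean(updated, axis=0)              # periodic averaging

print(np.linalg.norm(w_mb - w_star), np.linalg.norm(w_loc - w_star))
```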
Multi-task Causal Learning with Gaussian Processes
https://papers.nips.cc/paper_files/paper/2020/hash/45c166d697d65080d54501403b433256-Abstract.html
Virginia Aglietti, Theodoros Damoulas, Mauricio Álvarez, Javier González
https://papers.nips.cc/paper_files/paper/2020/hash/45c166d697d65080d54501403b433256-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/45c166d697d65080d54501403b433256-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10252-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/45c166d697d65080d54501403b433256-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/45c166d697d65080d54501403b433256-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/45c166d697d65080d54501403b433256-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/45c166d697d65080d54501403b433256-Supplemental.pdf
This paper studies the problem of learning the correlation structure of a set of intervention functions defined on the directed acyclic graph (DAG) of a causal model. This is useful when we are interested in jointly learning the causal effects of interventions on different subsets of variables in a DAG, which is common in fields such as healthcare or operations research. We propose the first multi-task causal Gaussian process (GP) model, which we call DAG-GP, that allows for information sharing across continuous interventions and across experiments on different variables. DAG-GP accommodates different assumptions in terms of data availability and captures the correlation between functions lying in input spaces of different dimensionality via a well-defined integral operator. We give theoretical results detailing when and how the DAG-GP model can be formulated depending on the DAG. We test both the quality of its predictions and its calibrated uncertainties. Compared to single-task models, DAG-GP achieves the best fitting performance in a variety of real and synthetic settings. In addition, it helps to select optimal interventions faster than competing approaches when used within sequential decision making frameworks, like active learning or Bayesian optimization.
Proximity Operator of the Matrix Perspective Function and its Applications
https://papers.nips.cc/paper_files/paper/2020/hash/45f31d16b1058d586fc3be7207b58053-Abstract.html
Joong-Ho (Johann) Won
https://papers.nips.cc/paper_files/paper/2020/hash/45f31d16b1058d586fc3be7207b58053-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/45f31d16b1058d586fc3be7207b58053-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10253-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/45f31d16b1058d586fc3be7207b58053-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/45f31d16b1058d586fc3be7207b58053-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/45f31d16b1058d586fc3be7207b58053-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/45f31d16b1058d586fc3be7207b58053-Supplemental.pdf
We show that the matrix perspective function, which is jointly convex in the Cartesian product of a standard Euclidean vector space and a conformal space of symmetric matrices, has a proximity operator in an almost closed form. The only implicit part is to solve a semismooth, univariate root-finding problem. We uncover the connection between our problem of study and the matrix nearness problem. Through this connection, we propose a quadratically convergent Newton algorithm for the root-finding problem. Experiments verify that the evaluation of the proximity operator requires at most 8 Newton steps, taking less than 5s for 2000 by 2000 matrices on a standard laptop. Using this routine as a building block, we demonstrate the usefulness of the studied proximity operator in constrained maximum likelihood estimation of Gaussian mean and covariance, pseudolikelihood-based graphical model selection, and a matrix variant of the scaled lasso problem.
Generative 3D Part Assembly via Dynamic Graph Learning
https://papers.nips.cc/paper_files/paper/2020/hash/45fbc6d3e05ebd93369ce542e8f2322d-Abstract.html
jialei huang, Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas J. Guibas, Hao Dong
https://papers.nips.cc/paper_files/paper/2020/hash/45fbc6d3e05ebd93369ce542e8f2322d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/45fbc6d3e05ebd93369ce542e8f2322d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10254-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/45fbc6d3e05ebd93369ce542e8f2322d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/45fbc6d3e05ebd93369ce542e8f2322d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/45fbc6d3e05ebd93369ce542e8f2322d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/45fbc6d3e05ebd93369ce542e8f2322d-Supplemental.pdf
Autonomous part assembly is a challenging yet crucial task in 3D computer vision and robotics. Analogous to buying IKEA furniture, given a set of 3D parts that can assemble a single shape, an intelligent agent needs to perceive the 3D part geometry, reason to propose pose estimations for the input parts, and finally call robotic planning and control routines for actuation. In this paper, we focus on the pose estimation subproblem from the vision side involving geometric and relational reasoning over the input part geometry. Essentially, the task of generative 3D part assembly is to predict a 6-DoF part pose, including a rigid rotation and translation, for each input part that assembles a single 3D shape as the final output. To tackle this problem, we propose an assembly-oriented dynamic graph learning framework that leverages an iterative graph neural network as a backbone. It explicitly conducts sequential part assembly refinements in a coarse-to-fine manner, and exploits a pair of modules, a part relation reasoning module and a part aggregation module, for dynamically adjusting both part features and their relations in the part graph. We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach.
Improving Natural Language Processing Tasks with Human Gaze-Guided Neural Attention
https://papers.nips.cc/paper_files/paper/2020/hash/460191c72f67e90150a093b4585e7eb4-Abstract.html
Ekta Sood, Simon Tannert, Philipp Mueller, Andreas Bulling
https://papers.nips.cc/paper_files/paper/2020/hash/460191c72f67e90150a093b4585e7eb4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/460191c72f67e90150a093b4585e7eb4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10255-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/460191c72f67e90150a093b4585e7eb4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/460191c72f67e90150a093b4585e7eb4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/460191c72f67e90150a093b4585e7eb4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/460191c72f67e90150a093b4585e7eb4-Supplemental.pdf
A lack of corpora has so far limited advances in integrating human gaze data as a supervisory signal in neural attention mechanisms for natural language processing (NLP). We propose a novel hybrid text saliency model (TSM) that, for the first time, combines a cognitive model of reading with explicit human gaze supervision in a single machine learning framework. On four different corpora we demonstrate that our hybrid TSM duration predictions are highly correlated with human gaze ground truth. We further propose a novel joint modeling approach to integrate TSM predictions into the attention layer of a network designed for a specific upstream NLP task without the need for any task-specific human gaze data. We demonstrate that our joint model outperforms the state of the art in paraphrase generation on the Quora Question Pairs corpus by more than 10% in BLEU-4 and achieves state of the art performance for sentence compression on the challenging Google Sentence Compression corpus. As such, our work introduces a practical approach for bridging between data-driven and cognitive models and demonstrates a new way to integrate human gaze-guided neural attention into NLP tasks.
The Power of Comparisons for Actively Learning Linear Classifiers
https://papers.nips.cc/paper_files/paper/2020/hash/4607f7fff0dce694258e1c637512aa9d-Abstract.html
Max Hopkins, Daniel Kane, Shachar Lovett
https://papers.nips.cc/paper_files/paper/2020/hash/4607f7fff0dce694258e1c637512aa9d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10256-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4607f7fff0dce694258e1c637512aa9d-Supplemental.zip
In the world of big data, large but costly to label datasets dominate many fields. Active learning, a semi-supervised alternative to the standard PAC-learning model, was introduced to explore whether adaptive labeling could learn concepts with exponentially fewer labeled samples. While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples. Further, we show that these results hold as well for a stronger model of learning called Reliable and Probably Useful (RPU) learning. In this model, our learner is not allowed to make mistakes, but may instead answer ``I don't know.'' While previous negative results showed this model to have intractably large sample complexity for label queries, we show that comparison queries make RPU-learning at worst logarithmically more expensive in both the passive and active regimes.
From Boltzmann Machines to Neural Networks and Back Again
https://papers.nips.cc/paper_files/paper/2020/hash/464074179972cbbd75a39abc6954cd12-Abstract.html
Surbhi Goel, Adam Klivans, Frederic Koehler
https://papers.nips.cc/paper_files/paper/2020/hash/464074179972cbbd75a39abc6954cd12-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/464074179972cbbd75a39abc6954cd12-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10257-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/464074179972cbbd75a39abc6954cd12-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/464074179972cbbd75a39abc6954cd12-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/464074179972cbbd75a39abc6954cd12-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/464074179972cbbd75a39abc6954cd12-Supplemental.zip
Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well-known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models. Our results are based on new connections to learning two-layer neural networks under $\ell_{\infty}$ bounded input; for both problems, we give nearly optimal results under the conjectured hardness of sparse parity with noise. Using the connection between RBMs and feedforward networks, we also initiate the theoretical study of {\em supervised RBMs} \citep{hinton2012practical}, a version of neural-network learning that couples distributional assumptions induced from the underlying graphical model with the architecture of the unknown function class. We then give an algorithm for learning a natural class of supervised RBMs with better runtime than what is possible for its related class of networks without distributional assumptions.
Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic Optimality
https://papers.nips.cc/paper_files/paper/2020/hash/46489c17893dfdcf028883202cefd6d1-Abstract.html
Kwang-Sung Jun, Chicheng Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/46489c17893dfdcf028883202cefd6d1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/46489c17893dfdcf028883202cefd6d1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10258-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/46489c17893dfdcf028883202cefd6d1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/46489c17893dfdcf028883202cefd6d1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/46489c17893dfdcf028883202cefd6d1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/46489c17893dfdcf028883202cefd6d1-Supplemental.pdf
We study stochastic structured bandits for minimizing regret. The fact that the popular optimistic algorithms do not achieve the asymptotic instance-dependent regret optimality (asymptotic optimality for short) has recently eluded researchers. On the other hand, it is known that one can achieve bounded regret (i.e., regret that does not grow indefinitely with $n$) in certain instances. Unfortunately, existing asymptotically optimal algorithms rely on forced sampling that introduces an $\omega(1)$ term w.r.t. the time horizon $n$ in their regret, failing to adapt to the ``easiness'' of the instance. In this paper, we focus on the finite hypothesis case and ask if one can achieve the asymptotic optimality while enjoying bounded regret whenever possible. We provide a positive answer by introducing a new algorithm called CRush Optimism with Pessimism (CROP) that eliminates optimistic hypotheses by pulling the informative arms indicated by a pessimistic hypothesis. Our finite-time analysis shows that CROP $(i)$ achieves a constant-factor asymptotic optimality and, thanks to the forced-exploration-free design, $(ii)$ adapts to bounded regret, and $(iii)$ its regret bound scales not with $K$ but with an effective number of arms $K_\psi$ that we introduce. We also discuss a problem class where CROP can be exponentially better than existing algorithms in \textit{nonasymptotic} regimes. This problem class also reveals a surprising fact that even a clairvoyant oracle who plays according to the asymptotically optimal arm pull scheme may suffer a linear worst-case regret.
Pruning neural networks without any data by iteratively conserving synaptic flow
https://papers.nips.cc/paper_files/paper/2020/hash/46a4378f835dc8040c8057beb6a2da52-Abstract.html
Hidenori Tanaka, Daniel Kunin, Daniel L. Yamins, Surya Ganguli
https://papers.nips.cc/paper_files/paper/2020/hash/46a4378f835dc8040c8057beb6a2da52-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/46a4378f835dc8040c8057beb6a2da52-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10259-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/46a4378f835dc8040c8057beb6a2da52-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/46a4378f835dc8040c8057beb6a2da52-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/46a4378f835dc8040c8057beb6a2da52-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/46a4378f835dc8040c8057beb6a2da52-Supplemental.zip
Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.
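The sketch below is a minimal PyTorch rendering of the iterative synaptic flow idea as described above: take absolute values of the weights, forward an all-ones input, score each weight by |theta * dR/dtheta|, prune the lowest scores, and repeat on an increasing sparsity schedule. It is a simplified, data-free sketch under those assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def synflow_prune(model, sparsity, iterations=100, input_shape=(1, 3, 32, 32)):
    """Data-free pruning sketch: score weights by synaptic flow |theta * dR/dtheta|
    computed on an all-ones input, prune the lowest scores, and repeat while the
    kept fraction shrinks exponentially toward (1 - sparsity)."""
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    signs = {n: p.detach().sign() for n, p in model.named_parameters()}

    # Work with |theta| so the flow objective R is positive for ReLU layers.
    for _, p in model.named_parameters():
        p.data.abs_()

    for it in range(1, iterations + 1):
        model.zero_grad()
        for n, p in model.named_parameters():
            if n in masks:
                p.data.mul_(masks[n])                          # apply current mask
        R = model(torch.ones(input_shape)).sum()               # total synaptic flow
        R.backward()

        scores = {n: (p.grad * p.data).abs().flatten()
                  for n, p in model.named_parameters() if n in masks}
        all_scores = torch.cat(list(scores.values()))
        keep = int(all_scores.numel() * (1 - sparsity) ** (it / iterations))
        threshold = torch.topk(all_scores, max(keep, 1)).values.min()
        for n in masks:
            masks[n] = (scores[n].view_as(masks[n]) >= threshold).float()

    # Restore original signs and apply the final masks.
    for n, p in model.named_parameters():
        p.data.mul_(signs[n])
        if n in masks:
            p.data.mul_(masks[n])
    return masks

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(16 * 32 * 32, 10))
masks = synflow_prune(model, sparsity=0.9)
```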
Detecting Interactions from Neural Networks via Topological Analysis
https://papers.nips.cc/paper_files/paper/2020/hash/473803f0f2ebd77d83ee60daaa61f381-Abstract.html
Zirui Liu, Qingquan Song, Kaixiong Zhou, Ting-Hsiang Wang, Ying Shan, Xia Hu
https://papers.nips.cc/paper_files/paper/2020/hash/473803f0f2ebd77d83ee60daaa61f381-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/473803f0f2ebd77d83ee60daaa61f381-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10260-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/473803f0f2ebd77d83ee60daaa61f381-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/473803f0f2ebd77d83ee60daaa61f381-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/473803f0f2ebd77d83ee60daaa61f381-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/473803f0f2ebd77d83ee60daaa61f381-Supplemental.pdf
Detecting statistical interactions between input features is a crucial and challenging task. Recent advances demonstrate that it is possible to extract learned interactions from trained neural networks. It has also been observed that, in neural networks, any interacting features must follow a strongly weighted connection to common hidden units. Motivated by the observation, in this paper, we propose to investigate the interaction detection problem from a novel topological perspective by analyzing the connectivity in neural networks. Specifically, we propose a new measure for quantifying interaction strength, based upon the well-received theory of persistent homology. Based on this measure, a Persistence Interaction Detection (PID) algorithm is developed to efficiently detect interactions. Our proposed algorithm is evaluated across a number of interaction detection tasks on several synthetic and real-world datasets with different hyperparameters. Experimental results validate that the PID algorithm outperforms the state-of-the-art baselines.
Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems
https://papers.nips.cc/paper_files/paper/2020/hash/475d66314dc56a0df8fb8f7c5dbbaf78-Abstract.html
Aman Sinha, Matthew O'Kelly, Russ Tedrake, John C. Duchi
https://papers.nips.cc/paper_files/paper/2020/hash/475d66314dc56a0df8fb8f7c5dbbaf78-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/475d66314dc56a0df8fb8f7c5dbbaf78-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10261-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/475d66314dc56a0df8fb8f7c5dbbaf78-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/475d66314dc56a0df8fb8f7c5dbbaf78-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/475d66314dc56a0df8fb8f7c5dbbaf78-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/475d66314dc56a0df8fb8f7c5dbbaf78-Supplemental.pdf
Learning-based methodologies increasingly find applications in safety-critical domains like autonomous driving and medical robotics. Due to the rare nature of dangerous events, real-world testing is prohibitively expensive and unscalable. In this work, we employ a probabilistic approach to safety evaluation in simulation, where we are concerned with computing the probability of dangerous events. We develop a novel rare-event simulation method that combines exploration, exploitation, and optimization techniques to find failure modes and estimate their rate of occurrence. We provide rigorous guarantees for the performance of our method in terms of both statistical and computational efficiency. Finally, we demonstrate the efficacy of our approach on a variety of scenarios, illustrating its usefulness as a tool for rapid sensitivity analysis and model comparison that are essential to developing and testing safety-critical autonomous systems.
Interpretable and Personalized Apprenticeship Scheduling: Learning Interpretable Scheduling Policies from Heterogeneous User Demonstrations
https://papers.nips.cc/paper_files/paper/2020/hash/477bdb55b231264bb53a7942fd84254d-Abstract.html
Rohan Paleja, Andrew Silva, Letian Chen, Matthew Gombolay
https://papers.nips.cc/paper_files/paper/2020/hash/477bdb55b231264bb53a7942fd84254d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/477bdb55b231264bb53a7942fd84254d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10262-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/477bdb55b231264bb53a7942fd84254d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/477bdb55b231264bb53a7942fd84254d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/477bdb55b231264bb53a7942fd84254d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/477bdb55b231264bb53a7942fd84254d-Supplemental.zip
Resource scheduling and coordination is an NP-hard optimization problem requiring an efficient allocation of agents to a set of tasks with upper- and lower-bound temporal and resource constraints. Due to the large-scale and dynamic nature of resource coordination in hospitals and factories, human domain experts manually plan and adjust schedules on the fly. To perform this job, domain experts leverage heterogeneous strategies and rules-of-thumb honed over years of apprenticeship. What is critically needed is the ability to extract this domain knowledge in a heterogeneous and interpretable apprenticeship learning framework to scale beyond the power of a single human expert, a necessity in safety-critical domains. We propose a personalized and interpretable apprenticeship scheduling algorithm that infers an interpretable representation of all human task demonstrators by extracting decision-making criteria via an inferred, personalized embedding non-parametric in the number of demonstrator types. We achieve near-perfect LfD accuracy in synthetic domains and 88.22\% accuracy on a planning domain with real-world data, outperforming baselines. Finally, our user study showed our methodology produces more interpretable and easier-to-use models than neural networks ($p < 0.05$).
Task-Agnostic Online Reinforcement Learning with an Infinite Mixture of Gaussian Processes
https://papers.nips.cc/paper_files/paper/2020/hash/47951a40efc0d2f7da8ff1ecbfde80f4-Abstract.html
Mengdi Xu, Wenhao Ding, Jiacheng Zhu, ZUXIN LIU, Baiming Chen, Ding Zhao
https://papers.nips.cc/paper_files/paper/2020/hash/47951a40efc0d2f7da8ff1ecbfde80f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/47951a40efc0d2f7da8ff1ecbfde80f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10263-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/47951a40efc0d2f7da8ff1ecbfde80f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/47951a40efc0d2f7da8ff1ecbfde80f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/47951a40efc0d2f7da8ff1ecbfde80f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/47951a40efc0d2f7da8ff1ecbfde80f4-Supplemental.zip
Continuously learning to solve unseen tasks with limited experience has been extensively pursued in meta-learning and continual learning, but with restricted assumptions such as accessible task distributions, independently and identically distributed tasks, and clear task delineations. However, real-world physical tasks frequently violate these assumptions, resulting in performance degradation. This paper proposes a continual online model-based reinforcement learning approach that does not require pre-training to solve task-agnostic problems with unknown task boundaries. We maintain a mixture of experts to handle nonstationarity, and represent each different type of dynamics with a Gaussian Process to efficiently leverage collected data and expressively model uncertainty. We propose a transition prior to account for the temporal dependencies in streaming data and update the mixture online via sequential variational inference. Our approach reliably handles the task distribution shift by generating new models for never-before-seen dynamics and reusing old models for previously seen dynamics. In experiments, our approach outperforms alternative methods in non-stationary tasks, including classic control with changing dynamics and decision making in different driving scenarios.
Benchmarking Deep Learning Interpretability in Time Series Predictions
https://papers.nips.cc/paper_files/paper/2020/hash/47a3893cc405396a5c30d91320572d6d-Abstract.html
Aya Abdelsalam Ismail, Mohamed Gunady, Hector Corrada Bravo, Soheil Feizi
https://papers.nips.cc/paper_files/paper/2020/hash/47a3893cc405396a5c30d91320572d6d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/47a3893cc405396a5c30d91320572d6d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10264-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/47a3893cc405396a5c30d91320572d6d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/47a3893cc405396a5c30d91320572d6d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/47a3893cc405396a5c30d91320572d6d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/47a3893cc405396a5c30d91320572d6d-Supplemental.pdf
Saliency methods are used extensively to highlight the importance of input features in model predictions. These methods are mostly used in vision and language tasks, and their application to time series data is relatively unexplored. In this paper, we set out to extensively compare the performance of various saliency-based interpretability methods across diverse neural architectures, including Recurrent Neural Networks, Temporal Convolutional Networks, and Transformers in a new benchmark of synthetic time series data. We propose and report multiple metrics to empirically evaluate the performance of saliency methods for detecting feature importance over time using both precision (i.e., whether identified features contain meaningful signals) and recall (i.e., the number of features with signal identified as important). Through several experiments, we show that (i) in general, network architectures and saliency methods fail to reliably and accurately identify feature importance over time in time series data, (ii) this failure is mainly due to the conflation of time and feature domains, and (iii) the quality of saliency maps can be improved substantially by using our proposed two-step temporal saliency rescaling (TSR) approach that first calculates the importance of each time step before calculating the importance of each feature at a time step.
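A minimal sketch of the two-step rescaling described above, written around a generic saliency_fn supplied by the caller: first score each time step by how much the saliency map changes when that step is masked, then rescale per-feature saliency by the time-step importance. The zero masking baseline and the absolute-sum aggregation are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def temporal_saliency_rescaling(x, saliency_fn, baseline=0.0):
    """Two-step rescaling sketch for an input x of shape (T, F):
    (1) time-step importance = change in the saliency map when that step is masked,
    (2) per-feature saliency at each step scaled by its time-step importance.
    saliency_fn maps a (T, F) input to a (T, F) saliency map."""
    base_map = saliency_fn(x)
    T, _ = x.shape
    time_importance = np.zeros(T)
    for t in range(T):
        x_masked = x.copy()
        x_masked[t, :] = baseline                      # mask the whole time step
        time_importance[t] = np.abs(base_map - saliency_fn(x_masked)).sum()
    return base_map * time_importance[:, None]         # feature x time importance

# Usage with a hypothetical gradient-based saliency for some model `f`:
#   tsr_map = temporal_saliency_rescaling(x, lambda z: np.abs(grad_of_f(z)))
```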
Federated Principal Component Analysis
https://papers.nips.cc/paper_files/paper/2020/hash/47a658229eb2368a99f1d032c8848542-Abstract.html
Andreas Grammenos, Rodrigo Mendoza Smith, Jon Crowcroft, Cecilia Mascolo
https://papers.nips.cc/paper_files/paper/2020/hash/47a658229eb2368a99f1d032c8848542-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/47a658229eb2368a99f1d032c8848542-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10265-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/47a658229eb2368a99f1d032c8848542-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/47a658229eb2368a99f1d032c8848542-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/47a658229eb2368a99f1d032c8848542-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/47a658229eb2368a99f1d032c8848542-Supplemental.pdf
We present a federated, asynchronous, and $(\varepsilon, \delta)$-differentially private algorithm for PCA in the memory-limited setting. Our algorithm incrementally computes local model updates using a streaming procedure and adaptively estimates its $r$ leading principal components when only $\mathcal{O}(dr)$ memory is available with $d$ being the dimensionality of the data. We guarantee differential privacy via an input-perturbation scheme in which the covariance matrix of a dataset $\mathbf{X} \in \mathbb{R}^{d \times n}$ is perturbed with a non-symmetric random Gaussian matrix with variance in $\mathcal{O}\left(\left(\frac{d}{n}\right)^2 \log d \right)$, thus improving upon the state-of-the-art. Furthermore, contrary to previous federated or distributed algorithms for PCA, our algorithm is also invariant to permutations in the incoming data, which provides robustness against straggler or failed nodes. Numerical simulations show that, while using limited memory, our algorithm exhibits performance that closely matches or outperforms traditional non-federated algorithms, and in the absence of communication latency, it exhibits attractive horizontal scalability.
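The sketch below illustrates only the input-perturbation idea mentioned above: perturb the empirical covariance with a Gaussian matrix whose scale follows the stated $(d/n)^2 \log d$ variance, then take the leading eigenvectors. It is not the federated, streaming, $\mathcal{O}(dr)$-memory algorithm itself, and symmetrizing before the eigendecomposition is a simplification for the example.

```python
import numpy as np

def dp_pca_input_perturbation(X, r, sigma, seed=0):
    """Input-perturbation PCA sketch: add Gaussian noise to the empirical
    covariance, then return its r leading eigenvectors.

    X: (d, n) data matrix; sigma: noise scale derived from the privacy budget."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    C = (X @ X.T) / n
    N = rng.normal(scale=sigma, size=(d, d))          # non-symmetric Gaussian perturbation
    C_priv = C + N
    # Eigendecompose the symmetrized perturbed covariance and keep the top-r directions.
    w, V = np.linalg.eigh((C_priv + C_priv.T) / 2)
    return V[:, np.argsort(w)[::-1][:r]]

d, n, r = 50, 2000, 5
X = np.random.default_rng(1).normal(size=(d, n))
U = dp_pca_input_perturbation(X, r, sigma=(d / n) * np.sqrt(np.log(d)))
print(U.shape)   # (50, 5)
```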
(De)Randomized Smoothing for Certifiable Defense against Patch Attacks
https://papers.nips.cc/paper_files/paper/2020/hash/47ce0875420b2dbacfc5535f94e68433-Abstract.html
Alexander Levine, Soheil Feizi
https://papers.nips.cc/paper_files/paper/2020/hash/47ce0875420b2dbacfc5535f94e68433-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/47ce0875420b2dbacfc5535f94e68433-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10266-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/47ce0875420b2dbacfc5535f94e68433-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/47ce0875420b2dbacfc5535f94e68433-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/47ce0875420b2dbacfc5535f94e68433-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/47ce0875420b2dbacfc5535f94e68433-Supplemental.zip
Patch adversarial attacks on images, in which the attacker can distort pixels within a region of bounded size, are an important threat model since they provide a quantitative model for physical adversarial attacks. In this paper, we introduce a certifiable defense against patch attacks that guarantees that, for a given image and patch attack size, no patch adversarial examples exist. Our method is related to the broad class of randomized smoothing robustness schemes which provide high-confidence probabilistic robustness certificates. By exploiting the fact that patch attacks are more constrained than general sparse attacks, we derive meaningfully large robustness certificates against them. Additionally, in contrast to smoothing-based defenses against L_p and sparse attacks, our defense method against patch attacks is de-randomized, yielding improved, deterministic certificates. Compared to the existing patch certification method proposed by (Chiang et al., 2020), which relies on interval bound propagation, our method can be trained significantly faster, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at ImageNet scale. For example, for a 5-by-5 patch attack on CIFAR-10, our method achieves up to around 57.6% certified accuracy (with a classifier with around 83.8% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy). Our results effectively establish a new state-of-the-art of certifiable defense against patch attacks on CIFAR-10 and ImageNet.
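A hedged sketch of the structured-ablation (column smoothing) idea behind de-randomized certificates: classify the image once per column band, count votes, and certify when the margin exceeds twice the number of bands a patch can touch. The wrapping bands, the zero ablation value, and the simplified tie handling are assumptions for the example; base_classifier stands for a hypothetical model trained on band-ablated inputs.

```python
import numpy as np

def column_smoothing_certify(base_classifier, image, band_width, patch_size, n_classes):
    """Structured-ablation sketch: classify the image once for every column band
    of width `band_width` (all other columns zeroed), count class votes, and
    certify robustness to any patch_size x patch_size patch if the top class
    leads the runner-up by more than 2 * (patch_size + band_width - 1) votes,
    the maximum number of bands a single patch can intersect."""
    H, W = image.shape[:2]
    votes = np.zeros(n_classes, dtype=int)
    for start in range(W):                              # one band per starting column (wrapping)
        cols = (start + np.arange(band_width)) % W
        ablated = np.zeros_like(image)
        ablated[:, cols] = image[:, cols]
        votes[base_classifier(ablated)] += 1

    order = np.argsort(votes)[::-1]
    top, runner_up = order[0], order[1]
    margin = votes[top] - votes[runner_up]
    certified = margin > 2 * (patch_size + band_width - 1)
    return top, bool(certified)

# Usage with a hypothetical `base_classifier(img) -> class id`:
#   pred, ok = column_smoothing_certify(clf, img, band_width=4, patch_size=5, n_classes=10)
```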
SMYRF - Efficient Attention using Asymmetric Clustering
https://papers.nips.cc/paper_files/paper/2020/hash/47d40767c7e9df50249ebfd9c7cfff77-Abstract.html
Giannis Daras, Nikita Kitaev, Augustus Odena, Alexandros G. Dimakis
https://papers.nips.cc/paper_files/paper/2020/hash/47d40767c7e9df50249ebfd9c7cfff77-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/47d40767c7e9df50249ebfd9c7cfff77-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10267-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/47d40767c7e9df50249ebfd9c7cfff77-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/47d40767c7e9df50249ebfd9c7cfff77-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/47d40767c7e9df50249ebfd9c7cfff77-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/47d40767c7e9df50249ebfd9c7cfff77-Supplemental.zip
We propose a novel type of balanced clustering algorithm to approximate attention. Attention complexity is reduced from $O(N^2)$ to $O(N \log N)$, where N is the sequence length. Our algorithm, SMYRF, uses Locality Sensitive Hashing (LSH) in a novel way by defining new Asymmetric transformations and an adaptive scheme that produces balanced clusters. The biggest advantage of SMYRF is that it can be used as a drop-in replacement for dense attention layers without any retraining. In contrast, prior fast attention methods impose constraints (e.g. tight queries and keys) and require re-training from scratch. We apply our method to pre-trained state-of-the-art Natural Language Processing and Computer Vision models and we report significant memory and speed benefits. Notably, SMYRF-BERT outperforms (slightly) BERT on GLUE, while using 50% less memory. We also show that SMYRF can be used interchangeably with dense attention before and after training. Finally, we use SMYRF to train GANs with attention in high resolutions. Using a single TPU, we train BigGAN on Celeba-HQ, with attention at resolution 128x128 and 256x256, capable of generating realistic human faces.
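The sketch below is a crude stand-in for LSH-clustered attention: it scores queries and keys with a shared random projection, sorts them, splits them into equal-size clusters, and attends only within each cluster, avoiding the dense $O(N^2)$ computation. It deliberately omits SMYRF's asymmetric transformations and adaptive hashing, and assumes the sequence length divides evenly into clusters.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def clustered_attention(Q, K, V, n_clusters, seed=0):
    """Simplified balanced-clustering attention: sort queries and keys by a
    shared random hyperplane score, chunk into equal-size clusters, and run
    dense attention only within each cluster."""
    N, d = Q.shape
    rng = np.random.default_rng(seed)
    h = rng.normal(size=d)                    # one shared LSH-style projection
    q_order = np.argsort(Q @ h)
    k_order = np.argsort(K @ h)
    out = np.zeros_like(V)
    size = N // n_clusters
    for c in range(n_clusters):
        qi = q_order[c * size:(c + 1) * size]
        ki = k_order[c * size:(c + 1) * size]
        attn = softmax(Q[qi] @ K[ki].T / np.sqrt(d))
        out[qi] = attn @ V[ki]
    return out

N, d = 1024, 64
Q, K, V = np.random.default_rng(1).normal(size=(3, N, d))
print(clustered_attention(Q, K, V, n_clusters=16).shape)   # (1024, 64)
```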
Introducing Routing Uncertainty in Capsule Networks
https://papers.nips.cc/paper_files/paper/2020/hash/47fd3c87f42f55d4b233417d49c34783-Abstract.html
Fabio De Sousa Ribeiro, Georgios Leontidis, Stefanos Kollias
https://papers.nips.cc/paper_files/paper/2020/hash/47fd3c87f42f55d4b233417d49c34783-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/47fd3c87f42f55d4b233417d49c34783-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10268-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/47fd3c87f42f55d4b233417d49c34783-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/47fd3c87f42f55d4b233417d49c34783-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/47fd3c87f42f55d4b233417d49c34783-Review.html
null
Rather than performing inefficient local iterative routing between adjacent capsule layers, we propose an alternative global view based on representing the inherent uncertainty in part-object assignment. In our formulation, the local routing iterations are replaced with variational inference of part-object connections in a probabilistic capsule network, leading to a significant speedup without sacrificing performance. In this way, global context is also considered when routing capsules by introducing global latent variables that have direct influence on the objective function, and are updated discriminatively in accordance with the minimum description length (MDL) principle. We focus on enhancing capsule network properties, and perform a thorough evaluation on pose-aware tasks, observing improvements in performance over previous approaches whilst being more computationally efficient.
A Simple and Efficient Smoothing Method for Faster Optimization and Local Exploration
https://papers.nips.cc/paper_files/paper/2020/hash/481d462e46c2ab976294271a175b8929-Abstract.html
Kevin Scaman, Ludovic DOS SANTOS, Merwan Barlier, Igor Colin
https://papers.nips.cc/paper_files/paper/2020/hash/481d462e46c2ab976294271a175b8929-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/481d462e46c2ab976294271a175b8929-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10269-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/481d462e46c2ab976294271a175b8929-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/481d462e46c2ab976294271a175b8929-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/481d462e46c2ab976294271a175b8929-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/481d462e46c2ab976294271a175b8929-Supplemental.pdf
This work proposes a novel smoothing method, called Bend, Mix and Release (BMR), that extends two well-known smooth approximations of the convex optimization literature: randomized smoothing and the Moreau envelope. The BMR smoothing method allows one to trade off between the computational simplicity of randomized smoothing (RS) and the approximation efficiency of the Moreau envelope (ME). More specifically, we show that BMR achieves up to a $\sqrt{d}$ multiplicative improvement compared to the approximation error of RS, where $d$ is the dimension of the search space, while being less computationally intensive than the ME. For non-convex objectives, BMR also has the desirable property of widening local minima, allowing optimization methods to reach small cracks and crevices of extremely irregular and non-convex functions, while being well-suited to a distributed setting. This novel smoothing method is then used to improve first-order non-smooth optimization (both convex and non-convex) by allowing for a local exploration of the search space. More specifically, our analysis sheds light on the similarities between evolution strategies and BMR, creating a link between exploration strategies of zeroth-order methods and the regularity of first-order optimization problems. Finally, we demonstrate the impact of BMR through synthetic experiments.
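For reference, the snippet below implements plain randomized smoothing, one of the two baselines that BMR interpolates between: the smoothed objective is a Gaussian average of the original one, and its gradient can be estimated from function values only. The control-variate estimator and the $\ell_1$ test objective are illustrative choices, not part of BMR.

```python
import numpy as np

def smoothed_value_and_grad(f, x, sigma=0.1, n_samples=128, rng=None):
    """Randomized smoothing sketch: f_sigma(x) = E_u[f(x + sigma*u)], u ~ N(0, I).
    Gradient estimate via the Gaussian-smoothing identity
    grad f_sigma(x) = E_u[(f(x + sigma*u) - f(x)) * u] / sigma (zeroth-order)."""
    rng = rng or np.random.default_rng(0)
    u = rng.normal(size=(n_samples, x.size))
    vals = np.array([f(x + sigma * ui) for ui in u])
    grad = ((vals - f(x))[:, None] * u).mean(axis=0) / sigma
    return vals.mean(), grad

# Smoothing the non-smooth l1 norm and running a few first-order steps on it.
f = lambda z: np.abs(z).sum()
x = np.array([2.0, -1.5, 0.5])
rng = np.random.default_rng(1)
for _ in range(200):
    _, g = smoothed_value_and_grad(f, x, sigma=0.1, rng=rng)
    x -= 0.05 * g
print(x)   # close to the minimizer at 0
```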
Hyperparameter Ensembles for Robustness and Uncertainty Quantification
https://papers.nips.cc/paper_files/paper/2020/hash/481fbfa59da2581098e841b7afc122f1-Abstract.html
Florian Wenzel, Jasper Snoek, Dustin Tran, Rodolphe Jenatton
https://papers.nips.cc/paper_files/paper/2020/hash/481fbfa59da2581098e841b7afc122f1-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/481fbfa59da2581098e841b7afc122f1-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10270-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/481fbfa59da2581098e841b7afc122f1-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/481fbfa59da2581098e841b7afc122f1-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/481fbfa59da2581098e841b7afc122f1-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/481fbfa59da2581098e841b7afc122f1-Supplemental.pdf
Ensembles over neural network weights trained from different random initialization, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter efficient. In this paper, we design ensembles not only over weights, but over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than typical ensembles. On image classification tasks, with MLP, LeNet, ResNet 20 and Wide ResNet 28-10 architectures, we improve upon both deep and batch ensembles.
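A minimal sketch of the hyper-deep ensemble recipe described above, using scikit-learn MLPs: sample a few hyperparameter configurations at random, train each from several random initializations, and average the member probabilities. The greedy member-selection step of the full procedure is omitted, and the hyperparameter ranges are arbitrary assumptions for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_hparams, n_seeds = 4, 3                      # random search size x random inits

members = []
for _ in range(n_hparams):
    # One randomly sampled hyperparameter configuration ...
    hp = dict(alpha=10 ** rng.uniform(-5, -1),
              learning_rate_init=10 ** rng.uniform(-4, -2))
    for seed in range(n_seeds):
        # ... trained from several random initializations (weight diversity).
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                            random_state=seed, **hp).fit(X_tr, y_tr)
        members.append(clf)

# Hyper-deep ensemble prediction: average member probabilities.
probs = np.mean([m.predict_proba(X_te) for m in members], axis=0)
print(f"ensemble accuracy: {(probs.argmax(1) == y_te).mean():.3f}")
```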
Neutralizing Self-Selection Bias in Sampling for Sortition
https://papers.nips.cc/paper_files/paper/2020/hash/48237d9f2dea8c74c2a72126cf63d933-Abstract.html
Bailey Flanigan, Paul Gölz, Anupam Gupta, Ariel D. Procaccia
https://papers.nips.cc/paper_files/paper/2020/hash/48237d9f2dea8c74c2a72126cf63d933-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/48237d9f2dea8c74c2a72126cf63d933-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10271-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/48237d9f2dea8c74c2a72126cf63d933-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/48237d9f2dea8c74c2a72126cf63d933-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/48237d9f2dea8c74c2a72126cf63d933-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/48237d9f2dea8c74c2a72126cf63d933-Supplemental.pdf
Sortition is a political system in which decisions are made by panels of randomly selected citizens. The process for selecting a sortition panel is traditionally thought of as uniform sampling without replacement, which has strong fairness properties. In practice, however, sampling without replacement is not possible since only a fraction of agents is willing to participate in a panel when invited, and different demographic groups participate at different rates. In order to still produce panels whose composition resembles that of the population, we develop a sampling algorithm that restores close-to-equal representation probabilities for all agents while satisfying meaningful demographic quotas. As part of its input, our algorithm requires probabilities indicating how likely each volunteer in the pool was to participate. Since these participation probabilities are not directly observable, we show how to learn them, and demonstrate our approach using data on a real sortition panel combined with information on the general population in the form of publicly available survey data.
On the Convergence of Smooth Regularized Approximate Value Iteration Schemes
https://papers.nips.cc/paper_files/paper/2020/hash/483101a6bc4e6c46a86222eb65fbcb6a-Abstract.html
Elena Smirnova, Elvis Dohmatob
https://papers.nips.cc/paper_files/paper/2020/hash/483101a6bc4e6c46a86222eb65fbcb6a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/483101a6bc4e6c46a86222eb65fbcb6a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10272-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/483101a6bc4e6c46a86222eb65fbcb6a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/483101a6bc4e6c46a86222eb65fbcb6a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/483101a6bc4e6c46a86222eb65fbcb6a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/483101a6bc4e6c46a86222eb65fbcb6a-Supplemental.pdf
Entropy regularization, smoothing of Q-values, and neural network function approximators are key components of the state-of-the-art reinforcement learning (RL) algorithms, such as Soft Actor-Critic~\cite{haarnoja2018soft}. Despite their widespread use, the impact of these core techniques on the convergence of RL algorithms is not yet fully understood. In this work, we analyse these techniques from an error propagation perspective using the approximate dynamic programming framework. In particular, our analysis shows that (1) value smoothing results in increased stability of the algorithm in exchange for slower convergence, and (2) entropy regularization reduces overestimation errors at the cost of modifying the original problem; (3) we further study a combination of these techniques that describes the Soft Actor-Critic algorithm.
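The tabular sketch below isolates the two ingredients analysed above: entropy regularization replaces the max over actions with a temperature-$\tau$ log-sum-exp, and value smoothing mixes each new backup with the previous iterate. It is a toy value-iteration loop on a hand-made two-state MDP, not the function-approximation setting studied in the paper.

```python
import numpy as np

def smooth_regularized_value_iteration(P, R, gamma=0.9, tau=0.1, beta=0.5, iters=500):
    """Tabular sketch: entropy regularization turns the max over actions into a
    log-sum-exp at temperature tau; value smoothing mixes the new backup with
    the previous Q-values using weight beta.

    P: (S, A, S) transition probabilities, R: (S, A) expected rewards."""
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        m = Q.max(axis=1)
        V = m + tau * np.log(np.exp((Q - m[:, None]) / tau).sum(axis=1))  # soft value
        Q_new = R + gamma * (P @ V)                                       # Bellman backup
        Q = beta * Q_new + (1 - beta) * Q                                 # Q-value smoothing
    return Q

# A tiny two-state, two-action MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.05, 0.95]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
print(smooth_regularized_value_iteration(P, R))
```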
Off-Policy Evaluation via the Regularized Lagrangian
https://papers.nips.cc/paper_files/paper/2020/hash/488e4104520c6aab692863cc1dba45af-Abstract.html
Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, Dale Schuurmans
https://papers.nips.cc/paper_files/paper/2020/hash/488e4104520c6aab692863cc1dba45af-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/488e4104520c6aab692863cc1dba45af-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10273-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/488e4104520c6aab692863cc1dba45af-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/488e4104520c6aab692863cc1dba45af-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/488e4104520c6aab692863cc1dba45af-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/488e4104520c6aab692863cc1dba45af-Supplemental.pdf
The recently proposed distribution correction estimation (DICE) family of estimators has advanced the state of the art in off-policy evaluation from behavior-agnostic data. While these estimators all perform some form of stationary distribution correction, they arise from different derivations and objective functions. In this paper, we unify these estimators as regularized Lagrangians of the same linear program. The unification allows us to expand the space of DICE estimators to new alternatives that demonstrate improved performance. More importantly, by analyzing the expanded space of estimators both mathematically and empirically we find that dual solutions offer greater flexibility in navigating the tradeoff between optimization stability and estimation bias, and generally provide superior estimates in practice.
The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/48db71587df6c7c442e5b76cc723169a-Abstract.html
Harm Van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar
https://papers.nips.cc/paper_files/paper/2020/hash/48db71587df6c7c442e5b76cc723169a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/48db71587df6c7c442e5b76cc723169a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10274-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/48db71587df6c7c442e5b76cc723169a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/48db71587df6c7c442e5b76cc723169a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/48db71587df6c7c442e5b76cc723169a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/48db71587df6c7c442e5b76cc723169a-Supplemental.pdf
Deep model-based Reinforcement Learning (RL) has the potential to substantially improve the sample-efficiency of deep RL. While various challenges have long held it back, a number of papers have recently come out reporting success with deep model-based methods. This is a great development, but the lack of a consistent metric to evaluate such methods makes it difficult to compare various approaches. For example, the common single-task sample-efficiency metric conflates improvements due to model-based learning with various other aspects, such as representation learning, making it difficult to assess true progress on model-based RL. To address this, we introduce an experimental setup to evaluate model-based behavior of RL methods, inspired by work from neuroscience on detecting model-based behavior in humans and animals. Our metric based on this setup, the Local Change Adaptation (LoCA) regret, measures how quickly an RL method adapts to a local change in the environment. Our metric can identify model-based behavior even if the method uses a poor representation, and provides insight into how close a method's behavior is to optimal model-based behavior. We use our setup to evaluate the model-based behavior of MuZero on a variation of the classic Mountain Car task.
Neural Power Units
https://papers.nips.cc/paper_files/paper/2020/hash/48e59000d7dfcf6c1d96ce4a603ed738-Abstract.html
Niklas Heim, Tomas Pevny, Vasek Smidl
https://papers.nips.cc/paper_files/paper/2020/hash/48e59000d7dfcf6c1d96ce4a603ed738-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10275-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/48e59000d7dfcf6c1d96ce4a603ed738-Supplemental.zip
Conventional Neural Networks can approximate simple arithmetic operations, but fail to generalize beyond the range of numbers that were seen during training. Neural Arithmetic Units aim to overcome this difficulty, but current arithmetic units are either limited to operate on positive numbers or can only represent a subset of arithmetic operations. We introduce the Neural Power Unit (NPU) that operates on the full domain of real numbers and is capable of learning arbitrary power functions in a single layer. The NPU thus fixes the shortcomings of existing arithmetic units and extends their expressivity. We achieve this by using complex arithmetic without requiring a conversion of the network to complex numbers. A simplification of the unit to the RealNPU yields a highly transparent model. We show that the NPUs outperform their competitors in terms of accuracy and sparsity on artificial arithmetic datasets, and that the RealNPU can discover the governing equations of a dynamical system only from data.
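For intuition, the mini-example below shows the positive-input special case that a power unit generalizes: computing $y_j = \prod_i x_i^{W_{ji}}$ in log space, so a single layer can represent arbitrary power functions of its inputs. The NPU's extension to negative inputs via complex arithmetic is not reproduced here; this is only the baseline behaviour it builds on.

```python
import numpy as np

def power_layer(x, W):
    """Positive-input power layer: y_j = prod_i x_i ** W[j, i], computed in log space.
    Only valid for x > 0; extending this to all reals is what a power unit adds."""
    return np.exp(W @ np.log(x))

W = np.array([[2.0, -1.0]])                      # represents y = x1^2 / x2
print(power_layer(np.array([3.0, 2.0]), W))      # [4.5]
```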
Towards Scalable Bayesian Learning of Causal DAGs
https://papers.nips.cc/paper_files/paper/2020/hash/48f7d3043bc03e6c48a6f0ebc0f258a8-Abstract.html
Jussi Viinikka, Antti Hyttinen, Johan Pensar, Mikko Koivisto
https://papers.nips.cc/paper_files/paper/2020/hash/48f7d3043bc03e6c48a6f0ebc0f258a8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/48f7d3043bc03e6c48a6f0ebc0f258a8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10276-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/48f7d3043bc03e6c48a6f0ebc0f258a8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/48f7d3043bc03e6c48a6f0ebc0f258a8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/48f7d3043bc03e6c48a6f0ebc0f258a8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/48f7d3043bc03e6c48a6f0ebc0f258a8-Supplemental.pdf
We give methods for Bayesian inference of directed acyclic graphs, DAGs, and the induced causal effects from passively observed complete data. Our methods build on a recent Markov chain Monte Carlo scheme for learning Bayesian networks, which enables efficient approximate sampling from the graph posterior, provided that each node is assigned a small number K of candidate parents. We present algorithmic techniques to significantly reduce the space and time requirements, which make the use of substantially larger values of K feasible. Furthermore, we investigate the problem of selecting the candidate parents per node so as to maximize the covered posterior mass. Finally, we combine our sampling method with a novel Bayesian approach for estimating causal effects in linear Gaussian DAG models. Numerical experiments demonstrate the performance of our methods in detecting ancestor–descendant relations, and in causal effect estimation our Bayesian method is shown to outperform previous approaches.
A Dictionary Approach to Domain-Invariant Learning in Deep Networks
https://papers.nips.cc/paper_files/paper/2020/hash/490640b43519c77281cb2f8471e61a71-Abstract.html
Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu
https://papers.nips.cc/paper_files/paper/2020/hash/490640b43519c77281cb2f8471e61a71-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/490640b43519c77281cb2f8471e61a71-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10277-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/490640b43519c77281cb2f8471e61a71-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/490640b43519c77281cb2f8471e61a71-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/490640b43519c77281cb2f8471e61a71-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/490640b43519c77281cb2f8471e61a71-Supplemental.pdf
In this paper, we consider domain-invariant deep learning by explicitly modeling domain shifts with only a small amount of domain-specific parameters in a Convolutional Neural Network (CNN). By exploiting the observation that a convolutional filter can be well approximated as a linear combination of a small set of dictionary atoms, we show for the first time, both empirically and theoretically, that domain shifts can be effectively handled by decomposing a convolutional layer into a domain-specific atom layer and a domain-shared coefficient layer, while both remain convolutional. An input channel will now first convolve spatially only with each respective domain-specific dictionary atom to ``absorb" domain variations, and then output channels are linearly combined using common decomposition coefficients trained to promote shared semantics across domains. We use toy examples, rigorous analysis, and real-world examples with diverse datasets and architectures, to show the proposed plug-in framework's effectiveness in cross and joint domain performance and domain adaptation. With the proposed architecture, we need only a small set of dictionary atoms to model each additional domain, which brings a negligible amount of additional parameters, typically a few hundred.
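The PyTorch sketch below mirrors the decomposition described above: each input channel is convolved with a small bank of domain-specific spatial atoms, and a shared 1x1 "coefficient" convolution mixes the atom responses into output channels. The atom count, initialization, and per-domain switch are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DictionaryConv2d(nn.Module):
    """Sketch of a dictionary-decomposed conv layer: per-domain spatial atoms
    applied to every input channel separately, followed by a domain-shared
    1x1 "coefficient" convolution that mixes the atom responses."""
    def __init__(self, in_ch, out_ch, k_atoms=6, kernel_size=3, n_domains=2):
        super().__init__()
        # Domain-specific atoms: n_domains banks of k spatial filters.
        self.atoms = nn.Parameter(
            torch.randn(n_domains, k_atoms, 1, kernel_size, kernel_size) * 0.1)
        # Shared coefficients: combine (in_ch * k_atoms) responses into out_ch maps.
        self.coeffs = nn.Conv2d(in_ch * k_atoms, out_ch, kernel_size=1)
        self.pad = kernel_size // 2

    def forward(self, x, domain=0):
        n, c, h, w = x.shape
        atoms = self.atoms[domain]                            # (k, 1, ks, ks)
        # Convolve every input channel with every atom (depthwise-style trick).
        resp = F.conv2d(x.reshape(n * c, 1, h, w), atoms, padding=self.pad)
        resp = resp.reshape(n, c * atoms.shape[0], h, w)
        return self.coeffs(resp)                              # shared linear combination

layer = DictionaryConv2d(in_ch=3, out_ch=16)
y = layer(torch.randn(2, 3, 32, 32), domain=1)
print(y.shape)   # torch.Size([2, 16, 32, 32])
```

Adapting such a layer to a new domain only requires a new bank of small spatial atoms, which is consistent with the few-hundred-parameter overhead mentioned above.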
Bootstrapping neural processes
https://papers.nips.cc/paper_files/paper/2020/hash/492114f6915a69aa3dd005aa4233ef51-Abstract.html
Juho Lee, Yoonho Lee, Jungtaek Kim, Eunho Yang, Sung Ju Hwang, Yee Whye Teh
https://papers.nips.cc/paper_files/paper/2020/hash/492114f6915a69aa3dd005aa4233ef51-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/492114f6915a69aa3dd005aa4233ef51-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10278-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/492114f6915a69aa3dd005aa4233ef51-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/492114f6915a69aa3dd005aa4233ef51-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/492114f6915a69aa3dd005aa4233ef51-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/492114f6915a69aa3dd005aa4233ef51-Supplemental.pdf
Unlike traditional statistical modeling, for which a user typically hand-specifies a prior, Neural Processes (NPs) implicitly define a broad class of stochastic processes with neural networks. Given a data stream, an NP learns a stochastic process that best describes the data. While this ``data-driven'' way of learning stochastic processes has proven to handle various types of data, NPs still rely on the assumption that uncertainty in stochastic processes is modeled by a single latent variable, which potentially limits the flexibility. To address this, we propose the Bootstrapping Neural Process (BNP), a novel extension of the NP family using the bootstrap. The bootstrap is a classical data-driven technique for estimating uncertainty, which allows BNP to learn the stochasticity in NPs without assuming a particular form. We demonstrate the efficacy of BNP on various types of data and its robustness in the presence of model-data mismatch.
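To ground the terminology, the snippet below shows the classical bootstrap that BNP builds on, applied to a regression context set: resample the context with replacement, refit a base learner on each resample, and read uncertainty off the spread of predictions. The polynomial base learner is a hypothetical stand-in for a neural process decoder.

```python
import numpy as np

def bootstrap_predict(x_ctx, y_ctx, x_tgt, fit_predict, n_boot=50, seed=0):
    """Bootstrap sketch: resample the context set with replacement, fit a model
    on each resample, and use the spread of predictions as uncertainty."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x_ctx), size=len(x_ctx))
        preds.append(fit_predict(x_ctx[idx], y_ctx[idx], x_tgt))
    preds = np.stack(preds)
    return preds.mean(0), preds.std(0)

# Usage with a simple least-squares polynomial fit as the base learner.
x_ctx = np.linspace(-1, 1, 20)
y_ctx = np.sin(3 * x_ctx) + 0.1 * np.random.default_rng(1).normal(size=20)
fit_predict = lambda xc, yc, xt: np.polyval(np.polyfit(xc, yc, deg=3), xt)
mean, std = bootstrap_predict(x_ctx, y_ctx, np.linspace(-1, 1, 50), fit_predict)
print(mean.shape, std.shape)   # (50,) (50,)
```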
Large-Scale Adversarial Training for Vision-and-Language Representation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/49562478de4c54fafd4ec46fdb297de5-Abstract.html
Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu
https://papers.nips.cc/paper_files/paper/2020/hash/49562478de4c54fafd4ec46fdb297de5-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/49562478de4c54fafd4ec46fdb297de5-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10279-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/49562478de4c54fafd4ec46fdb297de5-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/49562478de4c54fafd4ec46fdb297de5-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/49562478de4c54fafd4ec46fdb297de5-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/49562478de4c54fafd4ec46fdb297de5-Supplemental.pdf
We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the ``free'' adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.
Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations
https://papers.nips.cc/paper_files/paper/2020/hash/497476fe61816251905e8baafdf54c23-Abstract.html
Amit Daniely, Hadas Shacham
https://papers.nips.cc/paper_files/paper/2020/hash/497476fe61816251905e8baafdf54c23-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/497476fe61816251905e8baafdf54c23-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10280-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/497476fe61816251905e8baafdf54c23-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/497476fe61816251905e8baafdf54c23-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/497476fe61816251905e8baafdf54c23-Review.html
null
We consider ReLU networks with random weights, in which the dimension decreases at each layer. We show that for most such networks, most examples $x$ admit an adversarial perturbation at a Euclidean distance of $O\left(\frac{\|x\|}{\sqrt{d}}\right)$, where $d$ is the input dimension. Moreover, this perturbation can be found via gradient flow, as well as gradient descent with sufficiently small steps. This result can be seen as an explanation for the abundance of adversarial examples, and for the fact that they are found via gradient descent.
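A rough sketch of the setting described above, under assumptions of our own (PyTorch; the layer widths, step size, and number of steps are arbitrary choices, not the authors' construction): a random ReLU network with decreasing widths, and gradient descent on the input searching for a small perturbation that pushes the output toward the opposite sign.

import torch

torch.manual_seed(0)
d = 1024                                   # input dimension
widths = [d, 512, 256, 1]                  # decreasing widths (illustrative)
weights = [torch.randn(m, n) / n**0.5 for n, m in zip(widths[:-1], widths[1:])]

def net(x):
    h = x
    for W in weights[:-1]:
        h = torch.relu(W @ h)
    return (weights[-1] @ h).squeeze()

x = torch.randn(d)
sign0 = torch.sign(net(x)).item()
x_adv = x.clone().requires_grad_(True)
for _ in range(100):                       # a few small gradient steps on the input
    out = net(x_adv)
    out.backward()
    with torch.no_grad():
        x_adv -= 0.01 * sign0 * x_adv.grad # descend on sign0 * output
        x_adv.grad.zero_()

# Compare the perturbation size to the ||x|| / sqrt(d) scale from the abstract.
print("perturbation / (||x||/sqrt(d)):",
      ((x_adv - x).norm() / (x.norm() / d**0.5)).item())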
Compositional Visual Generation with Energy Based Models
https://papers.nips.cc/paper_files/paper/2020/hash/49856ed476ad01fcff881d57e161d73f-Abstract.html
Yilun Du, Shuang Li, Igor Mordatch
https://papers.nips.cc/paper_files/paper/2020/hash/49856ed476ad01fcff881d57e161d73f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/49856ed476ad01fcff881d57e161d73f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10281-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/49856ed476ad01fcff881d57e161d73f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/49856ed476ad01fcff881d57e161d73f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/49856ed476ad01fcff881d57e161d73f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/49856ed476ad01fcff881d57e161d73f-Supplemental.zip
A vital aspect of human intelligence is the ability to compose increasingly complex concepts out of simpler ideas, enabling both rapid learning and adaptation of knowledge. In this paper we show that energy-based models can exhibit this ability by directly combining probability distributions. Samples from the combined distribution correspond to compositions of concepts. For example, given a distribution for smiling faces, and another for male faces, we can combine them to generate smiling male faces. This allows us to generate natural images that simultaneously satisfy conjunctions, disjunctions, and negations of concepts. We evaluate compositional generation abilities of our model on the CelebA dataset of natural faces and synthetic 3D scene images. We also demonstrate other unique advantages of our model, such as the ability to continually learn and incorporate new concepts, or infer compositions of concept properties underlying an image.
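A toy one-dimensional illustration of the composition rule summarized above (plain NumPy, with made-up energy functions): a conjunction of concepts corresponds to summing their energies, i.e. multiplying the underlying distributions.

import numpy as np

x = np.linspace(-5, 5, 1001)

# Two hypothetical concept energies.
E1 = (x + 1.0) ** 2            # energy of concept A (prefers x near -1)
E2 = 0.5 * (x - 1.0) ** 2      # energy of concept B (prefers x near +1, flatter)

def density(E):
    p = np.exp(-E)
    return p / np.trapz(p, x)  # normalize on the grid

p1, p2 = density(E1), density(E2)
p_and = density(E1 + E2)       # conjunction: sum of energies = product of distributions

print("mode of A:", x[p1.argmax()], "mode of B:", x[p2.argmax()],
      "mode of A AND B:", x[p_and.argmax()])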
Factor Graph Grammars
https://papers.nips.cc/paper_files/paper/2020/hash/49ca03822497d26a3943d5084ed59130-Abstract.html
David Chiang, Darcey Riley
https://papers.nips.cc/paper_files/paper/2020/hash/49ca03822497d26a3943d5084ed59130-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/49ca03822497d26a3943d5084ed59130-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10282-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/49ca03822497d26a3943d5084ed59130-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/49ca03822497d26a3943d5084ed59130-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/49ca03822497d26a3943d5084ed59130-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/49ca03822497d26a3943d5084ed59130-Supplemental.pdf
We propose the use of hyperedge replacement graph grammars for factor graphs, or factor graph grammars (FGGs) for short. FGGs generate sets of factor graphs and can describe a more general class of models than plate notation, dynamic graphical models, case-factor diagrams, and sum-product networks can. Moreover, inference can be done on FGGs without enumerating all the generated factor graphs. For finite variable domains (but possibly infinite sets of graphs), a generalization of variable elimination to FGGs allows exact and tractable inference in many situations. For finite sets of graphs (but possibly infinite variable domains), an FGG can be converted to a single factor graph amenable to standard inference techniques.
Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
https://papers.nips.cc/paper_files/paper/2020/hash/49f85a9ed090b20c8bed85a5923c669f-Abstract.html
Nikolaos Karalias, Andreas Loukas
https://papers.nips.cc/paper_files/paper/2020/hash/49f85a9ed090b20c8bed85a5923c669f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/49f85a9ed090b20c8bed85a5923c669f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10283-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/49f85a9ed090b20c8bed85a5923c669f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/49f85a9ed090b20c8bed85a5923c669f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/49f85a9ed090b20c8bed85a5923c669f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/49f85a9ed090b20c8bed85a5923c669f-Supplemental.zip
Combinatorial optimization (CO) problems are notoriously challenging for neural networks, especially in the absence of labeled instances. This work proposes an unsupervised learning framework for CO problems on graphs that can provide integral solutions of certified quality. Inspired by Erdos' probabilistic method, we use a neural network to parametrize a probability distribution over sets. Crucially, we show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem. The probabilistic proof of existence is then derandomized to decode the desired solutions. We demonstrate the efficacy of this approach to obtain valid solutions to the maximum clique problem and to perform local graph clustering. Our method achieves competitive results on both real datasets and synthetic hard instances.
Autoregressive Score Matching
https://papers.nips.cc/paper_files/paper/2020/hash/4a4526b1ec301744aba9526d78fcb2a6-Abstract.html
Chenlin Meng, Lantao Yu, Yang Song, Jiaming Song, Stefano Ermon
https://papers.nips.cc/paper_files/paper/2020/hash/4a4526b1ec301744aba9526d78fcb2a6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4a4526b1ec301744aba9526d78fcb2a6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10284-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4a4526b1ec301744aba9526d78fcb2a6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4a4526b1ec301744aba9526d78fcb2a6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4a4526b1ec301744aba9526d78fcb2a6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4a4526b1ec301744aba9526d78fcb2a6-Supplemental.pdf
Autoregressive models use the chain rule to define a joint probability distribution as a product of conditionals. These conditionals need to be normalized, imposing constraints on the functional families that can be used. To increase flexibility, we propose autoregressive conditional score models (AR-CSM) where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores), which need not be normalized. To train AR-CSM, we introduce a new divergence between distributions named Composite Score Matching (CSM). For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training. Compared to previous score matching algorithms, our method is more scalable to high dimensional data and more stable to optimize. We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization
https://papers.nips.cc/paper_files/paper/2020/hash/4a46fbfca3f1465a27b210f4bdfe6ab3-Abstract.html
Michal Derezinski, Burak Bartan, Mert Pilanci, Michael W. Mahoney
https://papers.nips.cc/paper_files/paper/2020/hash/4a46fbfca3f1465a27b210f4bdfe6ab3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4a46fbfca3f1465a27b210f4bdfe6ab3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10285-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4a46fbfca3f1465a27b210f4bdfe6ab3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4a46fbfca3f1465a27b210f4bdfe6ab3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4a46fbfca3f1465a27b210f4bdfe6ab3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4a46fbfca3f1465a27b210f4bdfe6ab3-Supplemental.zip
In distributed second order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data. However, the local estimates on each machine are typically biased, relative to the full solution on all of the data, and this can limit the effectiveness of averaging. Here, we introduce a new technique for debiasing the local estimates, which leads to both theoretical and empirical improvements in the convergence rate of distributed second order methods. Our technique has two novel components: (1) modifying standard sketching techniques to obtain what we call a surrogate sketch; and (2) carefully scaling the global regularization parameter for local computations. Our surrogate sketches are based on determinantal point processes, a family of distributions for which the bias of an estimate of the inverse Hessian can be computed exactly. Based on this computation, we show that when the objective being minimized is $l_2$-regularized with parameter $\lambda$ and individual machines are each given a sketch of size $m$, then to eliminate the bias, local estimates should be computed using a shrunk regularization parameter given by $\lambda^\prime=\lambda(1-\frac{d_\lambda}{m})$, where $d_\lambda$ is the $\lambda$-effective dimension of the Hessian (or, for quadratic problems, the data matrix).
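The shrinkage rule quoted at the end of this abstract can be written out directly; a minimal sketch (plain NumPy; the data matrix, lambda, and sketch size m are placeholders) computing the lambda-effective dimension and the shrunk local regularization parameter:

import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 50
X = rng.normal(size=(n, d))
lam = 10.0                      # global ridge parameter (illustrative)
m = 200                         # sketch size given to each machine (illustrative)

A = X.T @ X                     # Hessian of the quadratic part of the ridge objective
d_lam = np.trace(A @ np.linalg.inv(A + lam * np.eye(d)))  # lambda-effective dimension
lam_local = lam * (1.0 - d_lam / m)                        # shrunk parameter for local estimates

print(f"d_lambda = {d_lam:.2f}, local lambda' = {lam_local:.2f} (global lambda = {lam})")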
Neural Controlled Differential Equations for Irregular Time Series
https://papers.nips.cc/paper_files/paper/2020/hash/4a5876b450b45371f6cfe5047ac8cd45-Abstract.html
Patrick Kidger, James Morrill, James Foster, Terry Lyons
https://papers.nips.cc/paper_files/paper/2020/hash/4a5876b450b45371f6cfe5047ac8cd45-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10286-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4a5876b450b45371f6cfe5047ac8cd45-Supplemental.pdf
Neural ordinary differential equations are an attractive option for modelling temporal dynamics. However, a fundamental issue is that the solution to an ordinary differential equation is determined by its initial condition, and there is no mechanism for adjusting the trajectory based on subsequent observations. Here, we demonstrate how this may be resolved through the well-understood mathematics of \emph{controlled differential equations}. The resulting \emph{neural controlled differential equation} model is directly applicable to the general setting of partially-observed irregularly-sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efficient adjoint-based backpropagation even across observations. We demonstrate that our model achieves state-of-the-art performance against similar (ODE or RNN based) models in empirical studies on a range of datasets. Finally we provide theoretical results demonstrating universal approximation, and that our model subsumes alternative ODE models.
On Efficiency in Hierarchical Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4a5cfa9281924139db466a8a19291aff-Abstract.html
Zheng Wen, Doina Precup, Morteza Ibrahimi, Andre Barreto, Benjamin Van Roy, Satinder Singh
https://papers.nips.cc/paper_files/paper/2020/hash/4a5cfa9281924139db466a8a19291aff-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4a5cfa9281924139db466a8a19291aff-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10287-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4a5cfa9281924139db466a8a19291aff-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4a5cfa9281924139db466a8a19291aff-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4a5cfa9281924139db466a8a19291aff-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4a5cfa9281924139db466a8a19291aff-Supplemental.pdf
Hierarchical Reinforcement Learning (HRL) approaches promise to provide more efficient solutions to sequential decision making problems, both in terms of statistical as well as computational efficiency. While this has been demonstrated empirically over time in a variety of tasks, theoretical results quantifying the benefits of such methods are still few and far between. In this paper, we discuss the kind of structure in a Markov decision process which gives rise to efficient HRL methods. Specifically, we formalize the intuition that HRL can exploit repeating "subMDPs" with similar reward and transition structure. We show that, under reasonable assumptions, a model-based Thompson sampling-style HRL algorithm that exploits this structure is statistically efficient, as established through a finite-time regret bound. We also establish conditions under which planning with structure-induced options is near-optimal and computationally efficient.
On Correctness of Automatic Differentiation for Non-Differentiable Functions
https://papers.nips.cc/paper_files/paper/2020/hash/4aaa76178f8567e05c8e8295c96171d8-Abstract.html
Wonyeol Lee, Hangyeol Yu, Xavier Rival, Hongseok Yang
https://papers.nips.cc/paper_files/paper/2020/hash/4aaa76178f8567e05c8e8295c96171d8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4aaa76178f8567e05c8e8295c96171d8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10288-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4aaa76178f8567e05c8e8295c96171d8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4aaa76178f8567e05c8e8295c96171d8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4aaa76178f8567e05c8e8295c96171d8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4aaa76178f8567e05c8e8295c96171d8-Supplemental.pdf
Differentiation lies at the core of many machine-learning algorithms, and is well-supported by popular autodiff systems, such as TensorFlow and PyTorch. These systems were originally developed to compute derivatives of differentiable functions, but in practice they are commonly applied to functions with non-differentiabilities. For instance, neural networks using ReLU define non-differentiable functions in general, but the gradients of losses involving those functions are computed using autodiff systems in practice. This status quo raises a natural question: are autodiff systems correct in any formal sense when they are applied to such non-differentiable functions? In this paper, we provide a positive answer to this question. Using counterexamples, we first point out flaws in often-used informal arguments, such as: non-differentiabilities arising in deep learning do not cause any issues because they form a measure-zero set. We then investigate a class of functions, called PAP functions, that includes nearly all (possibly non-differentiable) functions used in deep learning today. For these PAP functions, we propose a new type of derivatives, called intensional derivatives, and prove that these derivatives always exist and coincide with standard derivatives for almost all inputs. We also show that these intensional derivatives are essentially what most autodiff systems compute, or try to compute. In this way, we formally establish the correctness of autodiff systems applied to non-differentiable functions.
Probabilistic Linear Solvers for Machine Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4afd521d77158e02aed37e2274b90c9c-Abstract.html
Jonathan Wenger, Philipp Hennig
https://papers.nips.cc/paper_files/paper/2020/hash/4afd521d77158e02aed37e2274b90c9c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4afd521d77158e02aed37e2274b90c9c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10289-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4afd521d77158e02aed37e2274b90c9c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4afd521d77158e02aed37e2274b90c9c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4afd521d77158e02aed37e2274b90c9c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4afd521d77158e02aed37e2274b90c9c-Supplemental.pdf
Linear systems are the bedrock of virtually all numerical computation. Machine learning poses specific challenges for the solution of such systems due to their scale, characteristic structure, stochasticity and the central role of uncertainty in the field. Unifying earlier work we propose a class of probabilistic linear solvers which jointly infer the matrix, its inverse and the solution from matrix-vector product observations. This class emerges from a fundamental set of desiderata which constrains the space of possible algorithms and recovers the method of conjugate gradients under certain conditions. We demonstrate how to incorporate prior spectral information in order to calibrate uncertainty and experimentally showcase the potential of such solvers for machine learning.
Dynamic Regret of Policy Optimization in Non-Stationary Environments
https://papers.nips.cc/paper_files/paper/2020/hash/4b0091f82f50ff7095647fe893580d60-Abstract.html
Yingjie Fei, Zhuoran Yang, Zhaoran Wang, Qiaomin Xie
https://papers.nips.cc/paper_files/paper/2020/hash/4b0091f82f50ff7095647fe893580d60-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4b0091f82f50ff7095647fe893580d60-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10290-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4b0091f82f50ff7095647fe893580d60-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4b0091f82f50ff7095647fe893580d60-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4b0091f82f50ff7095647fe893580d60-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4b0091f82f50ff7095647fe893580d60-Supplemental.pdf
We consider reinforcement learning (RL) in episodic MDPs with adversarial full-information reward feedback and unknown fixed transition kernels. We propose two model-free policy optimization algorithms, POWER and POWER++, and establish guarantees for their dynamic regret. Compared with the classical notion of static regret, dynamic regret is a stronger notion as it explicitly accounts for the non-stationarity of environments. The dynamic regret attained by the proposed algorithms interpolates between different regimes of non-stationarity, and moreover satisfies a notion of adaptive (near-)optimality, in the sense that it matches the (near-)optimal static regret under slow-changing environments. The dynamic regret bound features two components, one arising from exploration, which deals with the uncertainty of transition kernels, and the other arising from adaptation, which deals with non-stationary environments. Specifically, we show that POWER++ improves over POWER on the second component of the dynamic regret by actively adapting to non-stationarity through prediction. To the best of our knowledge, our work is the first dynamic regret analysis of model-free RL algorithms in non-stationary environments.
Multipole Graph Neural Operator for Parametric Partial Differential Equations
https://papers.nips.cc/paper_files/paper/2020/hash/4b21cf96d4cf612f239a6c322b10c8fe-Abstract.html
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Andrew Stuart, Kaushik Bhattacharya, Anima Anandkumar
https://papers.nips.cc/paper_files/paper/2020/hash/4b21cf96d4cf612f239a6c322b10c8fe-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4b21cf96d4cf612f239a6c322b10c8fe-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10291-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4b21cf96d4cf612f239a6c322b10c8fe-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4b21cf96d4cf612f239a6c322b10c8fe-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4b21cf96d4cf612f239a6c322b10c8fe-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4b21cf96d4cf612f239a6c322b10c8fe-Supplemental.zip
One of the main challenges in using deep learning-based methods for simulating physical systems and solving partial differential equations (PDEs) is formulating physics-based data in the desired structure for neural networks. Graph neural networks (GNNs) have gained popularity in this area since graphs offer a natural way of modeling particle interactions and provide a clear way of discretizing the continuum models. However, the graphs constructed for approximating such tasks usually ignore long-range interactions due to unfavorable scaling of the computational complexity with respect to the number of nodes. The errors due to these approximations scale with the discretization of the system, thereby not allowing for generalization under mesh-refinement. Inspired by the classical multipole methods, we propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity. Our multi-level formulation is equivalent to recursively adding inducing points to the kernel matrix, unifying GNNs with multi-resolution matrix factorization of the kernel. Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images
https://papers.nips.cc/paper_files/paper/2020/hash/4b29fa4efe4fb7bc667c7b301b74d52d-Abstract.html
Thu H. Nguyen-Phuoc, Christian Richardt, Long Mai, Yongliang Yang, Niloy Mitra
https://papers.nips.cc/paper_files/paper/2020/hash/4b29fa4efe4fb7bc667c7b301b74d52d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4b29fa4efe4fb7bc667c7b301b74d52d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10292-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4b29fa4efe4fb7bc667c7b301b74d52d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4b29fa4efe4fb7bc667c7b301b74d52d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4b29fa4efe4fb7bc667c7b301b74d52d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4b29fa4efe4fb7bc667c7b301b74d52d-Supplemental.zip
We present BlockGAN, an image generative model that learns object-aware 3D scene representations directly from unlabelled 2D images. Current work on scene representation learning either ignores scene background or treats the whole scene as one object. Meanwhile, work that considers scene compositionality treats scene objects only as image patches or 2D layers with alpha maps. Inspired by the computer graphics pipeline, we design BlockGAN to learn to first generate 3D features of background and foreground objects, then combine them into 3D features for the whole scene, and finally render them into realistic images. This allows BlockGAN to reason over occlusion and interaction between objects’ appearance, such as shadow and lighting, and provides control over each object’s 3D pose and identity, while maintaining image realism. BlockGAN is trained end-to-end, using only unlabelled single images, without the need for 3D geometry, pose labels, object masks, or multiple views of the same scene. Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity).
Online Structured Meta-learning
https://papers.nips.cc/paper_files/paper/2020/hash/4b86ca48d90bd5f0978afa3a012503a4-Abstract.html
Huaxiu Yao, Yingbo Zhou, Mehrdad Mahdavi, Zhenhui (Jessie) Li, Richard Socher, Caiming Xiong
https://papers.nips.cc/paper_files/paper/2020/hash/4b86ca48d90bd5f0978afa3a012503a4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4b86ca48d90bd5f0978afa3a012503a4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10293-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4b86ca48d90bd5f0978afa3a012503a4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4b86ca48d90bd5f0978afa3a012503a4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4b86ca48d90bd5f0978afa3a012503a4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4b86ca48d90bd5f0978afa3a012503a4-Supplemental.pdf
Learning quickly is of great importance for machine intelligence deployed in online platforms. With the capability of transferring knowledge from learned tasks, meta-learning has shown its effectiveness in online scenarios by continuously updating the model with the learned prior. However, current online meta-learning algorithms are limited to learning a globally shared meta-learner, which may lead to sub-optimal results when the tasks contain heterogeneous information that is difficult to share. We overcome this limitation by proposing an online structured meta-learning (OSML) framework. Inspired by the knowledge organization of humans and hierarchical feature representations, OSML explicitly disentangles the meta-learner into a meta-hierarchical graph with different knowledge blocks. When a new task is encountered, it constructs a meta-knowledge pathway by either utilizing the most relevant knowledge blocks or exploring new blocks. Through the meta-knowledge pathway, the model is able to quickly adapt to the new task. In addition, new knowledge is further incorporated into the selected blocks. Experiments on three datasets empirically demonstrate the effectiveness and interpretability of our proposed framework, not only under heterogeneous tasks but also under homogeneous settings.
Learning Strategic Network Emergence Games
https://papers.nips.cc/paper_files/paper/2020/hash/4bb236de7787ceedafdff83bb8ea4710-Abstract.html
Rakshit Trivedi, Hongyuan Zha
https://papers.nips.cc/paper_files/paper/2020/hash/4bb236de7787ceedafdff83bb8ea4710-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4bb236de7787ceedafdff83bb8ea4710-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10294-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4bb236de7787ceedafdff83bb8ea4710-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4bb236de7787ceedafdff83bb8ea4710-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4bb236de7787ceedafdff83bb8ea4710-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4bb236de7787ceedafdff83bb8ea4710-Supplemental.pdf
Real-world networks, especially the ones that emerge due to actions of animate agents (e.g. humans, animals), are the result of underlying strategic mechanisms aimed at maximizing individual or collective benefits. Learning approaches built to capture these strategic insights would gain interpretability and flexibility benefits that are required to generalize beyond observations. To this end, we consider a game-theoretic formalism of network emergence that accounts for the underlying strategic mechanisms and take it to the observed data. We propose MINE (Multi-agent Inverse models of Network Emergence mechanism), a new learning framework that solves Markov-Perfect network emergence games using multi-agent inverse reinforcement learning. MINE jointly discovers agents' strategy profiles in the form of network emergence policy and the latent payoff mechanism in the form of learned reward function. In the experiments, we demonstrate that MINE learns versatile payoff mechanisms that: highly correlate with the ground truth for a synthetic case; can be used to analyze the observed network structure; and enable effective transfer in specific settings. Further, we show that the network emergence game as a learned model supports meaningful strategic predictions, thereby signifying its applicability to a variety of network analysis tasks.
Towards Interpretable Natural Language Understanding with Explanations as Latent Variables
https://papers.nips.cc/paper_files/paper/2020/hash/4be2c8f27b8a420492f2d44463933eb6-Abstract.html
Wangchunshu Zhou, Jinyi Hu, Hanlin Zhang, Xiaodan Liang, Maosong Sun, Chenyan Xiong, Jian Tang
https://papers.nips.cc/paper_files/paper/2020/hash/4be2c8f27b8a420492f2d44463933eb6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4be2c8f27b8a420492f2d44463933eb6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10295-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4be2c8f27b8a420492f2d44463933eb6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4be2c8f27b8a420492f2d44463933eb6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4be2c8f27b8a420492f2d44463933eb6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4be2c8f27b8a420492f2d44463933eb6-Supplemental.zip
Generating natural language explanations has recently shown very promising results, not only offering interpretable explanations but also providing additional information and supervision for prediction. However, existing approaches usually require a large set of human-annotated explanations for training, and collecting such explanations is not only time-consuming but also expensive. In this paper, we develop a general framework for interpretable natural language understanding that requires only a small set of human-annotated explanations for training. Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model. We develop a variational EM framework for optimization in which an explanation generation module and an explanation-augmented prediction module are alternately optimized and mutually enhance each other. Moreover, we propose an explanation-based self-training method under this framework for semi-supervised learning. It alternates between assigning pseudo-labels to unlabeled data and generating new explanations to iteratively improve each other. Experiments on two natural language understanding tasks demonstrate that our framework can not only make effective predictions in both supervised and semi-supervised settings, but also generate good natural language explanations.
The Mean-Squared Error of Double Q-Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4bfbd52f4e8466dc12aaf30b7e057b66-Abstract.html
Wentao Weng, Harsh Gupta, Niao He, Lei Ying, R. Srikant
https://papers.nips.cc/paper_files/paper/2020/hash/4bfbd52f4e8466dc12aaf30b7e057b66-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4bfbd52f4e8466dc12aaf30b7e057b66-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10296-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4bfbd52f4e8466dc12aaf30b7e057b66-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4bfbd52f4e8466dc12aaf30b7e057b66-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4bfbd52f4e8466dc12aaf30b7e057b66-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4bfbd52f4e8466dc12aaf30b7e057b66-Supplemental.pdf
In this paper, we establish a theoretical comparison between the asymptotic mean-square errors of double Q-learning and Q-learning. Our result builds upon an analysis for linear stochastic approximation based on Lyapunov equations and applies to both the tabular setting and linear function approximation, provided that the optimal policy is unique and the algorithms converge. We show that the asymptotic mean-square error of double Q-learning is exactly equal to that of Q-learning if double Q-learning uses twice the learning rate of Q-learning and outputs the average of its two estimators. We also present some practical implications of this theoretical observation using simulations.
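A minimal tabular sketch of the variant described above (plain NumPy; the transitions come from a made-up MDP and the step size is illustrative): double Q-learning keeps two estimators, updates one of them per step with twice the learning rate a plain Q-learning run would use, and reports their average.

import numpy as np

nS, nA, gamma = 5, 2, 0.9
QA = np.zeros((nS, nA))
QB = np.zeros((nS, nA))
rng = np.random.default_rng(0)

def double_q_update(s, a, r, s2, alpha):
    # Update one of the two tables at random; each table evaluates the other's
    # estimate at its own argmax action (the classic double Q-learning update).
    if rng.random() < 0.5:
        a_star = QA[s2].argmax()
        QA[s, a] += alpha * (r + gamma * QB[s2, a_star] - QA[s, a])
    else:
        b_star = QB[s2].argmax()
        QB[s, a] += alpha * (r + gamma * QA[s2, b_star] - QB[s, a])

# Per the result quoted above: twice the learning rate, then average the two tables.
alpha_q = 0.1                      # learning rate a plain Q-learning run would use
for _ in range(10000):             # placeholder transitions from a made-up MDP
    s, a = rng.integers(nS), rng.integers(nA)
    s2, r = rng.integers(nS), rng.normal()
    double_q_update(s, a, r, s2, alpha=2 * alpha_q)

Q_estimate = (QA + QB) / 2         # output: average of the two estimators
print(Q_estimate)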
What Makes for Good Views for Contrastive Learning?
https://papers.nips.cc/paper_files/paper/2020/hash/4c2e5eaae9152079b9e95845750bb9ab-Abstract.html
Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola
https://papers.nips.cc/paper_files/paper/2020/hash/4c2e5eaae9152079b9e95845750bb9ab-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4c2e5eaae9152079b9e95845750bb9ab-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10297-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4c2e5eaae9152079b9e95845750bb9ab-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4c2e5eaae9152079b9e95845750bb9ab-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4c2e5eaae9152079b9e95845750bb9ab-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4c2e5eaae9152079b9e95845750bb9ab-Supplemental.pdf
Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning. Despite its success, the influence of different view choices has been less studied. In this paper, we use theoretical and empirical analysis to better understand the importance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI. We also consider data augmentation as a way to reduce MI, and show that increasing data augmentation indeed leads to decreasing MI and improves downstream classification accuracy. As a by-product, we achieve a new state-of-the-art accuracy on unsupervised pre-training for ImageNet classification (73% top-1 linear readout with a ResNet-50).
Denoising Diffusion Probabilistic Models
https://papers.nips.cc/paper_files/paper/2020/hash/4c5bcfec8584af0d967f1ab10179ca4b-Abstract.html
Jonathan Ho, Ajay Jain, Pieter Abbeel
https://papers.nips.cc/paper_files/paper/2020/hash/4c5bcfec8584af0d967f1ab10179ca4b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10298-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Supplemental.pdf
We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
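A compact sketch of the training objective in its simplified noise-prediction form (PyTorch; the noise schedule, stand-in model, and data shapes are placeholders, not the authors' full setup): corrupt x_0 with the closed-form forward process and regress the injected noise.

import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule (illustrative)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product: "alpha-bar" at each timestep

def ddpm_loss(model, x0):
    t = torch.randint(0, T, (x0.shape[0],))                     # random timestep per sample
    a_bar = alphas_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps          # closed-form forward noising
    return ((model(x_t, t) - eps) ** 2).mean()                  # regress the injected noise

# Tiny stand-in "denoiser" just to show the call signature; a real model would be a U-Net.
model = lambda x_t, t: torch.zeros_like(x_t)
print(ddpm_loss(model, torch.randn(8, 3, 32, 32)))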
Barking up the right tree: an approach to search over molecule synthesis DAGs
https://papers.nips.cc/paper_files/paper/2020/hash/4cc05b35c2f937c5bd9e7d41d3686fff-Abstract.html
John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin Segler, José Miguel Hernández-Lobato
https://papers.nips.cc/paper_files/paper/2020/hash/4cc05b35c2f937c5bd9e7d41d3686fff-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4cc05b35c2f937c5bd9e7d41d3686fff-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10299-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4cc05b35c2f937c5bd9e7d41d3686fff-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4cc05b35c2f937c5bd9e7d41d3686fff-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4cc05b35c2f937c5bd9e7d41d3686fff-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4cc05b35c2f937c5bd9e7d41d3686fff-Supplemental.pdf
When designing new molecules with particular properties, it is not only important what to make but crucially how to make it. These instructions form a synthesis directed acyclic graph (DAG), describing how a large vocabulary of simple building blocks can be recursively combined through chemical reactions to create more complicated molecules of interest. In contrast, many current deep generative models for molecules ignore synthesizability. We therefore propose a deep generative model that better represents the real world process, by directly outputting molecule synthesis DAGs. We argue that this provides sensible inductive biases, ensuring that our model searches over the same chemical space that chemists would also have access to, as well as interpretability. We show that our approach is able to model chemical space well, producing a wide range of diverse molecules, and allows for unconstrained optimization of an inherently constrained problem: maximize certain chemical properties such that discovered molecules are synthesizable.
On Uniform Convergence and Low-Norm Interpolation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4cc5400e63624c44fadeda99f57588a6-Abstract.html
Lijia Zhou, Danica J. Sutherland, Nati Srebro
https://papers.nips.cc/paper_files/paper/2020/hash/4cc5400e63624c44fadeda99f57588a6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4cc5400e63624c44fadeda99f57588a6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10300-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4cc5400e63624c44fadeda99f57588a6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4cc5400e63624c44fadeda99f57588a6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4cc5400e63624c44fadeda99f57588a6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4cc5400e63624c44fadeda99f57588a6-Supplemental.pdf
We consider an underdetermined noisy linear regression model where the minimum-norm interpolating predictor is known to be consistent, and ask: can uniform convergence in a norm ball, or at least (following Nagarajan and Kolter) the subset of a norm ball that the algorithm selects on a typical input set, explain this success? We show that uniformly bounding the difference between empirical and population errors cannot show any learning in the norm ball, and cannot show consistency for any set, even one depending on the exact algorithm and distribution. But we argue we can explain the consistency of the minimal-norm interpolator with a slightly weaker, yet standard, notion: uniform convergence of zero-error predictors in a norm ball. We use this to bound the generalization error of low- (but not minimal-)norm interpolating predictors.
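For concreteness, the minimum-norm interpolating predictor discussed above can be computed directly in the underdetermined case; a small sketch (plain NumPy, with made-up data):

import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 500                            # underdetermined: fewer samples than features
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) * 0.1 + rng.normal(size=n)   # noisy linear responses (illustrative)

w_min_norm = np.linalg.pinv(X) @ y        # minimum l2-norm interpolator
print("train error:", np.linalg.norm(X @ w_min_norm - y))   # ~0: it interpolates the data
print("norm of interpolator:", np.linalg.norm(w_min_norm))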
Bandit Samplers for Training Graph Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/4cea2358d3cc5f8cd32397ca9bc51b94-Abstract.html
Ziqi Liu, Zhengwei Wu, Zhiqiang Zhang, Jun Zhou, Shuang Yang, Le Song, Yuan Qi
https://papers.nips.cc/paper_files/paper/2020/hash/4cea2358d3cc5f8cd32397ca9bc51b94-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4cea2358d3cc5f8cd32397ca9bc51b94-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10301-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4cea2358d3cc5f8cd32397ca9bc51b94-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4cea2358d3cc5f8cd32397ca9bc51b94-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4cea2358d3cc5f8cd32397ca9bc51b94-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4cea2358d3cc5f8cd32397ca9bc51b94-Supplemental.pdf
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs). However, due to the intractable computation of the optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT). The fundamental reason is that the embeddings of the neighbors or the learned weights involved in the optimal sampling distribution are \emph{changing} during training and \emph{not known a priori}, but only \emph{partially observed} when sampled, thus making the derivation of an optimal variance-reduced sampler non-trivial. In this paper, we formulate the optimization of the sampling variance as an adversarial bandit problem, where the rewards are related to the node embeddings and learned weights, and can vary constantly. Thus a good sampler needs to acquire variance information about more neighbors (exploration) while at the same time optimizing the immediate sampling variance (exploitation). We theoretically show that our algorithm asymptotically approaches the optimal variance within a factor of 3. We show the efficiency and effectiveness of our approach on multiple datasets.
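For background on the adversarial-bandit framing used above, here is a generic EXP3-style update (plain NumPy; the rewards and number of arms are placeholders, and this is not the paper's GNN-specific sampler):

import numpy as np

K, T, eta = 10, 1000, 0.05        # arms, rounds, learning rate (illustrative)
rng = np.random.default_rng(0)
log_w = np.zeros(K)               # log-weights over arms (e.g. candidate neighbors)

for t in range(T):
    p = np.exp(log_w - log_w.max())
    p /= p.sum()                                  # sampling distribution over arms
    arm = rng.choice(K, p=p)
    reward = rng.uniform()                        # placeholder reward in [0, 1]; may be adversarial
    log_w[arm] += eta * reward / p[arm]           # importance-weighted update for the pulled arm

print("final sampling distribution:", np.round(p, 3))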
Sampling from a k-DPP without looking at all items
https://papers.nips.cc/paper_files/paper/2020/hash/4d410063822cd9be28f86701c0bc3a31-Abstract.html
Daniele Calandriello, Michal Derezinski, Michal Valko
https://papers.nips.cc/paper_files/paper/2020/hash/4d410063822cd9be28f86701c0bc3a31-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4d410063822cd9be28f86701c0bc3a31-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10302-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4d410063822cd9be28f86701c0bc3a31-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4d410063822cd9be28f86701c0bc3a31-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4d410063822cd9be28f86701c0bc3a31-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4d410063822cd9be28f86701c0bc3a31-Supplemental.zip
Determinantal point processes (DPPs) are a useful probabilistic model for selecting a small diverse subset out of a large collection of items, with applications in summarization, recommendation, stochastic optimization, experimental design and more. Given a kernel function and a subset size k, our goal is to sample k out of n items with probability proportional to the determinant of the kernel matrix induced by the subset (a.k.a. k-DPP). Existing k-DPP sampling algorithms require an expensive preprocessing step which involves multiple passes over all n items, making it infeasible for large datasets. A naïve heuristic addressing this problem is to uniformly subsample a fraction of the data and perform k-DPP sampling only on those items; however, this method offers no guarantee that the produced sample will even approximately resemble the target distribution over the original dataset. In this paper, we develop alpha-DPP, an algorithm which adaptively builds a sufficiently large uniform sample of data that is then used to efficiently generate a smaller set of k items, while ensuring that this set is drawn exactly from the target distribution defined on all n items. We show empirically that our algorithm produces a k-DPP sample after observing only a small fraction of all elements, leading to several orders of magnitude faster performance compared to the state-of-the-art. Our implementation of alpha-DPP is provided at https://github.com/guilgautier/DPPy/.
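To make the target distribution concrete, here is a brute-force reference sampler for a k-DPP over a tiny ground set (plain NumPy; this only implements the definition P(S) proportional to det(L_S), is exponential in n, and is not the paper's alpha-DPP algorithm):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, k = 8, 3
Z = rng.normal(size=(n, 5))
L = Z @ Z.T + 1e-6 * np.eye(n)        # a PSD kernel matrix over the n items (made up)

subsets = list(combinations(range(n), k))
weights = np.array([np.linalg.det(L[np.ix_(S, S)]) for S in subsets])   # det of each k x k principal minor
probs = weights / weights.sum()       # P(S) proportional to det(L_S)

sample = subsets[rng.choice(len(subsets), p=probs)]
print("sampled diverse subset:", sample)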
Uncovering the Topology of Time-Varying fMRI Data using Cubical Persistence
https://papers.nips.cc/paper_files/paper/2020/hash/4d771504ddcd28037b4199740df767e6-Abstract.html
Bastian Rieck, Tristan Yates, Christian Bock, Karsten Borgwardt, Guy Wolf, Nicholas Turk-Browne, Smita Krishnaswamy
https://papers.nips.cc/paper_files/paper/2020/hash/4d771504ddcd28037b4199740df767e6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4d771504ddcd28037b4199740df767e6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10303-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4d771504ddcd28037b4199740df767e6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4d771504ddcd28037b4199740df767e6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4d771504ddcd28037b4199740df767e6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4d771504ddcd28037b4199740df767e6-Supplemental.pdf
Functional magnetic resonance imaging (fMRI) is a crucial technology for gaining insights into cognitive processes in humans. Data amassed from fMRI measurements result in volumetric data sets that vary over time. However, analysing such data presents a challenge due to the large degree of noise and person-to-person variation in how information is represented in the brain. To address this challenge, we present a novel topological approach that encodes each time point in an fMRI data set as a persistence diagram of topological features, i.e. high-dimensional voids present in the data. This representation naturally does not rely on voxel-by-voxel correspondence and is robust towards noise. We show that these time-varying persistence diagrams can be clustered to find meaningful groupings between participants, and that they are also useful in studying within-subject brain state trajectories of subjects performing a particular task. Here, we apply both clustering and trajectory analysis techniques to a group of participants watching the movie 'Partly Cloudy'. We observe significant differences in both brain state trajectories and overall topological activity between adults and children watching the same movie.
Hierarchical Poset Decoding for Compositional Generalization in Language
https://papers.nips.cc/paper_files/paper/2020/hash/4d7e0d72898ae7ea3593eb5ebf20c744-Abstract.html
Yinuo Guo, Zeqi Lin, Jian-Guang Lou, Dongmei Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/4d7e0d72898ae7ea3593eb5ebf20c744-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4d7e0d72898ae7ea3593eb5ebf20c744-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10304-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4d7e0d72898ae7ea3593eb5ebf20c744-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4d7e0d72898ae7ea3593eb5ebf20c744-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4d7e0d72898ae7ea3593eb5ebf20c744-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4d7e0d72898ae7ea3593eb5ebf20c744-Supplemental.zip
We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset). Current encoder-decoder architectures do not take the poset structure of semantics into account properly, thus suffering from poor compositional generalization ability. In this paper, we propose a novel hierarchical poset decoding paradigm for compositional generalization in language. Intuitively: (1) the proposed paradigm enforces partial permutation invariance in semantics, thus avoiding overfitting to bias ordering information; (2) the hierarchical mechanism allows the model to capture high-level structures of posets. We evaluate our proposed decoder on Compositional Freebase Questions (CFQ), a large and realistic natural language question answering dataset that is specifically designed to measure compositional generalization. Results show that it outperforms current decoders.
Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions
https://papers.nips.cc/paper_files/paper/2020/hash/4d95d05a4fc4eadbc3b9dde67afdca39-Abstract.html
Tom Yan, Christian Kroer, Alexander Peysakhovich
https://papers.nips.cc/paper_files/paper/2020/hash/4d95d05a4fc4eadbc3b9dde67afdca39-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4d95d05a4fc4eadbc3b9dde67afdca39-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10305-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4d95d05a4fc4eadbc3b9dde67afdca39-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4d95d05a4fc4eadbc3b9dde67afdca39-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4d95d05a4fc4eadbc3b9dde67afdca39-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4d95d05a4fc4eadbc3b9dde67afdca39-Supplemental.zip
Can we predict how well a team of individuals will perform together? How should individuals be rewarded for their contributions to the team performance? Cooperative game theory gives us a powerful set of tools for answering these questions: the characteristic function and solution concepts like the Shapley Value. There are two major difficulties in applying these techniques to real-world problems: first, the characteristic function is rarely given to us and needs to be learned from data; second, the Shapley Value is combinatorial in nature. We introduce a parametric model called cooperative game abstractions (CGAs) for estimating characteristic functions from data. CGAs are easy to learn, readily interpretable, and crucially allow linear-time computation of the Shapley Value. We provide identification results and sample complexity bounds for CGA models as well as error bounds in the estimation of the Shapley Value using CGAs. We apply our methods to study teams of artificial RL agents as well as real-world teams from professional sports.
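As a reference point for the solution concept mentioned above, here is a brute-force Shapley value computation for a small, hypothetical characteristic function (plain Python; enumerating all orderings is exactly the combinatorial cost that CGAs are meant to avoid):

from itertools import permutations

players = ["a", "b", "c"]

def value(coalition):
    # Hypothetical characteristic function: team performance of each coalition.
    v = {(): 0, ("a",): 1, ("b",): 2, ("c",): 0,
         ("a", "b"): 4, ("a", "c"): 2, ("b", "c"): 3, ("a", "b", "c"): 6}
    return v[tuple(sorted(coalition))]

shapley = {p: 0.0 for p in players}
orders = list(permutations(players))
for order in orders:
    coalition = []
    for p in order:
        marginal = value(coalition + [p]) - value(coalition)   # marginal contribution of p
        shapley[p] += marginal / len(orders)
        coalition.append(p)

print(shapley)   # average marginal contribution of each player over all orderings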
Exchangeable Neural ODE for Set Modeling
https://papers.nips.cc/paper_files/paper/2020/hash/4db73860ecb5533b5a6c710341d5bbec-Abstract.html
Yang Li, Haidong Yi, Christopher Bender, Siyuan Shan, Junier B. Oliva
https://papers.nips.cc/paper_files/paper/2020/hash/4db73860ecb5533b5a6c710341d5bbec-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4db73860ecb5533b5a6c710341d5bbec-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10306-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4db73860ecb5533b5a6c710341d5bbec-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4db73860ecb5533b5a6c710341d5bbec-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4db73860ecb5533b5a6c710341d5bbec-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4db73860ecb5533b5a6c710341d5bbec-Supplemental.zip
Reasoning over an instance composed of a set of vectors, like a point cloud, requires that one accounts for intra-set dependent features among elements. However, since such instances are unordered, the elements' features should remain unchanged when the input's order is permuted. This property, permutation equivariance, is a challenging constraint for most neural architectures. While recent work has proposed global pooling and attention-based solutions, these may be limited in the way that intradependencies are captured in practice. In this work we propose a more general formulation to achieve permutation equivariance through ordinary differential equations (ODE). Our proposed module, Exchangeable Neural ODE (ExNODE), can be seamlessly applied for both discriminative and generative tasks. We also extend set modeling in the temporal dimension and propose a VAE based model for temporal set modeling. Extensive experiments demonstrate the efficacy of our method over strong baselines.
Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Distributions
https://papers.nips.cc/paper_files/paper/2020/hash/4dbf29d90d5780cab50897fb955e4373-Abstract.html
Yi Hao, Alon Orlitsky
https://papers.nips.cc/paper_files/paper/2020/hash/4dbf29d90d5780cab50897fb955e4373-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4dbf29d90d5780cab50897fb955e4373-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10307-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4dbf29d90d5780cab50897fb955e4373-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4dbf29d90d5780cab50897fb955e4373-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4dbf29d90d5780cab50897fb955e4373-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4dbf29d90d5780cab50897fb955e4373-Supplemental.pdf
The profile of a sample is the multiset of its symbol frequencies. We show that for samples of discrete distributions, profile entropy is a fundamental measure unifying the concepts of estimation, inference, and compression. Specifically, profile entropy: a) determines the speed of estimating the distribution relative to the best natural estimator; b) characterizes the rate of inferring all symmetric properties compared with the best estimator over any label-invariant distribution collection; c) serves as the limit of profile compression, for which we derive optimal near-linear-time block and sequential algorithms. To further our understanding of profile entropy, we investigate its attributes, provide algorithms for approximating its value, and determine its magnitude for numerous structural distribution families.
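The profile in the first sentence has a direct computational reading; a minimal sketch (plain Python, with a made-up sample):

from collections import Counter

sample = ["a", "b", "a", "c", "a", "b", "d"]
freqs = Counter(sample)                              # symbol -> frequency
profile = sorted(Counter(freqs.values()).items())    # frequency -> how many symbols have it

print(freqs)     # Counter({'a': 3, 'b': 2, 'c': 1, 'd': 1})
print(profile)   # [(1, 2), (2, 1), (3, 1)]: the multiset of symbol frequencies, labels discarded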
CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection
https://papers.nips.cc/paper_files/paper/2020/hash/4dc3ed26a29c9c3df3ec373524377a5b-Abstract.html
Qijian Zhang, Runmin Cong, Junhui Hou, Chongyi Li, Yao Zhao
https://papers.nips.cc/paper_files/paper/2020/hash/4dc3ed26a29c9c3df3ec373524377a5b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4dc3ed26a29c9c3df3ec373524377a5b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10308-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4dc3ed26a29c9c3df3ec373524377a5b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4dc3ed26a29c9c3df3ec373524377a5b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4dc3ed26a29c9c3df3ec373524377a5b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4dc3ed26a29c9c3df3ec373524377a5b-Supplemental.pdf
Co-Salient Object Detection (CoSOD) aims at discovering salient objects that repeatedly appear in a given query group containing two or more relevant images. One challenging issue is how to effectively capture co-saliency cues by modeling and exploiting inter-image relationships. In this paper, we present an end-to-end collaborative aggregation-and-distribution network (CoADNet) to capture both salient and repetitive visual patterns from multiple images. First, we integrate saliency priors into the backbone features to suppress the redundant background information through an online intra-saliency guidance structure. After that, we design a two-stage aggregate-and-distribute architecture to explore group-wise semantic interactions and produce the co-saliency features. In the first stage, we propose a group-attentional semantic aggregation module that models inter-image relationships to generate the group-wise semantic representations. In the second stage, we propose a gated group distribution module that adaptively distributes the learned group semantics to different individuals in a dynamic gating mechanism. Finally, we develop a group consistency preserving decoder tailored for the CoSOD task, which maintains group constraints during feature decoding to predict more consistent full-resolution co-saliency maps. The proposed CoADNet is evaluated on four prevailing CoSOD benchmark datasets, which demonstrates the remarkable performance improvement over ten state-of-the-art competitors.
Regularized linear autoencoders recover the principal components, eventually
https://papers.nips.cc/paper_files/paper/2020/hash/4dd9cec1c21bc54eecb53786a2c5fa09-Abstract.html
Xuchan Bao, James Lucas, Sushant Sachdeva, Roger B. Grosse
https://papers.nips.cc/paper_files/paper/2020/hash/4dd9cec1c21bc54eecb53786a2c5fa09-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4dd9cec1c21bc54eecb53786a2c5fa09-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10309-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4dd9cec1c21bc54eecb53786a2c5fa09-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4dd9cec1c21bc54eecb53786a2c5fa09-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4dd9cec1c21bc54eecb53786a2c5fa09-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4dd9cec1c21bc54eecb53786a2c5fa09-Supplemental.pdf
Our understanding of learning input-output relationships with neural nets has improved rapidly in recent years, but little is known about the convergence of the underlying representations, even in the simple case of linear autoencoders (LAEs). We show that when trained with proper regularization, LAEs can directly learn the optimal representation -- ordered, axis-aligned principal components. We analyze two such regularization schemes: non-uniform L2 regularization and a deterministic variant of nested dropout [Rippel et al., ICML 2014]. Though both regularization schemes converge to the optimal representation, we show that this convergence is slow due to ill-conditioning that worsens with increasing latent dimension. We show that the inefficiency of learning the optimal representation is not inevitable -- we present a simple modification to the gradient descent update that greatly speeds up convergence empirically.
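As a rough illustration of one of the two schemes named in the abstract, here is a hedged NumPy sketch of a linear autoencoder trained by plain gradient descent with non-uniform L2 regularization (a distinct penalty per latent unit). The data, penalties, learning rate, and step count are arbitrary choices for the sketch, not the paper's; as the abstract notes, convergence of such training can be slow.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 3                                         # samples, input dim, latent dim
X = rng.standard_normal((n, d)) * np.linspace(3.0, 0.5, d)   # anisotropic toy data
X -= X.mean(axis=0)

W1 = 0.01 * rng.standard_normal((k, d))                      # encoder
W2 = 0.01 * rng.standard_normal((d, k))                      # decoder
lam = np.array([0.1, 0.2, 0.3])                              # non-uniform L2: one penalty per latent unit

lr = 1e-3
for _ in range(20_000):
    Z = X @ W1.T                                             # latent codes (n, k)
    R = X - Z @ W2.T                                         # reconstruction residual (n, d)
    # Gradients of (1/n)||X - W2 W1 X||_F^2 plus the per-unit L2 penalties.
    gW2 = -2.0 / n * R.T @ Z + 2.0 * W2 * lam
    gW1 = -2.0 / n * W2.T @ R.T @ X + 2.0 * lam[:, None] * W1
    W1 -= lr * gW1
    W2 -= lr * gW2

# With distinct penalties, the encoder rows tend to align (up to sign) with the
# leading principal directions, ordered by penalty strength.
U = np.linalg.svd(X, full_matrices=False)[2][:k]             # top principal directions
cosines = np.abs(U @ (W1 / np.linalg.norm(W1, axis=1, keepdims=True)).T)
print(cosines.round(2))
```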
Semi-Supervised Partial Label Learning via Confidence-Rated Margin Maximization
https://papers.nips.cc/paper_files/paper/2020/hash/4dea382d82666332fb564f2e711cbc71-Abstract.html
Wei Wang, Min-Ling Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/4dea382d82666332fb564f2e711cbc71-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4dea382d82666332fb564f2e711cbc71-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10310-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4dea382d82666332fb564f2e711cbc71-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4dea382d82666332fb564f2e711cbc71-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4dea382d82666332fb564f2e711cbc71-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4dea382d82666332fb564f2e711cbc71-Supplemental.pdf
Partial label learning assumes inaccurate supervision where each training example is associated with a set of candidate labels, among which only one is valid. In many real-world scenarios, however, it is costly and time-consuming to assign candidate label sets to all the training examples. To circumvent this difficulty, the problem of semi-supervised partial label learning is investigated in this paper, where unlabeled data is utilized to facilitate model induction along with partial label training examples. Specifically, label propagation is adopted to instantiate the labeling confidence of partial label examples. After that, a maximum margin formulation is introduced to jointly enable the induction of the predictive model and the estimation of labeling confidence over unlabeled data. The derived formulation enforces confidence-rated margin maximization and confidence manifold preservation over partial label examples and unlabeled data. We show that the predictive model and labeling confidence can be solved via alternating optimization, which admits QP solutions in each alternating step. Extensive experiments on synthetic as well as real-world data sets clearly validate the effectiveness of the proposed semi-supervised partial label learning approach.
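To make the label-propagation step concrete, here is a generic (textbook-style) propagation iteration in Python; it is a stand-in for the paper's instantiation of labeling confidences, with all matrices and constants purely illustrative.

```python
import numpy as np

def propagate_confidences(S, Y0, candidate_mask, alpha=0.9, iters=50):
    """Generic label propagation over a row-normalized similarity matrix S (n x n).

    Y0: initial labeling confidences (n x q), e.g. uniform over each partial-label
        example's candidate set, zero for unlabeled examples.
    candidate_mask: boolean (n x q); partial-label rows are restricted to their
        candidate sets, unlabeled rows get an all-True mask.
    """
    F = Y0.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y0                    # smooth over the graph
        F = np.where(candidate_mask, F, 0.0)                    # respect candidate sets
        F /= np.maximum(F.sum(axis=1, keepdims=True), 1e-12)    # renormalize rows
    return F
```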
GramGAN: Deep 3D Texture Synthesis From 2D Exemplars
https://papers.nips.cc/paper_files/paper/2020/hash/4df5bde009073d3ef60da64d736724d6-Abstract.html
Tiziano Portenier, Siavash Arjomand Bigdeli, Orcun Goksel
https://papers.nips.cc/paper_files/paper/2020/hash/4df5bde009073d3ef60da64d736724d6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4df5bde009073d3ef60da64d736724d6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10311-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4df5bde009073d3ef60da64d736724d6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4df5bde009073d3ef60da64d736724d6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4df5bde009073d3ef60da64d736724d6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4df5bde009073d3ef60da64d736724d6-Supplemental.pdf
We present a novel texture synthesis framework, enabling the generation of infinite, high-quality 3D textures given a 2D exemplar image. Inspired by recent advances in natural texture synthesis, we train deep neural models to generate textures by non-linearly combining learned noise frequencies. To achieve a highly realistic output conditioned on an exemplar patch, we propose a novel loss function that combines ideas from both style transfer and generative adversarial networks. In particular, we train the synthesis network to match the Gram matrices of deep features from a discriminator network. In addition, we propose two architectural concepts and an extrapolation strategy that significantly improve generalization performance. In particular, we inject both model input and condition into hidden network layers by learning to scale and bias hidden activations. Quantitative and qualitative evaluations on a diverse set of exemplars motivate our design decisions and show that our system outperforms the previous state of the art. Finally, we conduct a user study that confirms the benefits of our framework.
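A small Python sketch of the Gram-matrix feature loss mentioned above; in the paper the features come from a discriminator network, whereas here they are plain arrays, and the normalization is one common convention rather than necessarily the paper's.

```python
import numpy as np

def gram_matrix(feats):
    """Gram matrix of a feature map with shape (channels, height, width)."""
    c, h, w = feats.shape
    F = feats.reshape(c, h * w)
    return F @ F.T / (c * h * w)        # channel-by-channel feature correlations

def gram_loss(feats_fake, feats_real):
    """Style-transfer-like loss matching Gram matrices of deep features."""
    return float(np.mean((gram_matrix(feats_fake) - gram_matrix(feats_real)) ** 2))
```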
UWSOD: Toward Fully-Supervised-Level Capacity Weakly Supervised Object Detection
https://papers.nips.cc/paper_files/paper/2020/hash/4e0928de075538c593fbdabb0c5ef2c3-Abstract.html
Yunhang Shen, Rongrong Ji, Zhiwei Chen, Yongjian Wu, Feiyue Huang
https://papers.nips.cc/paper_files/paper/2020/hash/4e0928de075538c593fbdabb0c5ef2c3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4e0928de075538c593fbdabb0c5ef2c3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10312-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4e0928de075538c593fbdabb0c5ef2c3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4e0928de075538c593fbdabb0c5ef2c3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4e0928de075538c593fbdabb0c5ef2c3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4e0928de075538c593fbdabb0c5ef2c3-Supplemental.pdf
Weakly supervised object detection (WSOD) has attracted extensive research attention due to its great flexibility of exploiting large-scale datasets with only image-level annotations for detector training. Despite great advances in recent years, WSOD still suffers from limited performance, which is far below that of fully supervised object detection (FSOD). Most WSOD methods depend on object proposal algorithms to generate candidate regions and are also confronted with challenges such as low-quality predicted bounding boxes and large scale variation. In this paper, we propose a unified WSOD framework, termed UWSOD, to develop a high-capacity general detection model with only image-level labels, which is self-contained and does not require external modules or additional supervision. To this end, we exploit three important components, i.e., object proposal generation, bounding-box fine-tuning and scale-invariant features. First, we propose an anchor-based self-supervised proposal generator to hypothesize object locations, which is trained end-to-end with supervision created by UWSOD for both objectness classification and regression. Second, we develop a step-wise bounding-box fine-tuning procedure to refine both detection scores and coordinates by progressively selecting high-confidence object proposals as positive samples, which bootstraps the quality of predicted bounding boxes. Third, we construct a multi-rate resampling pyramid to aggregate multi-scale contextual information, which is the first in-network feature hierarchy to handle scale variation in WSOD. Extensive experiments on PASCAL VOC and MS COCO show that the proposed UWSOD achieves competitive results with state-of-the-art WSOD methods while not requiring external modules or additional supervision. Moreover, the upper-bound performance of UWSOD with class-agnostic ground-truth bounding boxes approaches that of Faster R-CNN, which demonstrates that UWSOD has fully-supervised-level capacity.
Learning Restricted Boltzmann Machines with Sparse Latent Variables
https://papers.nips.cc/paper_files/paper/2020/hash/4e668929edb3bf915e1a3a9d96c3c97e-Abstract.html
Guy Bresler, Rares-Darius Buhai
https://papers.nips.cc/paper_files/paper/2020/hash/4e668929edb3bf915e1a3a9d96c3c97e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4e668929edb3bf915e1a3a9d96c3c97e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10313-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4e668929edb3bf915e1a3a9d96c3c97e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4e668929edb3bf915e1a3a9d96c3c97e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4e668929edb3bf915e1a3a9d96c3c97e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4e668929edb3bf915e1a3a9d96c3c97e-Supplemental.pdf
Restricted Boltzmann Machines (RBMs) are a common family of undirected graphical models with latent variables. An RBM is described by a bipartite graph, with all observed variables in one layer and all latent variables in the other. We consider the task of learning an RBM given samples generated according to it. The best algorithms for this task currently have time complexity $\tilde{O}(n^2)$ for ferromagnetic RBMs (i.e., with attractive potentials) but $\tilde{O}(n^d)$ for general RBMs, where $n$ is the number of observed variables and $d$ is the maximum degree of a latent variable. Let the \textit{MRF neighborhood} of an observed variable be its neighborhood in the Markov Random Field of the marginal distribution of the observed variables. In this paper, we give an algorithm for learning general RBMs with time complexity $\tilde{O}(n^{2^s+1})$, where $s$ is the maximum number of latent variables connected to the MRF neighborhood of an observed variable. This is an improvement when $s < \log_2 (d-1)$, which corresponds to RBMs with sparse latent variables. Furthermore, we give a version of this learning algorithm that recovers a model with small prediction error and whose sample complexity is independent of the minimum potential in the Markov Random Field of the observed variables. This is of interest because the sample complexity of current algorithms scales with the inverse of the minimum potential, which cannot be controlled in terms of natural properties of the RBM.
Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction
https://papers.nips.cc/paper_files/paper/2020/hash/4eab60e55fe4c7dd567a0be28016bff3-Abstract.html
Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen
https://papers.nips.cc/paper_files/paper/2020/hash/4eab60e55fe4c7dd567a0be28016bff3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4eab60e55fe4c7dd567a0be28016bff3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10314-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4eab60e55fe4c7dd567a0be28016bff3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4eab60e55fe4c7dd567a0be28016bff3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4eab60e55fe4c7dd567a0be28016bff3-Review.html
null
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP), based on a single trajectory of Markovian samples induced by a behavior policy. Focusing on a $\gamma$-discounted MDP with state space S and action space A, we demonstrate that the $ \ell_{\infty} $-based sample complexity of classical asynchronous Q-learning --- namely, the number of samples needed to yield an entrywise $\epsilon$-accurate estimate of the Q-function --- is at most on the order of $ \frac{1}{ \mu_{\min}(1-\gamma)^5 \epsilon^2 }+ \frac{ t_{\mathsf{mix}} }{ \mu_{\min}(1-\gamma) } $ up to some logarithmic factor, provided that a proper constant learning rate is adopted. Here, $ t_{\mathsf{mix}} $ and $ \mu_{\min} $ denote respectively the mixing time and the minimum state-action occupancy probability of the sample trajectory. The first term of this bound matches the complexity in the case with independent samples drawn from the stationary distribution of the trajectory. The second term reflects the expense taken for the empirical distribution of the Markovian trajectory to reach a steady state, which is incurred at the very beginning and becomes amortized as the algorithm runs. Encouragingly, the above bound improves upon the state-of-the-art result by a factor of at least |S||A|. Further, the scaling on the discount complexity can be improved by means of variance reduction.
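For reference, the classical asynchronous Q-learning iteration that the bound above concerns, run along a single Markovian trajectory with a constant learning rate; the environment and policy interfaces are hypothetical placeholders.

```python
import numpy as np

def asynchronous_q_learning(env_step, behavior_policy, num_states, num_actions,
                            gamma=0.9, eta=0.1, T=100_000, s0=0):
    """Classical asynchronous Q-learning on one sample trajectory.

    env_step(s, a) -> (reward, next_state); behavior_policy(s) -> action.
    Only the visited (state, action) entry is updated at each step.
    """
    Q = np.zeros((num_states, num_actions))
    s = s0
    for _ in range(T):
        a = behavior_policy(s)
        r, s_next = env_step(s, a)
        target = r + gamma * Q[s_next].max()
        Q[s, a] = (1 - eta) * Q[s, a] + eta * target   # constant-stepsize update
        s = s_next
    return Q
```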
Curriculum learning for multilevel budgeted combinatorial problems
https://papers.nips.cc/paper_files/paper/2020/hash/4eb7d41ae6005f60fe401e56277ebd4e-Abstract.html
Adel Nabli, Margarida Carvalho
https://papers.nips.cc/paper_files/paper/2020/hash/4eb7d41ae6005f60fe401e56277ebd4e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4eb7d41ae6005f60fe401e56277ebd4e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10315-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4eb7d41ae6005f60fe401e56277ebd4e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4eb7d41ae6005f60fe401e56277ebd4e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4eb7d41ae6005f60fe401e56277ebd4e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4eb7d41ae6005f60fe401e56277ebd4e-Supplemental.pdf
Learning heuristics for combinatorial optimization problems through graph neural networks has recently shown promising results on some classic NP-hard problems. These are single-level optimization problems with only one player. Multilevel combinatorial optimization problems are their generalization, encompassing situations with multiple players taking decisions sequentially. By framing them in a multi-agent reinforcement learning setting, we devise a value-based method to learn to solve multilevel budgeted combinatorial problems involving two players in a zero-sum game over a graph. Our framework is based on a simple curriculum: if an agent knows how to estimate the value of instances with budgets up to $B$, then solving instances with budget $B+1$ can be done in polynomial time regardless of the direction of the optimization by checking the value of every possible afterstate. Thus, in a bottom-up approach, we generate datasets of heuristically solved instances with increasingly larger budgets to train our agent. We report results close to optimality on graphs with up to $100$ nodes and a $185 \times$ speedup on average compared to the quickest exact solver known for the Multilevel Critical Node problem, a max-min-max trilevel problem that has been shown to be at least $\Sigma_2^p$-hard.
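The bottom-up curriculum described above can be sketched as follows; every callable here (instance sampler, afterstate generator, value network, fitting routine) is a hypothetical placeholder, so this is an outline of the training loop rather than the paper's implementation.

```python
def train_with_budget_curriculum(instances, max_budget, value_net, afterstates,
                                 player_to_move, fit):
    """Sketch of the bottom-up curriculum for multilevel budgeted problems.

    value_net(g, b) estimates the value of instance g with budget b; for b = 0
    it is assumed to return the exact terminal objective. afterstates(g, b)
    yields the instances reachable after one decision (budget reduced by one).
    """
    dataset = []
    for b in range(1, max_budget + 1):
        for g in instances(budget=b):
            # Value of a budget-b instance = best value over its budget-(b-1)
            # afterstates; "best" is max or min depending on which player moves.
            children = [value_net(h, b - 1) for h in afterstates(g, b)]
            target = max(children) if player_to_move(g) == "max" else min(children)
            dataset.append((g, b, target))
        value_net = fit(value_net, dataset)   # retrain/fine-tune on the grown dataset
    return value_net
```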
FedSplit: an algorithmic framework for fast federated optimization
https://papers.nips.cc/paper_files/paper/2020/hash/4ebd440d99504722d80de606ea8507da-Abstract.html
Reese Pathak, Martin J. Wainwright
https://papers.nips.cc/paper_files/paper/2020/hash/4ebd440d99504722d80de606ea8507da-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4ebd440d99504722d80de606ea8507da-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10316-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4ebd440d99504722d80de606ea8507da-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4ebd440d99504722d80de606ea8507da-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4ebd440d99504722d80de606ea8507da-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4ebd440d99504722d80de606ea8507da-Supplemental.pdf
Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication. We first study some past procedures for federated optimization, and show that their fixed points need not correspond to stationary points of the original optimization problem, even in simple convex settings with deterministic updates. In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure. We prove that these procedures have the correct fixed points, corresponding to optima of the original optimization problem, and we characterize their convergence rates under different settings. Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities. We complement our theory with some experiments that demonstrate the benefits of our methods in practice.
Estimation and Imputation in Probabilistic Principal Component Analysis with Missing Not At Random Data
https://papers.nips.cc/paper_files/paper/2020/hash/4ecb679fd35dcfd0f0894c399590be1a-Abstract.html
Aude Sportisse, Claire Boyer, Julie Josse
https://papers.nips.cc/paper_files/paper/2020/hash/4ecb679fd35dcfd0f0894c399590be1a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4ecb679fd35dcfd0f0894c399590be1a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10317-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4ecb679fd35dcfd0f0894c399590be1a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4ecb679fd35dcfd0f0894c399590be1a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4ecb679fd35dcfd0f0894c399590be1a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4ecb679fd35dcfd0f0894c399590be1a-Supplemental.zip
Missing Not At Random (MNAR) values, where the probability of having missing data may depend on the missing value itself, are notoriously difficult to account for in analyses, although they are very frequent in practice. One solution to handle MNAR data is to specify a model for the missing data mechanism, which makes inference or imputation tasks more complex. Furthermore, this implies a strong \textit{a priori} on the parametric form of the distribution. However, some works have obtained guarantees on the estimation of parameters in the presence of MNAR data, without specifying the distribution of missing data \citep{mohan2018estimation, tang2003analysis}. This is very useful in practice, but is limited to simple cases such as few self-masked MNAR variables in data generated according to linear regression models. We continue this line of research, but extend it to a more general MNAR mechanism, within a more general model, probabilistic principal component analysis (PPCA), \textit{i.e.}, a low-rank model with random effects. We prove identifiability of the PPCA parameters. We then propose an estimation method for the loading coefficients and a data imputation method. They are based on estimators of the means, variances and covariances of the missing variables, for which consistency is discussed. These estimators have the great advantage of being computed using only the observed information, leveraging the underlying low-rank structure of the data. We illustrate the relevance of the method with numerical experiments on synthetic data and also on two datasets, one collected from a medical register and the other from a recommendation system.
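Purely as an illustration of the data model, a tiny NumPy simulation of PPCA data with a self-masked MNAR mechanism (missingness probability depending on the value itself); the dimensions, logistic masking rule, and parameters are made up, and the snippet does not implement the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 1000, 8, 2                          # samples, observed dim, latent rank

# PPCA generative model: x = W z + mu + eps, z ~ N(0, I_r), eps ~ N(0, sigma^2 I_d)
W = rng.standard_normal((d, r))
mu = rng.standard_normal(d)
sigma = 0.5
Z = rng.standard_normal((n, r))
X = Z @ W.T + mu + sigma * rng.standard_normal((n, d))

# Self-masked MNAR on a few variables: larger values are more likely to be missing,
# so the missingness depends on the (possibly unobserved) value itself.
X_obs = X.copy()
for j in [0, 1]:
    p_miss = 1.0 / (1.0 + np.exp(-(X[:, j] - X[:, j].mean())))   # logistic in the value
    X_obs[rng.random(n) < p_miss, j] = np.nan

print(np.isnan(X_obs).mean(axis=0))           # empirical missingness rate per column
```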
Correlation Robust Influence Maximization
https://papers.nips.cc/paper_files/paper/2020/hash/4ee78d4122ef8503fe01cdad3e9ea4ee-Abstract.html
Louis Chen, Divya Padmanabhan, Chee Chin Lim, Karthik Natarajan
https://papers.nips.cc/paper_files/paper/2020/hash/4ee78d4122ef8503fe01cdad3e9ea4ee-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4ee78d4122ef8503fe01cdad3e9ea4ee-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10318-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4ee78d4122ef8503fe01cdad3e9ea4ee-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4ee78d4122ef8503fe01cdad3e9ea4ee-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4ee78d4122ef8503fe01cdad3e9ea4ee-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4ee78d4122ef8503fe01cdad3e9ea4ee-Supplemental.pdf
We propose a distributionally robust model for the influence maximization problem. Unlike the classical independent cascade model of Kempe et al. (2003), this model's diffusion process is adversarially adapted to the choice of seed set. So instead of optimizing under the assumption that all influence relationships in the network are independent, we seek a seed set whose expected influence under the worst correlation, i.e., the "worst-case expected influence", is maximized. We show that this worst-case influence can be efficiently computed, and that, though the optimization problem is NP-hard, a (1 - 1/e) approximation guarantee holds. We also analyze the structure of the adversary's choice of diffusion process, and contrast it with established models. Beyond the key computational advantages, we also study the degree to which the independence assumption may be considered costly, and provide insights from numerical experiments comparing the adversarial and independent cascade models.
Neuronal Gaussian Process Regression
https://papers.nips.cc/paper_files/paper/2020/hash/4ef2f8259495563cb3a8ea4449ec4f9f-Abstract.html
Johannes Friedrich
https://papers.nips.cc/paper_files/paper/2020/hash/4ef2f8259495563cb3a8ea4449ec4f9f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4ef2f8259495563cb3a8ea4449ec4f9f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10319-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4ef2f8259495563cb3a8ea4449ec4f9f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4ef2f8259495563cb3a8ea4449ec4f9f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4ef2f8259495563cb3a8ea4449ec4f9f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4ef2f8259495563cb3a8ea4449ec4f9f-Supplemental.pdf
The brain takes uncertainty intrinsic to our world into account. For example, associating spatial locations with rewards requires predicting not only the expected reward at new spatial locations but also its uncertainty, so as to avoid catastrophic events and forage safely. A powerful and flexible framework for nonlinear regression that takes uncertainty into account in a principled Bayesian manner is Gaussian process (GP) regression. Here I propose that the brain implements GP regression and present neural networks (NNs) for it. First-layer neurons, e.g.\ hippocampal place cells, have tuning curves that correspond to evaluations of the GP kernel. Output neurons explicitly and distinctively encode the predictive mean and variance, as observed in orbitofrontal cortex (OFC) for the case of reward prediction. Because the weights of a NN implementing exact GP regression do not arise from biological plasticity rules, I present approximations to obtain local (anti-)Hebbian synaptic learning rules. The resulting neuronal network approximates the full GP well compared to popular sparse GP approximations and achieves comparable predictive performance.
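For context, the exact GP regression predictive mean and variance that the proposed network is said to approximate, in a short NumPy sketch with an RBF kernel; the kernel evaluations play the role of the first-layer tuning curves, and all hyperparameters are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0, sigma_f=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / length**2)

def gp_predict(X_train, y_train, X_test, noise=0.1):
    """Exact GP regression: predictive mean and variance at the test locations."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)          # "tuning curve" activations of test points
    Kss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```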
Nonconvex Sparse Graph Learning under Laplacian Constrained Graphical Model
https://papers.nips.cc/paper_files/paper/2020/hash/4ef42b32bccc9485b10b8183507e5d82-Abstract.html
Jiaxi Ying, José Vinícius de Miranda Cardoso , Daniel Palomar
https://papers.nips.cc/paper_files/paper/2020/hash/4ef42b32bccc9485b10b8183507e5d82-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4ef42b32bccc9485b10b8183507e5d82-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10320-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4ef42b32bccc9485b10b8183507e5d82-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4ef42b32bccc9485b10b8183507e5d82-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4ef42b32bccc9485b10b8183507e5d82-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4ef42b32bccc9485b10b8183507e5d82-Supplemental.pdf
In this paper, we consider the problem of learning a sparse graph from the Laplacian constrained Gaussian graphical model. This problem can be formulated as a penalized maximum likelihood estimation of the precision matrix under Laplacian structural constraints. Like in the classical graphical lasso problem, recent works made use of the $\ell_1$-norm with the goal of promoting sparsity in the Laplacian constrained precision matrix estimation. However, through empirical evidence, we observe that the $\ell_1$-norm is not effective in imposing a sparse solution in this problem. From a theoretical perspective, we prove that a large regularization parameter will surprisingly lead to a solution representing a fully connected graph instead of a sparse graph. To address this issue, we propose a nonconvex penalized maximum likelihood estimation method, and establish the order of the statistical error. Numerical experiments involving synthetic and real-world data sets demonstrate the effectiveness of the proposed method.
Synthetic Data Generators -- Sequential and Private
https://papers.nips.cc/paper_files/paper/2020/hash/4eff0720836a198b6174eecf02cbfdbf-Abstract.html
Olivier Bousquet, Roi Livni, Shay Moran
https://papers.nips.cc/paper_files/paper/2020/hash/4eff0720836a198b6174eecf02cbfdbf-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4eff0720836a198b6174eecf02cbfdbf-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10321-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4eff0720836a198b6174eecf02cbfdbf-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4eff0720836a198b6174eecf02cbfdbf-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4eff0720836a198b6174eecf02cbfdbf-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4eff0720836a198b6174eecf02cbfdbf-Supplemental.pdf
We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps a non-efficient one). A differentially private synthetic generator is an algorithm that receives IID data and publishes synthetic data that is indistinguishable from the true data w.r.t. a given fixed class of statistical queries. The synthetic data set can then be used by a data scientist without compromising the privacy of the original data set. Previous work on synthetic data generators focused on the case that the query class $\D$ is finite and obtained sample complexity bounds that scale logarithmically with the size $|\D|$. Here we construct a private synthetic data generator whose sample complexity is independent of the domain size, and we replace finiteness with the assumption that $\D$ is privately PAC learnable (a formally weaker task, hence we obtain equivalence between the two tasks). Our proof relies on a new type of synthetic data generator, Sequential Synthetic Data Generators, which we believe may be of interest in their own right. A sequential SDG is defined by a sequential game between a generator that proposes synthetic distributions and a discriminator that tries to distinguish between real and fake distributions. We characterize the classes that admit a sequential SDG and show that they are exactly the Littlestone classes. Given the online nature of the sequential setting, it is natural that Littlestone classes arise in this context. Nevertheless, the characterization of sequential SDGs by Littlestone classes turns out to be technically challenging and, to the best of the authors' knowledge, does not follow via simple reductions to online prediction.
Uncertainty Quantification for Inferring Hawkes Networks
https://papers.nips.cc/paper_files/paper/2020/hash/4f00921114932db3f8662a41b44ee68f-Abstract.html
Haoyun Wang, Liyan Xie, Alex Cuozzo, Simon Mak, Yao Xie
https://papers.nips.cc/paper_files/paper/2020/hash/4f00921114932db3f8662a41b44ee68f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4f00921114932db3f8662a41b44ee68f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10322-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4f00921114932db3f8662a41b44ee68f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4f00921114932db3f8662a41b44ee68f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4f00921114932db3f8662a41b44ee68f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4f00921114932db3f8662a41b44ee68f-Supplemental.pdf
Multivariate Hawkes processes are commonly used to model streaming networked event data in a wide variety of applications. However, it remains a challenge to extract reliable inference from complex datasets with uncertainty quantification. To this end, we develop a statistical inference framework to learn causal relationships between nodes from networked data, where the underlying directed graph implies Granger causality. We provide uncertainty quantification for the maximum likelihood estimate of the network multivariate Hawkes process by constructing a non-asymptotic confidence set. The main technique is based on concentration inequalities for continuous-time martingales. We compare our method to the previously derived asymptotic Hawkes process confidence interval, and demonstrate the strengths of our method in an application to neuronal connectivity reconstruction.
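For readers unfamiliar with the model class, a short Python sketch of the conditional intensity of a multivariate Hawkes process with exponential kernels, whose nonzero excitation entries encode the Granger-causal graph; this is one common parameterization, not necessarily the paper's exact one.

```python
import numpy as np

def hawkes_intensity(t, i, events, mu, alpha, beta):
    """Conditional intensity of node i at time t.

    events: list of (time, node) pairs observed before t.
    mu[i]: baseline rate; alpha[i, j]: excitation of node i by events on node j
    (nonzero entries encode the directed Granger-causal graph); beta: decay rate.
    """
    lam = mu[i]
    for s, j in events:
        if s < t:
            lam += alpha[i, j] * beta * np.exp(-beta * (t - s))
    return lam
```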
Implicit Distributional Reinforcement Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4f20f7f5d2e7a1b640ebc8244428558c-Abstract.html
Yuguang Yue, Zhendong Wang, Mingyuan Zhou
https://papers.nips.cc/paper_files/paper/2020/hash/4f20f7f5d2e7a1b640ebc8244428558c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4f20f7f5d2e7a1b640ebc8244428558c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10323-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4f20f7f5d2e7a1b640ebc8244428558c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4f20f7f5d2e7a1b640ebc8244428558c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4f20f7f5d2e7a1b640ebc8244428558c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4f20f7f5d2e7a1b640ebc8244428558c-Supplemental.pdf
To improve the sample efficiency of policy-gradient-based reinforcement learning algorithms, we propose the implicit distributional actor-critic (IDAC), which consists of a distributional critic, built on two deep generator networks (DGNs), and a semi-implicit actor (SIA), powered by a flexible policy distribution. We adopt a distributional perspective on the discounted cumulative return and model it with a state-action-dependent implicit distribution, which is approximated by the DGNs that take state-action pairs and random noise as input. Moreover, we use the SIA to provide a semi-implicit policy distribution, which mixes the policy parameters with a reparameterizable distribution that is not constrained by an analytic density function. In this way, the policy's marginal distribution is implicit, providing the potential to model complex properties such as covariance structure and skewness, but its parameters and entropy can still be estimated. We incorporate these features into an off-policy algorithm framework to solve problems with continuous action spaces and compare IDAC with state-of-the-art algorithms on representative OpenAI Gym environments. We observe that IDAC outperforms these baselines in most tasks. Python code is provided.
Auxiliary Task Reweighting for Minimum-data Learning
https://papers.nips.cc/paper_files/paper/2020/hash/4f87658ef0de194413056248a00ce009-Abstract.html
Baifeng Shi, Judy Hoffman, Kate Saenko, Trevor Darrell, Huijuan Xu
https://papers.nips.cc/paper_files/paper/2020/hash/4f87658ef0de194413056248a00ce009-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/4f87658ef0de194413056248a00ce009-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10324-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/4f87658ef0de194413056248a00ce009-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/4f87658ef0de194413056248a00ce009-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/4f87658ef0de194413056248a00ce009-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/4f87658ef0de194413056248a00ce009-Supplemental.pdf
Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce. To compensate for data scarcity, one possible method is to utilize auxiliary tasks to provide additional supervision for the main task. Assigning and optimizing the importance weights for different auxiliary tasks remains a crucial and largely understudied research question. In this work, we propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement on the main task. Specifically, we formulate the weighted likelihood function of the auxiliary tasks as a surrogate prior for the main task. By adjusting the auxiliary task weights to minimize the divergence between the surrogate prior and the true prior of the main task, we obtain a more accurate prior estimation, achieving the goal of minimizing the required amount of training data for the main task and avoiding a costly grid search. In multiple experimental settings (e.g. semi-supervised learning, multi-label classification), we demonstrate that our algorithm can effectively utilize limited labeled data of the main task with the benefit of auxiliary tasks compared with previous task reweighting methods. We also show that under extreme cases with only a few extra examples (e.g. few-shot domain adaptation), our algorithm results in a significant improvement over the baseline. Our code and video are available at https://sites.google.com/view/auxiliary-task-reweighting.
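As a loose illustration of task reweighting, the sketch below combines a main loss with weighted auxiliary losses and nudges the weights with a simple gradient-alignment heuristic; this heuristic is a stand-in, not the paper's rule, which instead minimizes a divergence between the surrogate prior and the true prior of the main task.

```python
import numpy as np

def reweighted_loss(main_loss, aux_losses, weights):
    """Training objective: main-task loss plus weighted auxiliary-task losses."""
    return main_loss + float(np.dot(weights, aux_losses))

def update_weights(weights, main_grad, aux_grads, step=0.01):
    """Toy heuristic: upweight auxiliary tasks whose gradients align with the
    main-task gradient (cosine similarity), then renormalize to a simplex."""
    sims = np.array([
        np.dot(g, main_grad) / (np.linalg.norm(g) * np.linalg.norm(main_grad) + 1e-12)
        for g in aux_grads
    ])
    weights = np.clip(weights + step * sims, 0.0, None)
    return weights / max(weights.sum(), 1e-12)
```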