title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract |
---|---|---|---|---|---|---|---|---|---|---|---|
Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models | https://papers.nips.cc/paper_files/paper/2020/hash/32e54441e6382a7fbacbbbaf3c450059-Abstract.html | Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, Tom Claassen | https://papers.nips.cc/paper_files/paper/2020/hash/32e54441e6382a7fbacbbbaf3c450059-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/32e54441e6382a7fbacbbbaf3c450059-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10125-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/32e54441e6382a7fbacbbbaf3c450059-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/32e54441e6382a7fbacbbbaf3c450059-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/32e54441e6382a7fbacbbbaf3c450059-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/32e54441e6382a7fbacbbbaf3c450059-Supplemental.zip | In this paper, we propose a novel framework for computing Shapley values that generalizes recent work that aims to circumvent the independence assumption. By employing Pearl's do-calculus, we show how these `causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties. Moreover, causal Shapley values enable us to separate the contribution of direct and indirect effects. We provide a practical implementation for computing causal Shapley values based on causal chain graphs when only partial information is available and illustrate their utility on a real-world example. |
On the training dynamics of deep networks with $L_2$ regularization | https://papers.nips.cc/paper_files/paper/2020/hash/32fcc8cfe1fa4c77b5c58dafd36d1a98-Abstract.html | Aitor Lewkowycz, Guy Gur-Ari | https://papers.nips.cc/paper_files/paper/2020/hash/32fcc8cfe1fa4c77b5c58dafd36d1a98-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/32fcc8cfe1fa4c77b5c58dafd36d1a98-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10126-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/32fcc8cfe1fa4c77b5c58dafd36d1a98-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/32fcc8cfe1fa4c77b5c58dafd36d1a98-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/32fcc8cfe1fa4c77b5c58dafd36d1a98-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/32fcc8cfe1fa4c77b5c58dafd36d1a98-Supplemental.pdf | We study the role of $L_2$ regularization in deep learning, and uncover simple relations between the performance of the model, the $L_2$ coefficient, the learning rate, and the number of training steps. These empirical relations hold when the network is overparameterized. They can be used to predict the optimal regularization parameter of a given model. In addition, based on these observations we propose a dynamical schedule for the regularization parameter that improves performance and speeds up training. We test these proposals in modern image classification settings. Finally, we show that these empirical relations can be understood theoretically in the context of infinitely wide networks. We derive the gradient flow dynamics of such networks, and compare the role of $L_2$ regularization in this context with that of linear models. |
Improved Algorithms for Convex-Concave Minimax Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/331316d4efb44682092a006307b9ae3a-Abstract.html | Yuanhao Wang, Jian Li | https://papers.nips.cc/paper_files/paper/2020/hash/331316d4efb44682092a006307b9ae3a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/331316d4efb44682092a006307b9ae3a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10127-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/331316d4efb44682092a006307b9ae3a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/331316d4efb44682092a006307b9ae3a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/331316d4efb44682092a006307b9ae3a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/331316d4efb44682092a006307b9ae3a-Supplemental.pdf | This paper studies minimax optimization problems $\min_x \max_y f(x,y)$, where $f(x,y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$, and $(L_x,L_{xy},L_y)$-smooth. Zhang et al. \cite{zhang2019lower} provided the following lower bound on the gradient complexity of any first-order method: $\Omega\Bigl(\sqrt{\frac{L_x}{m_x}+\frac{L_{xy}^2}{m_x m_y}+\frac{L_y}{m_y}}\ln(1/\epsilon)\Bigr)$. This paper proposes a new algorithm and proves a gradient complexity bound of $\tilde{O}\Bigl(\sqrt{\frac{L_x}{m_x}+\frac{L\cdot L_{xy}}{m_x m_y}+\frac{L_y}{m_y}}\ln(1/\epsilon)\Bigr)$, where $L=\max\{L_x,L_{xy},L_y\}$. This improves over the best known upper bound $\tilde{O}\Bigl(\sqrt{\frac{L^2}{m_x m_y}}\ln^3(1/\epsilon)\Bigr)$ by Lin et al. \cite{lin2020near}. Our bound achieves a linear convergence rate and a tighter dependency on the condition numbers, especially when $L_{xy}\ll L$ (i.e., the weak interaction regime). Via a simple reduction, our new bound also implies improved bounds for strongly convex-concave and convex-concave problems. When $f$ is quadratic, we can further improve the bound to $O\Bigl(\sqrt{\frac{L_x}{m_x}+\frac{L_{xy}^2}{m_x m_y}+\frac{L_y}{m_y}}\bigl(\frac{L^2}{m_x m_y}\bigr)^{o(1)}\ln(1/\epsilon)\Bigr)$, which matches the lower bound up to a sub-polynomial factor. |
Deep Variational Instance Segmentation | https://papers.nips.cc/paper_files/paper/2020/hash/3341f6f048384ec73a7ba2e77d2db48b-Abstract.html | Jialin Yuan, Chao Chen, Fuxin Li | https://papers.nips.cc/paper_files/paper/2020/hash/3341f6f048384ec73a7ba2e77d2db48b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3341f6f048384ec73a7ba2e77d2db48b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10128-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3341f6f048384ec73a7ba2e77d2db48b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3341f6f048384ec73a7ba2e77d2db48b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3341f6f048384ec73a7ba2e77d2db48b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3341f6f048384ec73a7ba2e77d2db48b-Supplemental.pdf | Instance segmentation, which seeks to obtain both class and instance labels for each pixel in the input image, is a challenging task in computer vision. State-of-the-art algorithms often employ a search-based strategy, which first divides the output image with a regular grid and generates proposals at each grid cell; the proposals are then classified and their boundaries refined. In this paper, we propose a novel algorithm that directly utilizes a fully convolutional network (FCN) to predict instance labels. Specifically, we propose a variational relaxation of instance segmentation as minimizing an optimization functional for a piecewise-constant segmentation problem, which can be used to train an FCN end-to-end. It extends the classical Mumford-Shah variational segmentation algorithm to be able to handle the permutation-invariant ground truth in instance segmentation. Experiments on PASCAL VOC 2012 and the MSCOCO 2017 dataset show that the proposed approach efficiently tackles the instance segmentation task. |
Learning Implicit Functions for Topology-Varying Dense 3D Shape Correspondence | https://papers.nips.cc/paper_files/paper/2020/hash/335cd1b90bfa4ee70b39d08a4ae0cf2d-Abstract.html | Feng Liu, Xiaoming Liu | https://papers.nips.cc/paper_files/paper/2020/hash/335cd1b90bfa4ee70b39d08a4ae0cf2d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/335cd1b90bfa4ee70b39d08a4ae0cf2d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10129-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/335cd1b90bfa4ee70b39d08a4ae0cf2d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/335cd1b90bfa4ee70b39d08a4ae0cf2d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/335cd1b90bfa4ee70b39d08a4ae0cf2d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/335cd1b90bfa4ee70b39d08a4ae0cf2d-Supplemental.zip | The goal of this paper is to learn dense 3D shape correspondence for topology-varying objects in an unsupervised manner. Conventional implicit functions estimate the occupancy of a 3D point given a shape latent code. Instead, our novel implicit function produces a part embedding vector for each 3D point, which is assumed to be similar to its densely corresponded point in another 3D shape of the same object category. Furthermore, we implement dense correspondence through an inverse function mapping from the part embedding to a corresponded 3D point. Both functions are jointly learned with several effective loss functions to realize our assumption, together with the encoder generating the shape latent code. During inference, if a user selects an arbitrary point on the source shape, our algorithm can automatically generate a confidence score indicating whether there is a correspondence on the target shape, as well as the corresponding semantic point if there is. Such a mechanism inherently benefits man-made objects with different part constitutions. The effectiveness of our approach is demonstrated through unsupervised 3D semantic correspondence and shape segmentation. |
Deep Multimodal Fusion by Channel Exchanging | https://papers.nips.cc/paper_files/paper/2020/hash/339a18def9898dd60a634b2ad8fbbd58-Abstract.html | Yikai Wang, Wenbing Huang, Fuchun Sun, Tingyang Xu, Yu Rong, Junzhou Huang | https://papers.nips.cc/paper_files/paper/2020/hash/339a18def9898dd60a634b2ad8fbbd58-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/339a18def9898dd60a634b2ad8fbbd58-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10130-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/339a18def9898dd60a634b2ad8fbbd58-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/339a18def9898dd60a634b2ad8fbbd58-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/339a18def9898dd60a634b2ad8fbbd58-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/339a18def9898dd60a634b2ad8fbbd58-Supplemental.pdf | Deep multimodal fusion by using multiple sources of data for classification or regression has exhibited a clear advantage over the unimodal counterpart on various applications. Yet, current methods including aggregation-based and alignment-based fusion are still inadequate in balancing the trade-off between inter-modal fusion and intra-modal processing, incurring a bottleneck of performance improvement. To this end, this paper proposes Channel-Exchanging-Network (CEN), a parameter-free multimodal fusion framework that dynamically exchanges channels between sub-networks of different modalities. Specifically, the channel exchanging process is self-guided by individual channel importance that is measured by the magnitude of Batch-Normalization (BN) scaling factor during training. The validity of such exchanging process is also guaranteed by sharing convolutional filters yet keeping separate BN layers across modalities, which, as an add-on benefit, allows our multimodal architecture to be almost as compact as a unimodal network. Extensive experiments on semantic segmentation via RGB-D data and image translation through multi-domain input verify the effectiveness of our CEN compared to current state-of-the-art methods. Detailed ablation studies have also been carried out, which provably affirm the advantage of each component we propose. Our code is available at https://github.com/yikaiw/CEN. |
Hierarchically Organized Latent Modules for Exploratory Search in Morphogenetic Systems | https://papers.nips.cc/paper_files/paper/2020/hash/33a5435d4f945aa6154b31a73bab3b73-Abstract.html | Mayalen Etcheverry, Clément Moulin-Frier, Pierre-Yves Oudeyer | https://papers.nips.cc/paper_files/paper/2020/hash/33a5435d4f945aa6154b31a73bab3b73-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33a5435d4f945aa6154b31a73bab3b73-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10131-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33a5435d4f945aa6154b31a73bab3b73-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33a5435d4f945aa6154b31a73bab3b73-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33a5435d4f945aa6154b31a73bab3b73-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33a5435d4f945aa6154b31a73bab3b73-Supplemental.pdf | Self-organization of complex morphological patterns from local interactions is a fascinating phenomenon in many natural and artificial systems. In the artificial world, typical examples of such morphogenetic systems are cellular automata. Yet, their mechanisms are often very hard to grasp, and so far scientific discoveries of novel patterns have primarily relied on manual tuning and ad hoc exploratory search. The problem of automated diversity-driven discovery in these systems was recently introduced [26, 62], highlighting that two key ingredients are autonomous exploration and unsupervised representation learning to describe “relevant” degrees of variation in the patterns. In this paper, we motivate the need for what we call Meta-diversity search, arguing that there is no unique ground-truth interesting diversity, as it strongly depends on the final observer and its motives. Using a continuous game-of-life system for experiments, we provide empirical evidence that relying on monolithic architectures for the behavioral embedding design tends to bias the final discoveries (both for hand-defined and unsupervisedly-learned features), which are unlikely to be aligned with the interests of a final end-user. To address these issues, we introduce a novel dynamic and modular architecture that enables unsupervised learning of a hierarchy of diverse representations. Combined with intrinsically motivated goal exploration algorithms, we show that this system forms a discovery assistant that can efficiently adapt its diversity search towards the preferences of a user using only a very small amount of user feedback. |
AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity | https://papers.nips.cc/paper_files/paper/2020/hash/33a854e247155d590883b93bca53848a-Abstract.html | Silviu-Marian Udrescu, Andrew Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, Max Tegmark | https://papers.nips.cc/paper_files/paper/2020/hash/33a854e247155d590883b93bca53848a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33a854e247155d590883b93bca53848a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10132-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33a854e247155d590883b93bca53848a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33a854e247155d590883b93bca53848a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33a854e247155d590883b93bca53848a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33a854e247155d590883b93bca53848a-Supplemental.pdf | We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal, in the sense of having the best accuracy for a given complexity. It improves on the previous state-of-the-art by typically being orders of magnitude more robust toward noise and bad data, and also by discovering many formulas that stumped previous methods. We develop a method for discovering generalized symmetries (arbitrary modularity in the computational graph of a formula) from gradient properties of a neural network fit. We use normalizing flows to generalize our symbolic regression method to probability distributions from which we only have samples, and employ statistical hypothesis testing to accelerate robust brute-force search. |
Delay and Cooperation in Nonstochastic Linear Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/33c5f5bff65aa05a8cd3e5d2597f44ae-Abstract.html | Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-Ichi Kawarabayashi | https://papers.nips.cc/paper_files/paper/2020/hash/33c5f5bff65aa05a8cd3e5d2597f44ae-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33c5f5bff65aa05a8cd3e5d2597f44ae-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10133-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33c5f5bff65aa05a8cd3e5d2597f44ae-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33c5f5bff65aa05a8cd3e5d2597f44ae-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33c5f5bff65aa05a8cd3e5d2597f44ae-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33c5f5bff65aa05a8cd3e5d2597f44ae-Supplemental.pdf | This paper offers a nearly optimal algorithm for online linear optimization with delayed bandit feedback. Online linear optimization with bandit feedback, or nonstochastic linear bandits, provides a generic framework for sequential decision-making problems with limited information. This framework, however, assumes that feedback can be observed just after choosing the action, and, hence, does not apply directly to many practical applications, in which the feedback can often only be obtained after a while. To cope with such situations, we consider problem settings in which the feedback can be observed $d$ rounds after the choice of an action, and propose an algorithm for which the expected regret is $\tilde{O}( \sqrt{m (m + d) T} )$, ignoring logarithmic factors in $m$ and $T$, where $m$ and $T$ denote the dimensionality of the action set and the number of rounds, respectively. This algorithm achieves nearly optimal performance, as we are able to show that arbitrary algorithms suffer the regret of $\Omega(\sqrt{m (m+d) T})$ in the worst case. To develop the algorithm, we introduce a technique we refer to as \textit{distribution truncation}, which plays an essential role in bounding the regret. We also apply our approach to cooperative bandits, as studied by Cesa-Bianchi et al. [17] and Bar-On and Mansour [12], and extend their results to the linear bandits setting. |
Probabilistic Orientation Estimation with Matrix Fisher Distributions | https://papers.nips.cc/paper_files/paper/2020/hash/33cc2b872dfe481abef0f61af181dfcf-Abstract.html | David Mohlin, Josephine Sullivan, Gérald Bianchi | https://papers.nips.cc/paper_files/paper/2020/hash/33cc2b872dfe481abef0f61af181dfcf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33cc2b872dfe481abef0f61af181dfcf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10134-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33cc2b872dfe481abef0f61af181dfcf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33cc2b872dfe481abef0f61af181dfcf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33cc2b872dfe481abef0f61af181dfcf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33cc2b872dfe481abef0f61af181dfcf-Supplemental.pdf | This paper focuses on estimating probability distributions over the set of 3D rotations (SO(3)) using deep neural networks. Learning to regress models to the set of rotations is inherently difficult due to differences in topology between $\mathbb{R}^N$ and SO(3). We overcome this issue by using a neural network to output the parameters for a matrix Fisher distribution since these parameters are homeomorphic to $\mathbb{R}^9$. By using a negative log likelihood loss for this distribution we get a loss which is convex with respect to the network outputs. By optimizing this loss we improve state-of-the-art on several challenging applicable datasets, namely Pascal3D+, ModelNet10-SO(3). Our code is available at https://github.com/Davmo049/Publicproborientationestimationwithmatrix_fisherdistributions |
Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons | https://papers.nips.cc/paper_files/paper/2020/hash/33cf42b38bbcf1dd6ba6b0f0cd005328-Abstract.html | Qianyi Li, Cengiz Pehlevan | https://papers.nips.cc/paper_files/paper/2020/hash/33cf42b38bbcf1dd6ba6b0f0cd005328-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33cf42b38bbcf1dd6ba6b0f0cd005328-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10135-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33cf42b38bbcf1dd6ba6b0f0cd005328-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33cf42b38bbcf1dd6ba6b0f0cd005328-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33cf42b38bbcf1dd6ba6b0f0cd005328-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33cf42b38bbcf1dd6ba6b0f0cd005328-Supplemental.pdf | Excitation-inhibition balance is ubiquitously observed in the cortex. Recent studies suggest an intriguing link between balance on fast timescales, tight balance, and efficient information coding with spikes. We further this connection by taking a principled approach to optimal balanced networks of excitatory (E) and inhibitory(I) neurons. By deriving E-I spiking neural networks from greedy spike-based optimizations of constrained minimax objectives, we show that tight balance arises from correcting for deviations from the minimax optimum. We predict specific neuron firing rates in the networks by solving the minimax problems, going beyond statistical theories of balanced networks. We design minimax objectives for reconstruction of an input signal, associative memory, and storage of manifold attractors, and derive from them E-I networks that perform the computation. Overall, we present a novel normative modeling approach for spiking E-I networks, going beyond the widely-used energy-minimizing networks that violate Dale’s law. Our networks can be used to model cortical circuits and computations. |
Telescoping Density-Ratio Estimation | https://papers.nips.cc/paper_files/paper/2020/hash/33d3b157ddc0896addfb22fa2a519097-Abstract.html | Benjamin Rhodes, Kai Xu, Michael U. Gutmann | https://papers.nips.cc/paper_files/paper/2020/hash/33d3b157ddc0896addfb22fa2a519097-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33d3b157ddc0896addfb22fa2a519097-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10136-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33d3b157ddc0896addfb22fa2a519097-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33d3b157ddc0896addfb22fa2a519097-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33d3b157ddc0896addfb22fa2a519097-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33d3b157ddc0896addfb22fa2a519097-Supplemental.pdf | Density-ratio estimation via classification is a cornerstone of unsupervised learning. It has provided the foundation for state-of-the-art methods in representation learning and generative modelling, with the number of use-cases continuing to proliferate. However, it suffers from a critical limitation: it fails to accurately estimate ratios p/q for which the two densities differ significantly. Empirically, we find this occurs whenever the KL divergence between p and q exceeds tens of nats. To resolve this limitation, we introduce a new framework, telescoping density-ratio estimation (TRE), that enables the estimation of ratios between highly dissimilar densities in high-dimensional spaces. Our experiments demonstrate that TRE can yield substantial improvements over existing single-ratio methods for mutual information estimation, representation learning and energy-based modelling. |
Towards Deeper Graph Neural Networks with Differentiable Group Normalization | https://papers.nips.cc/paper_files/paper/2020/hash/33dd6dba1d56e826aac1cbf23cdcca87-Abstract.html | Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, Xia Hu | https://papers.nips.cc/paper_files/paper/2020/hash/33dd6dba1d56e826aac1cbf23cdcca87-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33dd6dba1d56e826aac1cbf23cdcca87-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10137-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33dd6dba1d56e826aac1cbf23cdcca87-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33dd6dba1d56e826aac1cbf23cdcca87-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33dd6dba1d56e826aac1cbf23cdcca87-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33dd6dba1d56e826aac1cbf23cdcca87-Supplemental.zip | Graph neural networks (GNNs), which learn the representation of a node by aggregating its neighbors, have become an effective computational tool in downstream applications. Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases. It is because the stacked aggregators would make node representations converge to indistinguishable vectors. Several attempts have been made to tackle the issue by bringing linked node pairs close and unlinked pairs distinct. However, they often ignore the intrinsic community structures and would result in sub-optimal performance. The representations of nodes within the same community/class need be similar to facilitate the classification, while different classes are expected to be separated in embedding space. To bridge the gap, we introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN). It normalizes nodes within the same group independently to increase their smoothness, and separates node distributions among different groups to significantly alleviate the over-smoothing issue. Experiments on real-world datasets demonstrate that DGN makes GNN models more robust to over-smoothing and achieves better performance with deeper GNNs. |
Stochastic Optimization for Performative Prediction | https://papers.nips.cc/paper_files/paper/2020/hash/33e75ff09dd601bbe69f351039152189-Abstract.html | Celestine Mendler-Dünner, Juan Perdomo, Tijana Zrnic, Moritz Hardt | https://papers.nips.cc/paper_files/paper/2020/hash/33e75ff09dd601bbe69f351039152189-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/33e75ff09dd601bbe69f351039152189-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10138-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/33e75ff09dd601bbe69f351039152189-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/33e75ff09dd601bbe69f351039152189-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/33e75ff09dd601bbe69f351039152189-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/33e75ff09dd601bbe69f351039152189-Supplemental.pdf | In performative prediction, the choice of a model influences the distribution of future data, typically through actions taken based on the model's predictions. We initiate the study of stochastic optimization for performative prediction. What sets this setting apart from traditional stochastic optimization is the difference between merely updating model parameters and deploying the new model. The latter triggers a shift in the distribution that affects future data, while the former keeps the distribution as is. Assuming smoothness and strong convexity, we prove rates of convergence for both greedily deploying models after each stochastic update (greedy deploy) as well as for taking several updates before redeploying (lazy deploy). In both cases, our bounds smoothly recover the optimal $O(1/k)$ rate as the strength of performativity decreases. Furthermore, they illustrate how depending on the strength of performative effects, there exists a regime where either approach outperforms the other. We experimentally explore the trade-off on both synthetic data and a strategic classification simulator. |
Learning Differentiable Programs with Admissible Neural Heuristics | https://papers.nips.cc/paper_files/paper/2020/hash/342285bb2a8cadef22f667eeb6a63732-Abstract.html | Ameesh Shah, Eric Zhan, Jennifer Sun, Abhinav Verma, Yisong Yue, Swarat Chaudhuri | https://papers.nips.cc/paper_files/paper/2020/hash/342285bb2a8cadef22f667eeb6a63732-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/342285bb2a8cadef22f667eeb6a63732-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10139-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/342285bb2a8cadef22f667eeb6a63732-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/342285bb2a8cadef22f667eeb6a63732-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/342285bb2a8cadef22f667eeb6a63732-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/342285bb2a8cadef22f667eeb6a63732-Supplemental.zip | We study the problem of learning differentiable functions expressed as programs in a domain-specific language. Such programmatic models can offer benefits such as composability and interpretability; however, learning them requires optimizing over a combinatorial space of program "architectures". We frame this optimization problem as a search in a weighted graph whose paths encode top-down derivations of program syntax. Our key innovation is to view various classes of neural networks as continuous relaxations over the space of programs, which can then be used to complete any partial program. All the parameters of this relaxed program can be trained end-to-end, and the resulting training loss is an approximately admissible heuristic that can guide the combinatorial search. We instantiate our approach on top of the A* and Iterative Deepening Depth-First Search algorithms and use these algorithms to learn programmatic classifiers in three sequence classification tasks. Our experiments show that the algorithms outperform state-of-the-art methods for program learning, and that they discover programmatic classifiers that yield natural interpretations and achieve competitive accuracy. |
Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystrom method | https://papers.nips.cc/paper_files/paper/2020/hash/342c472b95d00421be10e9512b532866-Abstract.html | Michal Derezinski, Rajiv Khanna, Michael W. Mahoney | https://papers.nips.cc/paper_files/paper/2020/hash/342c472b95d00421be10e9512b532866-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/342c472b95d00421be10e9512b532866-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10140-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/342c472b95d00421be10e9512b532866-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/342c472b95d00421be10e9512b532866-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/342c472b95d00421be10e9512b532866-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/342c472b95d00421be10e9512b532866-Supplemental.pdf | The Column Subset Selection Problem (CSSP) and the Nystrom method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing. A fundamental question in this area is: how well can a data subset of size k compete with the best rank k approximation? We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees which go beyond the standard worst-case analysis. Our approach leads to significantly better bounds for datasets with known rates of singular value decay, e.g., polynomial or exponential decay. Our analysis also reveals an intriguing phenomenon: the approximation factor as a function of k may exhibit multiple peaks and valleys, which we call a multiple-descent curve. A lower bound we establish shows that this behavior is not an artifact of our analysis, but rather it is an inherent property of the CSSP and Nystrom tasks. Finally, using the example of a radial basis function (RBF) kernel, we show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter. |
Domain Adaptation as a Problem of Inference on Graphical Models | https://papers.nips.cc/paper_files/paper/2020/hash/3430095c577593aad3c39c701712bcfe-Abstract.html | Kun Zhang, Mingming Gong, Petar Stojanov, Biwei Huang, QINGSONG LIU, Clark Glymour | https://papers.nips.cc/paper_files/paper/2020/hash/3430095c577593aad3c39c701712bcfe-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3430095c577593aad3c39c701712bcfe-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10141-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3430095c577593aad3c39c701712bcfe-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3430095c577593aad3c39c701712bcfe-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3430095c577593aad3c39c701712bcfe-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3430095c577593aad3c39c701712bcfe-Supplemental.pdf | This paper is concerned with data-driven unsupervised domain adaptation, where it is unknown in advance how the joint distribution changes across domains, i.e., what factors or modules of the data distribution remain invariant or change across domains. To develop an automated way of domain adaptation with multiple source domains, we propose to use a graphical model as a compact way to encode the change property of the joint distribution, which can be learned from data, and then view domain adaptation as a problem of Bayesian inference on the graphical models. Such a graphical model distinguishes between constant and varied modules of the distribution and specifies the properties of the changes across domains, which serves as prior knowledge of the changing modules for the purpose of deriving the posterior of the target variable $Y$ in the target domain. This provides an end-to-end framework of domain adaptation, in which additional knowledge about how the joint distribution changes, if available, can be directly incorporated to improve the graphical representation. We discuss how causality-based domain adaptation can be put under this umbrella. Experimental results on both synthetic and real data demonstrate the efficacy of the proposed framework for domain adaptation. |
Network size and size of the weights in memorization with two-layers neural networks | https://papers.nips.cc/paper_files/paper/2020/hash/34609bdc08a07ace4e1526bbb1777673-Abstract.html | Sebastien Bubeck, Ronen Eldan, Yin Tat Lee, Dan Mikulincer | https://papers.nips.cc/paper_files/paper/2020/hash/34609bdc08a07ace4e1526bbb1777673-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/34609bdc08a07ace4e1526bbb1777673-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10142-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/34609bdc08a07ace4e1526bbb1777673-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/34609bdc08a07ace4e1526bbb1777673-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/34609bdc08a07ace4e1526bbb1777673-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/34609bdc08a07ace4e1526bbb1777673-Supplemental.pdf | In 1988, Eric B. Baum showed that two-layers neural networks with threshold activation function can perfectly memorize the binary labels of $n$ points in general position in $\mathbb{R}^d$ using only $\lceil n/d \rceil$ neurons. We observe that with ReLU networks, using four times as many neurons one can fit arbitrary real labels. Moreover, for approximate memorization up to error $\epsilon$, the neural tangent kernel can also memorize with only $O\left(\frac{n}{d} \cdot \log(1/\epsilon) \right)$ neurons (assuming that the data is well dispersed too). We show however that these constructions give rise to networks where the \emph{magnitude} of the neurons' weights are far from optimal. In contrast we propose a new training procedure for ReLU networks, based on {\em complex} (as opposed to {\em real}) recombination of the neurons, for which we show approximate memorization with both $O\left(\frac{n}{d} \cdot \frac{\log(1/\epsilon)}{\epsilon}\right)$ neurons, as well as nearly-optimal size of the weights. |
Certifying Strategyproof Auction Networks | https://papers.nips.cc/paper_files/paper/2020/hash/3465ab6e0c21086020e382f09a482ced-Abstract.html | Michael Curry, Ping-yeh Chiang, Tom Goldstein, John Dickerson | https://papers.nips.cc/paper_files/paper/2020/hash/3465ab6e0c21086020e382f09a482ced-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3465ab6e0c21086020e382f09a482ced-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10143-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3465ab6e0c21086020e382f09a482ced-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3465ab6e0c21086020e382f09a482ced-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3465ab6e0c21086020e382f09a482ced-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3465ab6e0c21086020e382f09a482ced-Supplemental.zip | Optimal auctions maximize a seller's expected revenue subject to individual rationality and strategyproofness for the buyers. Myerson's seminal work in 1981 settled the case of auctioning a single item; however, subsequent decades of work have yielded little progress moving beyond a single item, leaving the design of revenue-maximizing auctions as a central open problem in the field of mechanism design. A recent thread of work in ``differentiable economics'' has used tools from modern deep learning to instead learn good mechanisms. We focus on the RegretNet architecture, which can represent auctions with arbitrary numbers of items and participants; it is trained to be empirically strategyproof, but the property is never exactly verified leaving potential loopholes for market participants to exploit. We propose ways to explicitly verify strategyproofness under a particular valuation profile using techniques from the neural network verification literature. Doing so requires making several modifications to the RegretNet architecture in order to represent it exactly in an integer program. We train our network and produce certificates in several settings, including settings for which the optimal strategyproof mechanism is not known. |
Continual Learning of Control Primitives : Skill Discovery via Reset-Games | https://papers.nips.cc/paper_files/paper/2020/hash/3472ab80b6dff70c54758fd6dfc800c2-Abstract.html | Kelvin Xu, Siddharth Verma, Chelsea Finn, Sergey Levine | https://papers.nips.cc/paper_files/paper/2020/hash/3472ab80b6dff70c54758fd6dfc800c2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3472ab80b6dff70c54758fd6dfc800c2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10144-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3472ab80b6dff70c54758fd6dfc800c2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3472ab80b6dff70c54758fd6dfc800c2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3472ab80b6dff70c54758fd6dfc800c2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3472ab80b6dff70c54758fd6dfc800c2-Supplemental.pdf | Reinforcement learning has the potential to automate the acquisition of behavior in complex settings, but in order for it to be successfully deployed, a number of practical challenges must be addressed. First, in real world settings, when an agent attempts a task and fails, the environment must somehow "reset" so that the agent can attempt the task again. While easy in simulation, this could require considerable human effort in the real world, especially if the number of trials is very large. Second, real world learning is often limited by challenges in exploration, as complex, temporally extended behavior is often times difficult to acquire with random exploration. In this work, we show how a single method can allow an agent to acquire skills with minimal supervision while removing the need for resets. We do this by exploiting the insight that the need to "reset" an agent to a broad set of initial states for a learning task provides a natural setting to learn a diverse set of "reset-skills." We propose a general-sum game formulation that naturally balances the objective of resetting and learning skills, and demonstrate that this approach improves performance on reset-free tasks, and additionally show that the skills we obtain can be used to significantly accelerate downstream learning. |
HOI Analysis: Integrating and Decomposing Human-Object Interaction | https://papers.nips.cc/paper_files/paper/2020/hash/3493894fa4ea036cfc6433c3e2ee63b0-Abstract.html | Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Cewu Lu | https://papers.nips.cc/paper_files/paper/2020/hash/3493894fa4ea036cfc6433c3e2ee63b0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3493894fa4ea036cfc6433c3e2ee63b0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10145-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3493894fa4ea036cfc6433c3e2ee63b0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3493894fa4ea036cfc6433c3e2ee63b0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3493894fa4ea036cfc6433c3e2ee63b0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3493894fa4ea036cfc6433c3e2ee63b0-Supplemental.pdf | Human-Object Interaction (HOI) consists of human, object and implicit interaction/verb. Different from previous methods that directly map pixels to HOI semantics, we propose a novel perspective for HOI learning in an analytical manner. In analogy to Harmonic Analysis, whose goal is to study how to represent the signals with the superposition of basic waves, we propose the HOI Analysis. We argue that coherent HOI can be decomposed into isolated human and object. Meanwhile, isolated human and object can also be integrated into coherent HOI again. Moreover, transformations between human-object pairs with the same HOI can also be easier approached with integration and decomposition. As a result, the implicit verb will be represented in the transformation function space. In light of this, we propose an Integration-Decomposition Network (IDN) to implement the above transformations and achieve state-of-the-art performance on widely-used HOI detection benchmarks. Code is available at https://github.com/DirtyHarryLYL/HAKE-Action-Torch/tree/IDN-(Integrating-Decomposing-Network). |
Strongly local p-norm-cut algorithms for semi-supervised learning and local graph clustering | https://papers.nips.cc/paper_files/paper/2020/hash/3501672ebc68a5524629080e3ef60aef-Abstract.html | Meng Liu, David F. Gleich | https://papers.nips.cc/paper_files/paper/2020/hash/3501672ebc68a5524629080e3ef60aef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3501672ebc68a5524629080e3ef60aef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10146-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3501672ebc68a5524629080e3ef60aef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3501672ebc68a5524629080e3ef60aef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3501672ebc68a5524629080e3ef60aef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3501672ebc68a5524629080e3ef60aef-Supplemental.zip | Graph-based semi-supervised learning is the problem of learning a labeling function for the graph nodes given a few example nodes, often called seeds, usually under the assumption that the graph’s edges indicate similarity of labels. This is closely related to the local graph clustering or community detection problem of finding a cluster or community of nodes around a given seed. For this problem, we propose a novel generalization of random walk, diffusion, or smooth function methods in the literature to a convex p-norm cut function. The need for our p-norm methods is that, in our study of existing methods, we find that principled methods based on eigenvectors, spectral approaches, random walks, or linear systems often have difficulty capturing the correct boundary of a target label or target cluster. In contrast, 1-norm or maxflow-mincut based methods capture the boundary, but cannot grow from a small seed set; hybrid procedures that use both have many hard-to-set parameters. In this paper, we propose a generalization of the objective function behind these methods involving p-norms. To solve the p-norm cut problem we give a strongly local algorithm -- one whose runtime depends on the size of the output rather than the size of the graph. Our method can be thought of as a nonlinear generalization of the Andersen-Chung-Lang push procedure to approximate a personalized PageRank vector efficiently. Our procedure is general and can solve other types of nonlinear objective functions, such as p-norm variants of Huber losses. We provide a theoretical analysis of finding planted target clusters with our method and show that the p-norm cut functions improve on the standard Cheeger inequalities for random walk and spectral methods. Finally, we demonstrate the speed and accuracy of our new method on synthetic and real-world datasets. |
Deep Direct Likelihood Knockoffs | https://papers.nips.cc/paper_files/paper/2020/hash/350a7f5ee27d22dbe36698b10930ff96-Abstract.html | Mukund Sudarshan, Wesley Tansey, Rajesh Ranganath | https://papers.nips.cc/paper_files/paper/2020/hash/350a7f5ee27d22dbe36698b10930ff96-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/350a7f5ee27d22dbe36698b10930ff96-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10147-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/350a7f5ee27d22dbe36698b10930ff96-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/350a7f5ee27d22dbe36698b10930ff96-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/350a7f5ee27d22dbe36698b10930ff96-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/350a7f5ee27d22dbe36698b10930ff96-Supplemental.zip | Predictive modeling often uses black box machine learning methods, such as deep neural networks, to achieve state-of-the-art performance. In scientific domains, the scientist often wishes to discover which features are actually important for making the predictions. These discoveries may lead to costly follow-up experiments and as such it is important that the error rate on discoveries is not too high. Model-X knockoffs enable important features to be discovered with control of the false discovery rate (FDR). However, knockoffs require rich generative models capable of accurately modeling the knockoff features while ensuring they obey the so-called "swap" property. We develop Deep Direct Likelihood Knockoffs (DDLK), which directly minimizes the KL divergence implied by the knockoff swap property. DDLK consists of two stages: it first maximizes the explicit likelihood of the features, then minimizes the KL divergence between the joint distribution of features and knockoffs and any swap between them. To ensure that the generated knockoffs are valid under any possible swap, DDLK uses the Gumbel-Softmax trick to optimize the knockoff generator under the worst-case swap. We find DDLK has higher power than baselines while controlling the false discovery rate on a variety of synthetic and real benchmarks including a task involving the largest COVID-19 health record dataset in the United States. |
Meta-Neighborhoods | https://papers.nips.cc/paper_files/paper/2020/hash/35464c848f410e55a13bb9d78e7fddd0-Abstract.html | Siyuan Shan, Yang Li, Junier B. Oliva | https://papers.nips.cc/paper_files/paper/2020/hash/35464c848f410e55a13bb9d78e7fddd0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/35464c848f410e55a13bb9d78e7fddd0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10148-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/35464c848f410e55a13bb9d78e7fddd0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/35464c848f410e55a13bb9d78e7fddd0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/35464c848f410e55a13bb9d78e7fddd0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/35464c848f410e55a13bb9d78e7fddd0-Supplemental.zip | Making an adaptive prediction based on input is an important ability for general artificial intelligence. In this work, we step forward in this direction and propose a semi-parametric method, Meta-Neighborhoods, where predictions are made adaptively to the neighborhood of the input. We show that Meta-Neighborhoods is a generalization of k-nearest-neighbors. Due to the simpler manifold structure around a local neighborhood, Meta-Neighborhoods represent the predictive distribution $p(y \mid x)$ more accurately. To reduce memory and computation overheads, we propose induced neighborhoods that summarize the training data into a much smaller dictionary. A meta-learning based training mechanism is then exploited to jointly learn the induced neighborhoods and the model. Extensive studies demonstrate the superiority of our method. |
Neural Dynamic Policies for End-to-End Sensorimotor Learning | https://papers.nips.cc/paper_files/paper/2020/hash/354ac345fd8c6d7ef634d9a8e3d47b83-Abstract.html | Shikhar Bahl, Mustafa Mukadam, Abhinav Gupta, Deepak Pathak | https://papers.nips.cc/paper_files/paper/2020/hash/354ac345fd8c6d7ef634d9a8e3d47b83-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/354ac345fd8c6d7ef634d9a8e3d47b83-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10149-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/354ac345fd8c6d7ef634d9a8e3d47b83-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/354ac345fd8c6d7ef634d9a8e3d47b83-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/354ac345fd8c6d7ef634d9a8e3d47b83-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/354ac345fd8c6d7ef634d9a8e3d47b83-Supplemental.zip | The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces such as torque, joint angle, or end-effector position. This forces the agent to make decisions at each point in training, and hence, limits the scalability to continuous, high-dimensional, and long-horizon tasks. In contrast, research in classical robotics has, for a long time, exploited dynamical systems as a policy representation to learn robot behaviors via demonstrations. These techniques, however, lack the flexibility and generalizability provided by deep learning or deep reinforcement learning and have remained under-explored in such settings. In this work, we begin to close this gap and embed dynamics structure into deep neural network-based policies by reparameterizing action spaces with differential equations. We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space as opposed to prior policy learning methods where action represents the raw control space. The embedded structure allows us to perform end-to-end policy learning under both reinforcement and imitation learning setups. We show that NDPs achieve better or comparable performance to state-of-the-art approaches on many robotic control tasks using both reward-based training and demonstrations. Project video and code are available at: https://shikharbahl.github.io/neural-dynamic-policies/. |
A new inference approach for training shallow and deep generalized linear models of noisy interacting neurons | https://papers.nips.cc/paper_files/paper/2020/hash/356dc40642abeb3a437e7e06f178701c-Abstract.html | Gabriel Mahuas, Giulio Isacchini, Olivier Marre, Ulisse Ferrari, Thierry Mora | https://papers.nips.cc/paper_files/paper/2020/hash/356dc40642abeb3a437e7e06f178701c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/356dc40642abeb3a437e7e06f178701c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10150-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/356dc40642abeb3a437e7e06f178701c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/356dc40642abeb3a437e7e06f178701c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/356dc40642abeb3a437e7e06f178701c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/356dc40642abeb3a437e7e06f178701c-Supplemental.pdf | Generalized linear models are one of the most efficient paradigms for predicting the correlated stochastic activity of neuronal networks in response to external stimuli, with applications in many brain areas. However, when dealing with complex stimuli, the inferred coupling parameters often do not generalise across different stimulus statistics, leading to degraded performance and blowup instabilities. Here, we develop a two-step inference strategy that allows us to train robust generalised linear models of interacting neurons, by explicitly separating the effects of correlations in the stimulus from network interactions in each training step. Applying this approach to the responses of retinal ganglion cells to complex visual stimuli, we show that, compared to classical methods, the models trained in this way exhibit improved performance, are more stable, yield robust interaction networks, and generalise well across complex visual statistics. The method can be extended to deep convolutional neural networks, leading to models with high predictive accuracy for both the neuron firing rates and their correlations. |
Decision-Making with Auto-Encoding Variational Bayes | https://papers.nips.cc/paper_files/paper/2020/hash/357a6fdf7642bf815a88822c447d9dc4-Abstract.html | Romain Lopez, Pierre Boyeau, Nir Yosef, Michael Jordan, Jeffrey Regier | https://papers.nips.cc/paper_files/paper/2020/hash/357a6fdf7642bf815a88822c447d9dc4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/357a6fdf7642bf815a88822c447d9dc4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10151-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/357a6fdf7642bf815a88822c447d9dc4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/357a6fdf7642bf815a88822c447d9dc4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/357a6fdf7642bf815a88822c447d9dc4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/357a6fdf7642bf815a88822c447d9dc4-Supplemental.pdf | To make decisions based on a model fit with auto-encoding variational Bayes (AEVB), practitioners often let the variational distribution serve as a surrogate for the posterior distribution. This approach yields biased estimates of the expected risk, and therefore leads to poor decisions for two reasons. First, the model fit with AEVB may not equal the underlying data distribution. Second, the variational distribution may not equal the posterior distribution under the fitted model. We explore how fitting the variational distribution based on several objective functions other than the ELBO, while continuing to fit the generative model based on the ELBO, affects the quality of downstream decisions. For the probabilistic principal component analysis model, we investigate how importance sampling error, as well as the bias of the model parameter estimates, varies across several approximate posteriors when used as proposal distributions. Our theoretical results suggest that a posterior approximation distinct from the variational distribution should be used for making decisions. Motivated by these theoretical results, we propose learning several approximate proposals for the best model and combining them using multiple importance sampling for decision-making. In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing. In this challenging instance of multiple hypothesis testing, our proposed approach surpasses the current state of the art. |
Attribution Preservation in Network Compression for Reliable Network Interpretation | https://papers.nips.cc/paper_files/paper/2020/hash/35adf1ae7eb5734122c84b7a9ea5cc13-Abstract.html | Geondo Park, June Yong Yang, Sung Ju Hwang, Eunho Yang | https://papers.nips.cc/paper_files/paper/2020/hash/35adf1ae7eb5734122c84b7a9ea5cc13-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/35adf1ae7eb5734122c84b7a9ea5cc13-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10152-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/35adf1ae7eb5734122c84b7a9ea5cc13-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/35adf1ae7eb5734122c84b7a9ea5cc13-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/35adf1ae7eb5734122c84b7a9ea5cc13-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/35adf1ae7eb5734122c84b7a9ea5cc13-Supplemental.pdf | Neural networks embedded in safety-sensitive applications such as self-driving cars and wearable health monitors rely on two important techniques: input attribution for hindsight analysis and network compression to reduce its size for edge-computing. In this paper, we show that these seemingly unrelated techniques conflict with each other as network compression deforms the produced attributions, which could lead to dire consequences for mission-critical applications. This phenomenon arises due to the fact that conventional network compression methods only preserve the predictions of the network while ignoring the quality of the attributions. To combat the attribution inconsistency problem, we present a framework that can preserve the attributions while compressing a network. By employing the Weighted Collapsed Attribution Matching regularizer, we match the attribution maps of the network being compressed to its pre-compression former self. We demonstrate the effectiveness of our algorithm both quantitatively and qualitatively on diverse compression methods. |
Feature Importance Ranking for Deep Learning | https://papers.nips.cc/paper_files/paper/2020/hash/36ac8e558ac7690b6f44e2cb5ef93322-Abstract.html | Maksymilian Wojtas, Ke Chen | https://papers.nips.cc/paper_files/paper/2020/hash/36ac8e558ac7690b6f44e2cb5ef93322-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/36ac8e558ac7690b6f44e2cb5ef93322-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10153-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/36ac8e558ac7690b6f44e2cb5ef93322-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/36ac8e558ac7690b6f44e2cb5ef93322-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/36ac8e558ac7690b6f44e2cb5ef93322-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/36ac8e558ac7690b6f44e2cb5ef93322-Supplemental.pdf | Feature importance ranking has become a powerful tool for explainable AI. However, its nature of combinatorial optimization poses a great challenge for deep learning. In this paper, we propose a novel dual-net architecture consisting of operator and selector for discovery of an optimal feature subset of a fixed size and ranking the importance of those features in the optimal subset simultaneously. During learning, the operator is trained for a supervised learning task via optimal feature subset candidates generated by the selector that learns predicting the learning performance of the operator working on different optimal subset candidates. We develop an alternate learning algorithm that trains two nets jointly and incorporates a stochastic local search procedure into learning to address the combinatorial optimization challenge. In deployment, the selector generates an optimal feature subset and ranks feature importance, while the operator makes predictions based on the optimal subset for test data. A thorough evaluation on synthetic, benchmark and real data sets suggests that our approach outperforms several state-of-the-art feature importance ranking and supervised feature selection methods. (Our source code is available: https://github.com/maksym33/FeatureImportanceDL) |
Causal Estimation with Functional Confounders | https://papers.nips.cc/paper_files/paper/2020/hash/36dcd524971019336af02550264b8a08-Abstract.html | Aahlad Puli, Adler Perotte, Rajesh Ranganath | https://papers.nips.cc/paper_files/paper/2020/hash/36dcd524971019336af02550264b8a08-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/36dcd524971019336af02550264b8a08-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10154-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/36dcd524971019336af02550264b8a08-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/36dcd524971019336af02550264b8a08-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/36dcd524971019336af02550264b8a08-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/36dcd524971019336af02550264b8a08-Supplemental.pdf | Causal inference relies on two fundamental assumptions: ignorability and positivity. We study causal inference when the true confounder value can be expressed as a function of the observed data; we call this setting estimation with functional confounders (EFC). In this setting ignorability is satisfied, however positivity is violated, and causal inference is impossible in general. We consider two scenarios where causal effects are estimable. First, we discuss interventions on a part of the treatment called functional interventions and a sufficient condition for effect estimation of these interventions called functional positivity. Second, we develop conditions for nonparametric effect estimation based on the gradient fields of the functional confounder and the true outcome function. To estimate effects under these conditions, we develop Level-set Orthogonal Descent Estimation (LODE). Further, we prove error bounds on LODE’s effect estimates, evaluate our methods on simulated and real data, and empirically demonstrate the value of EFC. |
Model Inversion Networks for Model-Based Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/373e4c5d8edfa8b74fd4b6791d0cf6dc-Abstract.html | Aviral Kumar, Sergey Levine | https://papers.nips.cc/paper_files/paper/2020/hash/373e4c5d8edfa8b74fd4b6791d0cf6dc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10155-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/373e4c5d8edfa8b74fd4b6791d0cf6dc-Supplemental.pdf | This work addresses data-driven optimization problems, where the goal is to find an input that maximizes an unknown score or reward function given access to a dataset of inputs with corresponding scores. When the inputs are high-dimensional and valid inputs constitute a small subset of this space (e.g., valid protein sequences or valid natural images), such model-based optimization problems become exceptionally difficult, since the optimizer must avoid out-of-distribution and invalid inputs. We propose to address such problems with model inversion networks (MINs), which learn an inverse mapping from scores to inputs. MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems. MINs can also handle both purely offline data sources and active data collection. We evaluate MINs on high- dimensional model-based optimization problems over images, protein designs, and neural network controller parameters, and bandit optimization from logged data. |
Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/37693cfc748049e45d87b8c7d8b9aacd-Abstract.html | Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu | https://papers.nips.cc/paper_files/paper/2020/hash/37693cfc748049e45d87b8c7d8b9aacd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37693cfc748049e45d87b8c7d8b9aacd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10156-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37693cfc748049e45d87b8c7d8b9aacd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37693cfc748049e45d87b8c7d8b9aacd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37693cfc748049e45d87b8c7d8b9aacd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37693cfc748049e45d87b8c7d8b9aacd-Supplemental.pdf | Despite its success in a wide range of applications, characterizing the generalization properties of stochastic gradient descent (SGD) in non-convex deep learning problems is still an important challenge. While modeling the trajectories of SGD via stochastic differential equations (SDE) under heavy-tailed gradient noise has recently shed light over several peculiar characteristics of SGD, a rigorous treatment of the generalization properties of such SDEs in a learning theoretical framework is still missing. Aiming to bridge this gap, in this paper, we prove generalization bounds for SGD under the assumption that its trajectories can be well-approximated by a \emph{Feller process}, which defines a rich class of Markov processes that include several recent SDE representations (both Brownian or heavy-tailed) as its special case. We show that the generalization error can be controlled by the \emph{Hausdorff dimension} of the trajectories, which is intimately linked to the tail behavior of the driving process. Our results imply that heavier-tailed processes should achieve better generalization; hence, the tail-index of the process can be used as a notion of ``capacity metric''. We support our theory with experiments on deep neural networks illustrating that the proposed capacity metric accurately estimates the generalization error, and it does not necessarily grow with the number of parameters unlike the existing capacity metrics in the literature. |
Exact expressions for double descent and implicit regularization via surrogate random design | https://papers.nips.cc/paper_files/paper/2020/hash/37740d59bb0eb7b4493725b2e0e5289b-Abstract.html | Michal Derezinski, Feynman T. Liang, Michael W. Mahoney | https://papers.nips.cc/paper_files/paper/2020/hash/37740d59bb0eb7b4493725b2e0e5289b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37740d59bb0eb7b4493725b2e0e5289b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10157-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37740d59bb0eb7b4493725b2e0e5289b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37740d59bb0eb7b4493725b2e0e5289b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37740d59bb0eb7b4493725b2e0e5289b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37740d59bb0eb7b4493725b2e0e5289b-Supplemental.pdf | Double descent refers to the phase transition that is exhibited by the generalization error of unregularized learning models when varying the ratio between the number of parameters and the number of training samples. The recent success of highly over-parameterized machine learning models such as deep neural networks has motivated a theoretical analysis of the double descent phenomenon in classical models such as linear regression which can also generalize well in the over-parameterized regime. We provide the first exact non-asymptotic expressions for double descent of the minimum norm linear estimator. Our approach involves constructing a special determinantal point process which we call surrogate random design, to replace the standard i.i.d. design of the training sample. This surrogate design admits exact expressions for the mean squared error of the estimator while preserving the key properties of the standard design. We also establish an exact implicit regularization result for over-parameterized training samples. In particular, we show that, for the surrogate design, the implicit bias of the unregularized minimum norm estimator precisely corresponds to solving a ridge-regularized least squares problem on the population distribution. In our analysis we introduce a new mathematical tool of independent interest: the class of random matrices for which determinant commutes with expectation. |
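
As a quick illustration of the phenomenon the abstract studies, the sketch below simulates the minimum-norm least-squares estimator under a plain i.i.d. Gaussian design (not the paper's surrogate determinantal design) and prints the test error as the parameter-to-sample ratio d/n varies; the dimensions and noise level are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, noise = 40, 2000, 0.5

def min_norm_test_error(d):
    w_star = rng.standard_normal(d) / np.sqrt(d)
    X = rng.standard_normal((n_train, d))
    y = X @ w_star + noise * rng.standard_normal(n_train)
    w_hat = np.linalg.pinv(X) @ y          # minimum-norm least-squares solution
    Xte = rng.standard_normal((n_test, d))
    yte = Xte @ w_star + noise * rng.standard_normal(n_test)
    return np.mean((Xte @ w_hat - yte) ** 2)

for d in [5, 20, 35, 40, 45, 80, 200, 800]:
    print(f"d/n = {d / n_train:5.2f}   test MSE = {min_norm_test_error(d):.3f}")
```

The printed errors typically spike near d = n and decrease again in the over-parameterized regime, which is the double descent shape the paper characterizes exactly.
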
Certifying Confidence via Randomized Smoothing | https://papers.nips.cc/paper_files/paper/2020/hash/37aa5dfc44dddd0d19d4311e2c7a0240-Abstract.html | Aounon Kumar, Alexander Levine, Soheil Feizi, Tom Goldstein | https://papers.nips.cc/paper_files/paper/2020/hash/37aa5dfc44dddd0d19d4311e2c7a0240-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37aa5dfc44dddd0d19d4311e2c7a0240-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10158-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37aa5dfc44dddd0d19d4311e2c7a0240-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37aa5dfc44dddd0d19d4311e2c7a0240-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37aa5dfc44dddd0d19d4311e2c7a0240-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37aa5dfc44dddd0d19d4311e2c7a0240-Supplemental.zip | Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems. It uses the probabilities of predicting the top two most-likely classes around an input point under a smoothing distribution to generate a certified radius for a classifier's prediction. However, most smoothing methods do not give us any information about the \emph{confidence} with which the underlying classifier (e.g., deep neural network) makes a prediction. In this work, we propose a method to generate certified radii for the prediction confidence of the smoothed classifier. We consider two notions for quantifying confidence: average prediction score of a class and the margin by which the average prediction score of one class exceeds that of another. We modify the Neyman-Pearson lemma (a key theorem in randomized smoothing) to design a procedure for computing the certified radius where the confidence is guaranteed to stay above a certain threshold. Our experimental results on CIFAR-10 and ImageNet datasets show that using information about the distribution of the confidence scores allows us to achieve a significantly better certified radius than ignoring it. Thus, we demonstrate that extra information about the base classifier at the input point can help improve certified guarantees for the smoothed classifier. Code for the experiments is available at \url{https://github.com/aounon/cdf-smoothing}. |
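
For context, the sketch below implements the standard randomized-smoothing certificate the abstract builds on (estimate the smoothed classifier's top-class probability under Gaussian noise, lower-bound it, and convert the bound to a radius); it does not implement the paper's confidence-score certificates, and `toy_clf`, the noise level, and the sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, beta

def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Standard smoothing certificate: radius sigma * Phi^{-1}(pA_lower),
    using the common simplification pB <= 1 - pA for the runner-up class."""
    rng = np.random.default_rng(0)
    noisy = x[None, :] + sigma * rng.standard_normal((n, x.size))
    preds = base_classifier(noisy)                        # hard labels, shape (n,)
    counts = np.bincount(preds, minlength=num_classes)
    top = counts.argmax()
    # Clopper-Pearson lower confidence bound on the top-class probability.
    pA_lower = beta.ppf(alpha, counts[top], n - counts[top] + 1)
    if pA_lower <= 0.5:
        return top, 0.0                                   # abstain from certifying
    return top, sigma * norm.ppf(pA_lower)

# Toy base classifier: a fixed linear rule on 2-D inputs (purely illustrative).
toy_clf = lambda X: (X @ np.array([1.0, -1.0]) > 0).astype(int)
print(certify(toy_clf, x=np.array([0.8, -0.2]), sigma=0.25, num_classes=2))
```
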
Learning Physical Constraints with Neural Projections | https://papers.nips.cc/paper_files/paper/2020/hash/37bc5e7fb6931a50b3464ec66179085f-Abstract.html | Shuqi Yang, Xingzhe He, Bo Zhu | https://papers.nips.cc/paper_files/paper/2020/hash/37bc5e7fb6931a50b3464ec66179085f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37bc5e7fb6931a50b3464ec66179085f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10159-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37bc5e7fb6931a50b3464ec66179085f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37bc5e7fb6931a50b3464ec66179085f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37bc5e7fb6931a50b3464ec66179085f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37bc5e7fb6931a50b3464ec66179085f-Supplemental.zip | We propose a new family of neural networks to predict the behaviors of physical systems by learning their underpinning constraints. A neural projection operator lies at the heart of our approach, composed of a lightweight network with an embedded recursive architecture that interactively enforces learned underpinning constraints and predicts the various governed behaviors of different physical systems. Our neural projection operator is motivated by the position-based dynamics model that has been used widely in game and visual effects industries to unify the various fast physics simulators. Our method can automatically and effectively uncover a broad range of constraints from observation point data, such as length, angle, bending, collision, boundary effects, and their arbitrary combinations, without any connectivity priors. We provide a multi-group point representation in conjunction with a configurable network connection mechanism to incorporate prior inputs for processing complex physical systems. We demonstrated the efficacy of our approach by learning a set of challenging physical systems all in a unified and simple fashion including: rigid bodies with complex geometries, ropes with varying length and bending, articulated soft and rigid bodies, and multi-object collisions with complex boundaries. |
Robust Optimization for Fairness with Noisy Protected Groups | https://papers.nips.cc/paper_files/paper/2020/hash/37d097caf1299d9aa79c2c2b843d2d78-Abstract.html | Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael Jordan | https://papers.nips.cc/paper_files/paper/2020/hash/37d097caf1299d9aa79c2c2b843d2d78-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37d097caf1299d9aa79c2c2b843d2d78-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10160-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37d097caf1299d9aa79c2c2b843d2d78-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37d097caf1299d9aa79c2c2b843d2d78-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37d097caf1299d9aa79c2c2b843d2d78-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37d097caf1299d9aa79c2c2b843d2d78-Supplemental.pdf | Many existing fairness criteria for machine learning involve equalizing some metric across protected groups such as race or gender. However, practitioners trying to audit or enforce such group-based criteria can easily face the problem of noisy or biased protected group information. First, we study the consequences of naively relying on noisy protected group labels: we provide an upper bound on the fairness violations on the true groups $G$ when the fairness criteria are satisfied on noisy groups $\hat{G}$. Second, we introduce two new approaches using robust optimization that, unlike the naive approach of only relying on $\hat{G}$, are guaranteed to satisfy fairness criteria on the true protected groups $G$ while minimizing a training objective. We provide theoretical guarantees that one such approach converges to an optimal feasible solution. Using two case studies, we show empirically that the robust approaches achieve better true group fairness guarantees than the naive approach. |
Noise-Contrastive Estimation for Multivariate Point Processes | https://papers.nips.cc/paper_files/paper/2020/hash/37e7897f62e8d91b1ce60515829ca282-Abstract.html | Hongyuan Mei, Tom Wan, Jason Eisner | https://papers.nips.cc/paper_files/paper/2020/hash/37e7897f62e8d91b1ce60515829ca282-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37e7897f62e8d91b1ce60515829ca282-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10161-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37e7897f62e8d91b1ce60515829ca282-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37e7897f62e8d91b1ce60515829ca282-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37e7897f62e8d91b1ce60515829ca282-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37e7897f62e8d91b1ce60515829ca282-Supplemental.pdf | The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation---a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time. |
A Game-Theoretic Analysis of the Empirical Revenue Maximization Algorithm with Endogenous Sampling | https://papers.nips.cc/paper_files/paper/2020/hash/37e79373884f0f0b70b5cb91fb947148-Abstract.html | Xiaotie Deng, Ron Lavi, Tao Lin, Qi Qi, Wenwei WANG, Xiang Yan | https://papers.nips.cc/paper_files/paper/2020/hash/37e79373884f0f0b70b5cb91fb947148-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37e79373884f0f0b70b5cb91fb947148-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10162-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37e79373884f0f0b70b5cb91fb947148-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37e79373884f0f0b70b5cb91fb947148-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37e79373884f0f0b70b5cb91fb947148-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37e79373884f0f0b70b5cb91fb947148-Supplemental.pdf | The Empirical Revenue Maximization (ERM) is one of the most important price learning algorithms in auction design: as the literature shows it can learn approximately optimal reserve prices for revenue-maximizing auctioneers in both repeated auctions and uniform-price auctions. However, in these applications the agents who provide inputs to ERM have incentives to manipulate the inputs to lower the outputted price. We generalize the definition of an incentive-awareness measure proposed by Lavi et al (2019), to quantify the reduction of ERM's outputted price due to a change of m>=1 out of N input samples, and provide specific convergence rates of this measure to zero as N goes to infinity for different types of input distributions. By adopting this measure, we construct an efficient, approximately incentive-compatible, and revenue-optimal learning algorithm using ERM in repeated auctions against non-myopic bidders, and show approximate group incentive-compatibility in uniform-price auctions. |
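
The ERM price rule discussed above is simple to state in code: choose the sample value that maximizes price times the empirical probability of sale. The sketch below shows only that rule on synthetic i.i.d. valuations (uniform on [0,1], where the revenue-optimal reserve is 0.5); the paper's incentive-awareness analysis is not reproduced here.

```python
import numpy as np

def erm_reserve_price(samples):
    """Empirical Revenue Maximization: the price p maximizing p * P_hat(value >= p),
    restricted (without loss) to the observed sample values."""
    v = np.sort(np.asarray(samples))
    n = len(v)
    # If the price is v[i], exactly n - i of the sorted samples are >= v[i].
    revenues = v * (n - np.arange(n)) / n
    return v[np.argmax(revenues)]

rng = np.random.default_rng(0)
values = rng.uniform(0, 1, size=1000)   # toy i.i.d. valuations
print(erm_reserve_price(values))        # close to the optimal reserve 0.5 for U[0,1]
```
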
Neural Path Features and Neural Path Kernel : Understanding the role of gates in deep learning | https://papers.nips.cc/paper_files/paper/2020/hash/37f76c6fe3ab45e0cd7ecb176b5a046d-Abstract.html | Chandrashekar Lakshminarayanan, Amit Vikram Singh | https://papers.nips.cc/paper_files/paper/2020/hash/37f76c6fe3ab45e0cd7ecb176b5a046d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/37f76c6fe3ab45e0cd7ecb176b5a046d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10163-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/37f76c6fe3ab45e0cd7ecb176b5a046d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/37f76c6fe3ab45e0cd7ecb176b5a046d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/37f76c6fe3ab45e0cd7ecb176b5a046d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/37f76c6fe3ab45e0cd7ecb176b5a046d-Supplemental.pdf | In this paper, we analytically characterise the role of gates and active sub-networks in deep learning. To this end, we encode the on/off state of the gates for a given input in a novel 'neural path feature' (NPF), and the weights of the DNN are encoded in a novel 'neural path value' (NPV). Further, we show that the output of network is indeed the inner product of NPF and NPV. The main result of the paper shows that the 'neural path kernel' associated with the NPF is a fundamental quantity that characterises the information stored in the gates of a DNN. We show via experiments (on MNIST and CIFAR-10) that in standard DNNs with ReLU activations NPFs are learnt during training and such learning is key for generalisation. Furthermore, NPFs and NPVs can be learnt in two separate networks and such learning also generalises well in experiments. In our experiments, we observe that almost all the information learnt by a DNN with ReLU activations is stored in the gates - a novel observation that underscores the need to investigate the role of the gates in DNNs. |
Multiscale Deep Equilibrium Models | https://papers.nips.cc/paper_files/paper/2020/hash/3812f9a59b634c2a9c574610eaba5bed-Abstract.html | Shaojie Bai, Vladlen Koltun, J. Zico Kolter | https://papers.nips.cc/paper_files/paper/2020/hash/3812f9a59b634c2a9c574610eaba5bed-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3812f9a59b634c2a9c574610eaba5bed-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10164-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3812f9a59b634c2a9c574610eaba5bed-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3812f9a59b634c2a9c574610eaba5bed-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3812f9a59b634c2a9c574610eaba5bed-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3812f9a59b634c2a9c574610eaba5bed-Supplemental.pdf | We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ), suited to large-scale and highly hierarchical pattern recognition domains. An MDEQ directly solves for and backpropagates through the equilibrium points of multiple feature resolutions simultaneously, using implicit differentiation to avoid storing intermediate states (and thus requiring only O(1) memory consumption). These simultaneously-learned multi-resolution features allow us to train a single model on a diverse set of tasks and loss functions, such as using a single MDEQ to perform both image classification and semantic segmentation. We illustrate the effectiveness of this approach on two large-scale vision tasks: ImageNet classification and semantic segmentation on high-resolution images from the Cityscapes dataset. In both settings, MDEQs are able to match or exceed the performance of recent competitive computer vision models: the first time such performance and scale have been achieved by an implicit deep learning approach. The code and pre-trained models are at https://github.com/locuslab/mdeq. |
Sparse Graphical Memory for Robust Planning | https://papers.nips.cc/paper_files/paper/2020/hash/385822e359afa26d52b5b286226f2cea-Abstract.html | Scott Emmons, Ajay Jain, Misha Laskin, Thanard Kurutach, Pieter Abbeel, Deepak Pathak | https://papers.nips.cc/paper_files/paper/2020/hash/385822e359afa26d52b5b286226f2cea-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/385822e359afa26d52b5b286226f2cea-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10165-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/385822e359afa26d52b5b286226f2cea-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/385822e359afa26d52b5b286226f2cea-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/385822e359afa26d52b5b286226f2cea-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/385822e359afa26d52b5b286226f2cea-Supplemental.zip | To operate effectively in the real world, agents should be able to act from high-dimensional raw sensory input such as images and achieve diverse goals across long time-horizons. Current deep reinforcement and imitation learning methods can learn directly from high-dimensional inputs but do not scale well to long-horizon tasks. In contrast, classical graphical methods like A* search are able to solve long-horizon tasks, but assume that the state space is abstracted away from raw sensory input. Recent works have attempted to combine the strengths of deep learning and classical planning; however, dominant methods in this domain are still quite brittle and scale poorly with the size of the environment. We introduce Sparse Graphical Memory (SGM), a new data structure that stores states and feasible transitions in a sparse memory. SGM aggregates states according to a novel two-way consistency objective, adapting classic state aggregation criteria to goal-conditioned RL: two states are redundant when they are interchangeable both as goals and as starting states. Theoretically, we prove that merging nodes according to two-way consistency leads to an increase in shortest path lengths that scales only linearly with the merging threshold. Experimentally, we show that SGM significantly outperforms current state of the art methods on long horizon, sparse-reward visual navigation tasks. Project video and code are available at https://sites.google.com/view/sparse-graphical-memory. |
Second Order PAC-Bayesian Bounds for the Weighted Majority Vote | https://papers.nips.cc/paper_files/paper/2020/hash/386854131f58a556343e056f03626e00-Abstract.html | Andres Masegosa, Stephan Lorenzen, Christian Igel, Yevgeny Seldin | https://papers.nips.cc/paper_files/paper/2020/hash/386854131f58a556343e056f03626e00-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/386854131f58a556343e056f03626e00-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10166-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/386854131f58a556343e056f03626e00-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/386854131f58a556343e056f03626e00-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/386854131f58a556343e056f03626e00-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/386854131f58a556343e056f03626e00-Supplemental.pdf | We present a novel analysis of the expected risk of weighted majority vote in multiclass classification. The analysis takes correlation of predictions by ensemble members into account and provides a bound that is amenable to efficient minimization, which yields improved weighting for the majority vote. We also provide a specialized version of our bound for binary classification, which allows to exploit additional unlabeled data for tighter risk estimation. In experiments, we apply the bound to improve weighting of trees in random forests and show that, in contrast to the commonly used first order bound, minimization of the new bound typically does not lead to degradation of the test error of the ensemble. |
Dirichlet Graph Variational Autoencoder | https://papers.nips.cc/paper_files/paper/2020/hash/38a77aa456fc813af07bb428f2363c8d-Abstract.html | Jia Li, Jianwei Yu, Jiajin Li, Honglei Zhang, Kangfei Zhao, Yu Rong, Hong Cheng, Junzhou Huang | https://papers.nips.cc/paper_files/paper/2020/hash/38a77aa456fc813af07bb428f2363c8d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/38a77aa456fc813af07bb428f2363c8d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10167-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/38a77aa456fc813af07bb428f2363c8d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/38a77aa456fc813af07bb428f2363c8d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/38a77aa456fc813af07bb428f2363c8d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/38a77aa456fc813af07bb428f2363c8d-Supplemental.pdf | Graph Neural Networks (GNN) and Variational Autoencoders (VAEs) have been widely used in modeling and generating graphs with latent factors. However there is no clear explanation of what these latent factors are and why they perform well. In this work, we present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors. Our study connects VAEs based graph generation and balanced graph cut, and provides a new way to understand and improve the internal mechanism of VAEs based graph generation. Specifically, we first interpret the reconstruction term of DGVAE as balanced graph cut in a principled way. Furthermore, motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships. Heatts utilizes the Taylor series for fast computation of Heat kernels and has better low pass characteristics than Graph Convolutional Networks (GCN). Through experiments on graph generation and graph clustering, we demonstrate the effectiveness of our proposed framework. |
Modeling Task Effects on Meaning Representation in the Brain via Zero-Shot MEG Prediction | https://papers.nips.cc/paper_files/paper/2020/hash/38a8e18d75e95ca619af8df0da1417f2-Abstract.html | Mariya Toneva, Otilia Stretcu, Barnabas Poczos, Leila Wehbe, Tom M. Mitchell | https://papers.nips.cc/paper_files/paper/2020/hash/38a8e18d75e95ca619af8df0da1417f2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/38a8e18d75e95ca619af8df0da1417f2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10168-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/38a8e18d75e95ca619af8df0da1417f2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/38a8e18d75e95ca619af8df0da1417f2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/38a8e18d75e95ca619af8df0da1417f2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/38a8e18d75e95ca619af8df0da1417f2-Supplemental.pdf | How meaning is represented in the brain is still one of the big open questions in neuroscience. Does a word (e.g., bird) always have the same representation, or does the task under which the word is processed alter its representation (answering "can you eat it?" versus "can it fly?")? The brain activity of subjects who read the same word while performing different semantic tasks has been shown to differ across tasks. However, it is still not understood how the task itself contributes to this difference. In the current work, we study Magnetoencephalography (MEG) brain recordings of participants tasked with answering questions about concrete nouns. We investigate the effect of the task (i.e. the question being asked) on the processing of the concrete noun by predicting the millisecond-resolution MEG recordings as a function of both the semantics of the noun and the task. Using this approach, we test several hypotheses about the task-stimulus interactions by comparing the zero-shot predictions made by these hypotheses for novel tasks and nouns not seen during training. We find that incorporating the task semantics significantly improves the prediction of MEG recordings, across participants. The improvement occurs 475-550ms after the participants first see the word, which corresponds to what is considered to be the ending time of semantic processing for a word. These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli. |
Counterfactual Vision-and-Language Navigation: Unravelling the Unseen | https://papers.nips.cc/paper_files/paper/2020/hash/39016cfe079db1bfb359ca72fcba3fd8-Abstract.html | Amin Parvaneh, Ehsan Abbasnejad, Damien Teney, Javen Qinfeng Shi, Anton van den Hengel | https://papers.nips.cc/paper_files/paper/2020/hash/39016cfe079db1bfb359ca72fcba3fd8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/39016cfe079db1bfb359ca72fcba3fd8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10169-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/39016cfe079db1bfb359ca72fcba3fd8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/39016cfe079db1bfb359ca72fcba3fd8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/39016cfe079db1bfb359ca72fcba3fd8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/39016cfe079db1bfb359ca72fcba3fd8-Supplemental.pdf | The task of vision-and-language navigation (VLN) requires an agent to follow text instructions to find its way through simulated household environments. A prominent challenge is to train an agent capable of generalising to new environments at test time, rather than one that simply memorises trajectories and visual details observed during training. We propose a new learning strategy that learns both from observations and generated counterfactual environments. We describe an effective algorithm to generate counterfactual observations on the fly for VLN, as linear combinations of existing environments. Simultaneously, we encourage the agent's actions to remain stable between original and counterfactual environments through our novel training objective-effectively removing the spurious features that otherwise bias the agent. Our experiments show that this technique provides significant improvements in generalisation on benchmarks for Room-to-Room navigation and Embodied Question Answering. |
Robust Quantization: One Model to Rule Them All | https://papers.nips.cc/paper_files/paper/2020/hash/3948ead63a9f2944218de038d8934305-Abstract.html | moran shkolnik, Brian Chmiel, Ron Banner, Gil Shomron, Yury Nahshan, Alex Bronstein, Uri Weiser | https://papers.nips.cc/paper_files/paper/2020/hash/3948ead63a9f2944218de038d8934305-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3948ead63a9f2944218de038d8934305-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10170-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3948ead63a9f2944218de038d8934305-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3948ead63a9f2944218de038d8934305-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3948ead63a9f2944218de038d8934305-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3948ead63a9f2944218de038d8934305-Supplemental.zip | Neural network quantization methods often involve simulating the quantization process during training, making the trained model highly dependent on the target bit-width and precise way quantization is performed. Robust quantization offers an alternative approach with improved tolerance to different classes of data-types and quantization policies. It opens up new exciting applications where the quantization process is not static and can vary to meet different circumstances and implementations. To address this issue, we propose a method that provides intrinsic robustness to the model against a broad range of quantization processes. Our method is motivated by theoretical arguments and enables us to store a single generic model capable of operating at various bit-widths and quantization policies. We validate our method's effectiveness on different ImageNet Models. A reference implementation accompanies the paper. |
Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming | https://papers.nips.cc/paper_files/paper/2020/hash/397d6b4c83c91021fe928a8c4220386b-Abstract.html | Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy R. Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy S. Liang, Pushmeet Kohli | https://papers.nips.cc/paper_files/paper/2020/hash/397d6b4c83c91021fe928a8c4220386b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/397d6b4c83c91021fe928a8c4220386b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10171-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/397d6b4c83c91021fe928a8c4220386b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/397d6b4c83c91021fe928a8c4220386b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/397d6b4c83c91021fe928a8c4220386b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/397d6b4c83c91021fe928a8c4220386b-Supplemental.pdf | Convex relaxations have emerged as a promising approach for verifying properties of neural networks, but widely used Linear Programming (LP) relaxations only provide meaningful certificates when networks are specifically trained to facilitate verification. This precludes many important applications which involve \emph{verification-agnostic} networks that are not trained specifically to promote verifiability. On the other hand, semidefinite programming (SDP) relaxations have shown success on verification-agnostic networks, such as adversarially trained image classifiers without additional regularization, but do not currently scale beyond small networks due to poor time and space asymptotics. In this work, we propose a first-order dual SDP algorithm that provides (1) any-time bounds (2) requires memory only linear in the total number of network activations and (3) has per-iteration complexity that scales linearly with the complexity of a forward/backward pass through the network. By exploiting iterative eigenvector methods, we express all solver operations in terms of forward and backward passes through the network, enabling efficient use of hardware optimized for deep learning. This allows us to dramatically improve the magnitude of $\ell_\infty$ perturbations for which we can verify robustness of verification-agnostic networks ($1\% \to 88\%$ on MNIST, $6\%\to 40\%$ on CIFAR-10). We also demonstrate tight verification for a quadratic stability specification for the decoder of a variational autoencoder. |
Federated Accelerated Stochastic Gradient Descent | https://papers.nips.cc/paper_files/paper/2020/hash/39d0a8908fbe6c18039ea8227f827023-Abstract.html | Honglin Yuan, Tengyu Ma | https://papers.nips.cc/paper_files/paper/2020/hash/39d0a8908fbe6c18039ea8227f827023-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/39d0a8908fbe6c18039ea8227f827023-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10172-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/39d0a8908fbe6c18039ea8227f827023-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/39d0a8908fbe6c18039ea8227f827023-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/39d0a8908fbe6c18039ea8227f827023-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/39d0a8908fbe6c18039ea8227f827023-Supplemental.pdf | We propose Federated Accelerated Stochastic Gradient Descent (FedAc), a principled acceleration of Federated Averaging (FedAvg, also known as Local SGD) for distributed optimization. FedAc is the first provable acceleration of FedAvg that improves convergence speed and communication efficiency on various types of convex functions. For example, for strongly convex and smooth functions, when using M workers, the previous state-of-the-art FedAvg analysis can achieve a linear speedup in M if given M rounds of synchronization, whereas FedAc only requires M^⅓ rounds. Moreover, we prove stronger guarantees for FedAc when the objectives are third-order smooth. Our technique is based on a potential-based perturbed iterate analysis, a novel stability analysis of generalized accelerated SGD, and a strategic tradeoff between acceleration and stability. |
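
For readers unfamiliar with the baseline being accelerated, the sketch below shows the plain FedAvg / Local SGD loop on a synthetic least-squares problem: each of M workers runs a few local SGD steps and the server averages the local iterates at each synchronization round. It is not FedAc itself (no acceleration or stability mechanism), and the data, step size, and round counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, local_steps, rounds, lr = 8, 5, 20, 30, 0.1
w_true = rng.standard_normal(d)

def make_worker():
    X = rng.standard_normal((100, d))
    return X, X @ w_true + 0.1 * rng.standard_normal(100)

workers = [make_worker() for _ in range(M)]

w = np.zeros(d)
for r in range(rounds):
    local_ws = []
    for X, y in workers:                          # runs in parallel in a real deployment
        w_m = w.copy()
        for _ in range(local_steps):              # local SGD on minibatches of size 10
            idx = rng.integers(0, len(y), size=10)
            w_m -= lr * X[idx].T @ (X[idx] @ w_m - y[idx]) / len(idx)
        local_ws.append(w_m)
    w = np.mean(local_ws, axis=0)                 # synchronization: average local iterates
print(np.linalg.norm(w - w_true))
```
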
Robust Density Estimation under Besov IPM Losses | https://papers.nips.cc/paper_files/paper/2020/hash/39d4b545fb02556829aab1db805021c3-Abstract.html | Ananya Uppal, Shashank Singh, Barnabas Poczos | https://papers.nips.cc/paper_files/paper/2020/hash/39d4b545fb02556829aab1db805021c3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/39d4b545fb02556829aab1db805021c3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10173-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/39d4b545fb02556829aab1db805021c3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/39d4b545fb02556829aab1db805021c3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/39d4b545fb02556829aab1db805021c3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/39d4b545fb02556829aab1db805021c3-Supplemental.pdf | We study minimax convergence rates of nonparametric density estimation under the Huber contamination model, in which a ``contaminated'' proportion of the data comes from an unknown outlier distribution. We provide the first results for this problem under a large family of losses, called Besov integral probability metrics (IPMs), that include L^p, Wasserstein, Kolmogorov-Smirnov, Cramer-von Mises, and other commonly used metrics. Under a range of smoothness assumptions on the population and outlier distributions, we show that a re-scaled thresholding wavelet estimator converges at the minimax optimal rate under a wide variety of losses and also exhibits optimal dependence on the contamination proportion. We also provide a purely data-dependent extension of the estimator that adapts to both an unknown contamination proportion and the unknown smoothness of the true density. Finally, based on connections shown recently between density estimation under IPM losses and generative adversarial networks (GANs), we show that certain GAN architectures are robustly minimax optimal. |
An analytic theory of shallow networks dynamics for hinge loss classification | https://papers.nips.cc/paper_files/paper/2020/hash/3a01fc0853ebeba94fde4d1cc6fb842a-Abstract.html | Franco Pellegrini, Giulio Biroli | https://papers.nips.cc/paper_files/paper/2020/hash/3a01fc0853ebeba94fde4d1cc6fb842a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a01fc0853ebeba94fde4d1cc6fb842a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10174-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a01fc0853ebeba94fde4d1cc6fb842a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a01fc0853ebeba94fde4d1cc6fb842a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a01fc0853ebeba94fde4d1cc6fb842a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a01fc0853ebeba94fde4d1cc6fb842a-Supplemental.pdf | Neural networks have been shown to perform incredibly well in classification tasks over structured high-dimensional datasets. However, the learning dynamics of such networks is still poorly understood. In this paper we study in detail the training dynamics of a simple type of neural network: a single hidden layer trained to perform a classification task. We show that in a suitable mean-field limit this case maps to a single-node learning problem with a time-dependent dataset determined self-consistently from the average nodes population. We specialize our theory to the prototypical case of a linearly separable dataset and a linear hinge loss, for which the dynamics can be explicitly solved in the infinite dataset limit. This allows us to address in a simple setting several phenomena appearing in modern networks such as slowing down of training dynamics, crossover between feature and lazy learning, and overfitting. Finally, we assess the limitations of mean-field theory by studying the case of large but finite number of nodes and of training samples. |
Fixed-Support Wasserstein Barycenters: Computational Hardness and Fast Algorithm | https://papers.nips.cc/paper_files/paper/2020/hash/3a029f04d76d32e79367c4b3255dda4d-Abstract.html | Tianyi Lin, Nhat Ho, Xi Chen, Marco Cuturi, Michael Jordan | https://papers.nips.cc/paper_files/paper/2020/hash/3a029f04d76d32e79367c4b3255dda4d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a029f04d76d32e79367c4b3255dda4d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10175-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a029f04d76d32e79367c4b3255dda4d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a029f04d76d32e79367c4b3255dda4d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a029f04d76d32e79367c4b3255dda4d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a029f04d76d32e79367c4b3255dda4d-Supplemental.pdf | We study the fixed-support Wasserstein barycenter problem (FS-WBP), which consists in computing the Wasserstein barycenter of $m$ discrete probability measures supported on a finite metric space of size $n$. We show first that the constraint matrix arising from the standard linear programming (LP) representation of the FS-WBP is \textit{not totally unimodular} when $m \geq 3$ and $n \geq 3$. This result resolves an open question pertaining to the relationship between the FS-WBP and the minimum-cost flow (MCF) problem since it proves that the FS-WBP in the standard LP form is not an MCF problem when $m \geq 3$ and $n \geq 3$. We also develop a provably fast \textit{deterministic} variant of the celebrated iterative Bregman projection (IBP) algorithm, named \textsc{FastIBP}, with a complexity bound of $\tilde{O}(mn^{7/3}\varepsilon^{-4/3})$, where $\varepsilon \in (0, 1)$ is the desired tolerance. This complexity bound is better than the best known complexity bound of $\tilde{O}(mn^2\varepsilon^{-2})$ for the IBP algorithm in terms of $\varepsilon$, and that of $\tilde{O}(mn^{5/2}\varepsilon^{-1})$ from accelerated alternating minimization algorithm or accelerated primal-dual adaptive gradient algorithm in terms of $n$. Finally, we conduct extensive experiments with both synthetic data and real images and demonstrate the favorable performance of the \textsc{FastIBP} algorithm in practice. |
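
Since the paper proposes a faster deterministic variant of iterative Bregman projection (IBP), a minimal sketch of plain entropic IBP for fixed-support barycenters may help fix ideas; this is the classical algorithm rather than FastIBP, and the grid, ground cost, regularization, and iteration count below are toy assumptions.

```python
import numpy as np

def ibp_barycenter(b, C, weights, reg=0.05, n_iter=500):
    """Iterative Bregman projections for the entropic fixed-support Wasserstein barycenter.
    b: (n, m) array whose columns are the input histograms; C: (n, n) ground cost."""
    n, m = b.shape
    K = np.exp(-C / reg)
    u = np.ones((n, m))
    for _ in range(n_iter):
        v = b / (K.T @ u)                           # enforce the fixed marginals b_k
        Kv = K @ v
        a = np.exp((np.log(u * Kv) * weights).sum(axis=1))   # weighted geometric mean
        u = a[:, None] / Kv                         # enforce the shared barycenter marginal
    return a

# Toy example: two histograms on a 1-D grid with squared-distance cost (all made up).
x = np.linspace(0, 1, 60)
C = (x[:, None] - x[None, :]) ** 2
g = lambda mu: np.exp(-(x - mu) ** 2 / 0.005)
b = np.stack([g(0.25) / g(0.25).sum(), g(0.75) / g(0.75).sum()], axis=1)
bary = ibp_barycenter(b, C, weights=np.array([0.5, 0.5]))
print(x[np.argmax(bary)])   # mass concentrates near 0.5 for equal weights
```
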
Learning to Orient Surfaces by Self-supervised Spherical CNNs | https://papers.nips.cc/paper_files/paper/2020/hash/3a0772443a0739141292a5429b952fe6-Abstract.html | Riccardo Spezialetti, Federico Stella, Marlon Marcon, Luciano Silva, Samuele Salti, Luigi Di Stefano | https://papers.nips.cc/paper_files/paper/2020/hash/3a0772443a0739141292a5429b952fe6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a0772443a0739141292a5429b952fe6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10176-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a0772443a0739141292a5429b952fe6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a0772443a0739141292a5429b952fe6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a0772443a0739141292a5429b952fe6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a0772443a0739141292a5429b952fe6-Supplemental.pdf | Defining and reliably finding a canonical orientation for 3D surfaces is key to many Computer Vision and Robotics applications. This task is commonly addressed by handcrafted algorithms exploiting geometric cues deemed as distinctive and robust by the designer. Yet, one might conjecture that humans learn the notion of the inherent orientation of 3D objects from experience and that machines may do so alike. In this work, we show the feasibility of learning a robust canonical orientation for surfaces represented as point clouds. Based on the observation that the quintessential property of a canonical orientation is equivariance to 3D rotations, we propose to employ Spherical CNNs, a recently introduced machinery that can learn equivariant representations defined on the Special Orthogonal group SO(3). Specifically, spherical correlations compute feature maps whose elements define 3D rotations. Our method learns such feature maps from raw data by a self-supervised training procedure and robustly selects a rotation to transform the input point cloud into a learned canonical orientation. Thereby, we realize the first end-to-end learning approach to define and extract the canonical orientation of 3D shapes, which we aptly dub Compass. Experiments on several public datasets prove its effectiveness at orienting local surface patches as well as whole objects. |
Adam with Bandit Sampling for Deep Learning | https://papers.nips.cc/paper_files/paper/2020/hash/3a077e8acfc4a2b463c47f2125fdfac5-Abstract.html | Rui Liu, Tianyi Wu, Barzan Mozafari | https://papers.nips.cc/paper_files/paper/2020/hash/3a077e8acfc4a2b463c47f2125fdfac5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a077e8acfc4a2b463c47f2125fdfac5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10177-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a077e8acfc4a2b463c47f2125fdfac5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a077e8acfc4a2b463c47f2125fdfac5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a077e8acfc4a2b463c47f2125fdfac5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a077e8acfc4a2b463c47f2125fdfac5-Supplemental.pdf | Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs, that allows us to also adapt to different training examples based on their importance in the model's convergence. To achieve this, we maintain a distribution over all examples, selecting a mini-batch in each iteration by sampling according to this distribution, which we update using a multi-armed bandit algorithm. This ensures that examples that are more beneficial to the model training are sampled with higher probabilities. We theoretically show that Adambs improves the convergence rate of Adam---$O(\sqrt{\frac{\log n}{T} })$ instead of $O(\sqrt{\frac{n}{T}})$ in some cases. Experiments on various models and datasets demonstrate Adambs's fast convergence in practice. |
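
A rough sketch of the idea above: keep a sampling distribution over training examples, draw each minibatch from it, importance-weight the loss so the Adam step still estimates the uniform empirical risk, and upweight examples with large loss. The multiplicative weight update below is a simplification of the paper's bandit algorithm, and the model, data, and rates are made up.

```python
import numpy as np
import torch

torch.manual_seed(0); np.random.seed(0)
X = torch.randn(500, 10)
y = (X[:, 0] + 0.5 * torch.randn(500) > 0).float()
model = torch.nn.Linear(10, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = torch.nn.BCEWithLogitsLoss(reduction="none")

n, batch, eta = len(y), 32, 0.01
weights = np.ones(n)                                  # sampling weights over examples

for step in range(300):
    p = weights / weights.sum()
    idx = np.random.choice(n, size=batch, p=p)
    per_ex = loss_fn(model(X[idx]).squeeze(1), y[idx])
    # Importance weights keep the gradient an unbiased estimate of the uniform risk.
    iw = 1.0 / (n * torch.as_tensor(p[idx], dtype=torch.float32))
    loss = (iw * per_ex).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    # Simplified multiplicative update: harder examples get sampled more often.
    weights[idx] *= np.exp(eta * per_ex.detach().numpy())

print("final minibatch loss:", loss.item())
```
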
Parabolic Approximation Line Search for DNNs | https://papers.nips.cc/paper_files/paper/2020/hash/3a30be93eb45566a90f4e95ee72a089a-Abstract.html | Maximus Mutschler, Andreas Zell | https://papers.nips.cc/paper_files/paper/2020/hash/3a30be93eb45566a90f4e95ee72a089a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a30be93eb45566a90f4e95ee72a089a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10178-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a30be93eb45566a90f4e95ee72a089a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a30be93eb45566a90f4e95ee72a089a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a30be93eb45566a90f4e95ee72a089a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a30be93eb45566a90f4e95ee72a089a-Supplemental.zip | A major challenge in current optimization research for deep learning is to automatically find optimal step sizes for each update step. The optimal step size is closely related to the shape of the loss in the update step direction. However, this shape has not yet been examined in detail. This work shows empirically that the sample loss over lines in negative gradient direction is mostly convex and well suited for one-dimensional parabolic approximations. Exploiting this parabolic property we introduce a simple and robust line search approach, which performs loss-shape dependent update steps. Our approach combines well-known methods such as parabolic approximation, line search and conjugate gradient, to perform efficiently. It successfully competes with common and state-of-the-art optimization methods on a large variety of experiments without the need of hand-designed step size schedules. Thus, it is of interest for objectives where step-size schedules are unknown or do not perform well. Our excessive evaluation includes multiple comprehensive hyperparameter grid searches on several datasets and architectures. We provide proof of convergence for an adapted scenario. Finally, we give a general investigation of exact line searches in the context of sample losses and exact losses, including their relation to our line search approach. |
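
The core mechanism described above fits a one-dimensional parabola to the loss along the negative gradient direction, using the current loss, the directional derivative, and one extra loss measurement, and then steps to the parabola's vertex. Below is a minimal sketch of that update on a least-squares toy problem; the safeguards, measuring-step adaptation, and conjugate-gradient direction of the full method are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 20)); w_true = rng.standard_normal(20)
b = A @ w_true + 0.1 * rng.standard_normal(200)
loss = lambda w: 0.5 * np.mean((A @ w - b) ** 2)
grad = lambda w: A.T @ (A @ w - b) / len(b)

w, mu = np.zeros(20), 0.1                        # mu: fixed measuring step along the line
for it in range(50):
    g = grad(w)
    d = -g / (np.linalg.norm(g) + 1e-12)         # unit descent direction
    l0, dphi0 = loss(w), g @ d                   # value and directional derivative at t = 0
    l1 = loss(w + mu * d)                        # one extra loss measurement at t = mu
    c2 = (l1 - l0 - dphi0 * mu) / mu ** 2        # parabola: l0 + dphi0*t + c2*t^2
    t_star = -dphi0 / (2 * c2) if c2 > 1e-12 else mu   # jump to the vertex if it is a minimum
    w = w + t_star * d
print(loss(w))
```
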
Agnostic Learning of a Single Neuron with Gradient Descent | https://papers.nips.cc/paper_files/paper/2020/hash/3a37abdeefe1dab1b30f7c5c7e581b93-Abstract.html | Spencer Frei, Yuan Cao, Quanquan Gu | https://papers.nips.cc/paper_files/paper/2020/hash/3a37abdeefe1dab1b30f7c5c7e581b93-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a37abdeefe1dab1b30f7c5c7e581b93-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10179-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a37abdeefe1dab1b30f7c5c7e581b93-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a37abdeefe1dab1b30f7c5c7e581b93-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a37abdeefe1dab1b30f7c5c7e581b93-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a37abdeefe1dab1b30f7c5c7e581b93-Supplemental.pdf | We consider the problem of learning the best-fitting single neuron as measured by the expected square loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(\sigma(w^\top x)-y)^2]$ over some unknown joint distribution $\mathcal{D}$ by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples $S\sim \mathcal{D}^n$. The activation function $\sigma$ is an arbitrary Lipschitz and non-decreasing function, making the optimization problem nonconvex and nonsmooth in general, and covers typical neural network activation functions and inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption on the relationship between the labels $y$ and the input $x$ is made, if the optimal population risk is $\mathsf{OPT}$, we show that gradient descent achieves population risk $O(\mathsf{OPT})+\epsilon$ in polynomial time and sample complexity when $\sigma$ is strictly increasing. For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2})+\epsilon$. When labels take the form $y = \sigma(v^\top x) + \xi$ for zero-mean sub-Gaussian noise $\xi$, we show that the population risk guarantees for gradient descent improve to $\mathsf{OPT} + \epsilon$. Our sample complexity and runtime guarantees are (almost) dimension independent, and when $\sigma$ is strictly increasing, require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption for the marginal distribution of the input. |
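
The learning procedure analyzed above is just gradient descent on the empirical square loss of a single neuron; a toy instantiation with a ReLU activation and synthetic noisy labels looks as follows (dimensions, step size, and noise level are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr, steps = 20, 2000, 0.05, 2000
v = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = np.maximum(X @ v, 0.0) + 0.1 * rng.standard_normal(n)   # sigma = ReLU, noisy labels

relu = lambda z: np.maximum(z, 0.0)
w = rng.standard_normal(d) / np.sqrt(d)                     # small random initialization
for _ in range(steps):
    z = X @ w
    # (Sub)gradient of the empirical square loss; sigma'(z) = 1{z > 0} for ReLU.
    grad = X.T @ ((relu(z) - y) * (z > 0)) / n
    w -= lr * grad
print(np.mean((relu(X @ w) - y) ** 2))   # empirical risk approaches the noise level
```
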
Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits | https://papers.nips.cc/paper_files/paper/2020/hash/3a4496776767aaa99f9804d0905fe584-Abstract.html | Pierre Perrault, Etienne Boursier, Michal Valko, Vianney Perchet | https://papers.nips.cc/paper_files/paper/2020/hash/3a4496776767aaa99f9804d0905fe584-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a4496776767aaa99f9804d0905fe584-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10180-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a4496776767aaa99f9804d0905fe584-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a4496776767aaa99f9804d0905fe584-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a4496776767aaa99f9804d0905fe584-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a4496776767aaa99f9804d0905fe584-Supplemental.pdf | We investigate stochastic combinatorial multi-armed bandit with semi-bandit feedback (CMAB). In CMAB, the question of the existence of an efficient policy with an optimal asymptotic regret (up to a factor poly-logarithmic with the action size) is still open for many families of distributions, including mutually independent outcomes, and more generally the multivariate \emph{sub-Gaussian} family. We propose to answer the above question for these two families by analyzing variants of the Combinatorial Thompson Sampling policy (CTS). For mutually independent outcomes in $[0,1]$, we propose a tight analysis of CTS using Beta priors. We then look at the more general setting of multivariate sub-Gaussian outcomes and propose a tight analysis of CTS using Gaussian priors. This last result gives us an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB), which, although optimal, is not computationally efficient. |
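As a toy illustration of combinatorial Thompson sampling with Beta priors (not the paper's implementation or analysis), the sketch below uses independent Bernoulli outcomes, semi-bandit feedback, and a trivial "top-m" oracle; the number of arms, the means, and the horizon are made up for the example.

```python
import numpy as np

# Toy sketch of Combinatorial Thompson Sampling (CTS) with Beta priors:
# independent Bernoulli base arms, semi-bandit feedback, and an oracle that
# simply picks the m arms with the largest sampled means. Illustrative only.
rng = np.random.default_rng(0)
K, m, T = 10, 3, 5000
mu = rng.uniform(0.1, 0.9, size=K)          # hypothetical true means
alpha = np.ones(K)                           # Beta prior parameters
beta = np.ones(K)

total_reward = 0.0
for t in range(T):
    theta = rng.beta(alpha, beta)            # one posterior sample per base arm
    action = np.argsort(theta)[-m:]          # oracle: best super-arm under the sample
    outcomes = rng.binomial(1, mu[action])   # semi-bandit feedback for played arms
    alpha[action] += outcomes                # conjugate Beta-Bernoulli update
    beta[action] += 1 - outcomes
    total_reward += outcomes.sum()

print("average per-round reward:", total_reward / T)
print("optimal per-round reward:", np.sort(mu)[-m:].sum())
```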
Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry | https://papers.nips.cc/paper_files/paper/2020/hash/3a61ed715ee66c48bacf237fa7bb5289-Abstract.html | Yossi Arjevani, Michael Field | https://papers.nips.cc/paper_files/paper/2020/hash/3a61ed715ee66c48bacf237fa7bb5289-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a61ed715ee66c48bacf237fa7bb5289-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10181-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a61ed715ee66c48bacf237fa7bb5289-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a61ed715ee66c48bacf237fa7bb5289-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a61ed715ee66c48bacf237fa7bb5289-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a61ed715ee66c48bacf237fa7bb5289-Supplemental.pdf | We consider the optimization problem associated with fitting two-layer ReLU networks with respect to the squared loss, where labels are generated by a target network. We leverage the rich symmetry structure to analytically characterize the Hessian at various families of spurious minima in the natural regime where the number of inputs $d$ and the number of hidden neurons $k$ are finite. In particular, we prove that for $d\ge k$ standard Gaussian inputs: (a) of the $dk$ eigenvalues of the Hessian, $dk - O(d)$ concentrate near zero, (b) $\Omega(d)$ of the eigenvalues grow linearly with $k$. Although this phenomenon of an extremely skewed spectrum has been observed many times before, to our knowledge, this is the first time it has been established rigorously. Our analytic approach uses techniques, new to the field, from symmetry breaking and representation theory, and carries important implications for our ability to argue about statistical generalization through local curvature. |
Generative causal explanations of black-box classifiers | https://papers.nips.cc/paper_files/paper/2020/hash/3a93a609b97ec0ab0ff5539eb79ef33a-Abstract.html | Matthew O'Shaughnessy, Gregory Canal, Marissa Connor, Christopher Rozell, Mark Davenport | https://papers.nips.cc/paper_files/paper/2020/hash/3a93a609b97ec0ab0ff5539eb79ef33a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3a93a609b97ec0ab0ff5539eb79ef33a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10182-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3a93a609b97ec0ab0ff5539eb79ef33a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3a93a609b97ec0ab0ff5539eb79ef33a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3a93a609b97ec0ab0ff5539eb79ef33a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3a93a609b97ec0ab0ff5539eb79ef33a-Supplemental.pdf | We develop a method for generating causal post-hoc explanations of black-box classifiers based on a learned low-dimensional representation of the data. The explanation is causal in the sense that changing learned latent factors produces a change in the classifier output statistics. To construct these explanations, we design a learning framework that leverages a generative model and information-theoretic measures of causal influence. Our objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output. Our method learns both global and local explanations, is compatible with any classifier that admits class probabilities and a gradient, and does not require labeled attributes or knowledge of causal structure. Using carefully controlled test cases, we provide intuition that illuminates the function of our causal objective. We then demonstrate the practical utility of our method on image recognition tasks. |
Sub-sampling for Efficient Non-Parametric Bandit Exploration | https://papers.nips.cc/paper_files/paper/2020/hash/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-Abstract.html | Dorian Baudry, Emilie Kaufmann, Odalric-Ambrym Maillard | https://papers.nips.cc/paper_files/paper/2020/hash/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10183-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-Supplemental.pdf | In this paper, we propose the first multi-armed bandit algorithm based on re-sampling that achieves asymptotically optimal regret simultaneously for different families of arms (namely Bernoulli, Gaussian and Poisson distributions). Unlike Thompson Sampling, which requires specifying a different prior to be optimal in each case, our proposal RB-SDA does not need any distribution-dependent tuning. RB-SDA belongs to the family of Sub-sampling Duelling Algorithms (SDA) which combines the sub-sampling idea first used by the BESA and SSMC algorithms with different sub-sampling schemes. In particular, RB-SDA uses Random Block sampling. We perform an experimental study assessing the flexibility and robustness of this promising novel approach for exploration in bandit models. |
Learning under Model Misspecification: Applications to Variational and Ensemble methods | https://papers.nips.cc/paper_files/paper/2020/hash/3ac48664b7886cf4e4ab4aba7e6b6bc9-Abstract.html | Andres Masegosa | https://papers.nips.cc/paper_files/paper/2020/hash/3ac48664b7886cf4e4ab4aba7e6b6bc9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3ac48664b7886cf4e4ab4aba7e6b6bc9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10184-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3ac48664b7886cf4e4ab4aba7e6b6bc9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3ac48664b7886cf4e4ab4aba7e6b6bc9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3ac48664b7886cf4e4ab4aba7e6b6bc9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3ac48664b7886cf4e4ab4aba7e6b6bc9-Supplemental.pdf | Virtually any model we use in machine learning to make predictions does not perfectly represent reality. So, most of the learning happens under model misspecification. In this work, we present a novel analysis of the generalization performance of Bayesian model averaging under model misspecification and i.i.d. data using a new family of second-order PAC-Bayes bounds. This analysis shows, in simple and intuitive terms, that Bayesian model averaging provides suboptimal generalization performance when the model is misspecified. In consequence, we provide strong theoretical arguments showing that Bayesian methods are not optimal for learning predictive models, unless the model class is perfectly specified. Using novel second-order PAC-Bayes bounds, we derive a new family of Bayesian-like algorithms, which can be implemented as variational and ensemble methods. The output of these algorithms is a new posterior distribution, different from the Bayesian posterior, which induces a posterior predictive distribution with better generalization performance. Experiments with Bayesian neural networks illustrate these findings. |
Language Through a Prism: A Spectral Approach for Multiscale Language Representations | https://papers.nips.cc/paper_files/paper/2020/hash/3acb2a202ae4bea8840224e6fce16fd0-Abstract.html | Alex Tamkin, Dan Jurafsky, Noah Goodman | https://papers.nips.cc/paper_files/paper/2020/hash/3acb2a202ae4bea8840224e6fce16fd0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3acb2a202ae4bea8840224e6fce16fd0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10185-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3acb2a202ae4bea8840224e6fce16fd0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3acb2a202ae4bea8840224e6fce16fd0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3acb2a202ae4bea8840224e6fce16fd0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3acb2a202ae4bea8840224e6fce16fd0-Supplemental.pdf | Language exhibits structure at a wide range of scales, from subwords to words, sentences, paragraphs, and documents. We propose building models that isolate scale-specific information in deep representations, and develop methods for encouraging models during training to learn more about particular scales of interest. Our method for creating scale-specific neurons in deep NLP models constrains how the activation of a neuron can change across the tokens of an input by interpreting those activations as a digital signal and filtering out parts of its frequency spectrum. This technique enables us to extract scale-specific information from BERT representations: by filtering out different frequencies we can produce new representations that perform well on part of speech tagging (word-level), dialog speech acts classification (utterance-level), or topic classification (document-level), while performing poorly on the other tasks. We also present a prism layer for use during training, which constrains different neurons of a BERT model to different parts of the frequency spectrum. Our proposed BERT + Prism model is better able to predict masked tokens using long-range context, and produces individual multiscale representations that perform with comparable or improved performance across all three tasks. Our methods are general and readily applicable to other domains besides language, such as images, audio, and video. |
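The scale-separation idea described in the abstract above can be illustrated generically: treat a neuron's activations across tokens as a one-dimensional signal and keep only a band of its frequency spectrum. The sketch below is a plain FFT band filter under that interpretation, not the paper's prism layer; the shapes and cut-offs are arbitrary example values.

```python
import numpy as np

# Generic sketch of frequency filtering along the token axis: keep only the
# frequency bins in [low, high) for every hidden dimension. Illustrative only.
def band_filter(activations, low, high):
    """activations: (tokens, hidden). Returns the band-limited signal."""
    spectrum = np.fft.rfft(activations, axis=0)          # (n_freq, hidden)
    freqs = np.arange(spectrum.shape[0])                  # cycles per sequence
    mask = (freqs >= low) & (freqs < high)
    return np.fft.irfft(spectrum * mask[:, None], n=activations.shape[0], axis=0)

tokens, hidden = 128, 16
acts = np.random.default_rng(0).standard_normal((tokens, hidden))
word_scale = band_filter(acts, low=16, high=64)   # fast-varying ("word-level") part
doc_scale = band_filter(acts, low=0, high=2)      # slow-varying ("document-level") part
```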
DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles | https://papers.nips.cc/paper_files/paper/2020/hash/3ad7c2ebb96fcba7cda0cf54a2e802f5-Abstract.html | Huanrui Yang, Jingyang Zhang, Hongliang Dong, Nathan Inkawhich, Andrew Gardner, Andrew Touchet, Wesley Wilkes, Heath Berry, Hai Li | https://papers.nips.cc/paper_files/paper/2020/hash/3ad7c2ebb96fcba7cda0cf54a2e802f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3ad7c2ebb96fcba7cda0cf54a2e802f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10186-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3ad7c2ebb96fcba7cda0cf54a2e802f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3ad7c2ebb96fcba7cda0cf54a2e802f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3ad7c2ebb96fcba7cda0cf54a2e802f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3ad7c2ebb96fcba7cda0cf54a2e802f5-Supplemental.pdf | Recent research finds that CNN models for image classification demonstrate overlapping adversarial vulnerabilities: adversarial attacks can mislead CNN models with small perturbations, which can effectively transfer between different models trained on the same dataset. Adversarial training, as a general robustness improvement technique, eliminates the vulnerability in a single model by forcing it to learn robust features. The process is hard, often requires models with large capacity, and suffers from a significant loss in clean data accuracy. Alternatively, ensemble methods are proposed to induce sub-models with diverse outputs against a transfer adversarial example, making the ensemble robust against transfer attacks even if each sub-model is individually non-robust. Only a small clean accuracy drop is observed in the process. However, previous ensemble training methods are not efficacious in inducing such diversity and are thus ineffective at producing a robust ensemble. We propose DVERGE, which isolates the adversarial vulnerability in each sub-model by distilling non-robust features, and diversifies the adversarial vulnerability to induce diverse outputs against a transfer attack. The novel diversity metric and training procedure enable DVERGE to achieve higher robustness against transfer attacks compared to previous ensemble methods, and enable improved robustness as more sub-models are added to the ensemble. The code of this work is available at https://github.com/zjysteven/DVERGE. |
Towards practical differentially private causal graph discovery | https://papers.nips.cc/paper_files/paper/2020/hash/3b13b1eb44b05f57735764786fab9c2c-Abstract.html | Lun Wang, Qi Pang, Dawn Song | https://papers.nips.cc/paper_files/paper/2020/hash/3b13b1eb44b05f57735764786fab9c2c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3b13b1eb44b05f57735764786fab9c2c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10187-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3b13b1eb44b05f57735764786fab9c2c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3b13b1eb44b05f57735764786fab9c2c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3b13b1eb44b05f57735764786fab9c2c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3b13b1eb44b05f57735764786fab9c2c-Supplemental.zip | Causal graph discovery refers to the process of discovering causal relation graphs from purely observational data. Like other statistical data, a causal graph might leak sensitive information about participants in the dataset. In this paper, we present a differentially private causal graph discovery algorithm, Priv-PC, which improves both utility and running time compared to the state-of-the-art. The design of Priv-PC follows a novel paradigm called sieve-and-examine which uses a small amount of privacy budget to filter out “insignificant” queries, and leverages the remaining budget to obtain highly accurate answers for the “significant” queries. We also conducted the first sensitivity analysis for conditional independence tests including conditional Kendall’s τ and conditional Spearman’s ρ. We evaluated Priv-PC on 7 public datasets and compared with the state-of-the-art. The results show that Priv-PC achieves 10.61 to 293.87 times speedup and better utility. The implementation of Priv-PC, including the code used in our evaluation, is available at https://github.com/sunblaze-ucb/Priv-PC-Differentially-Private-Causal-Graph-Discovery. |
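To give a flavor of the sieve-and-examine pattern described above (this is not Priv-PC itself), the sketch below spends a small budget on a noisy threshold screen of each query and the remaining budget on more accurate answers for the queries that pass; the sensitivities, budget splits, and threshold are placeholder values.

```python
import numpy as np

# Flavor-only sketch of sieve-and-examine: coarse noisy screening with a small
# budget, accurate re-answering with the larger remaining budget. Not Priv-PC;
# all constants below are placeholders for illustration.
rng = np.random.default_rng(0)

def sieve_and_examine(stats, sensitivity=1.0, eps_sieve=0.05, eps_examine=0.5, thresh=0.2):
    answers = {}
    for i, s in enumerate(stats):
        noisy_screen = s + rng.laplace(0.0, sensitivity / eps_sieve)      # cheap check
        if noisy_screen >= thresh:                                        # "significant"
            answers[i] = s + rng.laplace(0.0, sensitivity / eps_examine)  # accurate answer
    return answers

stats = rng.uniform(0.0, 1.0, size=10)   # stand-ins for CI test statistics
print(sieve_and_examine(stats))
```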
Independent Policy Gradient Methods for Competitive Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/3b2acfe2e38102074656ed938abf4ac3-Abstract.html | Constantinos Daskalakis, Dylan J. Foster, Noah Golowich | https://papers.nips.cc/paper_files/paper/2020/hash/3b2acfe2e38102074656ed938abf4ac3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3b2acfe2e38102074656ed938abf4ac3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10188-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3b2acfe2e38102074656ed938abf4ac3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3b2acfe2e38102074656ed938abf4ac3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3b2acfe2e38102074656ed938abf4ac3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3b2acfe2e38102074656ed938abf4ac3-Supplemental.pdf | We obtain global, non-asymptotic convergence guarantees for independent learning algorithms in competitive reinforcement learning settings with two agents (i.e., zero-sum stochastic games). We consider an episodic setting where in each episode, each player independently selects a policy and observes only their own actions and rewards, along with the state. We show that if both players run policy gradient methods in tandem, their policies will converge to a min-max equilibrium of the game, as long as their learning rates follow a two-timescale rule (which is necessary). To the best of our knowledge, this constitutes the first finite-sample convergence result for independent learning in competitive RL, as prior work has largely focused on centralized/coordinated procedures for equilibrium computation. |
The Value Equivalence Principle for Model-Based Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/3bb585ea00014b0e3ebe4c6dd165a358-Abstract.html | Christopher Grimm, Andre Barreto, Satinder Singh, David Silver | https://papers.nips.cc/paper_files/paper/2020/hash/3bb585ea00014b0e3ebe4c6dd165a358-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3bb585ea00014b0e3ebe4c6dd165a358-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10189-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3bb585ea00014b0e3ebe4c6dd165a358-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3bb585ea00014b0e3ebe4c6dd165a358-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3bb585ea00014b0e3ebe4c6dd165a358-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3bb585ea00014b0e3ebe4c6dd165a358-Supplemental.pdf | Learning models of the environment from data is often viewed as an essential component to building intelligent reinforcement learning (RL) agents. The common practice is to separate the learning of the model from its use, by constructing a model of the environment’s dynamics that correctly predicts the observed state transitions. In this paper we argue that the limited representational resources of model-based RL agents are better used to build models that are directly useful for value-based planning. As our main contribution, we introduce the principle of value equivalence: two models are value equivalent with respect to a set of functions and policies if they yield the same Bellman updates. We propose a formulation of the model learning problem based on the value equivalence principle and analyze how the set of feasible solutions is impacted by the choice of policies and functions. Specifically, we show that, as we augment the set of policies and functions considered, the class of value equivalent models shrinks, until eventually collapsing to a single point corresponding to a model that perfectly describes the environment. In many problems, directly modelling state-to-state transitions may be both difficult and unnecessary. By leveraging the value-equivalence principle one may find simpler models without compromising performance, saving computation and memory. We illustrate the benefits of value-equivalent model learning with experiments comparing it against more traditional counterparts like maximum likelihood estimation. More generally, we argue that the principle of value equivalence underlies a number of recent empirical successes in RL, such as Value Iteration Networks, the Predictron, Value Prediction Networks, TreeQN, and MuZero, and provides a first theoretical underpinning of those results. |
Structured Convolutions for Efficient Neural Network Design | https://papers.nips.cc/paper_files/paper/2020/hash/3be0214185d6177a9aa6adea5a720b09-Abstract.html | Yash Bhalgat, Yizhe Zhang, Jamie Menjay Lin, Fatih Porikli | https://papers.nips.cc/paper_files/paper/2020/hash/3be0214185d6177a9aa6adea5a720b09-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3be0214185d6177a9aa6adea5a720b09-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10190-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3be0214185d6177a9aa6adea5a720b09-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3be0214185d6177a9aa6adea5a720b09-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3be0214185d6177a9aa6adea5a720b09-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3be0214185d6177a9aa6adea5a720b09-Supplemental.pdf | In this work, we tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks. We start our analysis by introducing a general definition of Composite Kernel structures that enable the execution of convolution operations in the form of efficient, scaled, sum-pooling components. As its special case, we propose Structured Convolutions and show that these allow decomposition of the convolution operation into a sum-pooling operation followed by a convolution with significantly lower complexity and fewer weights. We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers. Furthermore, we present a Structural Regularization loss that promotes neural network layers to leverage on this desired structure in a way that, after training, they can be decomposed with negligible performance loss. By applying our method to a wide range of CNN architectures, we demonstrate 'structured' versions of the ResNets that are up to 2x smaller and a new Structured-MobileNetV2 that is more efficient while staying within an accuracy loss of 1% on ImageNet and CIFAR-10 datasets. We also show similar structured versions of EfficientNet on ImageNet and HRNet architecture for semantic segmentation on the Cityscapes dataset. Our method performs equally well or superior in terms of the complexity reduction in comparison to the existing tensor decomposition and channel pruning methods. |
Latent World Models For Intrinsically Motivated Exploration | https://papers.nips.cc/paper_files/paper/2020/hash/3c09bb10e2189124fdd8f467cc8b55a7-Abstract.html | Aleksandr Ermolov, Nicu Sebe | https://papers.nips.cc/paper_files/paper/2020/hash/3c09bb10e2189124fdd8f467cc8b55a7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3c09bb10e2189124fdd8f467cc8b55a7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10191-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3c09bb10e2189124fdd8f467cc8b55a7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3c09bb10e2189124fdd8f467cc8b55a7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3c09bb10e2189124fdd8f467cc8b55a7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3c09bb10e2189124fdd8f467cc8b55a7-Supplemental.pdf | In this work we consider partially observable environments with sparse rewards. We present a self-supervised representation learning method for image-based observations, which arranges embeddings respecting temporal distance of observations. This representation is empirically robust to stochasticity and suitable for novelty detection from the error of a predictive forward model. We consider episodic and life-long uncertainties to guide the exploration. We propose to estimate the missing information about the environment with the world model, which operates in the learned latent space. As a motivation of the method, we analyse the exploration problem in a tabular Partially Observable Labyrinth. We demonstrate the method on image-based hard exploration environments from the Atari benchmark and report significant improvement with respect to prior work. The source code of the method and all the experiments is available at https://github.com/htdt/lwm. |
Estimating Rank-One Spikes from Heavy-Tailed Noise via Self-Avoiding Walks | https://papers.nips.cc/paper_files/paper/2020/hash/3c0de3fec9ab8a3df01109251f137119-Abstract.html | Jingqiu Ding, Samuel Hopkins, David Steurer | https://papers.nips.cc/paper_files/paper/2020/hash/3c0de3fec9ab8a3df01109251f137119-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3c0de3fec9ab8a3df01109251f137119-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10192-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3c0de3fec9ab8a3df01109251f137119-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3c0de3fec9ab8a3df01109251f137119-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3c0de3fec9ab8a3df01109251f137119-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3c0de3fec9ab8a3df01109251f137119-Supplemental.pdf | Our estimator can be evaluated in polynomial time by counting self-avoiding walks via a color coding technique. Moreover, we extend our estimator to spiked tensor models and establish analogous results. |
Policy Improvement via Imitation of Multiple Oracles | https://papers.nips.cc/paper_files/paper/2020/hash/3c56fe2f24038c4d22b9eb0aca78f590-Abstract.html | Ching-An Cheng, Andrey Kolobov, Alekh Agarwal | https://papers.nips.cc/paper_files/paper/2020/hash/3c56fe2f24038c4d22b9eb0aca78f590-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3c56fe2f24038c4d22b9eb0aca78f590-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10193-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3c56fe2f24038c4d22b9eb0aca78f590-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3c56fe2f24038c4d22b9eb0aca78f590-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3c56fe2f24038c4d22b9eb0aca78f590-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3c56fe2f24038c4d22b9eb0aca78f590-Supplemental.pdf | Despite its promise, reinforcement learning’s real-world adoption has been hampered by the need for costly exploration to learn a good policy. Imitation learning (IL) mitigates this shortcoming by using an oracle policy during training as a bootstrap to accelerate the learning process. However, in many practical situations, the learner has access to multiple suboptimal oracles, which may provide conflicting advice in a state. The existing IL literature provides a limited treatment of such scenarios. Whereas in the single-oracle case, the return of the oracle’s policy provides an obvious benchmark for the learner to compete against, neither such a benchmark nor principled ways of outperforming it are known for the multi-oracle setting. In this paper, we propose the state-wise maximum of the oracle policies’ values as a natural baseline to resolve conflicting advice from multiple oracles. Using a reduction of policy optimization to online learning, we introduce a novel IL algorithm MAMBA, which can provably learn a policy competitive with this benchmark. In particular, MAMBA optimizes policies by using a gradient estimator in the style of generalized advantage estimation (GAE). Our theoretical analysis shows that this design makes MAMBA robust and enables it to outperform the oracle policies by a larger margin than the IL state of the art, even in the single-oracle case. In an evaluation against standard policy gradient with GAE and AggreVaTe(D), we showcase MAMBA’s ability to leverage demonstrations both from a single and from multiple weak oracles, and significantly speed up policy optimization. |
Training Generative Adversarial Networks by Solving Ordinary Differential Equations | https://papers.nips.cc/paper_files/paper/2020/hash/3c8f9a173f749710d6377d3150cf90da-Abstract.html | Chongli Qin, Yan Wu, Jost Tobias Springenberg, Andy Brock, Jeff Donahue, Timothy Lillicrap, Pushmeet Kohli | https://papers.nips.cc/paper_files/paper/2020/hash/3c8f9a173f749710d6377d3150cf90da-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3c8f9a173f749710d6377d3150cf90da-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10194-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3c8f9a173f749710d6377d3150cf90da-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3c8f9a173f749710d6377d3150cf90da-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3c8f9a173f749710d6377d3150cf90da-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3c8f9a173f749710d6377d3150cf90da-Supplemental.pdf | The instability of Generative Adversarial Network (GAN) training has frequently been attributed to gradient descent. Consequently, recent methods have aimed to tailor the models and training procedures to stabilise the discrete updates. In contrast, we study the continuous-time dynamics induced by GAN training. Both theory and toy experiments suggest that these dynamics are in fact surprisingly stable. From this perspective, we hypothesise that instabilities in training GANs arise from the integration error in discretising the continuous dynamics. We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training - when combined with a regulariser that controls the integration error. Our approach represents a radical departure from previous methods which typically use adaptive optimisation and stabilisation techniques that constrain the functional space (e.g. Spectral Normalisation). Evaluation on CIFAR-10 and ImageNet shows that our method outperforms several strong baselines, demonstrating its efficacy. |
Learning of Discrete Graphical Models with Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/3cc697419ea18cc98d525999665cb94a-Abstract.html | Abhijith Jayakumar, Andrey Lokhov, Sidhant Misra, Marc Vuffray | https://papers.nips.cc/paper_files/paper/2020/hash/3cc697419ea18cc98d525999665cb94a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3cc697419ea18cc98d525999665cb94a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10195-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3cc697419ea18cc98d525999665cb94a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3cc697419ea18cc98d525999665cb94a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3cc697419ea18cc98d525999665cb94a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3cc697419ea18cc98d525999665cb94a-Supplemental.pdf | Graphical models are widely used in science to represent joint probability distributions with an underlying conditional dependence structure. The inverse problem of learning a discrete graphical model given i.i.d samples from its joint distribution can be solved with near-optimal sample complexity using a convex optimization method known as Generalized Regularized Interaction Screening Estimator (GRISE). But the computational cost of GRISE becomes prohibitive when the energy function of the true graphical model has higher order terms. We introduce NeurISE, a neural net based algorithm for graphical model learning, to tackle this limitation of GRISE. We use neural nets as function approximators in an Interaction Screening objective function. The optimization of this objective then produces a neural-net representation for the conditionals of the graphical model. NeurISE algorithm is seen to be a better alternative to GRISE when the energy function of the true model has a high order with a high degree of symmetry. In these cases NeurISE is able to find the correct parsimonious representation for the conditionals without being fed any prior information about the true model. NeurISE can also be used to learn the underlying structure of the true model with some simple modifications to its training procedure. In addition, we also show a variant of NeurISE that can be used to learn a neural net representation for the full energy function of the true model. |
RepPoints v2: Verification Meets Regression for Object Detection | https://papers.nips.cc/paper_files/paper/2020/hash/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Abstract.html | Yihong Chen, Zheng Zhang, Yue Cao, Liwei Wang, Stephen Lin, Han Hu | https://papers.nips.cc/paper_files/paper/2020/hash/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10196-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3ce3bd7d63a2c9c81983cc8e9bd02ae5-Supplemental.zip | Verification and regression are two general methodologies for prediction in neural networks. Each has its own strengths: verification can be easier to infer accurately, and regression is more efficient and applicable to continuous target variables. Hence, it is often beneficial to carefully combine them to take advantage of their benefits. In this paper, we take this philosophy to improve state-of-the-art object detection, specifically by RepPoints. Though RepPoints provides high performance, we find that its heavy reliance on regression for object localization leaves room for improvement. We introduce verification tasks into the localization prediction of RepPoints, producing RepPoints v2, which proves consistent improvements of about 2.0 mAP over the original RepPoints on COCO object detection benchmark using different backbones and training methods. RepPoints v2 also achieves 52.1 mAP on the COCO \texttt{test-dev} by a single model. Moreover, we show that the proposed approach can more generally elevate other object detection frameworks as well as applications such as instance segmentation. |
Unfolding the Alternating Optimization for Blind Super Resolution | https://papers.nips.cc/paper_files/paper/2020/hash/3d2d8ccb37df977cb6d9da15b76c3f3a-Abstract.html | zhengxiong luo, Yan Huang, Shang Li, Liang Wang, Tieniu Tan | https://papers.nips.cc/paper_files/paper/2020/hash/3d2d8ccb37df977cb6d9da15b76c3f3a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3d2d8ccb37df977cb6d9da15b76c3f3a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10197-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3d2d8ccb37df977cb6d9da15b76c3f3a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3d2d8ccb37df977cb6d9da15b76c3f3a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3d2d8ccb37df977cb6d9da15b76c3f3a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3d2d8ccb37df977cb6d9da15b76c3f3a-Supplemental.zip | Previous methods decompose the blind super resolution (SR) problem into two sequential steps: \textit{i}) estimating the blur kernel from the given low-resolution (LR) image and \textit{ii}) restoring the SR image based on the estimated kernel. This two-step solution involves two independently trained models, which may not be well compatible with each other. A small estimation error in the first step could cause a severe performance drop in the second one. On the other hand, the first step can only utilize limited information from the LR image, which makes it difficult to predict a highly accurate blur kernel. To address these issues, instead of considering these two steps separately, we adopt an alternating optimization algorithm, which can estimate the blur kernel and restore the SR image in a single model. Specifically, we design two convolutional neural modules, namely \textit{Restorer} and \textit{Estimator}. \textit{Restorer} restores the SR image based on the predicted kernel, and \textit{Estimator} estimates the blur kernel with the help of the restored SR image. We alternate these two modules repeatedly and unfold this process to form an end-to-end trainable network. In this way, \textit{Estimator} utilizes information from both the LR and SR images, which makes the estimation of the blur kernel easier. More importantly, \textit{Restorer} is trained with the kernel estimated by \textit{Estimator}, instead of the ground-truth kernel, so \textit{Restorer} can be more tolerant of the estimation error of \textit{Estimator}. Extensive experiments on synthetic datasets and real-world images show that our model can largely outperform state-of-the-art methods and produce more visually favorable results at much higher speed. The source code will be publicly available. |
Entrywise convergence of iterative methods for eigenproblems | https://papers.nips.cc/paper_files/paper/2020/hash/3d8e03e8b133b16f13a586f0c01b6866-Abstract.html | Vasileios Charisopoulos, Austin R. Benson, Anil Damle | https://papers.nips.cc/paper_files/paper/2020/hash/3d8e03e8b133b16f13a586f0c01b6866-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3d8e03e8b133b16f13a586f0c01b6866-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10198-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3d8e03e8b133b16f13a586f0c01b6866-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3d8e03e8b133b16f13a586f0c01b6866-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3d8e03e8b133b16f13a586f0c01b6866-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3d8e03e8b133b16f13a586f0c01b6866-Supplemental.zip | Several problems in machine learning, statistics, and other fields rely on computing eigenvectors. For large scale problems, the computation of these eigenvectors is typically performed via iterative schemes such as subspace iteration or Krylov methods. While there is classical and comprehensive analysis for subspace convergence guarantees with respect to the spectral norm, in many modern applications other notions of subspace distance are more appropriate. Recent theoretical work has focused on perturbations of subspaces measured in the ℓ2→∞ norm, but does not consider the actual computation of eigenvectors. Here we address the convergence of subspace iteration when distances are measured in the ℓ2→∞ norm and provide deterministic bounds. We complement our analysis with a practical stopping criterion and demonstrate its applicability via numerical experiments. Our results show that one can get comparable performance on downstream tasks while requiring fewer iterations, thereby saving substantial computational time. |
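The kind of monitoring the abstract above discusses can be sketched generically: run subspace iteration and track the largest row 2-norm (the ℓ2→∞ norm) of the change between aligned successive iterates. The matrix, sizes, and tolerance below are illustrative choices, not the stopping criterion proposed in the paper.

```python
import numpy as np

# Sketch of subspace iteration with an entrywise (ell_{2->infinity}) change
# monitor. Illustrative only: random symmetric test matrix, arbitrary tolerance.
rng = np.random.default_rng(0)
n, k = 300, 3
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                                    # symmetric test matrix

V, _ = np.linalg.qr(rng.standard_normal((n, k)))
for it in range(200):
    V_new, _ = np.linalg.qr(A @ V)                   # one block power/QR step
    U, _, Wt = np.linalg.svd(V_new.T @ V, full_matrices=False)
    diff = V_new @ (U @ Wt) - V                      # align the bases (Procrustes) first
    change = np.max(np.linalg.norm(diff, axis=1))    # max row 2-norm of the change
    V = V_new
    if change < 1e-10:
        break

print("iterations:", it + 1, "entrywise change:", change)
```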
Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views | https://papers.nips.cc/paper_files/paper/2020/hash/3d9dabe52805a1ea21864b09f3397593-Abstract.html | Nanbo Li, Cian Eastwood, Robert Fisher | https://papers.nips.cc/paper_files/paper/2020/hash/3d9dabe52805a1ea21864b09f3397593-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3d9dabe52805a1ea21864b09f3397593-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10199-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3d9dabe52805a1ea21864b09f3397593-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3d9dabe52805a1ea21864b09f3397593-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3d9dabe52805a1ea21864b09f3397593-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3d9dabe52805a1ea21864b09f3397593-Supplemental.pdf | Learning object-centric representations of multi-object scenes is a promising approach towards machine intelligence, facilitating high-level reasoning and control from visual sensory data. However, current approaches for \textit{unsupervised object-centric scene representation} are incapable of aggregating information from multiple observations of a scene. As a result, these ``single-view'' methods form their representations of a 3D scene based only on a single 2D observation (view). Naturally, this leads to several inaccuracies, with these methods falling victim to single-view spatial ambiguities. To address this, we propose \textit{The Multi-View and Multi-Object Network (MulMON)}---a method for learning accurate, object-centric representations of multi-object scenes by leveraging multiple views. In order to sidestep the main technical difficulty of the \textit{multi-object-multi-view} scenario---maintaining object correspondences across views---MulMON iteratively updates the latent object representations for a scene over multiple views. To ensure that these iterative updates do indeed aggregate spatial information to form a complete 3D scene understanding, MulMON is asked to predict the appearance of the scene from novel viewpoints during training. Through experiments we show that MulMON better-resolves spatial ambiguities than single-view methods---learning more accurate and disentangled object representations---and also achieves new functionality in predicting object segmentations for novel viewpoints. |
A Catalyst Framework for Minimax Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/3db54f5573cd617a0112d35dd1e6b1ef-Abstract.html | Junchi Yang, Siqi Zhang, Negar Kiyavash, Niao He | https://papers.nips.cc/paper_files/paper/2020/hash/3db54f5573cd617a0112d35dd1e6b1ef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3db54f5573cd617a0112d35dd1e6b1ef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10200-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3db54f5573cd617a0112d35dd1e6b1ef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3db54f5573cd617a0112d35dd1e6b1ef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3db54f5573cd617a0112d35dd1e6b1ef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3db54f5573cd617a0112d35dd1e6b1ef-Supplemental.pdf | We introduce a generic \emph{two-loop} scheme for smooth minimax optimization with strongly-convex-concave objectives. Our approach applies the accelerated proximal point framework (or Catalyst) to the associated \emph{dual problem} and takes full advantage of existing gradient-based algorithms to solve a sequence of well-balanced strongly-convex-strongly-concave minimax problems. Despite its simplicity, this leads to a family of near-optimal algorithms with improved complexity over all existing methods designed for strongly-convex-concave minimax problems. Additionally, we obtain the first variance-reduced algorithms for this class of minimax problems with finite-sum structure and establish even faster convergence rate. Furthermore, when extended to the nonconvex-concave minimax optimization, our algorithm again achieves the state-of-the-art complexity for finding a stationary point. We carry out several numerical experiments showcasing the superiority of the Catalyst framework in practice. |
Self-supervised Co-Training for Video Representation Learning | https://papers.nips.cc/paper_files/paper/2020/hash/3def184ad8f4755ff269862ea77393dd-Abstract.html | Tengda Han, Weidi Xie, Andrew Zisserman | https://papers.nips.cc/paper_files/paper/2020/hash/3def184ad8f4755ff269862ea77393dd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3def184ad8f4755ff269862ea77393dd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10201-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3def184ad8f4755ff269862ea77393dd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3def184ad8f4755ff269862ea77393dd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3def184ad8f4755ff269862ea77393dd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3def184ad8f4755ff269862ea77393dd-Supplemental.pdf | The objective of this paper is visual-only self-supervised video representation learning. We make the following contributions: (i) we investigate the benefit of adding semantic-class positives to instance-based Info Noise Contrastive Estimation (InfoNCE) training, showing that this form of supervised contrastive learning leads to a clear improvement in performance; (ii) we propose a novel self-supervised co-training scheme to improve the popular infoNCE loss, exploiting the complementary information from different views, RGB streams and optical flow, of the same data source by using one view to obtain positive class samples for the other; (iii) we thoroughly evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval. In both cases, the proposed approach demonstrates state-of-the-art or comparable performance with other self-supervised approaches, whilst being significantly more efficient to train, i.e. requiring far less training data to achieve similar performance. |
Gradient Estimation with Stochastic Softmax Tricks | https://papers.nips.cc/paper_files/paper/2020/hash/3df80af53dce8435cf9ad6c3e7a403fd-Abstract.html | Max Paulus, Dami Choi, Daniel Tarlow, Andreas Krause, Chris J. Maddison | https://papers.nips.cc/paper_files/paper/2020/hash/3df80af53dce8435cf9ad6c3e7a403fd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3df80af53dce8435cf9ad6c3e7a403fd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10202-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3df80af53dce8435cf9ad6c3e7a403fd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3df80af53dce8435cf9ad6c3e7a403fd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3df80af53dce8435cf9ad6c3e7a403fd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3df80af53dce8435cf9ad6c3e7a403fd-Supplemental.pdf | The Gumbel-Max trick is the basis of many relaxed gradient estimators. These estimators are easy to implement and low variance, but the goal of scaling them comprehensively to large combinatorial distributions is still outstanding. Working within the perturbation model framework, we introduce stochastic softmax tricks, which generalize the Gumbel-Softmax trick to combinatorial spaces. Our framework is a unified perspective on existing relaxed estimators for perturbation models, and it contains many novel relaxations. We design structured relaxations for subset selection, spanning trees, arborescences, and others. When compared to less structured baselines, we find that stochastic softmax tricks can be used to train latent variable models that perform better and discover more latent structure. |
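The Gumbel-Softmax relaxation that the stochastic softmax tricks above generalize can be written in a few lines: perturb the logits with Gumbel noise, then replace the argmax with a temperature-controlled softmax. The logits and temperature below are arbitrary example values, and this sketch covers only the basic categorical case, not the structured relaxations from the paper.

```python
import numpy as np

# Minimal Gumbel-Softmax sketch: Gumbel-perturbed logits, softmax with
# temperature tau as a differentiable relaxation of the hard sample.
rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    z = (logits + gumbel) / tau
    z = z - z.max(axis=-1, keepdims=True)                       # numerically stable softmax
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

logits = np.array([1.0, 0.2, -0.5, 0.0])
soft_sample = gumbel_softmax(logits, tau=0.5)   # relaxed, almost one-hot sample
hard_sample = np.argmax(soft_sample)            # hard sample (Gumbel-Max trick)
print(soft_sample, hard_sample)
```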
Meta-Learning Requires Meta-Augmentation | https://papers.nips.cc/paper_files/paper/2020/hash/3e5190eeb51ebe6c5bbc54ee8950c548-Abstract.html | Janarthanan Rajendran, Alexander Irpan, Eric Jang | https://papers.nips.cc/paper_files/paper/2020/hash/3e5190eeb51ebe6c5bbc54ee8950c548-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3e5190eeb51ebe6c5bbc54ee8950c548-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10203-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3e5190eeb51ebe6c5bbc54ee8950c548-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3e5190eeb51ebe6c5bbc54ee8950c548-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3e5190eeb51ebe6c5bbc54ee8950c548-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3e5190eeb51ebe6c5bbc54ee8950c548-Supplemental.pdf | Meta-learning algorithms aim to learn two components: a model that predicts targets for a task, and a base learner that updates that model when given examples from a new task. This additional level of learning can be powerful, but it also creates another potential source of overfitting, since we can now overfit in either the model or the base learner. We describe both of these forms of meta-learning overfitting, and demonstrate that they appear experimentally in common meta-learning benchmarks. We introduce an information-theoretic framework of meta-augmentation, whereby adding randomness discourages the base learner and model from learning trivial solutions that do not generalize to new tasks. We demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques. |
SLIP: Learning to predict in unknown dynamical systems with long-term memory | https://papers.nips.cc/paper_files/paper/2020/hash/3e91970f771a2c473ae36b60d1146068-Abstract.html | Paria Rashidinejad, Jiantao Jiao, Stuart Russell | https://papers.nips.cc/paper_files/paper/2020/hash/3e91970f771a2c473ae36b60d1146068-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3e91970f771a2c473ae36b60d1146068-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10204-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3e91970f771a2c473ae36b60d1146068-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3e91970f771a2c473ae36b60d1146068-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3e91970f771a2c473ae36b60d1146068-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3e91970f771a2c473ae36b60d1146068-Supplemental.pdf | We present an efficient and practical (polynomial time) algorithm for online prediction in unknown and partially observed linear dynamical systems (LDS) under stochastic noise. When the system parameters are known, the optimal linear predictor is the Kalman filter. However, in unknown systems, the performance of existing predictive models is poor in important classes of LDS that are only marginally stable and exhibit long-term forecast memory. We tackle this problem by bounding the generalized Kolmogorov width of the Kalman filter coefficient set. This motivates the design of an algorithm, which we call spectral LDS improper predictor (SLIP), based on conducting a tight convex relaxation of the Kalman predictive model via spectral methods. We provide a finite-sample analysis, showing that our algorithm competes with the Kalman filter in hindsight with only logarithmic regret. Our regret analysis relies on Mendelson’s small-ball method, providing sharp error bounds without concentration, boundedness, or exponential forgetting assumptions. Empirical evaluations demonstrate that SLIP outperforms state-of-the-art methods in LDS prediction. Our theoretical and experimental results shed light on the conditions required for efficient probably approximately correct (PAC) learning of the Kalman filter from partially observed data. |
Improving GAN Training with Probability Ratio Clipping and Sample Reweighting | https://papers.nips.cc/paper_files/paper/2020/hash/3eb46aa5d93b7a5939616af91addfa88-Abstract.html | Yue Wu, Pan Zhou, Andrew G. Wilson, Eric Xing, Zhiting Hu | https://papers.nips.cc/paper_files/paper/2020/hash/3eb46aa5d93b7a5939616af91addfa88-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3eb46aa5d93b7a5939616af91addfa88-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10205-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3eb46aa5d93b7a5939616af91addfa88-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3eb46aa5d93b7a5939616af91addfa88-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3eb46aa5d93b7a5939616af91addfa88-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3eb46aa5d93b7a5939616af91addfa88-Supplemental.zip | Despite success on a wide range of problems related to vision, generative adversarial networks (GANs) often suffer from inferior performance due to unstable training, especially for text generation. To solve this issue, we propose a new variational GAN training framework which enjoys superior training stability. Our approach is inspired by a connection of GANs and reinforcement learning under a variational perspective. The connection leads to (1) probability ratio clipping that regularizes generator training to prevent excessively large updates, and (2) a sample re-weighting mechanism that improves discriminator training by downplaying bad-quality fake samples. Moreover, our variational GAN framework can provably overcome the training issue in many GANs that an optimal discriminator cannot provide any informative gradient to training generator. By plugging the training approach in diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks, including text generation, text style transfer, and image generation. |
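The ratio-clipping mechanism referred to above can be illustrated with a PPO-style clipped surrogate, which is the same basic device of bounding the update when the new/old probability ratio moves too far from one; the advantage values and clip range below are placeholders rather than the paper's exact objective.

```python
import numpy as np

# Sketch of a clipped probability-ratio surrogate: take the pessimistic minimum
# of the unclipped and clipped terms so large ratio deviations stop contributing
# extra gain. Placeholder numbers, illustrative only.
def clipped_surrogate(logp_new, logp_old, advantage, clip_eps=0.2):
    ratio = np.exp(logp_new - logp_old)                    # probability ratio
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return np.minimum(unclipped, clipped).mean()

logp_old = np.log(np.array([0.30, 0.25, 0.45]))
logp_new = np.log(np.array([0.50, 0.10, 0.40]))
advantage = np.array([1.0, -0.5, 0.3])
print(clipped_surrogate(logp_new, logp_old, advantage))
```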
Bayesian Bits: Unifying Quantization and Pruning | https://papers.nips.cc/paper_files/paper/2020/hash/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-Abstract.html | Mart van Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling | https://papers.nips.cc/paper_files/paper/2020/hash/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10206-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3f13cf4ddf6fc50c0d39a1d5aeb57dd8-Supplemental.pdf | We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization. Bayesian Bits employs a novel decomposition of the quantization operation, which sequentially considers doubling the bit width. At each new bit width, the residual error between the full precision value and the previously rounded value is quantized. We then decide whether or not to add this quantized residual error for a higher effective bit width and lower quantization noise. By starting with a power-of-two bit width, this decomposition will always produce hardware-friendly configurations, and through an additional 0-bit option, serves as a unified view of pruning and quantization. Bayesian Bits then introduces learnable stochastic gates, which collectively control the bit width of the given tensor. As a result, we can obtain low bit solutions by performing approximate inference over the gates, with prior distributions that encourage most of them to be switched off. We experimentally validate our proposed method on several benchmark datasets and show that we can learn pruned, mixed precision networks that provide a better trade-off between accuracy and efficiency than their static bit width equivalents. |
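The residual-error decomposition described above can be sketched numerically: quantize on a coarse grid, quantize the remaining residual on progressively finer grids, and let binary gates decide whether each refinement (i.e., a higher effective bit width) is used, with a zero at the coarsest gate corresponding to pruning. The step sizes and gate values below are placeholders, not the learned quantities from the paper.

```python
import numpy as np

# Illustrative sketch of quantization as a sum of gated residual refinements.
# Step sizes and gates are placeholders; in the paper the gates are stochastic
# and learned, and the grids follow a power-of-two bit-width schedule.
def quantize(x, step):
    return step * np.round(x / step)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

step2, step4, step8 = 0.5, 0.1, 0.02          # coarse -> fine grids (illustrative)
x2 = quantize(x, step2)                        # coarse ("2-bit") approximation
e4 = quantize(x - x2, step4)                   # quantized residual for the next level
e8 = quantize(x - x2 - e4, step8)              # quantized residual for the level after

g4, g8 = 1.0, 0.0                              # gates; g8 = 0 drops the finest refinement
x_q = x2 + g4 * e4 + g4 * g8 * e8              # nested gating: fine bits need coarse ones
print("max absolute quantization error:", np.abs(x - x_q).max())
```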
On Testing of Samplers | https://papers.nips.cc/paper_files/paper/2020/hash/3f1656d9668dffcf8119e3ecff873558-Abstract.html | Kuldeep S Meel, Yash Pralhad Pote, Sourav Chakraborty | https://papers.nips.cc/paper_files/paper/2020/hash/3f1656d9668dffcf8119e3ecff873558-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3f1656d9668dffcf8119e3ecff873558-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10207-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3f1656d9668dffcf8119e3ecff873558-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3f1656d9668dffcf8119e3ecff873558-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3f1656d9668dffcf8119e3ecff873558-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3f1656d9668dffcf8119e3ecff873558-Supplemental.pdf | Given a set of items F and a weight function W: F -> (0,1), the problem of sampling seeks to sample an item proportional to its weight. Sampling is a fundamental problem in machine learning. The daunting computational complexity of sampling with formal guarantees leads designers to propose heuristics-based techniques for which no rigorous theoretical analysis exists to quantify the quality of the generated distributions. This poses a challenge in designing a testing methodology to test whether a sampler under test generates samples according to a given distribution. Only recently, Chakraborty and Meel (2019) designed the first scalable verifier, called Barbarik, for samplers in the special case when the weight function W is constant, that is, when the sampler is supposed to sample uniformly from F. The techniques in Barbarik, however, fail to handle general weight functions. The primary contribution of this paper is an affirmative answer to the above challenge: motivated by Barbarik, but using different techniques and analysis, we design Barbarik2, an algorithm to test whether the distribution generated by a sampler is $\epsilon$-close or $\eta$-far from any target distribution. In contrast to black-box sampling techniques that require a number of samples proportional to |F|, Barbarik2 requires only $\tilde{O}(\mathrm{Tilt}(W, F)^2/(\eta(\eta - 6\epsilon)^3))$ samples, where the Tilt is the maximum ratio of weights of two points in F. Barbarik2 can handle any arbitrary weight function. We present a prototype implementation of Barbarik2 and use it to test three state-of-the-art samplers. |
Gaussian Process Bandit Optimization of the Thermodynamic Variational Objective | https://papers.nips.cc/paper_files/paper/2020/hash/3f2dff7862a70f97a59a1fa02c3ec110-Abstract.html | Vu Nguyen, Vaden Masrani, Rob Brekelmans, Michael Osborne, Frank Wood | https://papers.nips.cc/paper_files/paper/2020/hash/3f2dff7862a70f97a59a1fa02c3ec110-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3f2dff7862a70f97a59a1fa02c3ec110-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10208-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3f2dff7862a70f97a59a1fa02c3ec110-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3f2dff7862a70f97a59a1fa02c3ec110-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3f2dff7862a70f97a59a1fa02c3ec110-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3f2dff7862a70f97a59a1fa02c3ec110-Supplemental.pdf | Achieving the full promise of the Thermodynamic Variational Objective (TVO), a recently proposed variational inference objective that lower-bounds the log evidence via one-dimensional Riemann integration, requires choosing a ``schedule'' of sorted discretization points. This paper introduces a bespoke Gaussian process bandit optimization method for automatically choosing these points. Our approach not only automates their one-time selection, but also dynamically adapts their positions over the course of optimization, leading to improved model learning and inference. We provide theoretical guarantees that our bandit optimization converges to the regret-minimizing choice of integration points. Empirical validation of our algorithm is provided in terms of improved learning and inference in Variational Autoencoders and sigmoid belief networks. |
MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers | https://papers.nips.cc/paper_files/paper/2020/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html | Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10209-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-Supplemental.pdf | Pre-trained language models (e.g., BERT (Devlin et al., 2018) and its variants) have achieved remarkable success in a variety of NLP tasks. However, these models usually consist of hundreds of millions of parameters, which brings challenges for fine-tuning and online serving in real-life applications due to latency and capacity constraints. In this work, we present a simple and effective approach to compress large Transformer (Vaswani et al., 2017) based pre-trained models, termed deep self-attention distillation. The small model (student) is trained by deeply mimicking the self-attention module, which plays a vital role in Transformer networks, of the large model (teacher). Specifically, we propose distilling the self-attention module of the last Transformer layer of the teacher, which is effective and flexible for the student. Furthermore, we introduce the scaled dot-product between values in the self-attention module as the new deep self-attention knowledge, in addition to the attention distributions (i.e., the scaled dot-product of queries and keys) that have been used in existing works. Moreover, we show that introducing a teacher assistant (Mirzadeh et al., 2019) also helps the distillation of large pre-trained Transformer models. Experimental results demonstrate that our monolingual model outperforms state-of-the-art baselines across different student model sizes. In particular, it retains more than 99% accuracy on SQuAD 2.0 and several GLUE benchmark tasks using 50% of the Transformer parameters and computations of the teacher model. We also obtain competitive results in applying deep self-attention distillation to multilingual pre-trained models. |
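The two distillation signals described in the MiniLM abstract above, attention distributions (softmax of scaled query-key products) and value relations (softmax of scaled value-value products), are both seq_len x seq_len matrices, so they can be matched even when teacher and student use different hidden sizes. A minimal single-head NumPy sketch, assuming the Q/K/V matrices have already been produced by the respective models; the random shapes and the plain sum of row-wise KL terms are illustrative, not the exact training recipe:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_distribution(Q, K):
    """Scaled dot-product of queries and keys, row-normalized."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]))

def value_relation(V):
    """Scaled dot-product between values, row-normalized (the extra MiniLM signal)."""
    return softmax(V @ V.T / np.sqrt(V.shape[-1]))

def kl_rows(P, Q, eps=1e-12):
    """Mean KL(P_row || Q_row) over sequence positions."""
    return float(np.mean(np.sum(P * (np.log(P + eps) - np.log(Q + eps)), axis=-1)))

# Hypothetical last-layer projections for one attention head.
rng = np.random.default_rng(0)
seq_len, d_teacher, d_student = 8, 64, 32
Qt, Kt, Vt = (rng.normal(size=(seq_len, d_teacher)) for _ in range(3))
Qs, Ks, Vs = (rng.normal(size=(seq_len, d_student)) for _ in range(3))

# Both relation matrices are seq_len x seq_len, so teacher/student dims need not match.
loss = kl_rows(attention_distribution(Qt, Kt), attention_distribution(Qs, Ks)) \
     + kl_rows(value_relation(Vt), value_relation(Vs))
print(loss)
```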
Optimal Epoch Stochastic Gradient Descent Ascent Methods for Min-Max Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/3f8b2a81da929223ae025fcec26dde0d-Abstract.html | Yan Yan, Yi Xu, Qihang Lin, Wei Liu, Tianbao Yang | https://papers.nips.cc/paper_files/paper/2020/hash/3f8b2a81da929223ae025fcec26dde0d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3f8b2a81da929223ae025fcec26dde0d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10210-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3f8b2a81da929223ae025fcec26dde0d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3f8b2a81da929223ae025fcec26dde0d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3f8b2a81da929223ae025fcec26dde0d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3f8b2a81da929223ae025fcec26dde0d-Supplemental.pdf | Epoch gradient descent method (a.k.a. Epoch-GD) proposed by (Hazan and Kale, 2011) was deemed a breakthrough for stochastic strongly convex minimization, which achieves the optimal convergence rate of O(1/T) with T iterative updates for the objective gap. However, its extension to solving stochastic min-max problems with strong convexity and strong concavity still remains open, and it is still unclear whether a fast rate of O(1/T) for the duality gap is achievable for stochastic min-max optimization under strong convexity and strong concavity. Although some recent studies have proposed stochastic algorithms with fast convergence rates for min-max problems, they require additional assumptions about the problem, e.g., smoothness, bi-linear structure, etc. In this paper, we bridge this gap by providing a sharp analysis of the epoch-wise stochastic gradient descent ascent method (referred to as Epoch-GDA) for solving strongly convex strongly concave (SCSC) min-max problems, without imposing any additional assumption about smoothness or the function’s structure. To the best of our knowledge, our result is the first one that shows Epoch-GDA can achieve the optimal rate of O(1/T) for the duality gap of general SCSC min-max problems. We emphasize that such generalization of Epoch-GD for strongly convex minimization problems to Epoch-GDA for SCSC min-max problems is non-trivial and requires novel technical analysis. Moreover, we notice that the key lemma can also be used for proving the convergence of Epoch-GDA for weakly-convex strongly-concave min-max problems, leading to a nearly optimal complexity without resorting to smoothness or other structural conditions. |
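The epoch-wise structure that Epoch-GDA inherits from Epoch-GD is simple to state: run stochastic gradient descent ascent with a fixed step size inside each epoch, then restart the next epoch from the averaged iterate with a smaller step size and a longer epoch. A toy sketch on a synthetic strongly-convex-strongly-concave objective; the specific halving/doubling schedule, the objective, and the noise model are illustrative assumptions, not the constants from the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCSC objective f(x, y) = 0.5*x^2 - 0.5*y^2 + x*y - y
# (strongly convex in x, strongly concave in y); saddle point at (0.5, -0.5).
def noisy_grad(x, y, noise=0.1):
    gx = x + y + noise * rng.normal()          # df/dx
    gy = -y + x - 1 + noise * rng.normal()     # df/dy
    return gx, gy

def epoch_gda(x0=0.0, y0=0.0, eta0=0.5, t0=32, epochs=8):
    x, y, eta, T = x0, y0, eta0, t0
    for _ in range(epochs):
        xs, ys = [], []
        for _ in range(T):
            gx, gy = noisy_grad(x, y)
            x, y = x - eta * gx, y + eta * gy  # descent in x, ascent in y
            xs.append(x); ys.append(y)
        # Restart the next epoch from the averaged iterate,
        # halve the step size, double the epoch length (assumed schedule).
        x, y = float(np.mean(xs)), float(np.mean(ys))
        eta, T = eta / 2, T * 2
    return x, y

print(epoch_gda())   # should approach the saddle point (0.5, -0.5)
```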
Woodbury Transformations for Deep Generative Flows | https://papers.nips.cc/paper_files/paper/2020/hash/3fb04953d95a94367bb133f862402bce-Abstract.html | You Lu, Bert Huang | https://papers.nips.cc/paper_files/paper/2020/hash/3fb04953d95a94367bb133f862402bce-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3fb04953d95a94367bb133f862402bce-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10211-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3fb04953d95a94367bb133f862402bce-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3fb04953d95a94367bb133f862402bce-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3fb04953d95a94367bb133f862402bce-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3fb04953d95a94367bb133f862402bce-Supplemental.pdf | Normalizing flows are deep generative models that allow efficient likelihood calculation and sampling. The core requirement for this advantage is that they are constructed using functions that can be efficiently inverted and for which the determinant of the function's Jacobian can be efficiently computed. Researchers have introduced various such flow operations, but few of these allow rich interactions among variables without incurring significant computational costs. In this paper, we introduce Woodbury transformations, which achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation. Other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning of higher-likelihood models than other flow architectures while still enjoying their efficiency advantages. |
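The two identities named in the Woodbury-transformation abstract are enough to sketch the core of the layer: a low-rank update z = (I + UV)x can be inverted with the Woodbury matrix identity and its log-determinant read off from Sylvester's determinant identity, both at cost governed by the rank r rather than the dimension d. A minimal dense-vector sketch, assuming a plain low-rank parameterization; the paper applies such transformations along the channel and spatial axes of image tensors:

```python
import numpy as np

class WoodburyLayer:
    """z = (I_d + U V) x with U: (d, r), V: (r, d) and small rank r << d."""

    def __init__(self, U, V):
        self.U, self.V = U, V
        self.r = U.shape[1]

    def forward(self, x):
        return x + self.U @ (self.V @ x)

    def log_abs_det(self):
        # Sylvester's determinant identity: det(I_d + U V) = det(I_r + V U),
        # so only an r x r determinant is needed.
        small = np.eye(self.r) + self.V @ self.U
        _, logdet = np.linalg.slogdet(small)
        return logdet

    def inverse(self, z):
        # Woodbury matrix identity: (I + U V)^{-1} = I - U (I_r + V U)^{-1} V.
        small = np.eye(self.r) + self.V @ self.U
        return z - self.U @ np.linalg.solve(small, self.V @ z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, r = 64, 4
    layer = WoodburyLayer(0.1 * rng.normal(size=(d, r)), 0.1 * rng.normal(size=(r, d)))
    x = rng.normal(size=d)
    z = layer.forward(x)
    assert np.allclose(layer.inverse(z), x)                    # inversion via Woodbury
    full = np.eye(d) + layer.U @ layer.V
    assert np.isclose(layer.log_abs_det(), np.linalg.slogdet(full)[1])  # Sylvester
    print("ok")
```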
Graph Contrastive Learning with Augmentations | https://papers.nips.cc/paper_files/paper/2020/hash/3fe230348e9a12c13120749e3f9fa4cd-Abstract.html | Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang, Yang Shen | https://papers.nips.cc/paper_files/paper/2020/hash/3fe230348e9a12c13120749e3f9fa4cd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10212-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-Supplemental.pdf | Generalizable, transferrable, and robust representation learning on graph-structured data remains a challenge for current graph neural networks (GNNs). Unlike what has been developed for convolutional neural networks (CNNs) for image data, self-supervised learning and pre-training are less explored for GNNs. In this paper, we propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data. We first design four types of graph augmentations to incorporate various priors. We then systematically study the impact of various combinations of graph augmentations on multiple datasets, in four different settings: semi-supervised, unsupervised, and transfer learning as well as adversarial attacks. The results show that, even without tuning augmentation extents nor using sophisticated GNN architectures, our GraphCL framework can produce graph representations of similar or better generalizability, transferrability, and robustness compared to state-of-the-art methods. We also investigate the impact of parameterized graph augmentation extents and patterns, and observe further performance gains in preliminary experiments. Our codes are available at https://github.com/Shen-Lab/GraphCL. |
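A back-of-the-envelope sketch of the GraphCL recipe described above: create two augmented views of each graph (here a simple node-dropping augmentation on an adjacency matrix stands in for the four augmentation families), embed both views, and pull matched views together with a normalized-temperature cross-entropy (NT-Xent) contrastive loss. The mean-pooling "encoder" and the augmentation choice are placeholders for a real GNN, not the framework's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_nodes(adj, feats, drop_prob=0.2):
    """One of the simplest graph augmentations: randomly remove a fraction of nodes."""
    keep = rng.random(adj.shape[0]) > drop_prob
    keep[0] = True                          # keep at least one node
    return adj[np.ix_(keep, keep)], feats[keep]

def embed(adj, feats):
    """Placeholder encoder: one mean-pooled propagation step (a real GNN goes here)."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    h = (adj @ feats + feats) / deg
    return h.mean(axis=0)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)          # never contrast a view with itself
    n = z1.shape[0]
    positives = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(log_prob[np.arange(2 * n), positives]))

# Hypothetical batch of small random graphs.
graphs = []
for _ in range(4):
    a = (rng.random((10, 10)) < 0.3).astype(float)
    a = np.triu(a, 1); a = a + a.T
    graphs.append((a, rng.normal(size=(10, 5))))

view1 = np.stack([embed(*drop_nodes(a, x)) for a, x in graphs])
view2 = np.stack([embed(*drop_nodes(a, x)) for a, x in graphs])
print(nt_xent(view1, view2))
```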
Gradient Surgery for Multi-Task Learning | https://papers.nips.cc/paper_files/paper/2020/hash/3fe78a8acf5fda99de95303940a2420c-Abstract.html | Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn | https://papers.nips.cc/paper_files/paper/2020/hash/3fe78a8acf5fda99de95303940a2420c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3fe78a8acf5fda99de95303940a2420c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10213-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3fe78a8acf5fda99de95303940a2420c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3fe78a8acf5fda99de95303940a2420c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3fe78a8acf5fda99de95303940a2420c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3fe78a8acf5fda99de95303940a2420c-Supplemental.pdf | While deep learning and deep reinforcement learning (RL) systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood. In this work, we identify a set of three conditions of the multi-task optimization landscape that cause detrimental gradient interference, and develop a simple yet general approach for avoiding such interference between task gradients. We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task RL problems, this approach leads to substantial gains in efficiency and performance. Further, it is model-agnostic and can be combined with previously-proposed multi-task architectures for enhanced performance. |
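The projection rule described in the gradient-surgery abstract is a few lines of linear algebra: whenever two task gradients conflict (negative inner product), remove from one the component along the other. A minimal NumPy sketch of that projection; combining the altered gradients by summation and the random ordering over the other tasks are details assumed here for concreteness:

```python
import numpy as np

def gradient_surgery(task_grads, rng=None):
    """Project each task gradient onto the normal plane of conflicting task gradients."""
    rng = rng or np.random.default_rng()
    projected = [g.astype(float).copy() for g in task_grads]
    for i, g_i in enumerate(projected):
        others = [j for j in range(len(task_grads)) if j != i]
        rng.shuffle(others)
        for j in others:
            g_j = task_grads[j]
            dot = g_i @ g_j
            if dot < 0:                            # gradients conflict
                g_i -= (dot / (g_j @ g_j)) * g_j   # remove the conflicting component
    return np.sum(projected, axis=0)               # combined multi-task update direction

# Two conflicting task gradients: their inner product is negative.
g1 = np.array([1.0, 0.0])
g2 = np.array([-1.0, 1.0])
update = gradient_surgery([g1, g2], rng=np.random.default_rng(0))
print(update)   # after surgery, neither projected gradient opposes the other task
```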
Bayesian Probabilistic Numerical Integration with Tree-Based Models | https://papers.nips.cc/paper_files/paper/2020/hash/3fe94a002317b5f9259f82690aeea4cd-Abstract.html | Harrison Zhu, Xing Liu, Ruya Kang, Zhichao Shen, Seth Flaxman, Francois-Xavier Briol | https://papers.nips.cc/paper_files/paper/2020/hash/3fe94a002317b5f9259f82690aeea4cd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/3fe94a002317b5f9259f82690aeea4cd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10214-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/3fe94a002317b5f9259f82690aeea4cd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/3fe94a002317b5f9259f82690aeea4cd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/3fe94a002317b5f9259f82690aeea4cd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/3fe94a002317b5f9259f82690aeea4cd-Supplemental.pdf | Bayesian quadrature (BQ) is a method for solving numerical integration problems in a Bayesian manner, which allows users to quantify their uncertainty about the solution. The standard approach to BQ is based on a Gaussian process (GP) approximation of the integrand. As a result, BQ is inherently limited to cases where GP approximations can be done in an efficient manner, thus often prohibiting very high-dimensional or non-smooth target functions. This paper proposes to tackle this issue with a new Bayesian numerical integration algorithm based on Bayesian Additive Regression Trees (BART) priors, which we call BART-Int. BART priors are easy to tune and well-suited for discontinuous functions. We demonstrate that they also lend themselves naturally to a sequential design setting and that explicit convergence rates can be obtained in a variety of settings. The advantages and disadvantages of this new methodology are highlighted on a set of benchmark tests including the Genz functions, on a rare-event simulation problem and on a Bayesian survey design problem. |
Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the Neural Tangent Kernel | https://papers.nips.cc/paper_files/paper/2020/hash/405075699f065e43581f27d67bb68478-Abstract.html | Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M. Roy, Surya Ganguli | https://papers.nips.cc/paper_files/paper/2020/hash/405075699f065e43581f27d67bb68478-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/405075699f065e43581f27d67bb68478-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10215-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/405075699f065e43581f27d67bb68478-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/405075699f065e43581f27d67bb68478-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/405075699f065e43581f27d67bb68478-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/405075699f065e43581f27d67bb68478-Supplemental.pdf | In suitably initialized wide networks, small learning rates transform deep neural networks (DNNs) into neural tangent kernel (NTK) machines, whose training dynamics is well-approximated by a linear weight expansion of the network at initialization. Standard training, however, diverges from its linearization in ways that are poorly understood. We study the relationship between the training dynamics of nonlinear deep networks, the geometry of the loss landscape, and the time evolution of a data-dependent NTK. We do so through a large-scale phenomenological analysis of training, synthesizing diverse measures characterizing loss landscape geometry and NTK dynamics. In multiple neural architectures and datasets, we find these diverse measures evolve in a highly correlated manner, revealing a universal picture of the deep learning process. In this picture, deep network training exhibits a highly chaotic rapid initial transient that within 2 to 3 epochs determines the final linearly connected basin of low loss containing the end point of training. During this chaotic transient, the NTK changes rapidly, learning useful features from the training data that enables it to outperform the standard initial NTK by a factor of 3 in less than 3 to 4 epochs. After this rapid chaotic transient, the NTK changes at constant velocity, and its performance matches that of full network training in 15\% to 45\% of training time. Overall, our analysis reveals a striking correlation between a diverse set of metrics over training time, governed by a rapid chaotic to stable transition in the first few epochs, that together poses challenges and opportunities for the development of more accurate theories of deep learning. |
Graph Meta Learning via Local Subgraphs | https://papers.nips.cc/paper_files/paper/2020/hash/412604be30f701b1b1e3124c252065e6-Abstract.html | Kexin Huang, Marinka Zitnik | https://papers.nips.cc/paper_files/paper/2020/hash/412604be30f701b1b1e3124c252065e6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/412604be30f701b1b1e3124c252065e6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10216-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/412604be30f701b1b1e3124c252065e6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/412604be30f701b1b1e3124c252065e6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/412604be30f701b1b1e3124c252065e6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/412604be30f701b1b1e3124c252065e6-Supplemental.pdf | Prevailing methods for graphs require abundant label and edge information for learning. When data for a new task are scarce, meta-learning can learn from prior experiences and form much-needed inductive biases for fast adaption to new tasks. Here, we introduce G-Meta, a novel meta-learning algorithm for graphs. G-Meta uses local subgraphs to transfer subgraph-specific information and learn transferable knowledge faster via meta gradients. G-Meta learns how to quickly adapt to a new task using only a handful of nodes or edges in the new task and does so by learning from data points in other graphs or related, albeit disjoint label sets. G-Meta is theoretically justified as we show that the evidence for a prediction can be found in the local subgraph surrounding the target node or edge. Experiments on seven datasets and nine baseline methods show that G-Meta outperforms existing methods by up to 16.3%. Unlike previous methods, G-Meta successfully learns in challenging, few-shot learning settings that require generalization to completely new graphs and never-before-seen labels. Finally, G-Meta scales to large graphs, which we demonstrate on a new Tree-of-Life dataset comprising 1,840 graphs, a two-order-of-magnitude increase in the number of graphs used in prior work. |
Stochastic Deep Gaussian Processes over Graphs | https://papers.nips.cc/paper_files/paper/2020/hash/415e1af7ea95f89f4e375162b21ae38c-Abstract.html | Naiqi Li, Wenjie Li, Jifeng Sun, Yinghua Gao, Yong Jiang, Shu-Tao Xia | https://papers.nips.cc/paper_files/paper/2020/hash/415e1af7ea95f89f4e375162b21ae38c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/415e1af7ea95f89f4e375162b21ae38c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10217-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/415e1af7ea95f89f4e375162b21ae38c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/415e1af7ea95f89f4e375162b21ae38c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/415e1af7ea95f89f4e375162b21ae38c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/415e1af7ea95f89f4e375162b21ae38c-Supplemental.pdf | In this paper we propose Stochastic Deep Gaussian Processes over Graphs (DGPG), which are deep structure models that learn the mappings between input and output signals in graph domains. The approximate posterior distributions of the latent variables are derived with variational inference, and the evidence lower bound is evaluated and optimized by the proposed recursive sampling scheme. The Bayesian non-parametric nature of our model allows it to resist overfitting, while the expressive deep structure grants it the potential to learn complex relations. Extensive experiments demonstrate that our method achieves superior performance in both small size (< 50) and large size (> 35,000) datasets. We show that DGPG outperforms another Gaussian-based approach, and is competitive with a state-of-the-art method in the challenging task of traffic flow prediction. Our model is also capable of capturing uncertainties in a mathematical principled way and automatically discovering which vertices and features are relevant to the prediction. |
Bayesian Causal Structural Learning with Zero-Inflated Poisson Bayesian Networks | https://papers.nips.cc/paper_files/paper/2020/hash/4175a4b46a45813fccf4bd34c779d817-Abstract.html | Junsouk Choi, Robert Chapkin, Yang Ni | https://papers.nips.cc/paper_files/paper/2020/hash/4175a4b46a45813fccf4bd34c779d817-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/4175a4b46a45813fccf4bd34c779d817-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10218-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/4175a4b46a45813fccf4bd34c779d817-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/4175a4b46a45813fccf4bd34c779d817-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/4175a4b46a45813fccf4bd34c779d817-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/4175a4b46a45813fccf4bd34c779d817-Supplemental.pdf | Multivariate zero-inflated count data arise in a wide range of areas such as economics, social sciences, and biology. To infer causal relationships in zero-inflated count data, we propose a new zero-inflated Poisson Bayesian network (ZIPBN) model. We show that the proposed ZIPBN is identifiable with cross-sectional data. The proof is based on the well-known characterization of Markov equivalence class which is applicable to other distribution families. For causal structural learning, we introduce a fully Bayesian inference approach which exploits the parallel tempering Markov chain Monte Carlo algorithm to efficiently explore the multi-modal network space. We demonstrate the utility of the proposed ZIPBN in causal discoveries for zero-inflated count data by simulation studies with comparison to alternative Bayesian network methods. Additionally, real single-cell RNA-sequencing data with known causal relationships will be used to assess the capability of ZIPBN for discovering causal relationships in real-world problems. |
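A small forward-sampling sketch helps make the model class in the ZIPBN abstract concrete: each node is either an "excess" zero, with a probability that depends on its parents, or a Poisson draw whose rate also depends on its parents. The logistic/log-link parameterization and the random coefficients below are illustrative assumptions, not the exact form from the paper, and the paper's actual contribution (identifiability and parallel-tempering MCMC for structure learning) is not shown here:

```python
import numpy as np

def sample_zipbn(adj, n_samples, rng, alpha0=-1.0, beta0=0.5, scale=0.4):
    """Forward-sample from a zero-inflated Poisson Bayesian network.

    adj[i, j] = 1 means i -> j; nodes are assumed to be topologically ordered.
    Node j is an excess zero with prob sigmoid(alpha0 + a_j . parents),
    otherwise Poisson with rate exp(beta0 + b_j . parents)  (assumed links)."""
    p = adj.shape[0]
    a = scale * rng.normal(size=(p, p)) * adj      # zero-inflation coefficients
    b = scale * rng.normal(size=(p, p)) * adj      # Poisson-rate coefficients
    X = np.zeros((n_samples, p))
    for j in range(p):
        pi = 1.0 / (1.0 + np.exp(-(alpha0 + X @ a[:, j])))      # P(excess zero)
        lam = np.exp(np.clip(beta0 + X @ b[:, j], -10, 3))      # keep rates bounded
        zero = rng.random(n_samples) < pi
        X[:, j] = np.where(zero, 0, rng.poisson(lam))
    return X

rng = np.random.default_rng(0)
# Hypothetical chain DAG 0 -> 1 -> 2 over three count variables.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
data = sample_zipbn(adj, n_samples=1000, rng=rng)
print((data == 0).mean(axis=0))   # zero-inflation shows up as a large share of zeros
```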
Evaluating Attribution for Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/417fbbf2e9d5a28a855a11894b2e795a-Abstract.html | Benjamin Sanchez-Lengeling, Jennifer Wei, Brian Lee, Emily Reif, Peter Wang, Wesley Qian, Kevin McCloskey, Lucy Colwell , Alexander Wiltschko | https://papers.nips.cc/paper_files/paper/2020/hash/417fbbf2e9d5a28a855a11894b2e795a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/417fbbf2e9d5a28a855a11894b2e795a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10219-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/417fbbf2e9d5a28a855a11894b2e795a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/417fbbf2e9d5a28a855a11894b2e795a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/417fbbf2e9d5a28a855a11894b2e795a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/417fbbf2e9d5a28a855a11894b2e795a-Supplemental.zip | Interpretability of machine learning models is critical to scientific understanding, AI safety, as well as debugging. Attribution is one approach to interpretability, which highlights input dimensions that are influential to a neural network’s prediction. Evaluation of these methods is largely qualitative for image and text models, because acquiring ground truth attributions requires expensive and unreliable human judgment. Attribution has been little studied for graph neural networks (GNNs), a model class of growing importance that makes predictions on arbitrarily-sized graphs. In this work we adapt commonly-used attribution methods for GNNs and quantitatively evaluate them using computable ground-truths that are objective and challenging to learn. We make concrete recommendations for which attribution methods to use, and provide the data and code for our benchmarking suite. Rigorous and open source benchmarking of attribution methods in graphs could enable new methods development and broader use of attribution in real-world ML tasks. |
On Second Order Behaviour in Augmented Neural ODEs | https://papers.nips.cc/paper_files/paper/2020/hash/418db2ea5d227a9ea8db8e5357ca2084-Abstract.html | Alexander Norcliffe, Cristian Bodnar, Ben Day, Nikola Simidjievski, Pietro Lió | https://papers.nips.cc/paper_files/paper/2020/hash/418db2ea5d227a9ea8db8e5357ca2084-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/418db2ea5d227a9ea8db8e5357ca2084-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10220-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/418db2ea5d227a9ea8db8e5357ca2084-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/418db2ea5d227a9ea8db8e5357ca2084-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/418db2ea5d227a9ea8db8e5357ca2084-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/418db2ea5d227a9ea8db8e5357ca2084-Supplemental.pdf | Neural Ordinary Differential Equations (NODEs) are a new class of models that transform data continuously through infinite-depth architectures. The continuous nature of NODEs has made them particularly suitable for learning the dynamics of complex physical systems. While previous work has mostly been focused on first order ODEs, the dynamics of many systems, especially in classical physics, are governed by second order laws. In this work, we consider Second Order Neural ODEs (SONODEs). We show how the adjoint sensitivity method can be extended to SONODEs and prove that the optimisation of a first order coupled ODE is equivalent and computationally more efficient. Furthermore, we extend the theoretical understanding of the broader class of Augmented NODEs (ANODEs) by showing they can also learn higher order dynamics with a minimal number of augmented dimensions, but at the cost of interpretability. This indicates that the advantages of ANODEs go beyond the extra space offered by the augmented dimensions, as originally thought. Finally, we compare SONODEs and ANODEs on synthetic and real dynamical systems and demonstrate that the inductive biases of the former generally result in faster training and better performance. |
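The reduction the SONODE abstract relies on is the standard one: a second order ODE x'' = f(x, x', t) becomes the coupled first-order system (x, v)' = (v, f(x, v, t)), so the same adjoint machinery used for first order NODEs applies to the stacked state. A minimal sketch with a fixed (non-learned) acceleration field and a plain RK4 integrator; in a SONODE, f would be a neural network and the integration would be differentiated through:

```python
import numpy as np

def f(x, v, t):
    """Assumed acceleration field: a damped harmonic oscillator, x'' = -x - 0.1 x'."""
    return -x - 0.1 * v

def coupled_rhs(state, t):
    """Rewrite x'' = f(x, x', t) as the first order system (x, v)' = (v, f(x, v, t))."""
    x, v = state
    return np.array([v, f(x, v, t)])

def rk4(rhs, state, t0, t1, steps=100):
    """Classic fourth-order Runge-Kutta integration from t0 to t1."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = rhs(state, t)
        k2 = rhs(state + 0.5 * h * k1, t + 0.5 * h)
        k3 = rhs(state + 0.5 * h * k2, t + 0.5 * h)
        k4 = rhs(state + h * k3, t + h)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return state

# Initial position and velocity; integrating the coupled system gives both at t = 5.
x5, v5 = rk4(coupled_rhs, np.array([1.0, 0.0]), 0.0, 5.0)
print(x5, v5)
```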
Neuron Shapley: Discovering the Responsible Neurons | https://papers.nips.cc/paper_files/paper/2020/hash/41c542dfe6e4fc3deb251d64cf6ed2e4-Abstract.html | Amirata Ghorbani, James Y. Zou | https://papers.nips.cc/paper_files/paper/2020/hash/41c542dfe6e4fc3deb251d64cf6ed2e4-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/41c542dfe6e4fc3deb251d64cf6ed2e4-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10221-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/41c542dfe6e4fc3deb251d64cf6ed2e4-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/41c542dfe6e4fc3deb251d64cf6ed2e4-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/41c542dfe6e4fc3deb251d64cf6ed2e4-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/41c542dfe6e4fc3deb251d64cf6ed2e4-Supplemental.zip | We develop Neuron Shapley as a new framework to quantify the contribution of individual neurons to the prediction and performance of a deep network. By accounting for interactions across neurons, Neuron Shapley is more effective in identifying important filters compared to common approaches based on activation patterns. Interestingly, removing just 30 filters with the highest Shapley scores effectively destroys the prediction accuracy of Inception-v3 on ImageNet. Visualization of these few critical filters provides insights into how the network functions. Neuron Shapley is a flexible framework and can be applied to identify responsible neurons in many tasks. We illustrate additional applications of identifying filters that are responsible for biased prediction in facial recognition and filters that are vulnerable to adversarial attacks. Removing these filters is a quick way to repair models. Computing exact Shapley values is computationally infeasible and therefore sampling-based approximations are used in practice. We introduce a new multi-armed bandit algorithm that is able to efficiently detect neurons with the largest Shapley value orders of magnitude faster than existing Shapley value approximation methods.
|
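The quantity estimated by Neuron Shapley is a standard Shapley value in which the "players" are individual filters and the value function is the network's performance with only a subset of filters active. The plain permutation-sampling estimator below shows the underlying computation; the paper's contribution is a multi-armed-bandit scheme that finds the top-scoring filters with far fewer evaluations than this naive loop, and the toy value function here is a stand-in for re-evaluating an actual network:

```python
import numpy as np

def shapley_permutation_sampling(players, value_fn, n_perms=200, seed=0):
    """Monte Carlo Shapley values via random permutations of the players."""
    rng = np.random.default_rng(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perms):
        order = list(players)
        rng.shuffle(order)
        active, prev = set(), value_fn(frozenset())
        for p in order:
            active.add(p)
            cur = value_fn(frozenset(active))
            phi[p] += cur - prev        # marginal contribution of p in this ordering
            prev = cur
    return {p: v / n_perms for p, v in phi.items()}

# Toy stand-in for "accuracy of the network with this subset of filters kept":
# filter 0 is critical, filters 1 and 2 are redundant with each other.
def toy_accuracy(active):
    score = 0.0
    if 0 in active:
        score += 0.6
    if 1 in active or 2 in active:
        score += 0.3
    return score

print(shapley_permutation_sampling([0, 1, 2], toy_accuracy))
# Expected roughly {0: 0.6, 1: 0.15, 2: 0.15}
```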
Stochastic Normalizing Flows | https://papers.nips.cc/paper_files/paper/2020/hash/41d80bfc327ef980528426fc810a6d7a-Abstract.html | Hao Wu, Jonas Köhler, Frank Noe | https://papers.nips.cc/paper_files/paper/2020/hash/41d80bfc327ef980528426fc810a6d7a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/41d80bfc327ef980528426fc810a6d7a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10222-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/41d80bfc327ef980528426fc810a6d7a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/41d80bfc327ef980528426fc810a6d7a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/41d80bfc327ef980528426fc810a6d7a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/41d80bfc327ef980528426fc810a6d7a-Supplemental.pdf | The sampling of probability distributions specified up to a normalization constant is an important problem in both machine learning and statistical mechanics. While classical stochastic sampling methods such as Markov Chain Monte Carlo (MCMC) or Langevin Dynamics (LD) can suffer from slow mixing times there is a growing interest in using normalizing flows in order to learn the transformation of a simple prior distribution to the given target distribution. Here we propose a generalized and combined approach to sample target densities: Stochastic Normalizing Flows (SNF) – an arbitrary sequence of deterministic invertible functions and stochastic sampling blocks. We show that stochasticity overcomes expressivity limitations of normalizing flows resulting from the invertibility constraint, whereas trainable transformations between sampling steps improve efficiency of pure MCMC/LD along the flow. By invoking ideas from non-equilibrium statistical mechanics we derive an efficient training procedure by which both the sampler's and the flow's parameters can be optimized end-to-end, and by which we can compute exact importance weights without having to marginalize out the randomness of the stochastic blocks. We illustrate the representational power, sampling efficiency and asymptotic correctness of SNFs on several benchmarks including applications to sampling molecular systems in equilibrium. |
GPU-Accelerated Primal Learning for Extremely Fast Large-Scale Classification | https://papers.nips.cc/paper_files/paper/2020/hash/41e7637e7b6a9f27a98b84d3a185c7c0-Abstract.html | John T. Halloran, David M. Rocke | https://papers.nips.cc/paper_files/paper/2020/hash/41e7637e7b6a9f27a98b84d3a185c7c0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/41e7637e7b6a9f27a98b84d3a185c7c0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10223-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/41e7637e7b6a9f27a98b84d3a185c7c0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/41e7637e7b6a9f27a98b84d3a185c7c0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/41e7637e7b6a9f27a98b84d3a185c7c0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/41e7637e7b6a9f27a98b84d3a185c7c0-Supplemental.pdf | One of the most efficient methods to solve L2 -regularized primal problems, such as logistic regression and linear support vector machine (SVM) classification, is the widely used trust region Newton algorithm, TRON. While TRON has recently been shown to enjoy substantial speedups on shared-memory multi-core systems, exploiting graphical processing units (GPUs) to speed up the method is significantly more difficult, owing to the highly complex and heavily sequential nature of the algorithm. In this work, we show that using judicious GPU-optimization principles, TRON training time for different losses and feature representations may be drastically reduced. For sparse feature sets, we show that using GPUs to train logistic regression classifiers in LIBLINEAR is up to an order-of-magnitude faster than solely using multithreading. For dense feature sets–which impose far more stringent memory constraints–we show that GPUs substantially reduce the lengthy SVM learning times required for state-of-the-art proteomics analysis, leading to dramatic improvements over recently proposed speedups. Furthermore, we show how GPU speedups may be mixed with multithreading to enable such speedups when the dataset is too large for GPU memory requirements; on a massive dense proteomics dataset of nearly a quarter-billion data instances, these mixed-architecture speedups reduce SVM analysis time from over half a week to less than a single day while using limited GPU memory. |
Random Reshuffling is Not Always Better | https://papers.nips.cc/paper_files/paper/2020/hash/42299f06ee419aa5d9d07798b56779e2-Abstract.html | Christopher M. De Sa | https://papers.nips.cc/paper_files/paper/2020/hash/42299f06ee419aa5d9d07798b56779e2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/42299f06ee419aa5d9d07798b56779e2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10224-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/42299f06ee419aa5d9d07798b56779e2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/42299f06ee419aa5d9d07798b56779e2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/42299f06ee419aa5d9d07798b56779e2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/42299f06ee419aa5d9d07798b56779e2-Supplemental.pdf | Many learning algorithms, such as stochastic gradient descent, are affected by the order in which training examples are used. It is often observed that sampling the training examples without-replacement, also known as random reshuffling, causes learning algorithms to converge faster. We give a counterexample to the Operator Inequality of Noncommutative Arithmetic and Geometric Means, a longstanding conjecture that relates to the performance of random reshuffling in learning algorithms (Recht and Ré, "Toward a noncommutative arithmetic-geometric mean inequality: conjectures, case-studies, and consequences," COLT 2012). We use this to give an example of a learning task and algorithm for which with-replacement random sampling actually outperforms random reshuffling. |
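Claims about sampling order are easy to probe empirically. A small harness, assuming a toy least-squares objective and plain SGD, compares without-replacement ("reshuffle") and with-replacement sampling; on most instances reshuffling does better, and the paper's point is that this is not guaranteed, so this sketch is a measurement tool rather than a reproduction of the paper's counterexample:

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.05, epochs=20, mode="reshuffle", seed=0):
    """Plain SGD on 0.5*(x_i^T w - y_i)^2 with two different sampling schemes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        if mode == "reshuffle":
            idx = rng.permutation(n)            # each example exactly once per epoch
        else:
            idx = rng.integers(0, n, size=n)    # i.i.d. sampling with replacement
        for i in idx:
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=200)

for mode in ("reshuffle", "with_replacement"):
    w = sgd_least_squares(X, y, mode=mode)
    print(mode, float(np.mean((X @ w - y) ** 2)))
```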