title | url | authors | detail_url | tags | AuthorFeedback | Bibtex | MetaReview | Paper | Review | Supplemental | abstract
---|---|---|---|---|---|---|---|---|---|---|---
Counterexample-Guided Learning of Monotonic Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/8ab70731b1553f17c11a3bbc87e0b605-Abstract.html | Aishwarya Sivaraman, Golnoosh Farnadi, Todd Millstein, Guy Van den Broeck | https://papers.nips.cc/paper_files/paper/2020/hash/8ab70731b1553f17c11a3bbc87e0b605-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8ab70731b1553f17c11a3bbc87e0b605-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10725-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8ab70731b1553f17c11a3bbc87e0b605-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8ab70731b1553f17c11a3bbc87e0b605-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8ab70731b1553f17c11a3bbc87e0b605-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8ab70731b1553f17c11a3bbc87e0b605-Supplemental.zip | The widespread adoption of deep learning is often attributed to its automatic feature construction with minimal inductive bias. However, in many real-world tasks, the learned function is intended to satisfy domain-specific constraints. We focus on monotonicity constraints, which are common and require that the function's output increases with increasing values of specific input features. We develop a counterexample-guided technique to provably enforce monotonicity constraints at prediction time. Additionally, we propose a technique to use monotonicity as an inductive bias for deep learning. It works by iteratively incorporating monotonicity counterexamples in the learning process. Contrary to prior work in monotonic learning, we target general ReLU neural networks and do not further restrict the hypothesis space. We have implemented these techniques in a tool called COMET. Experiments on real-world datasets demonstrate that our approach achieves state-of-the-art results compared to existing monotonic learners, and can improve the model quality compared to those that were trained without taking monotonicity constraints into account. |
A Novel Approach for Constrained Optimization in Graphical Models | https://papers.nips.cc/paper_files/paper/2020/hash/8ab9bb97ce35080338be74dc6375e0ed-Abstract.html | Sara Rouhani, Tahrima Rahman, Vibhav Gogate | https://papers.nips.cc/paper_files/paper/2020/hash/8ab9bb97ce35080338be74dc6375e0ed-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8ab9bb97ce35080338be74dc6375e0ed-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10726-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8ab9bb97ce35080338be74dc6375e0ed-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8ab9bb97ce35080338be74dc6375e0ed-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8ab9bb97ce35080338be74dc6375e0ed-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8ab9bb97ce35080338be74dc6375e0ed-Supplemental.zip | We consider the following constrained maximization problem in discrete probabilistic graphical models (PGMs). Given two (possibly identical) PGMs $M_1$ and $M_2$ defined over the same set of variables and a real number $q$, find an assignment of values to all variables such that the probability of the assignment is maximized w.r.t. $M_1$ and is smaller than $q$ w.r.t. $M_2$. We show that several explanation and robust estimation queries over graphical models are special cases of this problem. We propose a class of approximate algorithms for solving this problem. Our algorithms are based on a graph concept called $k$-separator and heuristic algorithms for multiple choice knapsack and subset-sum problems. Our experiments show that our algorithms are superior to the following approach: encode the problem as a mixed integer linear program (MILP) and solve the latter using a state-of-the-art MILP solver such as SCIP. |
Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology | https://papers.nips.cc/paper_files/paper/2020/hash/8abfe8ac9ec214d68541fcb888c0b4c3-Abstract.html | Quynh N. Nguyen, Marco Mondelli | https://papers.nips.cc/paper_files/paper/2020/hash/8abfe8ac9ec214d68541fcb888c0b4c3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8abfe8ac9ec214d68541fcb888c0b4c3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10727-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8abfe8ac9ec214d68541fcb888c0b4c3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8abfe8ac9ec214d68541fcb888c0b4c3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8abfe8ac9ec214d68541fcb888c0b4c3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8abfe8ac9ec214d68541fcb888c0b4c3-Supplemental.pdf | Recent works have shown that gradient descent can find a global minimum for over-parameterized neural networks where the widths of all the hidden layers scale polynomially with N (N being the number of training samples). In this paper, we prove that, for deep networks, a single layer of width N following the input layer suffices to ensure a similar guarantee. In particular, all the remaining layers are allowed to have constant widths, and form a pyramidal topology. We show an application of our result to the widely used Xavier's initialization and obtain an over-parameterization requirement for the single wide layer of order N^2. |
On the Trade-off between Adversarial and Backdoor Robustness | https://papers.nips.cc/paper_files/paper/2020/hash/8b4066554730ddfaa0266346bdc1b202-Abstract.html | Cheng-Hsin Weng, Yan-Ting Lee, Shan-Hung (Brandon) Wu | https://papers.nips.cc/paper_files/paper/2020/hash/8b4066554730ddfaa0266346bdc1b202-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8b4066554730ddfaa0266346bdc1b202-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10728-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8b4066554730ddfaa0266346bdc1b202-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8b4066554730ddfaa0266346bdc1b202-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8b4066554730ddfaa0266346bdc1b202-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8b4066554730ddfaa0266346bdc1b202-Supplemental.pdf | Deep neural networks are shown to be susceptible to both adversarial attacks and backdoor attacks. Although many defenses against an individual type of the above attacks have been proposed, the interactions between the vulnerabilities of a network to both types of attacks have not been carefully investigated yet. In this paper, we conduct experiments to study whether adversarial robustness and backdoor robustness can affect each other and find a trade-off—by increasing the robustness of a network to adversarial examples, the network becomes more vulnerable to backdoor attacks. We then investigate the cause and show how such a trade-off can be exploited for either good or bad purposes. Our findings suggest that future research on defense should take both adversarial and backdoor attacks into account when designing algorithms or robustness measures to avoid pitfalls and a false sense of security. |
Implicit Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/8b5c8441a8ff8e151b191c53c1842a38-Abstract.html | Fangda Gu, Heng Chang, Wenwu Zhu, Somayeh Sojoudi, Laurent El Ghaoui | https://papers.nips.cc/paper_files/paper/2020/hash/8b5c8441a8ff8e151b191c53c1842a38-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8b5c8441a8ff8e151b191c53c1842a38-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10729-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8b5c8441a8ff8e151b191c53c1842a38-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8b5c8441a8ff8e151b191c53c1842a38-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8b5c8441a8ff8e151b191c53c1842a38-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8b5c8441a8ff8e151b191c53c1842a38-Supplemental.pdf | Graph Neural Networks (GNNs) are widely used deep learning models that learn meaningful representations from graph-structured data. Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs. To overcome this difficulty, we propose a graph learning framework, called Implicit Graph Neural Networks (IGNN), where predictions are based on the solution of a fixed-point equilibrium equation involving implicitly defined "state" vectors. We use the Perron-Frobenius theory to derive sufficient conditions that ensure well-posedness of the framework. Leveraging implicit differentiation, we derive a tractable projected gradient descent method to train the framework. Experiments on a comprehensive range of tasks show that IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models. |
Rethinking Importance Weighting for Deep Learning under Distribution Shift | https://papers.nips.cc/paper_files/paper/2020/hash/8b9e7ab295e87570551db122a04c6f7c-Abstract.html | Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama | https://papers.nips.cc/paper_files/paper/2020/hash/8b9e7ab295e87570551db122a04c6f7c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8b9e7ab295e87570551db122a04c6f7c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10730-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8b9e7ab295e87570551db122a04c6f7c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8b9e7ab295e87570551db122a04c6f7c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8b9e7ab295e87570551db122a04c6f7c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8b9e7ab295e87570551db122a04c6f7c-Supplemental.pdf | Under distribution shift (DS) where the training data distribution differs from the test one, a powerful technique is importance weighting (IW) which handles DS in two separate steps: weight estimation (WE) estimates the test-over-training density ratio and weighted classification (WC) trains the classifier from weighted training data. However, IW cannot work well on complex data, since WE is incompatible with deep learning. In this paper, we rethink IW and theoretically show it suffers from a circular dependency: we need not only WE for WC, but also WC for WE where a trained deep classifier is used as the feature extractor (FE). To cut off the dependency, we try to pretrain FE from unweighted training data, which leads to biased FE. To overcome the bias, we propose an end-to-end solution dynamic IW that iterates between WE and WC and combines them in a seamless manner, and hence our WE can also enjoy deep networks and stochastic optimizers indirectly. Experiments with two representative types of DS on three popular datasets show that our dynamic IW compares favorably with state-of-the-art methods. |
Guiding Deep Molecular Optimization with Genetic Exploration | https://papers.nips.cc/paper_files/paper/2020/hash/8ba6c657b03fc7c8dd4dff8e45defcd2-Abstract.html | Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin | https://papers.nips.cc/paper_files/paper/2020/hash/8ba6c657b03fc7c8dd4dff8e45defcd2-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8ba6c657b03fc7c8dd4dff8e45defcd2-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10731-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8ba6c657b03fc7c8dd4dff8e45defcd2-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8ba6c657b03fc7c8dd4dff8e45defcd2-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8ba6c657b03fc7c8dd4dff8e45defcd2-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8ba6c657b03fc7c8dd4dff8e45defcd2-Supplemental.pdf | De novo molecular design attempts to search over the chemical space for molecules with the desired property. Recently, deep learning has gained considerable attention as a promising approach to solve the problem. In this paper, we propose genetic expert-guided learning (GEGL), a simple yet novel framework for training a deep neural network (DNN) to generate highly-rewarding molecules. Our main idea is to design a "genetic expert improvement" procedure, which generates high-quality targets for imitation learning of the DNN. Extensive experiments show that GEGL significantly improves over state-of-the-art methods. For example, GEGL manages to solve the penalized octanol-water partition coefficient optimization with a score of 31.40, while the best-known score in the literature is 27.22. Besides, for the GuacaMol benchmark with 20 tasks, our method achieves the highest score for 19 tasks, in comparison with state-of-the-art methods, and newly obtains the perfect score for three tasks. Our training code is available at https://github.com/sungsoo-ahn/genetic-expert-guided-learning. |
Temporal Spike Sequence Learning via Backpropagation for Deep Spiking Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/8bdb5058376143fa358981954e7626b8-Abstract.html | Wenrui Zhang, Peng Li | https://papers.nips.cc/paper_files/paper/2020/hash/8bdb5058376143fa358981954e7626b8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8bdb5058376143fa358981954e7626b8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10732-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8bdb5058376143fa358981954e7626b8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8bdb5058376143fa358981954e7626b8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8bdb5058376143fa358981954e7626b8-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8bdb5058376143fa358981954e7626b8-Supplemental.pdf | Spiking neural networks (SNNs) are well suited for spatio-temporal learning and implementations on energy-efficient event-driven neuromorphic processors. However, existing SNN error backpropagation (BP) methods lack proper handling of spiking discontinuities and suffer from low performance compared with the BP methods for traditional artificial neural networks. In addition, a large number of time steps are typically required to achieve decent performance, leading to high latency and rendering spike-based computation unscalable to deep architectures. We present a novel Temporal Spike Sequence Learning Backpropagation (TSSL-BP) method for training deep SNNs, which breaks down error backpropagation across two types of inter-neuron and intra-neuron dependencies and leads to improved temporal learning precision. It captures inter-neuron dependencies through presynaptic firing times by considering the all-or-none characteristics of firing activities and captures intra-neuron dependencies by handling the internal evolution of each neuronal state in time. TSSL-BP efficiently trains deep SNNs within a much shortened temporal window of a few steps while improving the accuracy for various image classification datasets including CIFAR10. |
TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation | https://papers.nips.cc/paper_files/paper/2020/hash/8c00dee24c9878fea090ed070b44f1ab-Abstract.html | DONGXU LI, Chenchen Xu, Xin Yu, Kaihao Zhang, Benjamin Swift, Hanna Suominen, Hongdong Li | https://papers.nips.cc/paper_files/paper/2020/hash/8c00dee24c9878fea090ed070b44f1ab-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8c00dee24c9878fea090ed070b44f1ab-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10733-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8c00dee24c9878fea090ed070b44f1ab-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8c00dee24c9878fea090ed070b44f1ab-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8c00dee24c9878fea090ed070b44f1ab-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8c00dee24c9878fea090ed070b44f1ab-Supplemental.zip | Sign language translation (SLT) aims to interpret sign video sequences into text-based natural language sentences. Sign videos consist of continuous sequences of sign gestures with no clear boundaries in between. Existing SLT models usually represent sign visual features in a frame-wise manner so as to avoid explicitly segmenting the videos into isolated signs. However, these methods neglect the temporal information of signs and lead to substantial ambiguity in translation. In this paper, we explore the temporal semantic structures of sign videos to learn more discriminative features. To this end, we first present a novel sign video segment representation which takes into account multiple temporal granularities, thus alleviating the need for accurate video segmentation. Taking advantage of the proposed segment representation, we develop a novel hierarchical sign video feature learning method via a temporal semantic pyramid network, called TSPNet. Specifically, TSPNet introduces an inter-scale attention to evaluate and enhance local semantic consistency of sign segments and an intra-scale attention to resolve semantic ambiguity by using non-local video context. Experiments show that our TSPNet outperforms the state-of-the-art with significant improvements on the BLEU score (from 9.58 to 13.41) and ROUGE score (from 31.80 to 34.96) on the largest commonly used SLT dataset. Our implementation is available at https://github.com/verashira/TSPNet. |
Neural Topographic Factor Analysis for fMRI Data | https://papers.nips.cc/paper_files/paper/2020/hash/8c3c27ac7d298331a1bdfd0a5e8703d3-Abstract.html | Eli Sennesh, Zulqarnain Khan, Yiyu Wang, J Benjamin Hutchinson, Ajay Satpute, Jennifer Dy, Jan-Willem van de Meent | https://papers.nips.cc/paper_files/paper/2020/hash/8c3c27ac7d298331a1bdfd0a5e8703d3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8c3c27ac7d298331a1bdfd0a5e8703d3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10734-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8c3c27ac7d298331a1bdfd0a5e8703d3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8c3c27ac7d298331a1bdfd0a5e8703d3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8c3c27ac7d298331a1bdfd0a5e8703d3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8c3c27ac7d298331a1bdfd0a5e8703d3-Supplemental.zip | Neuroimaging studies produce gigabytes of spatio-temporal data for a small number of participants and stimuli. Recent work increasingly suggests that the common practice of averaging across participants and stimuli leaves out systematic and meaningful information. We propose Neural Topographic Factor Analysis (NTFA), a probabilistic factor analysis model that infers embeddings for participants and stimuli. These embeddings allow us to reason about differences between participants and stimuli as signal rather than noise. We evaluate NTFA on data from an in-house pilot experiment, as well as two publicly available datasets. We demonstrate that inferring representations for participants and stimuli improves predictive generalization to unseen data when compared to previous topographic methods. We also demonstrate that the inferred latent factor representations are useful for downstream tasks such as multivoxel pattern analysis and functional connectivity. |
Neural Architecture Generator Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/8c53d30ad023ce50140181f713059ddf-Abstract.html | Robin Ru, Pedro Esperança, Fabio Maria Carlucci | https://papers.nips.cc/paper_files/paper/2020/hash/8c53d30ad023ce50140181f713059ddf-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8c53d30ad023ce50140181f713059ddf-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10735-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8c53d30ad023ce50140181f713059ddf-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8c53d30ad023ce50140181f713059ddf-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8c53d30ad023ce50140181f713059ddf-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8c53d30ad023ce50140181f713059ddf-Supplemental.pdf | Neural Architecture Search (NAS) was first proposed to achieve state-of-the-art performance through the discovery of new architecture patterns, without human intervention. An over-reliance on expert knowledge in the search space design has however led to increased performance (local optima) without significant architectural breakthroughs, thus preventing truly novel solutions from being reached. In this work we 1) are the first to investigate casting NAS as a problem of finding the optimal network generator and 2) we propose a new, hierarchical and graph-based search space capable of representing an extremely large variety of network types, yet only requiring few continuous hyper-parameters. This greatly reduces the dimensionality of the problem, enabling the effective use of Bayesian Optimisation as a search strategy. At the same time, we expand the range of valid architectures, motivating a multi-objective learning approach. We demonstrate the effectiveness of this strategy on six benchmark datasets and show that our search space generates extremely lightweight yet highly competitive models. |
A Bandit Learning Algorithm and Applications to Auction Design | https://papers.nips.cc/paper_files/paper/2020/hash/8ccf1fb8b09a8212bafea305cf5d5e9f-Abstract.html | Kim Thang Nguyen | https://papers.nips.cc/paper_files/paper/2020/hash/8ccf1fb8b09a8212bafea305cf5d5e9f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8ccf1fb8b09a8212bafea305cf5d5e9f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10736-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8ccf1fb8b09a8212bafea305cf5d5e9f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8ccf1fb8b09a8212bafea305cf5d5e9f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8ccf1fb8b09a8212bafea305cf5d5e9f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8ccf1fb8b09a8212bafea305cf5d5e9f-Supplemental.pdf | We consider online bandit learning in which, at every time step, an algorithm has to make a decision and then observe only its reward. The goal is to design efficient (polynomial-time) algorithms that achieve a total reward close to that of the best fixed decision in hindsight. In this paper, we introduce a new notion of $(\lambda,\mu)$-concave functions and present a bandit learning algorithm whose performance guarantee is characterized as a function of the concavity parameters $\lambda$ and $\mu$. The algorithm is based on the mirror descent algorithm, in which the update directions follow the gradient of the multilinear extensions of the reward functions. The regret bound induced by our algorithm is $\widetilde{O}(\sqrt{T})$, which is nearly optimal. We apply our algorithm to auction design, specifically to welfare maximization, revenue maximization, and no-envy learning in auctions. In welfare maximization, we show that a version of fictitious play in smooth auctions guarantees a competitive regret bound determined by the smoothness parameters. In revenue maximization, we consider simultaneous second-price auctions with reserve prices in multi-parameter environments. We give a bandit algorithm which achieves total revenue at least half that of the best fixed reserve prices in hindsight. In no-envy learning, we study the bandit item selection problem where the player valuation is submodular and provide an efficient $1/2$-approximation no-envy algorithm. |
MetaPoison: Practical General-purpose Clean-label Data Poisoning | https://papers.nips.cc/paper_files/paper/2020/hash/8ce6fc704072e351679ac97d4a985574-Abstract.html | W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein | https://papers.nips.cc/paper_files/paper/2020/hash/8ce6fc704072e351679ac97d4a985574-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8ce6fc704072e351679ac97d4a985574-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10737-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8ce6fc704072e351679ac97d4a985574-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8ce6fc704072e351679ac97d4a985574-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8ce6fc704072e351679ac97d4a985574-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8ce6fc704072e351679ac97d4a985574-Supplemental.pdf | Data poisoning---the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data---is an emerging threat in the context of neural networks. Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models. We propose MetaPoison, a first-order method that approximates the bilevel problem via meta-learning and crafts poisons that fool neural networks. MetaPoison is effective: it outperforms previous clean-label poisoning methods by a large margin. MetaPoison is robust: poisoned data made for one model transfer to a variety of victim models with unknown training settings and architectures. MetaPoison is general-purpose: it works not only in fine-tuning scenarios, but also for end-to-end training from scratch, which until now hasn't been feasible for clean-label attacks with deep nets. MetaPoison can achieve arbitrary adversary goals---like using poisons of one class to make a target image don the label of another arbitrarily chosen class. Finally, MetaPoison works in the real world. We demonstrate for the first time successful data poisoning of models trained on the black-box Google Cloud AutoML API. |
Sample Efficient Reinforcement Learning via Low-Rank Matrix Estimation | https://papers.nips.cc/paper_files/paper/2020/hash/8d2355364e9a2ba1f82f975414937b43-Abstract.html | Devavrat Shah, Dogyoon Song, Zhi Xu, Yuzhe Yang | https://papers.nips.cc/paper_files/paper/2020/hash/8d2355364e9a2ba1f82f975414937b43-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8d2355364e9a2ba1f82f975414937b43-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10738-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8d2355364e9a2ba1f82f975414937b43-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8d2355364e9a2ba1f82f975414937b43-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8d2355364e9a2ba1f82f975414937b43-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8d2355364e9a2ba1f82f975414937b43-Supplemental.pdf | We consider the question of learning $Q$-function in a sample efficient manner for reinforcement learning with continuous state and action spaces under a generative model. If $Q$-function is Lipschitz continuous, then the minimal sample complexity for estimating $\epsilon$-optimal $Q$-function is known to scale as $\Omega(\frac{1}{\epsilon^{d_1+d_2+2}})$ per classical non-parametric learning theory, where $d_1$ and $d_2$ denote the dimensions of the state and action spaces respectively. The $Q$-function, when viewed as a kernel, induces a Hilbert-Schmidt operator and hence possesses square-summable spectrum. This motivates us to consider a parametric class of $Q$-functions parameterized by its "rank" $r$, which contains all Lipschitz $Q$-functions as $r\to\infty$. As our key contribution, we develop a simple, iterative learning algorithm that finds $\epsilon$-optimal $Q$-function with sample complexity of $\widetilde{O}(\frac{1}{\epsilon^{\max(d_1, d_2)+2}})$ when the optimal $Q$-function has low rank $r$ and the discounting factor $\gamma$ is below a certain threshold. Thus, this provides an exponential improvement in sample complexity. To enable our result, we develop a novel Matrix Estimation algorithm that faithfully estimates an unknown low-rank matrix in the $\ell_\infty$ sense even in the presence of arbitrary bounded noise, which might be of interest in its own right. Empirical results on several stochastic control tasks confirm the efficacy of our "low-rank" algorithms. |
Training Generative Adversarial Networks with Limited Data | https://papers.nips.cc/paper_files/paper/2020/hash/8d30aa96e72440759f74bd2306c1fa3d-Abstract.html | Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, Timo Aila | https://papers.nips.cc/paper_files/paper/2020/hash/8d30aa96e72440759f74bd2306c1fa3d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8d30aa96e72440759f74bd2306c1fa3d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10739-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8d30aa96e72440759f74bd2306c1fa3d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8d30aa96e72440759f74bd2306c1fa3d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8d30aa96e72440759f74bd2306c1fa3d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8d30aa96e72440759f74bd2306c1fa3d-Supplemental.pdf | Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42. |
Deeply Learned Spectral Total Variation Decomposition | https://papers.nips.cc/paper_files/paper/2020/hash/8d3215ae97598264ad6529613774a038-Abstract.html | Tamara G. Grossmann, Yury Korolev, Guy Gilboa, Carola Schoenlieb | https://papers.nips.cc/paper_files/paper/2020/hash/8d3215ae97598264ad6529613774a038-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8d3215ae97598264ad6529613774a038-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10740-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8d3215ae97598264ad6529613774a038-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8d3215ae97598264ad6529613774a038-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8d3215ae97598264ad6529613774a038-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8d3215ae97598264ad6529613774a038-Supplemental.pdf | Non-linear spectral decompositions of images based on one-homogeneous functionals such as total variation have gained considerable attention in the last few years. Due to their ability to extract spectral components corresponding to objects of different size and contrast, such decompositions enable filtering, feature transfer, image fusion and other applications. However, obtaining this decomposition involves solving multiple non-smooth optimisation problems and is therefore computationally highly intensive. In this paper, we present a neural network approximation of a non-linear spectral decomposition. We report up to four orders of magnitude (×10,000) speedup in processing of mega-pixel size images, compared to classical GPU implementations. Our proposed network, TVspecNET, is able to implicitly learn the underlying PDE and, despite being entirely data driven, inherits invariances of the model based transform. To the best of our knowledge, this is the first approach towards learning a non-linear spectral decomposition of images. Not only do we gain a staggering computational advantage, but this approach can also be seen as a step towards studying neural networks that can decompose an image into spectral components defined by a user rather than a handcrafted functional. |
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training | https://papers.nips.cc/paper_files/paper/2020/hash/8dc5983b8c4ef1d8fcd5f325f9a65511-Abstract.html | Yonggan Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, Kailash Gopalakrishnan, Zhangyang Wang, Yingyan Lin | https://papers.nips.cc/paper_files/paper/2020/hash/8dc5983b8c4ef1d8fcd5f325f9a65511-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8dc5983b8c4ef1d8fcd5f325f9a65511-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10741-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8dc5983b8c4ef1d8fcd5f325f9a65511-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8dc5983b8c4ef1d8fcd5f325f9a65511-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8dc5983b8c4ef1d8fcd5f325f9a65511-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8dc5983b8c4ef1d8fcd5f325f9a65511-Supplemental.pdf | Recent breakthroughs in deep neural networks (DNNs) have fueled a tremendous demand for intelligent edge devices featuring on-site learning, while the practical realization of such systems remains a challenge due to the limited resources available at the edge and the required massive training costs for state-of-the-art (SOTA) DNNs. As reducing precision is one of the most effective knobs for boosting training time/energy efficiency, there has been a growing interest in low-precision DNN training. In this paper, we explore from an orthogonal direction: how to fractionally squeeze out more training cost savings from the most redundant bit level, progressively along the training trajectory and dynamically per input. Specifically, we propose FracTrain that integrates (i) progressive fractional quantization which gradually increases the precision of activations, weights, and gradients that will not reach the precision of SOTA static quantized DNN training until the final training stage, and (ii) dynamic fractional quantization which assigns precisions to both the activations and gradients of each layer in an input-adaptive manner, for only "fractionally" updating layer parameters. Extensive simulations and ablation studies (six models, four datasets, and three training settings including standard, adaptation, and fine-tuning) validate the effectiveness of FracTrain in reducing computational cost and hardware-quantified energy/latency of DNN training while achieving a comparable or better (-0.12%~+1.87%) accuracy. For example, when training ResNet-74 on CIFAR-10, FracTrain achieves 77.6% and 53.5% computational cost and training latency savings, respectively, compared with the best SOTA baseline, while achieving a comparable (-0.07%) accuracy. Our codes are available at: https://github.com/RICE-EIC/FracTrain. |
Improving Neural Network Training in Low Dimensional Random Bases | https://papers.nips.cc/paper_files/paper/2020/hash/8dcf2420e78a64333a59674678fb283b-Abstract.html | Frithjof Gressmann, Zach Eaton-Rosen, Carlo Luschi | https://papers.nips.cc/paper_files/paper/2020/hash/8dcf2420e78a64333a59674678fb283b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8dcf2420e78a64333a59674678fb283b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10742-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8dcf2420e78a64333a59674678fb283b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8dcf2420e78a64333a59674678fb283b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8dcf2420e78a64333a59674678fb283b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8dcf2420e78a64333a59674678fb283b-Supplemental.pdf | Stochastic Gradient Descent (SGD) has proven to be remarkably effective in optimizing deep neural networks that employ ever-larger numbers of parameters. Yet, improving the efficiency of large-scale optimization remains a vital and highly active area of research. Recent work has shown that deep neural networks can be optimized in randomly-projected subspaces of much smaller dimensionality than their native parameter space. While such training is promising for more efficient and scalable optimization schemes, its practical application is limited by inferior optimization performance. Here, we improve on recent random subspace approaches as follows. We show that keeping the random projection fixed throughout training is detrimental to optimization. We propose re-drawing the random subspace at each step, which yields significantly better performance. We realize further improvements by applying independent projections to different parts of the network, making the approximation more efficient as network dimensionality grows. To implement these experiments, we leverage hardware-accelerated pseudo-random number generation to construct the random projections on-demand at every optimization step, allowing us to distribute the computation of independent random directions across multiple workers with shared random seeds. This yields significant reductions in memory and is up to 10x faster for the workloads in question. |
Safe Reinforcement Learning via Curriculum Induction | https://papers.nips.cc/paper_files/paper/2020/hash/8df6a65941e4c9da40a4fb899de65c55-Abstract.html | Matteo Turchetta, Andrey Kolobov, Shital Shah, Andreas Krause, Alekh Agarwal | https://papers.nips.cc/paper_files/paper/2020/hash/8df6a65941e4c9da40a4fb899de65c55-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8df6a65941e4c9da40a4fb899de65c55-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10743-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8df6a65941e4c9da40a4fb899de65c55-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8df6a65941e4c9da40a4fb899de65c55-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8df6a65941e4c9da40a4fb899de65c55-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8df6a65941e4c9da40a4fb899de65c55-Supplemental.pdf | In safety-critical applications, autonomous agents may need to learn in an environment where mistakes can be very costly. In such settings, the agent needs to behave safely not only after but also while learning. To achieve this, existing safe reinforcement learning methods make an agent rely on priors that let it avoid dangerous situations during exploration with high probability, but both the probabilistic guarantees and the smoothness assumptions inherent in the priors are not viable in many scenarios of interest such as autonomous driving. This paper presents an alternative approach inspired by human teaching, where an agent learns under the supervision of an automatic instructor that saves the agent from violating constraints during learning. In this model, we introduce the monitor that neither needs to know how to do well at the task the agent is learning nor needs to know how the environment works. Instead, it has a library of reset controllers that it activates when the agent starts behaving dangerously, preventing it from doing damage. Crucially, the choices of which reset controller to apply in which situation affect the speed of agent learning. Based on observing agents' progress the teacher itself learns a policy for choosing the reset controllers, a curriculum, to optimize the agent's final policy reward. Our experiments use this framework in two environments to induce curricula for safe and efficient learning. |
Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/8e2c381d4dd04f1c55093f22c59c3a08-Abstract.html | Nino Vieillard, Tadashi Kozuno, Bruno Scherrer, Olivier Pietquin, Remi Munos, Matthieu Geist | https://papers.nips.cc/paper_files/paper/2020/hash/8e2c381d4dd04f1c55093f22c59c3a08-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8e2c381d4dd04f1c55093f22c59c3a08-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10744-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8e2c381d4dd04f1c55093f22c59c3a08-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8e2c381d4dd04f1c55093f22c59c3a08-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8e2c381d4dd04f1c55093f22c59c3a08-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8e2c381d4dd04f1c55093f22c59c3a08-Supplemental.pdf | Recent Reinforcement Learning (RL) algorithms making use of Kullback-Leibler (KL) regularization as a core component have shown outstanding performance. Yet, only little is understood theoretically about why KL regularization helps, so far. We study KL regularization within an approximate value iteration scheme and show that it implicitly averages q-values. Leveraging this insight, we provide a very strong performance bound, the very first to combine two desirable aspects: a linear dependency to the horizon (instead of quadratic) and an error propagation term involving an averaging effect of the estimation errors (instead of an accumulation effect). We also study the more general case of an additional entropy regularizer. The resulting abstract scheme encompasses many existing RL algorithms. Some of our assumptions do not hold with neural networks, so we complement this theoretical analysis with an extensive empirical study. |
How Robust are the Estimated Effects of Nonpharmaceutical Interventions against COVID-19? | https://papers.nips.cc/paper_files/paper/2020/hash/8e3308c853e47411c761429193511819-Abstract.html | Mrinank Sharma, Sören Mindermann, Jan Brauner, Gavin Leech, Anna Stephenson, Tomáš Gavenčiak, Jan Kulveit, Yee Whye Teh, Leonid Chindelevitch, Yarin Gal | https://papers.nips.cc/paper_files/paper/2020/hash/8e3308c853e47411c761429193511819-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8e3308c853e47411c761429193511819-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10745-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8e3308c853e47411c761429193511819-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8e3308c853e47411c761429193511819-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8e3308c853e47411c761429193511819-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8e3308c853e47411c761429193511819-Supplemental.pdf | To what extent are effectiveness estimates of nonpharmaceutical interventions (NPIs) against COVID-19 influenced by the assumptions our models make? To answer this question, we investigate 2 state-of-the-art NPI effectiveness models and propose 6 variants that make different structural assumptions. In particular, we investigate how well NPI effectiveness estimates generalise to unseen countries, and their sensitivity to unobserved factors. Models which account for noise in disease transmission compare favourably. We further evaluate how robust estimates are to different choices of epidemiological parameters and data. Focusing on models that assume transmission noise, we find that previously published results are robust across these choices and across different models. Finally, we mathematically ground the interpretation of NPI effectiveness estimates when certain common assumptions do not hold. |
Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses | https://papers.nips.cc/paper_files/paper/2020/hash/8ee7730e97c67473a424ccfeff49ab20-Abstract.html | Kaivalya Rawal, Himabindu Lakkaraju | https://papers.nips.cc/paper_files/paper/2020/hash/8ee7730e97c67473a424ccfeff49ab20-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8ee7730e97c67473a424ccfeff49ab20-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10746-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8ee7730e97c67473a424ccfeff49ab20-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8ee7730e97c67473a424ccfeff49ab20-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8ee7730e97c67473a424ccfeff49ab20-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8ee7730e97c67473a424ccfeff49ab20-Supplemental.pdf | As predictive models are increasingly being deployed in high-stakes decision-making, there has been a lot of interest in developing algorithms which can provide recourses to affected individuals. While developing such tools is important, it is even more critical to analyze and interpret a predictive model, and vet it thoroughly to ensure that the recourses it offers are meaningful and non-discriminatory before it is deployed in the real world. To this end, we propose a novel model agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations which provide an interpretable and accurate summary of recourses for the entire population. We formulate a novel objective which simultaneously optimizes for correctness of the recourses and interpretability of the explanations, while minimizing overall recourse costs across the entire population. More specifically, our objective enables us to learn, with optimality guarantees on recourse correctness, a small number of compact rule sets each of which capture recourses for well defined subpopulations within the data. We also demonstrate theoretically that several of the prior approaches proposed to generate recourses for individuals are special cases of our framework. Experimental evaluation with real world datasets and user studies demonstrate that our framework can provide decision makers with a comprehensive overview of recourses corresponding to any black box model, and consequently help detect undesirable model biases and discrimination. |
Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization | https://papers.nips.cc/paper_files/paper/2020/hash/8f4576ad85410442a74ee3a7683757b3-Abstract.html | Benjamin Aubin, Florent Krzakala, Yue Lu, Lenka Zdeborová | https://papers.nips.cc/paper_files/paper/2020/hash/8f4576ad85410442a74ee3a7683757b3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8f4576ad85410442a74ee3a7683757b3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10747-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8f4576ad85410442a74ee3a7683757b3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8f4576ad85410442a74ee3a7683757b3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8f4576ad85410442a74ee3a7683757b3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8f4576ad85410442a74ee3a7683757b3-Supplemental.pdf | We consider a commonly studied supervised classification of a synthetic dataset whose labels are generated by feeding a one-layer non-linear neural network with random iid inputs. We study the generalization performances of standard classifiers in the high-dimensional regime where $\alpha=\frac{n}{d}$ is kept finite in the limit of a high dimension $d$ and number of samples $n$. Our contribution is three-fold: First, we prove a formula for the generalization error achieved by $\ell_2$ regularized classifiers that minimize a convex loss. This formula was first obtained by the heuristic replica method of statistical physics. Secondly, focussing on commonly used loss functions and optimizing the $\ell_2$ regularization strength, we observe that while ridge regression performance is poor, logistic and hinge regression are surprisingly able to approach the Bayes-optimal generalization error extremely closely. As $\alpha \to \infty$ they lead to Bayes-optimal rates, a fact that does not follow from predictions of margin-based generalization error bounds. Third, we design an optimal loss and regularizer that provably leads to Bayes-optimal generalization error. |
Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method | https://papers.nips.cc/paper_files/paper/2020/hash/8f468c873a32bb0619eaeb2050ba45d1-Abstract.html | Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh | https://papers.nips.cc/paper_files/paper/2020/hash/8f468c873a32bb0619eaeb2050ba45d1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8f468c873a32bb0619eaeb2050ba45d1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10748-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8f468c873a32bb0619eaeb2050ba45d1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8f468c873a32bb0619eaeb2050ba45d1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8f468c873a32bb0619eaeb2050ba45d1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8f468c873a32bb0619eaeb2050ba45d1-Supplemental.zip | We consider the classical setting of optimizing a nonsmooth Lipschitz continuous convex function over a convex constraint set, when having access to a (stochastic) first-order oracle (FO) for the function and a projection oracle (PO) for the constraint set. It is well known that to achieve $\epsilon$-suboptimality in high-dimensions, $\Theta(\epsilon^{-2})$ FO calls are necessary. This is achieved by the projected subgradient method (PGD). However, PGD also entails $O(\epsilon^{-2})$ PO calls, which may be computationally costlier than FO calls (e.g. nuclear norm constraints). Improving this PO calls complexity of PGD is largely unexplored, despite the fundamental nature of this problem and extensive literature. We present first such improvement. This only requires a mild assumption that the objective function, when extended to a slightly larger neighborhood of the constraint set, still remains Lipschitz and accessible via FO. In particular, we introduce MOPES method, which carefully combines Moreau-Yosida smoothing and accelerated first-order schemes. This is guaranteed to find a feasible $\epsilon$-suboptimal solution using only $O(\epsilon^{-1})$ PO calls and optimal $O(\epsilon^{-2})$ FO calls. Further, instead of a PO if we only have a linear minimization oracle (LMO, a la Frank-Wolfe) to access the constraint set, an extension of our method, MOLES, finds a feasible $\epsilon$-suboptimal solution using $O(\epsilon^{-2})$ LMO calls and FO calls---both match known lower bounds, resolving a question left open since White (1993). Our experiments confirm that these methods achieve significant speedups over the state-of-the-art, for a problem with costly PO and LMO calls. |
PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/8fb134f258b1f7865a6ab2d935a897c9-Abstract.html | Minh Vu, My T. Thai | https://papers.nips.cc/paper_files/paper/2020/hash/8fb134f258b1f7865a6ab2d935a897c9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8fb134f258b1f7865a6ab2d935a897c9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10749-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8fb134f258b1f7865a6ab2d935a897c9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8fb134f258b1f7865a6ab2d935a897c9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8fb134f258b1f7865a6ab2d935a897c9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8fb134f258b1f7865a6ab2d935a897c9-Supplemental.pdf | In Graph Neural Networks (GNNs), the graph structure is incorporated into the learning of node representations. This complex structure makes explaining GNNs' predictions much more challenging. In this paper, we propose PGM-Explainer, a Probabilistic Graphical Model (PGM) model-agnostic explainer for GNNs. Given a prediction to be explained, PGM-Explainer identifies crucial graph components and generates an explanation in the form of a PGM approximating that prediction. Different from existing explainers for GNNs where the explanations are drawn from a set of linear functions of explained features, PGM-Explainer is able to demonstrate the dependencies of explained features in the form of conditional probabilities. Our theoretical analysis shows that the PGM generated by PGM-Explainer includes the Markov-blanket of the target prediction, i.e. including all its statistical information. We also show that the explanation returned by PGM-Explainer contains the same set of independence statements in the perfect map. Our experiments on both synthetic and real-world datasets show that PGM-Explainer achieves better performance than existing explainers in many benchmark tasks. |
Few-Cost Salient Object Detection with Adversarial-Paced Learning | https://papers.nips.cc/paper_files/paper/2020/hash/8fc687aa152e8199fe9e73304d407bca-Abstract.html | Dingwen Zhang, HaiBin Tian, Jungong Han | https://papers.nips.cc/paper_files/paper/2020/hash/8fc687aa152e8199fe9e73304d407bca-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8fc687aa152e8199fe9e73304d407bca-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10750-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8fc687aa152e8199fe9e73304d407bca-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8fc687aa152e8199fe9e73304d407bca-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8fc687aa152e8199fe9e73304d407bca-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8fc687aa152e8199fe9e73304d407bca-Supplemental.zip | Detecting and segmenting salient objects from given image scenes has received great attention in recent years. A fundamental challenge in training the existing deep saliency detection models is the requirement of large amounts of annotated data. While gathering large quantities of training data becomes cheap and easy, annotating the data is an expensive process in terms of time, labor and human expertise. To address this problem, this paper proposes to learn an effective salient object detection model based on the manual annotation of only a few training images, thus dramatically alleviating human labor in training models. To this end, we name this new task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario. Essentially, APL is derived from the self-paced learning (SPL) regime but it infers the robust learning pace through the data-driven adversarial learning mechanism rather than the heuristic design of the learning regularizer. Comprehensive experiments on four widely-used benchmark datasets have demonstrated that the proposed approach can effectively approach the performance of existing supervised deep salient object detection models with only 1k human-annotated training images. |
Minimax Estimation of Conditional Moment Models | https://papers.nips.cc/paper_files/paper/2020/hash/8fcd9e5482a62a5fa130468f4cf641ef-Abstract.html | Nishanth Dikkala, Greg Lewis, Lester Mackey, Vasilis Syrgkanis | https://papers.nips.cc/paper_files/paper/2020/hash/8fcd9e5482a62a5fa130468f4cf641ef-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8fcd9e5482a62a5fa130468f4cf641ef-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10751-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8fcd9e5482a62a5fa130468f4cf641ef-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8fcd9e5482a62a5fa130468f4cf641ef-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8fcd9e5482a62a5fa130468f4cf641ef-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/8fcd9e5482a62a5fa130468f4cf641ef-Supplemental.pdf | We develop an approach for estimating models described via conditional moment restrictions, with a prototypical application being non-parametric instrumental variable regression. We introduce a min-max criterion function, under which the estimation problem can be thought of as solving a zero-sum game between a modeler who is optimizing over the hypothesis space of the target model and an adversary who identifies violating moments over a test function space. We analyze the statistical estimation rate of the resulting estimator for arbitrary hypothesis spaces, with respect to an appropriate analogue of the mean squared error metric, for ill-posed inverse problems. We show that when the minimax criterion is regularized with a second moment penalty on the test function and the test function space is sufficiently rich, then the estimation rate scales with the critical radius of the hypothesis and test function spaces, a quantity which typically gives tight fast rates. Our main result follows from a novel localized Rademacher analysis of statistical learning problems defined via minimax objectives. We provide applications of our main results for several hypothesis spaces used in practice such as: reproducing kernel Hilbert spaces, high dimensional sparse linear functions, spaces defined via shape constraints, ensemble estimators such as random forests, and neural networks. For each of these applications we provide computationally efficient optimization methods for solving the corresponding minimax problem (e.g. stochastic first-order heuristics for neural networks). In several applications, we show how our modified mean squared error rate, combined with conditions that bound the ill-posedness of the inverse problem, lead to mean squared error rates. We conclude with an extensive experimental analysis of the proposed methods. |
Causal Imitation Learning With Unobserved Confounders | https://papers.nips.cc/paper_files/paper/2020/hash/8fdd149fcaa7058caccc9c4ad5b0d89a-Abstract.html | Junzhe Zhang, Daniel Kumor, Elias Bareinboim | https://papers.nips.cc/paper_files/paper/2020/hash/8fdd149fcaa7058caccc9c4ad5b0d89a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/8fdd149fcaa7058caccc9c4ad5b0d89a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10752-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/8fdd149fcaa7058caccc9c4ad5b0d89a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/8fdd149fcaa7058caccc9c4ad5b0d89a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/8fdd149fcaa7058caccc9c4ad5b0d89a-Review.html | null | One of the common ways children learn is by mimicking adults. Imitation learning focuses on learning policies with suitable performance from demonstrations generated by an expert, with an unspecified performance measure, and unobserved reward signal. Popular methods for imitation learning start by either directly mimicking the behavior policy of an expert (behavior cloning) or by learning a reward function that prioritizes observed expert trajectories (inverse reinforcement learning). However, these methods rely on the assumption that covariates used by the expert to determine her/his actions are fully observed. In this paper, we relax this assumption and study imitation learning when sensory inputs of the learner and the expert differ. First, we provide a non-parametric, graphical criterion that is complete (both necessary and sufficient) for determining the feasibility of imitation from the combinations of demonstration data and qualitative assumptions about the underlying environment, represented in the form of a causal model. We then show that when such a criterion does not hold, imitation could still be feasible by exploiting quantitative knowledge of the expert trajectories. Finally, we develop an efficient procedure for learning the imitating policy from experts' trajectories. |
Your GAN is Secretly an Energy-based Model and You Should Use Discriminator Driven Latent Sampling | https://papers.nips.cc/paper_files/paper/2020/hash/90525e70b7842930586545c6f1c9310c-Abstract.html | Tong Che, Ruixiang ZHANG, Jascha Sohl-Dickstein, Hugo Larochelle, Liam Paull, Yuan Cao, Yoshua Bengio | https://papers.nips.cc/paper_files/paper/2020/hash/90525e70b7842930586545c6f1c9310c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/90525e70b7842930586545c6f1c9310c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10753-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/90525e70b7842930586545c6f1c9310c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/90525e70b7842930586545c6f1c9310c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/90525e70b7842930586545c6f1c9310c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/90525e70b7842930586545c6f1c9310c-Supplemental.zip | We show that the sum of the implicit generator log-density $\log p_g$ of a GAN with the logit score of the discriminator defines an energy function which yields the true data density when the generator is imperfect but the discriminator is optimal, thus making it possible to improve on the typical generator (with implicit density $p_g$). To make that practical, we show that sampling from this modified density can be achieved by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score. This can be achieved by running a Langevin MCMC in latent space and then applying the generator function, which we call Discriminator Driven Latent Sampling~(DDLS). We show that DDLS is highly efficient compared to previous methods which work in the high-dimensional pixel space and can be applied to improve on previously trained GANs of many types. We evaluate DDLS on both synthetic and real-world datasets qualitatively and quantitatively. On CIFAR-10, DDLS substantially improves the Inception Score of an off-the-shelf pre-trained SN-GAN~\citep{sngan} from $8.22$ to $9.09$ which is even comparable to the class-conditional BigGAN~\citep{biggan} model. This achieves a new state-of-the-art in unconditional image synthesis setting without introducing extra parameters or additional training. |
Learning Black-Box Attackers with Transferable Priors and Query Feedback | https://papers.nips.cc/paper_files/paper/2020/hash/90599c8fdd2f6e7a03ad173e2f535751-Abstract.html | Jiancheng YANG, Yangzhou Jiang, Xiaoyang Huang, Bingbing Ni, Chenglong Zhao | https://papers.nips.cc/paper_files/paper/2020/hash/90599c8fdd2f6e7a03ad173e2f535751-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/90599c8fdd2f6e7a03ad173e2f535751-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10754-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/90599c8fdd2f6e7a03ad173e2f535751-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/90599c8fdd2f6e7a03ad173e2f535751-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/90599c8fdd2f6e7a03ad173e2f535751-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/90599c8fdd2f6e7a03ad173e2f535751-Supplemental.pdf | This paper addresses the challenging black-box adversarial attack problem, where only classification confidence of a victim model is available. Inspired by consistency of visual saliency between different vision models, a surrogate model is expected to improve the attack performance via transferability. By combining transferability-based and query-based black-box attack, we propose a surprisingly simple baseline approach (named SimBA++) using the surrogate model, which significantly outperforms several state-of-the-art methods. Moreover, to efficiently utilize the query feedback, we update the surrogate model in a novel learning scheme, named High-Order Gradient Approximation (HOGA). By constructing a high-order gradient computation graph, we update the surrogate model to approximate the victim model in both forward and backward pass. The SimBA++ and HOGA result in Learnable Black-Box Attack (LeBA), which surpasses previous state of the art by considerable margins: the proposed LeBA significantly reduces queries, while keeping higher attack success rates close to 100% in extensive ImageNet experiments, including attacking vision benchmarks and defensive models. Code is open source at https://github.com/TrustworthyDL/LeBA. |
Locally Differentially Private (Contextual) Bandits Learning | https://papers.nips.cc/paper_files/paper/2020/hash/908c9a564a86426585b29f5335b619bc-Abstract.html | Kai Zheng, Tianle Cai, Weiran Huang, Zhenguo Li, Liwei Wang | https://papers.nips.cc/paper_files/paper/2020/hash/908c9a564a86426585b29f5335b619bc-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/908c9a564a86426585b29f5335b619bc-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10755-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/908c9a564a86426585b29f5335b619bc-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/908c9a564a86426585b29f5335b619bc-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/908c9a564a86426585b29f5335b619bc-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/908c9a564a86426585b29f5335b619bc-Supplemental.zip | We study locally differentially private (LDP) bandits learning in this paper. First, we propose simple black-box reduction frameworks that can solve a large family of context-free bandits learning problems with an LDP guarantee. Based on our frameworks, we can improve previous best results for private bandits learning with one-point feedback, such as private Bandits Convex Optimization, and obtain the first results for Bandits Convex Optimization (BCO) with multi-point feedback under LDP. The LDP guarantee and black-box nature make our frameworks more attractive in real applications than previous specifically designed and relatively weaker differentially private (DP) algorithms. Further, we also extend our algorithm to Generalized Linear Bandits with regret bound $\tilde{\mathcal{O}}(T^{3/4}/\varepsilon)$ under $(\varepsilon, \delta)$-LDP, which is conjectured to be optimal. Note that, given the existing $\Omega(T)$ lower bound for DP contextual linear bandits (Shariff & Sheffet, NeurIPS 2018), our result shows a fundamental difference between LDP and DP for contextual bandits. |
Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax | https://papers.nips.cc/paper_files/paper/2020/hash/90c34175923a36ab7a5de4b981c1972f-Abstract.html | Andres Potapczynski, Gabriel Loaiza-Ganem, John P. Cunningham | https://papers.nips.cc/paper_files/paper/2020/hash/90c34175923a36ab7a5de4b981c1972f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/90c34175923a36ab7a5de4b981c1972f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10756-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/90c34175923a36ab7a5de4b981c1972f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/90c34175923a36ab7a5de4b981c1972f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/90c34175923a36ab7a5de4b981c1972f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/90c34175923a36ab7a5de4b981c1972f-Supplemental.pdf | The Gumbel-Softmax is a continuous distribution over the simplex that is often used as a relaxation of discrete distributions. Because it can be readily interpreted and easily reparameterized, it enjoys widespread use. We propose a modular and more flexible family of reparameterizable distributions where Gaussian noise is transformed into a one-hot approximation through an invertible function. This invertible function is composed of a modified softmax and can incorporate diverse transformations that serve different specific purposes. For example, the stick-breaking procedure allows us to extend the reparameterization trick to distributions with countably infinite support, thus enabling the use of our distribution along nonparametric models, or normalizing flows let us increase the flexibility of the distribution. Our construction enjoys theoretical advantages over the Gumbel-Softmax, such as closed form KL, and significantly outperforms it in a variety of experiments. Our code is available at https://github.com/cunningham-lab/igr. |
Kernel Based Progressive Distillation for Adder Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/912d2b1c7b2826caf99687388d2e8f7c-Abstract.html | Yixing Xu, Chang Xu, Xinghao Chen, Wei Zhang, Chunjing XU, Yunhe Wang | https://papers.nips.cc/paper_files/paper/2020/hash/912d2b1c7b2826caf99687388d2e8f7c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/912d2b1c7b2826caf99687388d2e8f7c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10757-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/912d2b1c7b2826caf99687388d2e8f7c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/912d2b1c7b2826caf99687388d2e8f7c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/912d2b1c7b2826caf99687388d2e8f7c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/912d2b1c7b2826caf99687388d2e8f7c-Supplemental.pdf | Adder Neural Networks (ANNs) which only contain additions bring us a new way of developing deep neural networks with low energy consumption. Unfortunately, there is an accuracy drop when replacing all convolution filters by adder filters. The main reason here is the optimization difficulty of ANNs using $\ell_1$-norm, in which the estimation of gradient in back propagation is inaccurate. In this paper, we present a novel method for further improving the performance of ANNs without increasing the trainable parameters via a progressive kernel based knowledge distillation (PKKD) method. A convolutional neural network (CNN) with the same architecture is simultaneously initialized and trained as a teacher network, features and weights of ANN and CNN will be transformed to a new space to eliminate the accuracy drop. The similarity is conducted in a higher-dimensional space to disentangle the difference of their distributions using a kernel based method. Finally, the desired ANN is learned based on the information from both the ground-truth and teacher, progressively. The effectiveness of the proposed method for learning ANN with higher performance is then well-verified on several benchmarks. For instance, the ANN-50 trained using the proposed PKKD method obtains a 76.8\% top-1 accuracy on ImageNet dataset, which is 0.6\% higher than that of the ResNet-50. |
Adversarial Soft Advantage Fitting: Imitation Learning without Policy Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/9161ab7a1b61012c4c303f10b4c16b2c-Abstract.html | Paul Barde, Julien Roy, Wonseok Jeon, Joelle Pineau, Chris Pal, Derek Nowrouzezahrai | https://papers.nips.cc/paper_files/paper/2020/hash/9161ab7a1b61012c4c303f10b4c16b2c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9161ab7a1b61012c4c303f10b4c16b2c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10758-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9161ab7a1b61012c4c303f10b4c16b2c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9161ab7a1b61012c4c303f10b4c16b2c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9161ab7a1b61012c4c303f10b4c16b2c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9161ab7a1b61012c4c303f10b4c16b2c-Supplemental.pdf | Adversarial Imitation Learning alternates between learning a discriminator -- which tells apart expert's demonstrations from generated ones -- and a generator's policy to produce trajectories that can fool this discriminator. This alternated optimization is known to be delicate in practice since it compounds unstable adversarial training with brittle and sample-inefficient reinforcement learning. We propose to remove the burden of the policy optimization steps by leveraging a novel discriminator formulation. Specifically, our discriminator is explicitly conditioned on two policies: the one from the previous generator's iteration and a learnable policy. When optimized, this discriminator directly learns the optimal generator's policy. Consequently, our discriminator's update solves the generator's optimization problem for free: learning a policy that imitates the expert does not require an additional optimization loop. This formulation effectively cuts by half the implementation and computational burden of Adversarial Imitation Learning algorithms by removing the Reinforcement Learning phase altogether. We show on a variety of tasks that our simpler approach is competitive to prevalent Imitation Learning methods. |
Agree to Disagree: Adaptive Ensemble Knowledge Distillation in Gradient Space | https://papers.nips.cc/paper_files/paper/2020/hash/91c77393975889bd08f301c9e13a44b7-Abstract.html | Shangchen Du, Shan You, Xiaojie Li, Jianlong Wu, Fei Wang, Chen Qian, Changshui Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/91c77393975889bd08f301c9e13a44b7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/91c77393975889bd08f301c9e13a44b7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10759-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/91c77393975889bd08f301c9e13a44b7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/91c77393975889bd08f301c9e13a44b7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/91c77393975889bd08f301c9e13a44b7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/91c77393975889bd08f301c9e13a44b7-Supplemental.pdf | Distilling knowledge from an ensemble of teacher models is expected to have a more promising performance than that from a single one. Current methods mainly adopt a vanilla average rule, i.e., to simply take the average of all teacher losses for training the student network. However, this approach treats teachers equally and ignores the diversity among them. When conflicts or competitions exist among teachers, which is common, the inner compromise might hurt the distillation performance. In this paper, we examine the diversity of teacher models in the gradient space and regard the ensemble knowledge distillation as a multi-objective optimization problem so that we can determine a better optimization direction for the training of student network. Besides, we also introduce a tolerance parameter to accommodate disagreement among teachers. In this way, our method can be seen as a dynamic weighting method for each teacher in the ensemble. Extensive experiments validate the effectiveness of our method for both logits-based and feature-based cases. |
The Wasserstein Proximal Gradient Algorithm | https://papers.nips.cc/paper_files/paper/2020/hash/91cff01af640a24e7f9f7a5ab407889f-Abstract.html | Adil Salim, Anna Korba, Giulia Luise | https://papers.nips.cc/paper_files/paper/2020/hash/91cff01af640a24e7f9f7a5ab407889f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/91cff01af640a24e7f9f7a5ab407889f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10760-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/91cff01af640a24e7f9f7a5ab407889f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/91cff01af640a24e7f9f7a5ab407889f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/91cff01af640a24e7f9f7a5ab407889f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/91cff01af640a24e7f9f7a5ab407889f-Supplemental.pdf | Wasserstein gradient flows are continuous time dynamics that define curves of steepest descent to minimize an objective function over the space of probability measures (i.e., the Wasserstein space). This objective is typically a divergence w.r.t. a fixed target distribution. In recent years, these continuous time dynamics have been used to study the convergence of machine learning algorithms aiming at approximating a probability distribution. However, the discrete-time behavior of these algorithms might differ from the continuous time dynamics. Besides, although discretized gradient flows have been proposed in the literature, little is known about their minimization power. In this work, we propose a Forward Backward (FB) discretization scheme that can tackle the case where the objective function is the sum of a smooth and a nonsmooth geodesically convex terms. Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space. More precisely, we show under mild assumptions that the FB scheme has convergence guarantees similar to the proximal gradient algorithm in Euclidean spaces (resp. similar to the associated Wasserstein gradient flow). |
Universally Quantized Neural Compression | https://papers.nips.cc/paper_files/paper/2020/hash/92049debbe566ca5782a3045cf300a3c-Abstract.html | Eirikur Agustsson, Lucas Theis | https://papers.nips.cc/paper_files/paper/2020/hash/92049debbe566ca5782a3045cf300a3c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92049debbe566ca5782a3045cf300a3c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10761-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92049debbe566ca5782a3045cf300a3c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92049debbe566ca5782a3045cf300a3c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92049debbe566ca5782a3045cf300a3c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92049debbe566ca5782a3045cf300a3c-Supplemental.pdf | A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization. We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985). This allows us to eliminate the mismatch between training and test phases while maintaining a completely differentiable loss function. Implementing the uniform noise channel is a special case of the more general problem of communicating a sample, which we prove is computationally hard if we do not make assumptions about its distribution. However, the uniform special case is efficient as well as easy to implement and thus of great interest from a practical point of view. Finally, we show that quantization can be obtained as a limiting case of a soft quantizer applied to the uniform noise channel, bridging compression with and without quantization. |
Temporal Variability in Implicit Online Learning | https://papers.nips.cc/paper_files/paper/2020/hash/9239be5f9dc4058ec647f14fd04b1290-Abstract.html | Nicolò Campolongo, Francesco Orabona | https://papers.nips.cc/paper_files/paper/2020/hash/9239be5f9dc4058ec647f14fd04b1290-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9239be5f9dc4058ec647f14fd04b1290-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10762-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9239be5f9dc4058ec647f14fd04b1290-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9239be5f9dc4058ec647f14fd04b1290-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9239be5f9dc4058ec647f14fd04b1290-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9239be5f9dc4058ec647f14fd04b1290-Supplemental.pdf | In the setting of online learning, Implicit algorithms turn out to be highly successful from a practical standpoint. However, the tightest regret analyses only show marginal improvements over Online Mirror Descent. In this work, we shed light on this behavior carrying out a careful regret analysis. We prove a novel static regret bound that depends on the temporal variability of the sequence of loss functions, a quantity which is often encountered when considering dynamic competitors. We show, for example, that the regret can be constant if the temporal variability is constant and the learning rate is tuned appropriately, without the need of smooth losses. Moreover, we present an adaptive algorithm that achieves this regret bound without prior knowledge of the temporal variability and prove a matching lower bound. Finally, we validate our theoretical findings on classification and regression datasets. |
Investigating Gender Bias in Language Models Using Causal Mediation Analysis | https://papers.nips.cc/paper_files/paper/2020/hash/92650b2e92217715fe312e6fa7b90d82-Abstract.html | Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, Stuart Shieber | https://papers.nips.cc/paper_files/paper/2020/hash/92650b2e92217715fe312e6fa7b90d82-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92650b2e92217715fe312e6fa7b90d82-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10763-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92650b2e92217715fe312e6fa7b90d82-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92650b2e92217715fe312e6fa7b90d82-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92650b2e92217715fe312e6fa7b90d82-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92650b2e92217715fe312e6fa7b90d82-Supplemental.zip | Many interpretation methods for neural models in natural language processing investigate how information is encoded inside hidden representations. However, these methods can only measure whether the information exists, not whether it is actually used by the model. We propose a methodology grounded in the theory of causal mediation analysis for interpreting which parts of a model are causally implicated in its behavior. The approach enables us to analyze the mechanisms that facilitate the flow of information from input to output through various model components, known as mediators. As a case study, we apply this methodology to analyzing gender bias in pre-trained Transformer language models. We study the role of individual neurons and attention heads in mediating gender bias across three datasets designed to gauge a model's sensitivity to gender bias. Our mediation analysis reveals that gender bias effects are concentrated in specific components of the model that may exhibit highly specialized behavior. |
Off-Policy Imitation Learning from Observations | https://papers.nips.cc/paper_files/paper/2020/hash/92977ae4d2ba21425a59afb269c2a14e-Abstract.html | Zhuangdi Zhu, Kaixiang Lin, Bo Dai, Jiayu Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/92977ae4d2ba21425a59afb269c2a14e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92977ae4d2ba21425a59afb269c2a14e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10764-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92977ae4d2ba21425a59afb269c2a14e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92977ae4d2ba21425a59afb269c2a14e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92977ae4d2ba21425a59afb269c2a14e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92977ae4d2ba21425a59afb269c2a14e-Supplemental.pdf | Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit through the reuse of incomplete resources. Compared to conventional imitation learning (IL), LfO is more challenging because of the lack of expert action guidance.
In both conventional IL and LfO, distribution matching lies at the heart of the formulation. Traditional distribution matching approaches are sample-costly, as they depend on on-policy transitions for policy learning. Towards sample efficiency, some off-policy solutions have been proposed, which, however, either lack comprehensive theoretical justifications or depend on the guidance of expert actions.
In this work, we propose a sample-efficient LfO approach which enables off-policy optimization in a principled manner. To further accelerate the learning procedure, we regulate the policy update with an inverse action model, which assists distribution matching from the perspective of mode-covering. Extensive empirical results on challenging locomotion tasks indicate that our approach is comparable with state-of-the-art in terms of both sample-efficiency and asymptotic performance. |
Escaping Saddle-Point Faster under Interpolation-like Conditions | https://papers.nips.cc/paper_files/paper/2020/hash/92a08bf918f44ccd961477be30023da1-Abstract.html | Abhishek Roy, Krishnakumar Balasubramanian, Saeed Ghadimi, Prasant Mohapatra | https://papers.nips.cc/paper_files/paper/2020/hash/92a08bf918f44ccd961477be30023da1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92a08bf918f44ccd961477be30023da1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10765-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92a08bf918f44ccd961477be30023da1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92a08bf918f44ccd961477be30023da1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92a08bf918f44ccd961477be30023da1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92a08bf918f44ccd961477be30023da1-Supplemental.pdf | In this paper, we show that under over-parametrization several standard stochastic optimization algorithms escape saddle-points and converge to local-minimizers much faster. One of the fundamental aspects of over-parametrized models is that they are capable of interpolating the training data. We show that, under interpolation-like assumptions satisfied by the stochastic gradients in an over-parametrization setting, the first-order oracle complexity of the Perturbed Stochastic Gradient Descent (PSGD) algorithm to reach an $\epsilon$-local-minimizer matches the corresponding deterministic rate of $O(1/\epsilon^{2})$. We next analyze the Stochastic Cubic-Regularized Newton (SCRN) algorithm under interpolation-like conditions, and show that the oracle complexity to reach an $\epsilon$-local-minimizer under interpolation-like conditions is $O(1/\epsilon^{2.5})$. While this obtained complexity is better than the corresponding complexity of either PSGD or SCRN without interpolation-like assumptions, it does not match the rate of $O(1/\epsilon^{1.5})$ corresponding to the deterministic Cubic-Regularized Newton method. It seems further Hessian-based interpolation-like assumptions are necessary to bridge this gap. We also discuss the corresponding improved complexities in the zeroth-order settings. |
Matérn Gaussian Processes on Riemannian Manifolds | https://papers.nips.cc/paper_files/paper/2020/hash/92bf5e6240737e0326ea59846a83e076-Abstract.html | Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Deisenroth (he/him) | https://papers.nips.cc/paper_files/paper/2020/hash/92bf5e6240737e0326ea59846a83e076-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92bf5e6240737e0326ea59846a83e076-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10766-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92bf5e6240737e0326ea59846a83e076-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92bf5e6240737e0326ea59846a83e076-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92bf5e6240737e0326ea59846a83e076-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92bf5e6240737e0326ea59846a83e076-Supplemental.zip | Gaussian processes are an effective model class for learning unknown functions, particularly in settings where accurately representing predictive uncertainty is of key importance. Motivated by applications in the physical sciences, the widely-used Matérn class of Gaussian processes has recently been generalized to model functions whose domains are Riemannian manifolds, by re-expressing said processes as solutions of stochastic partial differential equations. In this work, we propose techniques for computing the kernels of these processes on compact Riemannian manifolds via spectral theory of the Laplace-Beltrami operator in a fully constructive manner, thereby allowing them to be trained via standard scalable techniques such as inducing point methods. We also extend the generalization from the Matérn to the widely-used squared exponential Gaussian process. By allowing Riemannian Matérn Gaussian processes to be trained using well-understood techniques, our work enables their use in mini-batch, online, and non-conjugate settings, and makes them more accessible to machine learning practitioners. |
Improved Techniques for Training Score-Based Generative Models | https://papers.nips.cc/paper_files/paper/2020/hash/92c3b916311a5517d9290576e3ea37ad-Abstract.html | Yang Song, Stefano Ermon | https://papers.nips.cc/paper_files/paper/2020/hash/92c3b916311a5517d9290576e3ea37ad-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10767-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92c3b916311a5517d9290576e3ea37ad-Supplemental.pdf | Score-based generative models can produce high quality image samples comparable to GANs, without requiring adversarial optimization. However, existing training procedures are limited to images of low resolution (typically below 32 x 32), and can be unstable under some settings. We provide a new theoretical analysis of learning and sampling from score models in high dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets.
To enhance stability, we also propose to maintain an exponential moving average of model weights. With these improvements, we can effortlessly scale score-based generative models to images with unprecedented resolutions ranging from 64 x 64 to 256 x 256. Our score-based models can generate high-fidelity samples that rival best-in-class GANs on various image datasets, including CelebA, FFHQ, and multiple LSUN categories. |
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations | https://papers.nips.cc/paper_files/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html | Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, Michael Auli | https://papers.nips.cc/paper_files/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10768-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/92d1e1eb1cd6f9fba3227870bb6d7f07-Supplemental.pdf | We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. |
A Maximum-Entropy Approach to Off-Policy Evaluation in Average-Reward MDPs | https://papers.nips.cc/paper_files/paper/2020/hash/9308b0d6e5898366a4a986bc33f3d3e7-Abstract.html | Nevena Lazic, Dong Yin, Mehrdad Farajtabar, Nir Levine, Dilan Gorur, Chris Harris, Dale Schuurmans | https://papers.nips.cc/paper_files/paper/2020/hash/9308b0d6e5898366a4a986bc33f3d3e7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9308b0d6e5898366a4a986bc33f3d3e7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10769-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9308b0d6e5898366a4a986bc33f3d3e7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9308b0d6e5898366a4a986bc33f3d3e7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9308b0d6e5898366a4a986bc33f3d3e7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9308b0d6e5898366a4a986bc33f3d3e7-Supplemental.pdf | This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs). For MDPs that are ergodic and linear (i.e. where rewards and dynamics are linear in some known features), we provide the first finite-sample OPE error bound, extending the existing results beyond the episodic and discounted cases. In a more general setting, when the feature dynamics are approximately linear and for arbitrary rewards, we propose a new approach for estimating stationary distributions with function approximation. We formulate this problem as finding the maximum-entropy distribution subject to matching feature expectations under empirical dynamics. We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning. We demonstrate the effectiveness of the proposed OPE approaches in multiple environments. |
Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients | https://papers.nips.cc/paper_files/paper/2020/hash/9332c513ef44b682e9347822c2e457ac-Abstract.html | William Moses, Valentin Churavy | https://papers.nips.cc/paper_files/paper/2020/hash/9332c513ef44b682e9347822c2e457ac-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9332c513ef44b682e9347822c2e457ac-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10770-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9332c513ef44b682e9347822c2e457ac-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9332c513ef44b682e9347822c2e457ac-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9332c513ef44b682e9347822c2e457ac-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9332c513ef44b682e9347822c2e457ac-Supplemental.pdf | Applying differentiable programming techniques and machine learning algorithms to foreign programs requires developers to either rewrite their code in a machine learning framework, or otherwise provide derivatives of the foreign code. This paper presents Enzyme, a high-performance automatic differentiation (AD) compiler plugin for the LLVM compiler framework capable of synthesizing gradients of statically analyzable programs expressed in the LLVM intermediate representation (IR). Enzyme synthesizes gradients for programs written in any language whose compiler targets LLVM IR including C, C++, Fortran, Julia, Rust, Swift, MLIR, etc., thereby providing native AD capabilities in these languages. Unlike traditional source-to-source and operator-overloading tools, Enzyme performs AD on optimized IR. On a machine-learning focused benchmark suite including Microsoft's ADBench, AD on optimized IR achieves a geometric mean speedup of 4.2 times over AD on IR before optimization allowing Enzyme to achieve state-of-the-art performance. Packaging Enzyme for PyTorch and TensorFlow provides convenient access to gradients of foreign code with state-of-the-art performance, enabling foreign code to be directly incorporated into existing machine learning workflows. |
Does Unsupervised Architecture Representation Learning Help Neural Architecture Search? | https://papers.nips.cc/paper_files/paper/2020/hash/937936029af671cf479fa893db91cbdd-Abstract.html | Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, Mi Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/937936029af671cf479fa893db91cbdd-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/937936029af671cf479fa893db91cbdd-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10771-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/937936029af671cf479fa893db91cbdd-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/937936029af671cf479fa893db91cbdd-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/937936029af671cf479fa893db91cbdd-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/937936029af671cf479fa893db91cbdd-Supplemental.zip | Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods to jointly learn architecture representations and optimize architecture search on such representations which incurs search bias. Despite the widespread use, architecture representations learned in NAS are still poorly understood. We observe that the structural properties of neural architectures are hard to preserve in the latent space if architecture representation learning and search are coupled, resulting in less effective search performance. In this work, we find empirically that pre-training architecture representations using only neural architectures without their accuracies as labels improves the downstream architecture search efficiency. To explain this finding, we visualize how unsupervised architecture representation learning better encourages neural architectures with similar connections and operators to cluster together. This helps map neural architectures with similar performance to the same regions in the latent space and makes the transition of architectures in the latent space relatively smooth, which considerably benefits diverse downstream search strategies. |
Value-driven Hindsight Modelling | https://papers.nips.cc/paper_files/paper/2020/hash/9381fc93ad66f9ec4b2eef71147a6665-Abstract.html | Arthur Guez, Fabio Viola, Theophane Weber, Lars Buesing, Steven Kapturowski, Doina Precup, David Silver, Nicolas Heess | https://papers.nips.cc/paper_files/paper/2020/hash/9381fc93ad66f9ec4b2eef71147a6665-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9381fc93ad66f9ec4b2eef71147a6665-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10772-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9381fc93ad66f9ec4b2eef71147a6665-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9381fc93ad66f9ec4b2eef71147a6665-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9381fc93ad66f9ec4b2eef71147a6665-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9381fc93ad66f9ec4b2eef71147a6665-Supplemental.pdf | Value estimation is a critical component of the reinforcement learning (RL) paradigm. The question of how to effectively learn value predictors from data is one of the major problems studied by the RL community, and different approaches exploit structure in the problem domain in different ways. Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function. In contrast, model-free methods directly leverage the quantity of interest from the future, but receive a potentially weak scalar signal (an estimate of the return). We develop an approach for representation learning in RL that sits in between these two extremes: we propose to learn what to model in a way that can directly help value prediction. To this end, we determine which features of the future trajectory provide useful information to predict the associated return. This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function. The idea can be understood as reasoning, in hindsight, about which aspects of the future observations could help past value prediction. We show how this can help dramatically even in simple policy evaluation settings. We then test our approach at scale in challenging domains, including on 57 Atari 2600 games. |
Dynamic Regret of Convex and Smooth Functions | https://papers.nips.cc/paper_files/paper/2020/hash/939314105ce8701e67489642ef4d49e8-Abstract.html | Peng Zhao, Yu-Jie Zhang, Lijun Zhang, Zhi-Hua Zhou | https://papers.nips.cc/paper_files/paper/2020/hash/939314105ce8701e67489642ef4d49e8-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/939314105ce8701e67489642ef4d49e8-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10773-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/939314105ce8701e67489642ef4d49e8-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/939314105ce8701e67489642ef4d49e8-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/939314105ce8701e67489642ef4d49e8-Review.html | null | We investigate online convex optimization in non-stationary environments and choose the dynamic regret as the performance measure, defined as the difference between cumulative loss incurred by the online algorithm and that of any feasible comparator sequence. Let $T$ be the time horizon and $P_T$ be the path-length that essentially reflects the non-stationarity of environments, the state-of-the-art dynamic regret is $\mathcal{O}(\sqrt{T(1+P_T)})$. Although this bound is proved to be minimax optimal for convex functions, in this paper, we demonstrate that it is possible to further enhance the dynamic regret by exploiting the smoothness condition. Specifically, we propose novel online algorithms that are capable of leveraging smoothness and replace the dependence on $T$ in the dynamic regret by problem-dependent quantities: the variation in gradients of loss functions, the cumulative loss of the comparator sequence, and the minimum of the previous two terms. These quantities are at most $\mathcal{O}(T)$ while could be much smaller in benign environments. Therefore, our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and meanwhile guarantee the same rate in the worst case. |
On Convergence of Nearest Neighbor Classifiers over Feature Transformations | https://papers.nips.cc/paper_files/paper/2020/hash/93d9033636450402d67cd55e60b3f926-Abstract.html | Luka Rimanic, Cedric Renggli, Bo Li, Ce Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/93d9033636450402d67cd55e60b3f926-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/93d9033636450402d67cd55e60b3f926-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10774-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/93d9033636450402d67cd55e60b3f926-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/93d9033636450402d67cd55e60b3f926-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/93d9033636450402d67cd55e60b3f926-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/93d9033636450402d67cd55e60b3f926-Supplemental.zip | The k-Nearest Neighbors (kNN) classifier is a fundamental non-parametric machine learning algorithm. However, it is well known that it suffers from the curse of dimensionality, which is why in practice one often applies a kNN classifier on top of a (pre-trained) feature transformation. From a theoretical perspective, most, if not all theoretical results aimed at understanding the kNN classifier are derived for the raw feature space. This leads to an emerging gap between our theoretical understanding of kNN and its practical applications.
In this paper, we take a first step towards bridging this gap. We provide a novel analysis of the convergence rates of a kNN classifier over transformed features. This analysis requires an in-depth understanding of the properties that connect both the transformed space and the raw feature space. More precisely, we build our convergence bound upon two key properties of the transformed space: (1) safety -- how well one can recover the raw posterior from the transformed space, and (2) smoothness -- how complex this recovery function is. Based on our result, we are able to explain why some (pre-trained) feature transformations are better suited for a kNN classifier than others. We empirically validate that both properties have an impact on the kNN convergence on 30 feature transformations with 6 benchmark datasets spanning from the vision to the text domain. |
Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments | https://papers.nips.cc/paper_files/paper/2020/hash/93fb39474c51b8a82a68413e2a5ae17a-Abstract.html | Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar Shah, Vincent Conitzer, Fei Fang | https://papers.nips.cc/paper_files/paper/2020/hash/93fb39474c51b8a82a68413e2a5ae17a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/93fb39474c51b8a82a68413e2a5ae17a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10775-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/93fb39474c51b8a82a68413e2a5ae17a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/93fb39474c51b8a82a68413e2a5ae17a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/93fb39474c51b8a82a68413e2a5ae17a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/93fb39474c51b8a82a68413e2a5ae17a-Supplemental.pdf | We consider three important challenges in conference peer review: (i) reviewers maliciously attempting to get assigned to certain papers to provide positive reviews, possibly as part of quid-pro-quo arrangements with the authors; (ii) "torpedo reviewing," where reviewers deliberately attempt to get assigned to certain papers that they dislike in order to reject them; (iii) reviewer de-anonymization on release of the similarities and the reviewer-assignment code. On the conceptual front, we identify connections between these three problems and present a framework that brings all these challenges under a common umbrella. We then present a (randomized) algorithm for reviewer assignment that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair. We further consider the problem of restricting the joint probability that certain suspect pairs of reviewers are assigned to certain papers, and show that this problem is NP-hard for arbitrary constraints on these joint probabilities but efficiently solvable for a practical special case. Finally, we experimentally evaluate our algorithms on datasets from past conferences, where we observe that they can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the total optimal similarity. |
Contrastive learning of global and local features for medical image segmentation with limited annotations | https://papers.nips.cc/paper_files/paper/2020/hash/949686ecef4ee20a62d16b4a2d7ccca3-Abstract.html | Krishna Chaitanya, Ertunc Erdil, Neerav Karani, Ender Konukoglu | https://papers.nips.cc/paper_files/paper/2020/hash/949686ecef4ee20a62d16b4a2d7ccca3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/949686ecef4ee20a62d16b4a2d7ccca3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10776-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/949686ecef4ee20a62d16b4a2d7ccca3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/949686ecef4ee20a62d16b4a2d7ccca3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/949686ecef4ee20a62d16b4a2d7ccca3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/949686ecef4ee20a62d16b4a2d7ccca3-Supplemental.pdf | A key requirement for the success of supervised deep learning is a large labeled dataset - a condition that is difficult to meet in medical image analysis. Self-supervised learning (SSL) can help in this regard by providing a strategy to pre-train a neural network with unlabeled data, followed by fine-tuning for a downstream task with limited annotations. Contrastive learning, a particular variant of SSL, is a powerful technique for learning image-level representations. In this work, we propose strategies for extending the contrastive learning framework for segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues.
Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). We carry out an extensive evaluation on three Magnetic Resonance Imaging (MRI) datasets. In the limited annotation setting, the proposed method yields substantial improvements compared to other self-supervision and semi-supervised learning techniques. When combined with a simple data augmentation technique, the proposed method reaches within 8\% of benchmark performance using only two labeled MRI volumes for training. The code is made public at https://github.com/krishnabits001/domain_specific_cl. |
Self-Supervised Graph Transformer on Large-Scale Molecular Data | https://papers.nips.cc/paper_files/paper/2020/hash/94aef38441efa3380a3bed3faf1f9d5d-Abstract.html | Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying WEI, Wenbing Huang, Junzhou Huang | https://papers.nips.cc/paper_files/paper/2020/hash/94aef38441efa3380a3bed3faf1f9d5d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10777-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/94aef38441efa3380a3bed3faf1f9d5d-Supplemental.pdf | How to obtain informative representations of molecules is a crucial prerequisite in AI-driven drug design and discovery. Recent studies abstract molecules as graphs and employ Graph Neural Networks (GNNs) for molecular representation learning. Nevertheless, two issues impede the usage of GNNs in real scenarios: (1) insufficient labeled molecules for supervised training; (2) poor generalization capability to newly-synthesized molecules. To address them both, we propose a novel framework, GROVER, which stands for Graph Representation frOm self-superVised mEssage passing tRansformer. With carefully designed self-supervised tasks at the node, edge and graph levels, GROVER can learn rich structural and semantic information of molecules from enormous unlabelled molecular data. To encode such complex information, GROVER integrates Message Passing Networks into the Transformer-style architecture to deliver a class of more expressive encoders of molecules. The flexibility of GROVER allows it to be trained efficiently on large-scale molecular datasets without requiring any supervision, thus being immune to the two issues mentioned above. We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules---the biggest GNN and the largest training dataset in molecular representation learning. We then leverage the pre-trained GROVER for molecular property prediction followed by task-specific fine-tuning, where we observe a huge improvement (more than 6% on average) over current state-of-the-art methods on 11 challenging benchmarks. The insights we gained are that well-designed self-supervision losses and highly expressive pre-trained models hold significant potential for boosting performance. |
Generative Neurosymbolic Machines | https://papers.nips.cc/paper_files/paper/2020/hash/94c28dcfc97557df0df6d1f7222fc384-Abstract.html | Jindong Jiang, Sungjin Ahn | https://papers.nips.cc/paper_files/paper/2020/hash/94c28dcfc97557df0df6d1f7222fc384-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/94c28dcfc97557df0df6d1f7222fc384-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10778-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/94c28dcfc97557df0df6d1f7222fc384-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/94c28dcfc97557df0df6d1f7222fc384-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/94c28dcfc97557df0df6d1f7222fc384-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/94c28dcfc97557df0df6d1f7222fc384-Supplemental.pdf | Reconciling symbolic and distributed representations is a crucial challenge that can potentially resolve the limitations of current deep learning. Remarkable advances in this direction have been achieved recently via generative object-centric representation models. While learning a recognition model that infers object-centric symbolic representations like bounding boxes from raw images in an unsupervised way, no such model can provide another important ability of a generative model, i.e., generating (sampling) according to the structure of learned world density. In this paper, we propose Generative Neurosymbolic Machines, a generative model that combines the benefits of distributed and symbolic representations to support both structured representations of symbolic components and density-based generation. These two crucial properties are achieved by a two-layer latent hierarchy with the global distributed latent for flexible density modeling and the structured symbolic latent map. To increase the model flexibility in this hierarchical structure, we also propose the StructDRAW prior. In experiments, we show that the proposed model significantly outperforms the previous structured representation models as well as the state-of-the-art non-structured generative models in terms of both structure accuracy and image generation quality. |
How many samples is a good initial point worth in Low-rank Matrix Recovery? | https://papers.nips.cc/paper_files/paper/2020/hash/94c4dd41f9dddce696557d3717d98d82-Abstract.html | Jialun Zhang, Richard Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/94c4dd41f9dddce696557d3717d98d82-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/94c4dd41f9dddce696557d3717d98d82-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10779-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/94c4dd41f9dddce696557d3717d98d82-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/94c4dd41f9dddce696557d3717d98d82-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/94c4dd41f9dddce696557d3717d98d82-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/94c4dd41f9dddce696557d3717d98d82-Supplemental.pdf | Given a sufficiently large amount of labeled data, the nonconvex low-rank matrix recovery problem contains no spurious local minima, so a local optimization algorithm is guaranteed to converge to a global minimum starting from any initial guess. However, the actual amount of data needed by this theoretical guarantee is very pessimistic, as it must prevent spurious local minima from existing anywhere, including at adversarial locations. In contrast, prior work based on good initial guesses has more realistic data requirements, because it allows spurious local minima to exist outside of a neighborhood of the solution. In this paper, we quantify the relationship between the quality of the initial guess and the corresponding reduction in data requirements. Using the restricted isometry constant as a surrogate for sample complexity, we compute a sharp “threshold” number of samples needed to prevent each specific point on the optimization landscape from becoming a spurious local minimum. Optimizing the threshold over regions of the landscape, we see that, for initial points not too close to the ground truth, a linear improvement in the quality of the initial guess amounts to a constant factor improvement in the sample complexity. |
CSER: Communication-efficient SGD with Error Reset | https://papers.nips.cc/paper_files/paper/2020/hash/94cb02feb750f20bad8a85dfe7e18d11-Abstract.html | Cong Xie, Shuai Zheng, Sanmi Koyejo, Indranil Gupta, Mu Li, Haibin Lin | https://papers.nips.cc/paper_files/paper/2020/hash/94cb02feb750f20bad8a85dfe7e18d11-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/94cb02feb750f20bad8a85dfe7e18d11-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10780-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/94cb02feb750f20bad8a85dfe7e18d11-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/94cb02feb750f20bad8a85dfe7e18d11-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/94cb02feb750f20bad8a85dfe7e18d11-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/94cb02feb750f20bad8a85dfe7e18d11-Supplemental.pdf | The scalability of Distributed Stochastic Gradient Descent (SGD) is today limited by communication bottlenecks. We propose a novel SGD variant: \underline{C}ommunication-efficient \underline{S}GD with \underline{E}rror \underline{R}eset, or \underline{CSER}. The key idea in CSER is, first, a new technique called ``error reset'' that adapts arbitrary compressors for SGD, producing bifurcated local models with periodic resets of the resulting local residual errors. Second, we introduce partial synchronization for both the gradients and the models, leveraging the advantages of both. We prove the convergence of CSER for smooth non-convex problems. Empirical results show that, when combined with highly aggressive compressors, the CSER algorithms accelerate distributed training by nearly $10\times$ for CIFAR-100, and by $4.5\times$ for ImageNet. |
Efficient estimation of neural tuning during naturalistic behavior | https://papers.nips.cc/paper_files/paper/2020/hash/94d2a3c6dd19337f2511cdf8b4bf907e-Abstract.html | Edoardo Balzani, Kaushik Lakshminarasimhan, Dora Angelaki, Cristina Savin | https://papers.nips.cc/paper_files/paper/2020/hash/94d2a3c6dd19337f2511cdf8b4bf907e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/94d2a3c6dd19337f2511cdf8b4bf907e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10781-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/94d2a3c6dd19337f2511cdf8b4bf907e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/94d2a3c6dd19337f2511cdf8b4bf907e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/94d2a3c6dd19337f2511cdf8b4bf907e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/94d2a3c6dd19337f2511cdf8b4bf907e-Supplemental.zip | Recent technological advances in systems neuroscience have led to a shift away from using simple tasks, with low-dimensional, well-controlled stimuli, towards trying to understand neural activity during naturalistic behavior. However, with the increase in number and complexity of task-relevant features, standard analyses such as estimating tuning functions become challenging. Here, we use a Poisson generalized additive model (P-GAM) with spline nonlinearities and an exponential link function to map a large number of task variables (input stimuli, behavioral outputs, or activity of other neurons, modeled as discrete events or continuous variables) into spike counts. We develop efficient procedures for parameter learning by optimizing a generalized cross-validation score and infer marginal confidence bounds for the contribution of each feature to neural responses. This allows us to robustly identify a minimal set of task features that each neuron is responsive to, circumventing computationally demanding model comparison. We show that our estimation procedure outperforms traditional regularized GLMs in terms of both fit quality and computing time. When applied to neural recordings from monkeys performing a virtual reality spatial navigation task, P-GAM reveals mixed selectivity and preferential coupling between neurons with similar tuning. |
High-recall causal discovery for autocorrelated time series with latent confounders | https://papers.nips.cc/paper_files/paper/2020/hash/94e70705efae423efda1088614128d0b-Abstract.html | Andreas Gerhardus, Jakob Runge | https://papers.nips.cc/paper_files/paper/2020/hash/94e70705efae423efda1088614128d0b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/94e70705efae423efda1088614128d0b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10782-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/94e70705efae423efda1088614128d0b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/94e70705efae423efda1088614128d0b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/94e70705efae423efda1088614128d0b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/94e70705efae423efda1088614128d0b-Supplemental.pdf | We present a new method for linear and nonlinear, lagged and contemporaneous constraint-based causal discovery from observational time series in the presence of latent confounders. We show that existing causal discovery methods such as FCI and variants suffer from low recall in the autocorrelated time series case and identify low effect size of conditional independence tests as the main reason. Information-theoretical arguments show that effect size can often be increased if causal parents are included in the conditioning sets. To identify parents early on, we suggest an iterative procedure that utilizes novel orientation rules to determine ancestral relationships already during the edge removal phase. We prove that the method is order-independent, and sound and complete in the oracle case. Extensive simulation studies for different numbers of variables, time lags, sample sizes, and further cases demonstrate that our method indeed achieves much higher recall than existing methods for the case of autocorrelated continuous variables while keeping false positives at the desired level. This performance gain grows with stronger autocorrelation. At github.com/jakobrunge/tigramite we provide Python code for all methods involved in the simulation studies. |
Forget About the LiDAR: Self-Supervised Depth Estimators with MED Probability Volumes | https://papers.nips.cc/paper_files/paper/2020/hash/951124d4a093eeae83d9726a20295498-Abstract.html | Juan Luis GonzalezBello, Munchurl Kim | https://papers.nips.cc/paper_files/paper/2020/hash/951124d4a093eeae83d9726a20295498-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/951124d4a093eeae83d9726a20295498-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10783-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/951124d4a093eeae83d9726a20295498-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/951124d4a093eeae83d9726a20295498-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/951124d4a093eeae83d9726a20295498-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/951124d4a093eeae83d9726a20295498-Supplemental.zip | Self-supervised depth estimators have recently shown results comparable to the supervised methods on the challenging single image depth estimation (SIDE) task, by exploiting the geometrical relations between target and reference views in the training data. However, previous methods usually learn forward or backward image synthesis, but not depth estimation, as they cannot effectively neglect occlusions between the target and the reference images. Previous works rely on rigid photometric assumptions or on the SIDE network to infer depth and occlusions, resulting in limited performance. On the other hand, we propose a method to "Forget About the LiDAR" (FAL), with Mirrored Exponential Disparity (MED) probability volumes for the training of monocular depth estimators from stereo images. Our MED representation allows us to obtain geometrically inspired occlusion maps with our novel Mirrored Occlusion Module (MOM), which does not impose a learning burden on our FAL-net. Contrary to the previous methods that learn SIDE from stereo pairs by regressing disparity in the linear space, our FAL-net regresses disparity by binning it into the exponential space, which allows for better detection of distant and nearby objects. We define a two-step training strategy for our FAL-net: It is first trained for view synthesis and then fine-tuned for depth estimation with our MOM. Our FAL-net is remarkably light-weight and outperforms the previous state-of-the-art methods with 8$\times$ fewer parameters and 3$\times$ faster inference speeds on the challenging KITTI dataset. We present extensive experimental results on the KITTI, CityScapes, and Make3D datasets to verify our method's effectiveness. To the authors' best knowledge, the presented method performs the best among all the previous self-supervised methods until now. |
Joint Contrastive Learning with Infinite Possibilities | https://papers.nips.cc/paper_files/paper/2020/hash/9523147e5a6707baf674941812ee5c94-Abstract.html | Qi Cai, Yu Wang, Yingwei Pan, Ting Yao, Tao Mei | https://papers.nips.cc/paper_files/paper/2020/hash/9523147e5a6707baf674941812ee5c94-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9523147e5a6707baf674941812ee5c94-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10784-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9523147e5a6707baf674941812ee5c94-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9523147e5a6707baf674941812ee5c94-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9523147e5a6707baf674941812ee5c94-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9523147e5a6707baf674941812ee5c94-Supplemental.pdf | This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling. We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL). JCL implicitly involves the simultaneous learning of an infinite number of query-key pairs, which poses tighter constraints when searching for invariant features. We derive an upper bound on this formulation that allows analytical solutions in an end-to-end training manner. While JCL is practically effective in numerous computer vision applications, we also theoretically unveil certain mechanisms that govern the behavior of JCL. We demonstrate that the proposed formulation harbors an innate agency that strongly favors similarity within each instance-specific class, and therefore remains advantageous when searching for discriminative features among distinct instances. We evaluate these proposals on multiple benchmarks, demonstrating considerable improvements over existing algorithms. Code is publicly available at: https://github.com/caiqi/Joint-Contrastive-Learning. |
Robust Gaussian Covariance Estimation in Nearly-Matrix Multiplication Time | https://papers.nips.cc/paper_files/paper/2020/hash/9529fbba677729d3206b3b9073d1e9ca-Abstract.html | Jerry Li, Guanghao Ye | https://papers.nips.cc/paper_files/paper/2020/hash/9529fbba677729d3206b3b9073d1e9ca-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9529fbba677729d3206b3b9073d1e9ca-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10785-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9529fbba677729d3206b3b9073d1e9ca-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9529fbba677729d3206b3b9073d1e9ca-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9529fbba677729d3206b3b9073d1e9ca-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9529fbba677729d3206b3b9073d1e9ca-Supplemental.pdf | Robust covariance estimation is the following, well-studied problem in high dimensional statistics: given $N$ samples from a $d$-dimensional Gaussian $\mathcal{N}(\boldsymbol{0}, \Sigma)$, but where an $\varepsilon$-fraction of the samples have been arbitrarily corrupted, output $\widehat{\Sigma}$ minimizing the total variation distance between $\mathcal{N}(\boldsymbol{0}, \Sigma)$ and $\mathcal{N}(\boldsymbol{0}, \widehat{\Sigma})$. This corresponds to learning $\Sigma$ in a natural affine-invariant variant of the Frobenius norm known as the \emph{Mahalanobis norm}. Previous work of Cheng et al. demonstrated an algorithm that, given $N = \widetilde{\Omega}(d^2 / \varepsilon^2)$ samples, achieved a near-optimal error of $O(\varepsilon \log 1 / \varepsilon)$, and moreover, their algorithm ran in time $\widetilde{O}(T(N, d) \log \kappa / \mathrm{poly} (\varepsilon))$, where $T(N, d)$ is the time it takes to multiply a $d \times N$ matrix by its transpose, and $\kappa$ is the condition number of $\Sigma$. When $\varepsilon$ is relatively small, their polynomial dependence on $1/\varepsilon$ in the runtime is prohibitively large. In this paper, we demonstrate a novel algorithm that achieves the same statistical guarantees, but which runs in time $\widetilde{O} (T(N, d) \log \kappa)$. In particular, our runtime has no dependence on $\varepsilon$. When $\Sigma$ is reasonably conditioned, our runtime matches that of the fastest algorithm for covariance estimation without outliers, up to poly-logarithmic factors, showing that we can get robustness essentially ``for free.'' |
Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models | https://papers.nips.cc/paper_files/paper/2020/hash/95424358822e753eb993c97ee76a9076-Abstract.html | Adarsh Keshav Jeewajee, Leslie Kaelbling | https://papers.nips.cc/paper_files/paper/2020/hash/95424358822e753eb993c97ee76a9076-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95424358822e753eb993c97ee76a9076-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10786-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95424358822e753eb993c97ee76a9076-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95424358822e753eb993c97ee76a9076-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95424358822e753eb993c97ee76a9076-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95424358822e753eb993c97ee76a9076-Supplemental.zip | Undirected graphical models are compact representations of joint probability distributions over random variables. To solve inference tasks of interest, graphical models of arbitrary topology can be trained using empirical risk minimization. However, to solve inference tasks that were not seen during training, these models (EGMs) often need to be re-trained. Instead, we propose an inference-agnostic adversarial training framework which produces an infinitely-large ensemble of graphical models (AGMs). The ensemble is optimized to generate data within the GAN framework, and inference is performed using a finite subset of these models. AGMs perform comparably with EGMs on inference tasks that the latter were specifically optimized for. Most importantly, AGMs show significantly better generalization to unseen inference tasks compared to EGMs, as well as deep neural architectures like GibbsNet and VAEAC which allow arbitrary conditioning. Finally, AGMs allow fast data sampling, competitive with Gibbs sampling from EGMs. |
GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators | https://papers.nips.cc/paper_files/paper/2020/hash/9547ad6b651e2087bac67651aa92cd0d-Abstract.html | Dingfan Chen, Tribhuvanesh Orekondy, Mario Fritz | https://papers.nips.cc/paper_files/paper/2020/hash/9547ad6b651e2087bac67651aa92cd0d-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9547ad6b651e2087bac67651aa92cd0d-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10787-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9547ad6b651e2087bac67651aa92cd0d-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9547ad6b651e2087bac67651aa92cd0d-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9547ad6b651e2087bac67651aa92cd0d-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9547ad6b651e2087bac67651aa92cd0d-Supplemental.pdf | The widespread availability of rich data has fueled the growth of machine learning applications in numerous domains. However, growth in domains with highly sensitive data (e.g., medical) is largely hindered, as the private nature of the data prohibits it from being shared. To this end, we propose Gradient-sanitized Wasserstein Generative Adversarial Networks (GS-WGAN), which allow releasing a sanitized form of the sensitive data with rigorous privacy guarantees. In contrast to prior work, our approach is able to distort gradient information more precisely, thereby enabling the training of deeper models that generate more informative samples. Moreover, our formulation naturally allows for training GANs in both centralized and federated (i.e., decentralized) data scenarios. Through extensive experiments, we find that our approach consistently outperforms state-of-the-art approaches across multiple metrics (e.g., sample quality) and datasets. |
SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows | https://papers.nips.cc/paper_files/paper/2020/hash/9578a63fbe545bd82cc5bbe749636af1-Abstract.html | Didrik Nielsen, Priyank Jaini, Emiel Hoogeboom, Ole Winther, Max Welling | https://papers.nips.cc/paper_files/paper/2020/hash/9578a63fbe545bd82cc5bbe749636af1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9578a63fbe545bd82cc5bbe749636af1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10788-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9578a63fbe545bd82cc5bbe749636af1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9578a63fbe545bd82cc5bbe749636af1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9578a63fbe545bd82cc5bbe749636af1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9578a63fbe545bd82cc5bbe749636af1-Supplemental.pdf | Normalizing flows and variational autoencoders are powerful generative models that can represent complicated density functions. However, they both impose constraints on the models: Normalizing flows use bijective transformations to model densities whereas VAEs learn stochastic transformations that are non-invertible and thus typically do not provide tractable estimates of the marginal likelihood. In this paper, we introduce SurVAE Flows: A modular framework of composable transformations that encompasses VAEs and normalizing flows. SurVAE Flows bridge the gap between normalizing flows and VAEs with surjective transformations, wherein the transformations are deterministic in one direction -- thereby allowing exact likelihood computation, and stochastic in the reverse direction -- hence providing a lower bound on the corresponding likelihood. We show that several recently proposed methods, including dequantization and augmented normalizing flows, can be expressed as SurVAE Flows. Finally, we introduce common operations such as the max value, the absolute value, sorting and stochastic permutation as composable layers in SurVAE Flows. |
Learning Causal Effects via Weighted Empirical Risk Minimization | https://papers.nips.cc/paper_files/paper/2020/hash/95a6fc111fa11c3ab209a0ed1b9abeb6-Abstract.html | Yonghan Jung, Jin Tian, Elias Bareinboim | https://papers.nips.cc/paper_files/paper/2020/hash/95a6fc111fa11c3ab209a0ed1b9abeb6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95a6fc111fa11c3ab209a0ed1b9abeb6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10789-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95a6fc111fa11c3ab209a0ed1b9abeb6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95a6fc111fa11c3ab209a0ed1b9abeb6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95a6fc111fa11c3ab209a0ed1b9abeb6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95a6fc111fa11c3ab209a0ed1b9abeb6-Supplemental.pdf | Learning causal effects from data is a fundamental problem across the sciences. Determining the identifiability of a target effect from a combination of the observational distribution and the causal graph underlying a phenomenon is well-understood in theory. However, in practice, it remains a challenge to apply the identification theory to estimate the identified causal functionals from finite samples. Although a plethora of effective estimators have been developed under the setting known as the back-door (also called conditional ignorability), there exists still no systematic way of estimating arbitrary causal functionals that are both computationally and statistically attractive. This paper aims to bridge this gap, from causal identification to causal estimation. We note that estimating functionals from limited samples based on the empirical risk minimization (ERM) principle has been pervasive in the machine learning literature, and these methods have been extended to causal inference under the back-door setting. In this paper, we develop a learning framework that marries two families of methods, benefiting from the generality of the causal identification theory and the effectiveness of the estimators produced based on the principle of ERM. Specifically, we develop a sound and complete algorithm that generates causal functionals in the form of weighted distributions that are amenable to the ERM optimization. We then provide a practical procedure for learning causal effects from finite samples and a causal graph. Finally, experimental results support the effectiveness of our approach. |
Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes | https://papers.nips.cc/paper_files/paper/2020/hash/95b431e51fc53692913da5263c214162-Abstract.html | Minh Hoang, Nghia Hoang, Hai Pham, David Woodruff | https://papers.nips.cc/paper_files/paper/2020/hash/95b431e51fc53692913da5263c214162-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95b431e51fc53692913da5263c214162-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10790-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95b431e51fc53692913da5263c214162-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95b431e51fc53692913da5263c214162-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95b431e51fc53692913da5263c214162-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95b431e51fc53692913da5263c214162-Supplemental.pdf | We introduce a new scalable approximation for Gaussian processes with provable guarantees which holds simultaneously over its entire parameter space. Our approximation is obtained from an improved sample complexity analysis for sparse spectrum Gaussian processes (SSGPs). In particular, our analysis shows that under a certain data disentangling condition, an SSGP's prediction and model evidence (for training) can well-approximate those of a full GP with low sample complexity. We also develop a new auto-encoding algorithm that finds a latent space to disentangle latent input coordinates into well-separated clusters, which is amenable to our sample complexity analysis. We validate our proposed method on several benchmarks with promising results supporting our theoretical analysis. |
Incorporating Interpretable Output Constraints in Bayesian Neural Networks | https://papers.nips.cc/paper_files/paper/2020/hash/95c7dfc5538e1ce71301cf92a9a96bd0-Abstract.html | Wanqian Yang, Lars Lorch, Moritz Graule, Himabindu Lakkaraju, Finale Doshi-Velez | https://papers.nips.cc/paper_files/paper/2020/hash/95c7dfc5538e1ce71301cf92a9a96bd0-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95c7dfc5538e1ce71301cf92a9a96bd0-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10791-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95c7dfc5538e1ce71301cf92a9a96bd0-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95c7dfc5538e1ce71301cf92a9a96bd0-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95c7dfc5538e1ce71301cf92a9a96bd0-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95c7dfc5538e1ce71301cf92a9a96bd0-Supplemental.pdf | Domains where supervised models are deployed often come with task-specific constraints, such as prior expert knowledge on the ground-truth function, or desiderata like safety and fairness. We introduce a novel probabilistic framework for reasoning with such constraints and formulate a prior that enables us to effectively incorporate them into Bayesian neural networks (BNNs), including a variant that can be amortized over tasks. The resulting Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification and is amenable to black-box inference. Unlike typical BNN inference in uninterpretable parameter space, OC-BNNs widen the range of functional knowledge that can be incorporated, especially for model users without expertise in machine learning. We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring. |
Multi-Stage Influence Function | https://papers.nips.cc/paper_files/paper/2020/hash/95e62984b87e90645a5cf77037395959-Abstract.html | Hongge Chen, Si Si, Yang Li, Ciprian Chelba, Sanjiv Kumar, Duane Boning, Cho-Jui Hsieh | https://papers.nips.cc/paper_files/paper/2020/hash/95e62984b87e90645a5cf77037395959-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95e62984b87e90645a5cf77037395959-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10792-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95e62984b87e90645a5cf77037395959-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95e62984b87e90645a5cf77037395959-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95e62984b87e90645a5cf77037395959-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95e62984b87e90645a5cf77037395959-Supplemental.pdf | Multi-stage training and knowledge transfer, from a large-scale pretraining task to various finetuning tasks, have revolutionized natural language processing and computer vision, resulting in state-of-the-art performance improvements. In this paper, we develop a multi-stage influence function score to track predictions from a finetuned model all the way back to the pretraining data. With this score, we can identify the pretraining examples in the pretraining task that contribute most to a prediction in the finetuning task. The proposed multi-stage influence function generalizes the original influence function for a single model in (Koh & Liang, 2017), thereby enabling influence computation through both pretrained and finetuned models. We study two different scenarios with the pretrained embedding fixed or updated in the finetuning tasks. We test our proposed method in various experiments to show its effectiveness and potential applications. |
Probabilistic Fair Clustering | https://papers.nips.cc/paper_files/paper/2020/hash/95f2b84de5660ddf45c8a34933a2e66f-Abstract.html | Seyed Esmaeili, Brian Brubach, Leonidas Tsepenekas, John Dickerson | https://papers.nips.cc/paper_files/paper/2020/hash/95f2b84de5660ddf45c8a34933a2e66f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95f2b84de5660ddf45c8a34933a2e66f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10793-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95f2b84de5660ddf45c8a34933a2e66f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95f2b84de5660ddf45c8a34933a2e66f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95f2b84de5660ddf45c8a34933a2e66f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95f2b84de5660ddf45c8a34933a2e66f-Supplemental.pdf | In clustering problems, a central decision-maker is given a complete metric graph over vertices and must provide a clustering of vertices that minimizes some objective function. In fair clustering problems, vertices are endowed with a color (e.g., membership in a group), and the requirements of a valid clustering might also include the representation of colors in the solution. Prior work in fair clustering assumes complete knowledge of group membership. In this paper, we generalize this by assuming imperfect knowledge of group membership through probabilistic assignments, and present algorithms in this more general setting with approximation ratio guarantees. We also address the problem of "metric membership", where group membership has a notion of order and distance. Experiments are conducted using our proposed algorithms as well as baselines to validate our approach, and also surface nuanced concerns when group membership is not known deterministically. |
Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty | https://papers.nips.cc/paper_files/paper/2020/hash/95f8d9901ca8878e291552f001f67692-Abstract.html | Miguel Monteiro, Loic Le Folgoc, Daniel Coelho de Castro, Nick Pawlowski, Bernardo Marques, Konstantinos Kamnitsas, Mark van der Wilk, Ben Glocker | https://papers.nips.cc/paper_files/paper/2020/hash/95f8d9901ca8878e291552f001f67692-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/95f8d9901ca8878e291552f001f67692-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10794-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/95f8d9901ca8878e291552f001f67692-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/95f8d9901ca8878e291552f001f67692-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/95f8d9901ca8878e291552f001f67692-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/95f8d9901ca8878e291552f001f67692-Supplemental.pdf | In image segmentation, there is often more than one plausible solution for a given input. In medical imaging, for example, experts will often disagree about the exact location of object boundaries. Estimating this inherent uncertainty and predicting multiple plausible hypotheses is of great interest in many applications, yet this ability is lacking in most current deep learning methods. In this paper, we introduce stochastic segmentation networks (SSNs), an efficient probabilistic method for modelling aleatoric uncertainty with any image segmentation network architecture. In contrast to approaches that produce pixel-wise estimates, SSNs model joint distributions over entire label maps and thus can generate multiple spatially coherent hypotheses for a single image. By using a low-rank multivariate normal distribution over the logit space to model the probability of the label map given the image, we obtain a spatially consistent probability distribution that can be efficiently computed by a neural network without any changes to the underlying architecture. We tested our method on the segmentation of real-world medical data, including lung nodules in 2D CT and brain tumours in 3D multimodal MRI scans. SSNs outperform state-of-the-art for modelling correlated uncertainty in ambiguous images while being much simpler, more flexible, and more efficient. |
ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA | https://papers.nips.cc/paper_files/paper/2020/hash/962e56a8a0b0420d87272a682bfd1e53-Abstract.html | Ilyes Khemakhem, Ricardo Monti, Diederik Kingma, Aapo Hyvarinen | https://papers.nips.cc/paper_files/paper/2020/hash/962e56a8a0b0420d87272a682bfd1e53-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/962e56a8a0b0420d87272a682bfd1e53-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10795-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/962e56a8a0b0420d87272a682bfd1e53-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/962e56a8a0b0420d87272a682bfd1e53-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/962e56a8a0b0420d87272a682bfd1e53-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/962e56a8a0b0420d87272a682bfd1e53-Supplemental.zip | We consider the identifiability theory of probabilistic models and establish sufficient conditions under which the representations learnt by a very broad family of conditional energy-based models are unique in function space, up to a simple transformation. In our model family, the energy function is the dot-product between two feature extractors, one for the dependent variable, and one for the conditioning variable. We show that under mild conditions, the features are unique up to scaling and permutation. Our results extend recent developments in nonlinear ICA, and in fact, they lead to an important generalization of ICA models. In particular, we show that our model can be used for the estimation of the components in the framework of Independently Modulated Component Analysis (IMCA), a new generalization of nonlinear ICA that relaxes the independence assumption. A thorough empirical study shows that representations learnt by our model from real-world image datasets are identifiable, and improve performance in transfer learning and semi-supervised learning tasks. |
Testing Determinantal Point Processes | https://papers.nips.cc/paper_files/paper/2020/hash/964d1775b722eff11b8ecd9e9ed5bd9e-Abstract.html | Khashayar Gatmiry, Maryam Aliakbarpour, Stefanie Jegelka | https://papers.nips.cc/paper_files/paper/2020/hash/964d1775b722eff11b8ecd9e9ed5bd9e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/964d1775b722eff11b8ecd9e9ed5bd9e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10796-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/964d1775b722eff11b8ecd9e9ed5bd9e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/964d1775b722eff11b8ecd9e9ed5bd9e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/964d1775b722eff11b8ecd9e9ed5bd9e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/964d1775b722eff11b8ecd9e9ed5bd9e-Supplemental.pdf | Determinantal point processes (DPPs) are popular probabilistic models of diversity. In this paper, we investigate DPPs from a new perspective: property testing of distributions. Given sample access to an unknown distribution $q$ over the subsets of a ground set, we aim to distinguish whether $q$ is a DPP distribution or $\epsilon$-far from all DPP distributions in $\ell_1$-distance. In this work, we propose the first algorithm for testing DPPs. Furthermore, we establish a matching lower bound on the sample complexity of DPP testing. This lower bound also extends to showing a new hardness result for the problem of testing the more general class of log-submodular distributions. |
CogLTX: Applying BERT to Long Texts | https://papers.nips.cc/paper_files/paper/2020/hash/96671501524948bc3937b4b30d0e57b9-Abstract.html | Ming Ding, Chang Zhou, Hongxia Yang, Jie Tang | https://papers.nips.cc/paper_files/paper/2020/hash/96671501524948bc3937b4b30d0e57b9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/96671501524948bc3937b4b30d0e57b9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10797-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/96671501524948bc3937b4b30d0e57b9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/96671501524948bc3937b4b30d0e57b9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/96671501524948bc3937b4b30d0e57b9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/96671501524948bc3937b4b30d0e57b9-Supplemental.pdf | BERT is incapable of processing long texts due to its quadratically increasing memory and time consumption. Straightforward approaches to this problem, such as slicing the text with a sliding window or simplifying the transformer, suffer from insufficient long-range attention or require customized CUDA kernels. The limited text length of BERT is reminiscent of the limited capacity (5∼9 chunks) of human working memory – then how do human beings Cognize Long TeXts? Founded on the cognitive theory stemming from Baddeley, our CogLTX framework identifies key sentences by training a judge model, concatenates them for reasoning, and enables multi-step reasoning via rehearsal and decay. Since relevance annotations are usually unavailable, we propose to use treatment experiments to create supervision. As a general algorithm, CogLTX outperforms or achieves results comparable to SOTA models on NewsQA, HotpotQA, and multi-class and multi-label long-text classification tasks, with memory overhead independent of the text length. |
f-GAIL: Learning f-Divergence for Generative Adversarial Imitation Learning | https://papers.nips.cc/paper_files/paper/2020/hash/967990de5b3eac7b87d49a13c6834978-Abstract.html | Xin Zhang, Yanhua Li, Ziming Zhang, Zhi-Li Zhang | https://papers.nips.cc/paper_files/paper/2020/hash/967990de5b3eac7b87d49a13c6834978-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/967990de5b3eac7b87d49a13c6834978-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10798-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/967990de5b3eac7b87d49a13c6834978-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/967990de5b3eac7b87d49a13c6834978-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/967990de5b3eac7b87d49a13c6834978-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/967990de5b3eac7b87d49a13c6834978-Supplemental.pdf | Imitation learning (IL) aims to learn a policy from expert demonstrations that minimizes the discrepancy between the learner and expert behaviors. Various imitation learning algorithms have been proposed with different pre-determined divergences to quantify the discrepancy. This naturally gives rise to the following question: Given a set of expert demonstrations, which divergence can recover the expert policy more accurately with higher data efficiency? In this work, we propose f-GAIL – a new generative adversarial imitation learning model – that automatically learns a discrepancy measure from the f-divergence family as well as a policy capable of producing expert-like behaviors. Compared with IL baselines with various predefined divergence measures, f-GAIL learns better policies with higher data efficiency in six physics-based control tasks. |
Non-parametric Models for Non-negative Functions | https://papers.nips.cc/paper_files/paper/2020/hash/968b15768f3d19770471e9436d97913c-Abstract.html | Ulysse Marteau-Ferey, Francis Bach, Alessandro Rudi | https://papers.nips.cc/paper_files/paper/2020/hash/968b15768f3d19770471e9436d97913c-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/968b15768f3d19770471e9436d97913c-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10799-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/968b15768f3d19770471e9436d97913c-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/968b15768f3d19770471e9436d97913c-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/968b15768f3d19770471e9436d97913c-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/968b15768f3d19770471e9436d97913c-Supplemental.pdf | Linear models have shown great effectiveness and flexibility in many fields such as machine learning, signal processing and statistics. They can represent rich spaces of functions while preserving the convexity of the optimization problems where they are used, and are simple to evaluate, differentiate and integrate. However, for modeling non-negative functions, which are crucial for unsupervised learning, density estimation, or non-parametric Bayesian methods, linear models are not applicable directly. Moreover, current state-of-the-art models like generalized linear models either lead to non-convex optimization problems, or cannot be easily integrated. In this paper we provide the first model for non-negative functions which benefits from the same good properties of linear models. In particular, we prove that it admits a representer theorem and provide an efficient dual formulation for convex problems. We study its representation power, showing that the resulting space of functions is strictly richer than that of generalized linear models. Finally we extend the model and the theoretical results to functions with outputs in convex cones. The paper is complemented by an experimental evaluation of the model showing its effectiveness in terms of formulation, algorithmic derivation and practical results on the problems of density estimation, regression with heteroscedastic errors, and multiple quantile regression. |
Uncertainty Aware Semi-Supervised Learning on Graph Data | https://papers.nips.cc/paper_files/paper/2020/hash/968c9b4f09cbb7d7925f38aea3484111-Abstract.html | Xujiang Zhao, Feng Chen, Shu Hu, Jin-Hee Cho | https://papers.nips.cc/paper_files/paper/2020/hash/968c9b4f09cbb7d7925f38aea3484111-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/968c9b4f09cbb7d7925f38aea3484111-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10800-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/968c9b4f09cbb7d7925f38aea3484111-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/968c9b4f09cbb7d7925f38aea3484111-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/968c9b4f09cbb7d7925f38aea3484111-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/968c9b4f09cbb7d7925f38aea3484111-Supplemental.pdf | Thanks to graph neural networks (GNNs), semi-supervised node classification has shown the state-of-the-art performance in graph data. However, GNNs have not considered different types of uncertainties associated with class probabilities to minimize risk of increasing misclassification under uncertainty in real life. In this work, we propose a multi-source uncertainty framework using a GNN that reflects various types of predictive uncertainties in both deep learning and belief/evidence theory domains for node classification predictions. By collecting evidence from the given labels of training nodes, the Graph-based Kernel Dirichlet distribution Estimation (GKDE) method is designed for accurately predicting node-level Dirichlet distributions and detecting out-of-distribution (OOD) nodes. We validated the outperformance of our proposed model compared to the state-of-the-art counterparts in terms of misclassification detection and OOD detection based on six real network datasets. We found that dissonance-based detection yielded the best results on misclassification detection while vacuity-based detection was the best for OOD detection. To clarify the reasons behind the results, we provided the theoretical proof that explains the relationships between different types of uncertainties considered in this work. |
ConvBERT: Improving BERT with Span-based Dynamic Convolution | https://papers.nips.cc/paper_files/paper/2020/hash/96da2f590cd7246bbde0051047b0d6f7-Abstract.html | Zi-Hang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan | https://papers.nips.cc/paper_files/paper/2020/hash/96da2f590cd7246bbde0051047b0d6f7-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/96da2f590cd7246bbde0051047b0d6f7-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10801-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/96da2f590cd7246bbde0051047b0d6f7-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/96da2f590cd7246bbde0051047b0d6f7-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/96da2f590cd7246bbde0051047b0d6f7-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/96da2f590cd7246bbde0051047b0d6f7-Supplemental.zip | Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers from a large memory footprint and high computation cost. Although all its attention heads query the whole input sequence to generate the attention map from a global perspective, we observe that some heads only need to learn local dependencies, which implies the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads and directly model local dependencies. The novel convolution heads, together with the remaining self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, the ConvBERTbase model achieves an 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 of the training cost. Code and pre-trained models will be released. |
Practical No-box Adversarial Attacks against DNNs | https://papers.nips.cc/paper_files/paper/2020/hash/96e07156db854ca7b00b5df21716b0c6-Abstract.html | Qizhang Li, Yiwen Guo, Hao Chen | https://papers.nips.cc/paper_files/paper/2020/hash/96e07156db854ca7b00b5df21716b0c6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/96e07156db854ca7b00b5df21716b0c6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10802-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/96e07156db854ca7b00b5df21716b0c6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/96e07156db854ca7b00b5df21716b0c6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/96e07156db854ca7b00b5df21716b0c6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/96e07156db854ca7b00b5df21716b0c6-Supplemental.pdf | The study of adversarial vulnerabilities of deep neural networks (DNNs) has progressed rapidly. Existing attacks require either internal access (to the architecture, parameters, or training set of the victim model) or external access (to query the model). However, both kinds of access may be infeasible or expensive in many scenarios. We investigate no-box adversarial examples, where the attacker can access neither the model information nor the training set, and cannot query the model. Instead, the attacker can only gather a small number of examples from the same problem domain as that of the victim model. Such a stronger threat model greatly expands the applicability of adversarial attacks. We propose three mechanisms for training with a very small dataset (on the order of tens of examples) and find that prototypical reconstruction is the most effective. Our experiments show that adversarial examples crafted on prototypical auto-encoding models transfer well to a variety of image classification and face verification models. On a commercial celebrity recognition system held by clarifai.com, our approach significantly diminishes the average prediction accuracy of the system to only 15.40%, which is on par with the attack that transfers adversarial examples from a pre-trained Arcface model. Our code is publicly available at: https://github.com/qizhangli/nobox-attacks. |
Breaking the Sample Size Barrier in Model-Based Reinforcement Learning with a Generative Model | https://papers.nips.cc/paper_files/paper/2020/hash/96ea64f3a1aa2fd00c72faacf0cb8ac9-Abstract.html | Gen Li, Yuting Wei, Yuejie Chi, Yuantao Gu, Yuxin Chen | https://papers.nips.cc/paper_files/paper/2020/hash/96ea64f3a1aa2fd00c72faacf0cb8ac9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/96ea64f3a1aa2fd00c72faacf0cb8ac9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10803-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/96ea64f3a1aa2fd00c72faacf0cb8ac9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/96ea64f3a1aa2fd00c72faacf0cb8ac9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/96ea64f3a1aa2fd00c72faacf0cb8ac9-Review.html | null | We investigate the sample efficiency of reinforcement learning in a $\gamma$-discounted infinite-horizon Markov decision process (MDP) with state space S and action space A, assuming access to a generative model. Despite a number of prior work tackling this problem, a complete picture of the trade-offs between sample complexity and statistical accuracy is yet to be determined. In particular, prior results suffer from a sample size barrier, in the sense that their claimed statistical guarantees hold only when the sample size exceeds at least $ |S| |A| / (1-\gamma)^2 $ (up to some log factor). The current paper overcomes this barrier by certifying the minimax optimality of model-based reinforcement learning as soon as the sample size exceeds the order of $ |S| |A| / (1-\gamma) $ (modulo some log factor). More specifically, a perturbed model-based planning algorithm provably finds an $\epsilon$-optimal policy with an order of $ |S| |A| / ((1-\gamma)^3\epsilon^2 ) $ samples (up to log factor) for any $0< \epsilon < 1/(1-\gamma)$. Along the way, we derive improved (instance-dependent) guarantees for model-based policy evaluation. To the best of our knowledge, this work provides the first minimax-optimal guarantee in a generative model that accommodates the entire range of sample sizes (beyond which finding a meaningful policy is information theoretically impossible). |
Walking in the Shadow: A New Perspective on Descent Directions for Constrained Minimization | https://papers.nips.cc/paper_files/paper/2020/hash/96f2d6069db8ad895c34e2285d25c0ed-Abstract.html | Hassan Mortagy, Swati Gupta, Sebastian Pokutta | https://papers.nips.cc/paper_files/paper/2020/hash/96f2d6069db8ad895c34e2285d25c0ed-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/96f2d6069db8ad895c34e2285d25c0ed-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10804-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/96f2d6069db8ad895c34e2285d25c0ed-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/96f2d6069db8ad895c34e2285d25c0ed-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/96f2d6069db8ad895c34e2285d25c0ed-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/96f2d6069db8ad895c34e2285d25c0ed-Supplemental.pdf | Descent directions such as movement towards Frank-Wolfe vertices, away steps, in-face away steps and pairwise directions have been an important design consideration in conditional gradient descent (CGD) variants. In this work, we attempt to demystify the impact of movement in these directions towards attaining constrained minimizers. The best local direction of descent is the directional derivative of the projection of the gradient, which we refer to as the "shadow" of the gradient. We show that the continuous-time dynamics of moving in the shadow are equivalent to those of PGD however non-trivial to discretize. By projecting gradients in PGD, one not only ensures feasibility but also is able to "wrap" around the convex region. We show that Frank-Wolfe (FW) vertices in fact recover the maximal wrap one can obtain by projecting gradients, thus providing a new perspective to these steps. We also claim that the shadow steps give the best direction of descent emanating from the convex hull of all possible away-vertices. Opening up the PGD movements in terms of shadow steps gives linear convergence, dependent on the number of faces. We combine these insights into a novel Shadow-CG method that uses FW steps (i.e., wrap around the polytope) and shadow steps (i.e., optimal local descent direction), while enjoying linear convergence. Our analysis develops properties of directional derivatives of projections (which may be of independent interest), while providing a unifying view of various descent directions in the CGD literature. |
Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks | https://papers.nips.cc/paper_files/paper/2020/hash/96fca94df72984fc97ee5095410d4dec-Abstract.html | Alexander Shekhovtsov, Viktor Yanush, Boris Flach | https://papers.nips.cc/paper_files/paper/2020/hash/96fca94df72984fc97ee5095410d4dec-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/96fca94df72984fc97ee5095410d4dec-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10805-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/96fca94df72984fc97ee5095410d4dec-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/96fca94df72984fc97ee5095410d4dec-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/96fca94df72984fc97ee5095410d4dec-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/96fca94df72984fc97ee5095410d4dec-Supplemental.pdf | In neural networks with binary activations and/or binary weights, training by gradient descent is complicated because the model has a piecewise constant response. We consider stochastic binary networks, obtained by adding noise in front of activations. The expected model response becomes a smooth function of the parameters; its gradient is well defined, but it is challenging to estimate accurately. We propose a new method for this estimation problem that combines sampling and analytic approximation steps. The method has a significantly reduced variance at the price of a small bias, which gives a very practical trade-off in comparison with existing unbiased and biased estimators. We further show that one extra linearization step leads to a deep straight-through estimator previously known only as an ad-hoc heuristic. We experimentally show higher accuracy in gradient estimation and demonstrate more stable and better-performing training in deep convolutional models with both proposed methods. |
Reward Propagation Using Graph Convolutional Networks | https://papers.nips.cc/paper_files/paper/2020/hash/970627414218ccff3497cb7a784288f5-Abstract.html | Martin Klissarov, Doina Precup | https://papers.nips.cc/paper_files/paper/2020/hash/970627414218ccff3497cb7a784288f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/970627414218ccff3497cb7a784288f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10806-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/970627414218ccff3497cb7a784288f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/970627414218ccff3497cb7a784288f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/970627414218ccff3497cb7a784288f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/970627414218ccff3497cb7a784288f5-Supplemental.pdf | Potential-based reward shaping provides an approach for designing good reward functions, with the purpose of speeding up learning. However, automatically finding potential functions for complex environments is a difficult problem (in fact, of the same difficulty as learning a value function from scratch). We propose a new framework for learning potential functions by leveraging ideas from graph representation learning. Our approach relies on Graph Convolutional Networks which we use as a key ingredient in combination with the probabilistic inference view of reinforcement learning. More precisely, we leverage Graph Convolutional Networks to perform message passing from rewarding states. The propagated messages can then be used as potential functions for reward shaping to accelerate learning. We verify empirically that our approach can achieve considerable improvements in both small and high-dimensional control problems. |
LoopReg: Self-supervised Learning of Implicit Surface Correspondences, Pose and Shape for 3D Human Mesh Registration | https://papers.nips.cc/paper_files/paper/2020/hash/970af30e481057c48f87e101b61e6994-Abstract.html | Bharat Lal Bhatnagar, Cristian Sminchisescu, Christian Theobalt, Gerard Pons-Moll | https://papers.nips.cc/paper_files/paper/2020/hash/970af30e481057c48f87e101b61e6994-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/970af30e481057c48f87e101b61e6994-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10807-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/970af30e481057c48f87e101b61e6994-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/970af30e481057c48f87e101b61e6994-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/970af30e481057c48f87e101b61e6994-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/970af30e481057c48f87e101b61e6994-Supplemental.pdf | We address the problem of fitting 3D human models to 3D scans of dressed humans. Classical methods optimize both the data-to-model correspondences and the human model parameters (pose and shape), but are reliable only when initialised close to the solution. Some methods initialize the optimization based on fully supervised correspondence predictors, which is not differentiable end-to-end, and can only process a single scan at a time. Our main contribution is LoopReg, an end-to-end learning framework to register a corpus of scans to a common 3D human model. The key idea is to create a self-supervised loop. A backward map, parameterized by a Neural Network, predicts the correspondence from every scan point to the surface of the human model. A forward map, parameterized by a human model, transforms the corresponding points back to the scan based on the model parameters (pose and shape), thus closing the loop. Formulating this closed loop is not straightforward because it is not trivial to force the output of the NN to be on the surface of the human model -- outside this surface the human model is not even defined. To this end, we propose two key innovations. First, we define the canonical surface implicitly as the zero level set of a distance field in R3, which in contrast to more common UV parameterizations does not require cutting the surface, does not have discontinuities, and does not induce distortion. Second, we diffuse the human model to the 3D domain. This allows to map the NN predictions forward, even when they slightly deviate from the zero level set. Results demonstrate that we can train LoopReg mainly self-supervised -- following a supervised warm-start, the model becomes increasingly more accurate as additional unlabelled raw scans are processed. Our code and pre-trained models can be downloaded for research. |
Fully Dynamic Algorithm for Constrained Submodular Optimization | https://papers.nips.cc/paper_files/paper/2020/hash/9715d04413f296eaf3c30c47cec3daa6-Abstract.html | Silvio Lattanzi, Slobodan Mitrović, Ashkan Norouzi-Fard, Jakub M. Tarnawski, Morteza Zadimoghaddam | https://papers.nips.cc/paper_files/paper/2020/hash/9715d04413f296eaf3c30c47cec3daa6-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10808-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9715d04413f296eaf3c30c47cec3daa6-Supplemental.pdf | The task of maximizing a monotone submodular function under a cardinality constraint is at the core of many machine learning and data mining applications, including data summarization, sparse regression and coverage problems. We study this classic problem in the fully dynamic setting, where elements can be both inserted and removed. Our main result is a randomized algorithm that maintains an efficient data structure with a poly-logarithmic amortized update time and yields a $(1/2-\epsilon)$-approximate solution. We complement our theoretical analysis with an empirical study of the performance of our algorithm. |
Robust Optimal Transport with Applications in Generative Modeling and Domain Adaptation | https://papers.nips.cc/paper_files/paper/2020/hash/9719a00ed0c5709d80dfef33795dcef3-Abstract.html | Yogesh Balaji, Rama Chellappa, Soheil Feizi | https://papers.nips.cc/paper_files/paper/2020/hash/9719a00ed0c5709d80dfef33795dcef3-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9719a00ed0c5709d80dfef33795dcef3-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10809-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9719a00ed0c5709d80dfef33795dcef3-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9719a00ed0c5709d80dfef33795dcef3-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9719a00ed0c5709d80dfef33795dcef3-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9719a00ed0c5709d80dfef33795dcef3-Supplemental.pdf | Optimal Transport (OT) distances such as Wasserstein have been used in several areas such as GANs and domain adaptation. OT, however, is very sensitive to outliers (samples with large noise) in the data since in its objective function, every sample, including outliers, is weighed similarly due to the marginal constraints. To remedy this issue, robust formulations of OT with unbalanced marginal constraints have previously been proposed. However, employing these methods in deep learning problems such as GANs and domain adaptation is challenging due to the instability of their dual optimization solvers. In this paper, we resolve these issues by deriving a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications. We demonstrate the effectiveness of our formulation in two applications of GANs and domain adaptation. Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions. In particular, the proposed optimization method computes weights for training samples reflecting how difficult it is for those samples to be generated in the model. In domain adaptation, our robust OT formulation leads to improved accuracy compared to the standard adversarial adaptation methods. Our code is available at https://github.com/yogeshbalaji/robustOT. |
Autofocused oracles for model-based design | https://papers.nips.cc/paper_files/paper/2020/hash/972cda1e62b72640cb7ac702714a115f-Abstract.html | Clara Fannjiang, Jennifer Listgarten | https://papers.nips.cc/paper_files/paper/2020/hash/972cda1e62b72640cb7ac702714a115f-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/972cda1e62b72640cb7ac702714a115f-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10810-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/972cda1e62b72640cb7ac702714a115f-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/972cda1e62b72640cb7ac702714a115f-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/972cda1e62b72640cb7ac702714a115f-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/972cda1e62b72640cb7ac702714a115f-Supplemental.pdf | Data-driven design is making headway into a number of application areas, including protein, small-molecule, and materials engineering. The design goal is to construct an object with desired properties, such as a protein that binds to a therapeutic target, or a superconducting material with a higher critical temperature than previously observed. To that end, costly experimental measurements are being replaced with calls to high-capacity regression models trained on labeled data, which can be leveraged in an in silico search for design candidates. However, the design goal necessitates moving into regions of the design space beyond where such models were trained. Therefore, one can ask: should the regression model be altered as the design algorithm explores the design space, in the absence of new data? Herein, we answer this question in the affirmative. In particular, we (i) formalize the data-driven design problem as a non-zero-sum game, (ii) develop a principled strategy for retraining the regression model as the design algorithm proceeds---what we refer to as autofocusing, and (iii) demonstrate the promise of autofocusing empirically. |
Debiasing Averaged Stochastic Gradient Descent to handle missing values | https://papers.nips.cc/paper_files/paper/2020/hash/972ededf6c4d7c1405ef53f27d961eda-Abstract.html | Aude Sportisse, Claire Boyer, Aymeric Dieuleveut, Julie Josse | https://papers.nips.cc/paper_files/paper/2020/hash/972ededf6c4d7c1405ef53f27d961eda-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/972ededf6c4d7c1405ef53f27d961eda-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10811-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/972ededf6c4d7c1405ef53f27d961eda-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/972ededf6c4d7c1405ef53f27d961eda-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/972ededf6c4d7c1405ef53f27d961eda-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/972ededf6c4d7c1405ef53f27d961eda-Supplemental.zip | The stochastic gradient algorithm is a key ingredient of many machine learning methods and is particularly appropriate for large-scale learning. However, a major caveat of large data is its incompleteness. We propose an averaged stochastic gradient algorithm that handles missing values in linear models. This approach has the merit of requiring no modeling of the data distribution and of accounting for heterogeneous missing proportions.
In both streaming and finite-sample settings, we prove that this algorithm achieves a convergence rate of $\mathcal{O}(\frac{1}{n})$ at iteration $n$, the same as without missing values.
We show the convergence behavior and the relevance of the algorithm not only on synthetic data but also on real data sets, including data collected from a medical registry. |
Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning | https://papers.nips.cc/paper_files/paper/2020/hash/9739efc4f01292e764c86caa59af353e-Abstract.html | Younggyo Seo, Kimin Lee, Ignasi Clavera Gilaberte, Thanard Kurutach, Jinwoo Shin, Pieter Abbeel | https://papers.nips.cc/paper_files/paper/2020/hash/9739efc4f01292e764c86caa59af353e-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9739efc4f01292e764c86caa59af353e-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10812-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9739efc4f01292e764c86caa59af353e-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9739efc4f01292e764c86caa59af353e-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9739efc4f01292e764c86caa59af353e-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9739efc4f01292e764c86caa59af353e-Supplemental.pdf | Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance. However, learning a generalizable dynamics model robust to changes in dynamics remains a challenge since the target transition dynamics follow a multi-modal distribution. In this paper, we present a new model-based RL algorithm, coined trajectory-wise multiple choice learning, that learns a multi-headed dynamics model for dynamics generalization. The main idea is updating the most accurate prediction head to specialize each head in certain environments with similar dynamics, i.e., clustering environments. Moreover, we incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector, enabling the model to perform online adaptation to unseen environments. Finally, to utilize the specialized prediction heads more effectively, we propose an adaptive planning method, which selects the most accurate prediction head over a recent experience. Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods. Source code and videos are available at https://sites.google.com/view/trajectory-mcl. |
CompRess: Self-Supervised Learning by Compressing Representations | https://papers.nips.cc/paper_files/paper/2020/hash/975a1c8b9aee1c48d32e13ec30be7905-Abstract.html | Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash | https://papers.nips.cc/paper_files/paper/2020/hash/975a1c8b9aee1c48d32e13ec30be7905-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/975a1c8b9aee1c48d32e13ec30be7905-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10813-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/975a1c8b9aee1c48d32e13ec30be7905-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/975a1c8b9aee1c48d32e13ec30be7905-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/975a1c8b9aee1c48d32e13ec30be7905-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/975a1c8b9aee1c48d32e13ec30be7905-Supplemental.zip | Self-supervised learning aims to learn good representations with unlabeled data. Recent works have shown that larger models benefit more from self-supervised learning than smaller models. As a result, the gap between supervised and self-supervised learning has been greatly reduced for larger models. In this work, instead of designing a new pseudo task for self-supervised learning, we develop a model compression method to compress an already learned, deep self-supervised model (teacher) to a smaller one (student). We train the student model so that it mimics the relative similarity between the datapoints in the teacher's embedding space. For AlexNet, our method outperforms all previous methods including the fully supervised model on ImageNet linear evaluation (59.0% compared to 56.5%) and on nearest neighbor evaluation (50.7% compared to 41.4%). To the best of our knowledge, this is the first time a self-supervised AlexNet has outperformed supervised one on ImageNet classification. Our code is available here: https://github.com/UMBCvision/CompRess |
Sample complexity and effective dimension for regression on manifolds | https://papers.nips.cc/paper_files/paper/2020/hash/977f8b33d303564416bf9f4ab1c39720-Abstract.html | Andrew McRae, Justin Romberg, Mark Davenport | https://papers.nips.cc/paper_files/paper/2020/hash/977f8b33d303564416bf9f4ab1c39720-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/977f8b33d303564416bf9f4ab1c39720-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10814-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/977f8b33d303564416bf9f4ab1c39720-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/977f8b33d303564416bf9f4ab1c39720-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/977f8b33d303564416bf9f4ab1c39720-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/977f8b33d303564416bf9f4ab1c39720-Supplemental.pdf | We consider the theory of regression on a manifold using reproducing kernel Hilbert space methods. Manifold models arise in a wide variety of modern machine learning problems, and our goal is to help understand the effectiveness of various implicit and explicit dimensionality-reduction methods that exploit manifold structure. Our first key contribution is to establish a novel nonasymptotic version of the Weyl law from differential geometry. From this we are able to show that certain spaces of smooth functions on a manifold are effectively finite-dimensional, with a complexity that scales according to the manifold dimension rather than any ambient data dimension. Finally, we show that given (potentially noisy) function values taken uniformly at random over a manifold, a kernel regression estimator (derived from the spectral decomposition of the manifold) yields minimax-optimal error bounds that are controlled by the effective dimension. |
The phase diagram of approximation rates for deep neural networks | https://papers.nips.cc/paper_files/paper/2020/hash/979a3f14bae523dc5101c52120c535e9-Abstract.html | Dmitry Yarotsky, Anton Zhevnerchuk | https://papers.nips.cc/paper_files/paper/2020/hash/979a3f14bae523dc5101c52120c535e9-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/979a3f14bae523dc5101c52120c535e9-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10815-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/979a3f14bae523dc5101c52120c535e9-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/979a3f14bae523dc5101c52120c535e9-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/979a3f14bae523dc5101c52120c535e9-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/979a3f14bae523dc5101c52120c535e9-Supplemental.pdf | We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results. In particular, we generalize the existing result on the existence of deep discontinuous phase in ReLU networks to functional classes of arbitrary positive smoothness, and identify the boundary between the feasible and infeasible rates. Moreover, we show that all networks with a piecewise polynomial activation function have the same phase diagram. Next, we demonstrate that standard fully-connected architectures with a fixed width independent of smoothness can adapt to smoothness and achieve almost optimal rates. Finally, we consider deep networks with periodic activations ("deep Fourier expansion") and prove that they have very fast, nearly exponential approximation rates, thanks to the emerging capability of the network to implement efficient lookup operations. |
Timeseries Anomaly Detection using Temporal Hierarchical One-Class Network | https://papers.nips.cc/paper_files/paper/2020/hash/97e401a02082021fd24957f852e0e475-Abstract.html | Lifeng Shen, Zhuocong Li, James Kwok | https://papers.nips.cc/paper_files/paper/2020/hash/97e401a02082021fd24957f852e0e475-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/97e401a02082021fd24957f852e0e475-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10816-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/97e401a02082021fd24957f852e0e475-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/97e401a02082021fd24957f852e0e475-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/97e401a02082021fd24957f852e0e475-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/97e401a02082021fd24957f852e0e475-Supplemental.zip | Real-world timeseries have complex underlying temporal dynamics and the detection of anomalies is challenging. In this paper, we propose the Temporal Hierarchical One-Class (THOC) network, a temporal one-class classification model for timeseries anomaly detection. It captures temporal dynamics in multiple scales by using a dilated recurrent neural network with skip connections. Using multiple hyperspheres obtained with a hierarchical clustering process, a one-class objective called Multiscale Vector Data Description is defined. This allows the temporal dynamics to be well captured by a set of multi-resolution temporal clusters. To further facilitate representation learning, the hypersphere centers are encouraged to be orthogonal to each other, and a self-supervision task in the temporal domain is added. The whole model can be trained end-to-end. Extensive empirical studies on various real-world timeseries demonstrate that the proposed THOC network outperforms recent strong deep learning baselines on timeseries anomaly detection. |
EcoLight: Intersection Control in Developing Regions Under Extreme Budget and Network Constraints | https://papers.nips.cc/paper_files/paper/2020/hash/97e49161287e7a4f9b745366e4f9431b-Abstract.html | Sachin Chauhan, Kashish Bansal, Rijurekha Sen | https://papers.nips.cc/paper_files/paper/2020/hash/97e49161287e7a4f9b745366e4f9431b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/97e49161287e7a4f9b745366e4f9431b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10817-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/97e49161287e7a4f9b745366e4f9431b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/97e49161287e7a4f9b745366e4f9431b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/97e49161287e7a4f9b745366e4f9431b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/97e49161287e7a4f9b745366e4f9431b-Supplemental.zip | Effective intersection control can play an important role in reducing traffic congestion and associated vehicular emissions. This is vitally needed in developing countries, where air pollution is reaching life threatening levels. This paper presents EcoLight intersection control for developing regions, where budget is constrained and network connectivity is very poor. EcoLight learns effective control offline using state-of-the-art Deep Reinforcement Learning methods, but deploys highly efficient runtime control algorithms on low cost embedded devices that work stand-alone on road without server connectivity. EcoLight optimizes both average case and worst case values of throughput, travel time and other metrics, as evaluated on open-source datasets from New York and on a custom developing region dataset. |
Reconstructing Perceptive Images from Brain Activity by Shape-Semantic GAN | https://papers.nips.cc/paper_files/paper/2020/hash/9813b270ed0288e7c0388f0fd4ec68f5-Abstract.html | Tao Fang, Yu Qi, Gang Pan | https://papers.nips.cc/paper_files/paper/2020/hash/9813b270ed0288e7c0388f0fd4ec68f5-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9813b270ed0288e7c0388f0fd4ec68f5-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10818-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9813b270ed0288e7c0388f0fd4ec68f5-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9813b270ed0288e7c0388f0fd4ec68f5-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9813b270ed0288e7c0388f0fd4ec68f5-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9813b270ed0288e7c0388f0fd4ec68f5-Supplemental.pdf | Reconstructing perceived images from fMRI recordings is an absorbing research area in neuroscience and provides a potential brain-reading technology. The challenge lies in the fact that visual encoding in the brain is highly complex and not fully revealed.
Inspired by the theory that visual features are hierarchically represented in cortex, we propose to break the complex visual signals into multi-level components and decode each component separately. Specifically, we decode shape and semantic representations from the lower and higher visual cortex respectively, and merge the shape and semantic information to images by a generative adversarial network (Shape-Semantic GAN). This 'divide and conquer' strategy captures visual information more accurately. Experiments demonstrate that Shape-Semantic GAN improves the reconstruction similarity and image quality, and achieves the state-of-the-art image reconstruction performance. |
Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design | https://papers.nips.cc/paper_files/paper/2020/hash/985e9a46e10005356bbaf194249f6856-Abstract.html | Michael Dennis, Natasha Jaques, Eugene Vinitsky, Alexandre Bayen, Stuart Russell, Andrew Critch, Sergey Levine | https://papers.nips.cc/paper_files/paper/2020/hash/985e9a46e10005356bbaf194249f6856-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/985e9a46e10005356bbaf194249f6856-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10819-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/985e9a46e10005356bbaf194249f6856-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/985e9a46e10005356bbaf194249f6856-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/985e9a46e10005356bbaf194249f6856-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/985e9a46e10005356bbaf194249f6856-Supplemental.pdf | A wide range of reinforcement learning (RL) problems --- including robustness, transfer learning, unsupervised RL, and emergent complexity --- require specifying a distribution of tasks or environments in which a policy will be trained. However, creating a useful distribution of environments is error prone, and takes a significant amount of developer time and effort. We propose Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments. Existing approaches to automatically generating environments suffer from common failure modes: domain randomization cannot generate structure or adapt the difficulty of the environment to the agent's learning progress, and minimax adversarial training leads to worst-case environments that are often unsolvable. To generate structured, solvable environments for our protagonist agent, we introduce a second, antagonist agent that is allied with the environment-generating adversary. The adversary is motivated to generate environments which maximize regret, defined as the difference between the protagonist and antagonist agent's return. We call our technique Protagonist Antagonist Induced Regret Environment Design (PAIRED). Our experiments demonstrate that PAIRED produces a natural curriculum of increasingly complex environments, and PAIRED agents achieve higher zero-shot transfer performance when tested in highly novel environments. |
A Spectral Energy Distance for Parallel Speech Synthesis | https://papers.nips.cc/paper_files/paper/2020/hash/9873eaad153c6c960616c89e54fe155a-Abstract.html | Alexey Gritsenko, Tim Salimans, Rianne van den Berg, Jasper Snoek, Nal Kalchbrenner | https://papers.nips.cc/paper_files/paper/2020/hash/9873eaad153c6c960616c89e54fe155a-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/9873eaad153c6c960616c89e54fe155a-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10820-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/9873eaad153c6c960616c89e54fe155a-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/9873eaad153c6c960616c89e54fe155a-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/9873eaad153c6c960616c89e54fe155a-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/9873eaad153c6c960616c89e54fe155a-Supplemental.pdf | Speech synthesis is an important practical generative modeling problem that has seen great progress over the last few years, with likelihood-based autoregressive neural models now outperforming traditional concatenative systems. A downside of such autoregressive models is that they require executing tens of thousands of sequential operations per second of generated audio, making them ill-suited for deployment on specialized deep learning hardware. Here, we propose a new learning method that allows us to train highly parallel models of speech, without requiring access to an analytical likelihood function. Our approach is based on a generalized energy distance between the distributions of the generated and real audio. This spectral energy distance is a proper scoring rule with respect to the distribution over magnitude-spectrograms of the generated waveform audio and offers statistical consistency guarantees. The distance can be calculated from minibatches without bias, and does not involve adversarial learning, yielding a stable and consistent method for training implicit generative models. Empirically, we achieve state-of-the-art generation quality among implicit generative models, as judged by the recently-proposed cFDSD metric. When combining our method with adversarial techniques, we also improve upon the recently-proposed GAN-TTS model in terms of Mean Opinion Score as judged by trained human evaluators. |
Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations | https://papers.nips.cc/paper_files/paper/2020/hash/98b17f068d5d9b7668e19fb8ae470841-Abstract.html | Joel Dapello, Tiago Marques, Martin Schrimpf, Franziska Geiger, David Cox, James J. DiCarlo | https://papers.nips.cc/paper_files/paper/2020/hash/98b17f068d5d9b7668e19fb8ae470841-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/98b17f068d5d9b7668e19fb8ae470841-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10821-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/98b17f068d5d9b7668e19fb8ae470841-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/98b17f068d5d9b7668e19fb8ae470841-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/98b17f068d5d9b7668e19fb8ae470841-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/98b17f068d5d9b7668e19fb8ae470841-Supplemental.pdf | Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and struggle to recognize objects in corrupted images that are easily recognized by humans. Here, by making comparisons with primate neural data, we first observed that CNN models with a neural hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks. Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed weight neural network front-end that simulates primate V1, called the VOneBlock, followed by a neural network back-end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1: the linear-nonlinear-Poisson model, consisting of a biologically-constrained Gabor filter bank, simple and complex cell nonlinearities, and a V1 neuronal stochasticity generator. After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations comprised of white box adversarial attacks and common image corruptions. Finally, we show that all components of the VOneBlock work in synergy to improve robustness. While current CNN architectures are arguably brain-inspired, the results presented here demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in ImageNet-level computer vision applications. |
Learning from Positive and Unlabeled Data with Arbitrary Positive Shift | https://papers.nips.cc/paper_files/paper/2020/hash/98b297950041a42470269d56260243a1-Abstract.html | Zayd Hammoudeh, Daniel Lowd | https://papers.nips.cc/paper_files/paper/2020/hash/98b297950041a42470269d56260243a1-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/98b297950041a42470269d56260243a1-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10822-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/98b297950041a42470269d56260243a1-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/98b297950041a42470269d56260243a1-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/98b297950041a42470269d56260243a1-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/98b297950041a42470269d56260243a1-Supplemental.pdf | Positive-unlabeled (PU) learning trains a binary classifier using only positive and unlabeled data. A common simplifying assumption is that the positive data is representative of the target positive class. This assumption rarely holds in practice due to temporal drift, domain shift, and/or adversarial manipulation. This paper shows that PU learning is possible even with arbitrarily non-representative positive data given unlabeled data from the source and target distributions. Our key insight is that only the negative class's distribution need be fixed. We integrate this into two statistically consistent methods to address arbitrary positive bias - one approach combines negative-unlabeled learning with unlabeled-unlabeled learning while the other uses a novel, recursive risk estimator. Experimental results demonstrate our methods' effectiveness across numerous real-world datasets and forms of positive bias, including disjoint positive class-conditional supports. Additionally, we propose a general, simplified approach to address PU risk estimation overfitting. |
Deep Energy-based Modeling of Discrete-Time Physics | https://papers.nips.cc/paper_files/paper/2020/hash/98b418276d571e623651fc1d471c7811-Abstract.html | Takashi Matsubara, Ai Ishikawa, Takaharu Yaguchi | https://papers.nips.cc/paper_files/paper/2020/hash/98b418276d571e623651fc1d471c7811-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/98b418276d571e623651fc1d471c7811-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10823-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/98b418276d571e623651fc1d471c7811-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/98b418276d571e623651fc1d471c7811-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/98b418276d571e623651fc1d471c7811-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/98b418276d571e623651fc1d471c7811-Supplemental.pdf | Physical phenomena in the real world are often described by energy-based modeling theories, such as Hamiltonian mechanics or the Landau theory, which yield various physical laws. Recent developments in neural networks have enabled the mimicking of the energy conservation law by learning the underlying continuous-time differential equations. However, this may not be possible in discrete time, which is often the case in practical learning and computation. Moreover, other physical laws have been overlooked in the previous neural network models. In this study, we propose a deep energy-based physical model that admits a specific differential geometric structure. From this structure, the conservation or dissipation law of energy and the mass conservation law follow naturally. To ensure the energetic behavior in discrete time, we also propose an automatic discrete differentiation algorithm that enables neural networks to employ the discrete gradient method. |
Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning | https://papers.nips.cc/paper_files/paper/2020/hash/98dce83da57b0395e163467c9dae521b-Abstract.html | Iro Laina, Ruth Fong, Andrea Vedaldi | https://papers.nips.cc/paper_files/paper/2020/hash/98dce83da57b0395e163467c9dae521b-Abstract.html | NIPS 2020 | https://papers.nips.cc/paper_files/paper/2020/file/98dce83da57b0395e163467c9dae521b-AuthorFeedback.pdf | https://papers.nips.cc/paper_files/paper/10824-/bibtex | https://papers.nips.cc/paper_files/paper/2020/file/98dce83da57b0395e163467c9dae521b-MetaReview.html | https://papers.nips.cc/paper_files/paper/2020/file/98dce83da57b0395e163467c9dae521b-Paper.pdf | https://papers.nips.cc/paper_files/paper/2020/file/98dce83da57b0395e163467c9dae521b-Review.html | https://papers.nips.cc/paper_files/paper/2020/file/98dce83da57b0395e163467c9dae521b-Supplemental.pdf | The increasing impact of black box models, and particularly of unsupervised ones, comes with an increasing interest in tools to understand and interpret them. In this paper, we consider in particular how to characterise visual groupings discovered automatically by deep neural networks, starting with state-of-the-art clustering methods. In some cases, clusters readily correspond to an existing labelled dataset. However, often they do not, yet they still maintain an "intuitive interpretability''. We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings, including unsupervised ones. The idea is to measure (1) how well humans can learn to reproduce a grouping by measuring their ability to generalise from a small set of visual examples (learnability) and (2) whether the set of visual examples can be replaced by a succinct, textual description (describability). By assessing human annotators as classifiers, we remove the subjective quality of existing evaluation metrics. For better scalability, we finally propose a class-level captioning system to generate descriptions for visual groupings automatically and compare it to human annotators using the describability metric. |