Dataset fields: title, url, authors, detail_url, tags, AuthorFeedback, Bibtex, MetaReview, Paper, Review, Supplemental, abstract
Practical Quasi-Newton Methods for Training Deep Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/192fc044e74dffea144f9ac5dc9f3395-Abstract.html
Donald Goldfarb, Yi Ren, Achraf Bahamou
https://papers.nips.cc/paper_files/paper/2020/hash/192fc044e74dffea144f9ac5dc9f3395-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9925-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/192fc044e74dffea144f9ac5dc9f3395-Supplemental.pdf
We consider the development of practical stochastic quasi-Newton, and in particular Kronecker-factored block diagonal BFGS and L-BFGS methods, for training deep neural networks (DNNs). In DNN training, the number of variables and components of the gradient n is often of the order of tens of millions and the Hessian has n^2 elements. Consequently, computing and storing a full n × n BFGS approximation or storing a modest number of (step, change in gradient) vector pairs for use in an L-BFGS implementation is out of the question. In our proposed methods, we approximate the Hessian by a block-diagonal matrix and use the structure of the gradient and Hessian to further approximate these blocks, each of which corresponds to a layer, as the Kronecker product of two much smaller matrices. This is analogous to the approach in KFAC, which computes a Kronecker-factored block diagonal approximation to the Fisher matrix in a stochastic natural gradient method. Because of the indefinite and highly variable nature of the Hessian in a DNN, we also propose a new damping approach to keep the upper as well as the lower bounds of the BFGS and L-BFGS approximations bounded. In tests on autoencoder feed-forward network models with either nine or thirteen layers applied to three datasets, our methods outperformed or performed comparably to KFAC and state-of-the-art first-order stochastic methods.
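To make the memory argument above concrete, here is a minimal sketch (not the authors' code; the layer sizes m, n and factor matrices A, G are hypothetical) of why a Kronecker-factored block A ⊗ G is cheap to store and apply compared with the full layer-wise Hessian block:

```python
# Illustrative sketch: a layer with m inputs and n outputs has an
# (m*n) x (m*n) Hessian block, while the Kronecker factors A (m x m)
# and G (n x n) need only m^2 + n^2 entries.
import numpy as np

m, n = 300, 200                                      # hypothetical layer sizes
A = np.random.randn(m, m); A = A @ A.T + np.eye(m)   # small SPD factors
G = np.random.randn(n, n); G = G @ G.T + np.eye(n)

V = np.random.randn(n, m)          # a "vector" reshaped as an n x m matrix

# Applying the full block (A kron G) to vec(V) would need (m*n)^2 = 3.6e9
# stored entries; the identity (A ⊗ G) vec(V) = vec(G V Aᵀ) needs only
# m^2 + n^2 = 130,000.
HV = G @ V @ A.T

# Check the identity against an explicit Kronecker product on a tiny case.
a, g = A[:3, :3], G[:2, :2]
v = np.random.randn(2, 3)
lhs = (np.kron(a, g) @ v.reshape(-1, order="F")).reshape(2, 3, order="F")
rhs = g @ v @ a.T
assert np.allclose(lhs, rhs)
```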
Approximation Based Variance Reduction for Reparameterization Gradients
https://papers.nips.cc/paper_files/paper/2020/hash/193002e668758ea9762904da1a22337c-Abstract.html
Tomas Geffner, Justin Domke
https://papers.nips.cc/paper_files/paper/2020/hash/193002e668758ea9762904da1a22337c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9926-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/193002e668758ea9762904da1a22337c-Supplemental.pdf
Flexible variational distributions improve variational inference but are harder to optimize. In this work we present a control variate that is applicable for any reparameterizable distribution with known mean and covariance, e.g. Gaussians with any covariance structure. The control variate is based on a quadratic approximation of the model, and its parameters are set using a double-descent scheme. We empirically show that this control variate leads to large improvements in gradient variance and optimization convergence for inference with non-factorized variational distributions.
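As an illustration of the idea (a one-dimensional toy of my own, not the authors' scheme for setting the control-variate parameters), the sketch below builds a control variate from a quadratic Taylor surrogate of a hypothetical target f, whose expected gradient under a Gaussian is available in closed form:

```python
# Sketch (assumptions mine): quadratic control variate for the
# reparameterization gradient of E_q[f(z)] with q = N(mu, sigma^2).
import numpy as np

def f(z):                       # hypothetical target (e.g., a log-joint term)
    return np.sin(z) + 0.1 * z ** 2

def df(z):
    return np.cos(z) + 0.2 * z

mu, sigma = 0.5, 0.3
z0 = mu                                     # expand the surrogate at the mean
g, H = df(z0), -np.sin(z0) + 0.2            # gradient / Hessian of f at z0

def fhat_grad(z):                           # gradient of the quadratic surrogate
    return g + H * (z - z0)

rng = np.random.default_rng(0)
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

plain = df(z)                               # standard pathwise estimator of d/dmu
closed_form = g + H * (mu - z0)             # E_q[surrogate gradient], closed form
controlled = df(z) - (fhat_grad(z) - closed_form)   # same mean, lower variance

print(plain.var(), controlled.var())        # variance drops when f ≈ surrogate near mu
```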
Inference Stage Optimization for Cross-scenario 3D Human Pose Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/1943102704f8f8f3302c2b730728e023-Abstract.html
Jianfeng Zhang, Xuecheng Nie, Jiashi Feng
https://papers.nips.cc/paper_files/paper/2020/hash/1943102704f8f8f3302c2b730728e023-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9927-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1943102704f8f8f3302c2b730728e023-Supplemental.pdf
Existing 3D human pose estimation models suffer a performance drop when applied to new scenarios with unseen poses due to their limited generalizability. In this work, we propose a novel framework, Inference Stage Optimization (ISO), for improving the generalizability of 3D pose models when source and target data come from different pose distributions. Our main insight is that the target data, even though not labeled, carry valuable priors about their underlying distribution. To exploit such information, the proposed ISO performs geometry-aware self-supervised learning (SSL) on each single target instance and updates the 3D pose model before making predictions. In this way, the model can mine distributional knowledge about the target scenario and quickly adapt to it with enhanced generalization performance. In addition, to handle sequential target data, we propose an online mode for implementing our ISO framework via streaming the SSL, which substantially enhances its effectiveness. We systematically analyze why and how our ISO framework works on diverse benchmarks under the cross-scenario setup. Remarkably, it yields a new state-of-the-art of 83.6% 3D PCK on MPI-INF-3DHP, improving upon the previous best result by 9.7%.
Consistent feature selection for analytic deep neural networks
https://papers.nips.cc/paper_files/paper/2020/hash/1959eb9d5a0f7ebc58ebde81d5df400d-Abstract.html
Vu C. Dinh, Lam S. Ho
https://papers.nips.cc/paper_files/paper/2020/hash/1959eb9d5a0f7ebc58ebde81d5df400d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9928-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1959eb9d5a0f7ebc58ebde81d5df400d-Supplemental.zip
In this work, we investigate the problem of feature selection for analytic deep networks. We prove that for a wide class of networks, including deep feed-forward neural networks, convolutional neural networks and a major sub-class of residual neural networks, the Adaptive Group Lasso selection procedure with Group Lasso as the base estimator is selection-consistent. The work provides further evidence that Group Lasso might be inefficient for feature selection with neural networks and advocates the use of Adaptive Group Lasso over the popular Group Lasso.
Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification
https://papers.nips.cc/paper_files/paper/2020/hash/1963bd5135521d623f6c29e6b1174975-Abstract.html
Yulin Wang, Kangchen Lv, Rui Huang, Shiji Song, Le Yang, Gao Huang
https://papers.nips.cc/paper_files/paper/2020/hash/1963bd5135521d623f6c29e6b1174975-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9929-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1963bd5135521d623f6c29e6b1174975-Supplemental.pdf
The accuracy of deep convolutional neural networks (CNNs) generally improves when fueled with high-resolution images. However, this often comes at a high computational cost and a large memory footprint. Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification by processing a sequence of relatively small inputs, which are strategically selected from the original image with reinforcement learning. Such a dynamic decision process naturally facilitates adaptive inference at test time, i.e., it can be terminated once the model is sufficiently confident about its prediction and thus avoids further redundant computation. Notably, our framework is general and flexible as it is compatible with most state-of-the-art lightweight CNNs (such as MobileNets, EfficientNets and RegNets), which can be conveniently deployed as the backbone feature extractor. Experiments on ImageNet show that our method consistently improves the computational efficiency of a wide variety of deep models. For example, it further reduces the average latency of the highly efficient MobileNet-V3 on an iPhone XS Max by 20% without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/GFNet-Pytorch.
Information Maximization for Few-Shot Learning
https://papers.nips.cc/paper_files/paper/2020/hash/196f5641aa9dc87067da4ff90fd81e7b-Abstract.html
Malik Boudiaf, Imtiaz Ziko, Jérôme Rony, Jose Dolz, Pablo Piantanida, Ismail Ben Ayed
https://papers.nips.cc/paper_files/paper/2020/hash/196f5641aa9dc87067da4ff90fd81e7b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9930-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/196f5641aa9dc87067da4ff90fd81e7b-Supplemental.pdf
We introduce Transductive Information Maximization (TIM) for few-shot learning. Our method maximizes the mutual information between the query features and their label predictions for a given few-shot task, in conjunction with a supervision loss based on the support set. Furthermore, we propose a new alternating-direction solver for our mutual-information loss, which substantially speeds up the convergence of transductive inference over gradient-based optimization, while yielding similar accuracy. TIM inference is modular: it can be used on top of any base-training feature extractor. Following standard transductive few-shot settings, our comprehensive experiments demonstrate that TIM outperforms state-of-the-art methods significantly across various datasets and networks, when used on top of a fixed feature extractor trained with simple cross-entropy on the base classes, without resorting to complex meta-learning schemes. It consistently brings between 2% and 5% improvement in accuracy over the best performing method, not only on all the well-established few-shot benchmarks but also on more challenging scenarios, with domain shifts and larger numbers of classes.
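A minimal sketch of the objective the abstract describes, in my own notation (the released TIM code and its alternating-direction solver are not reproduced here): cross-entropy on the support set plus the mutual information between query predictions and labels, written as marginal entropy minus conditional entropy:

```python
# Sketch of a TIM-style loss: minimize support cross-entropy while
# maximizing I(query; predicted label) = H(marginal) - H(conditional).
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def tim_loss(support_logits, support_labels, query_logits, alpha=1.0):
    p_s = softmax(support_logits)
    ce = -np.log(p_s[np.arange(len(support_labels)), support_labels] + 1e-12).mean()

    p_q = softmax(query_logits)                       # (n_query, n_classes)
    marginal = p_q.mean(axis=0)                       # predicted class marginal
    h_marginal = -(marginal * np.log(marginal + 1e-12)).sum()
    h_conditional = -(p_q * np.log(p_q + 1e-12)).sum(axis=1).mean()

    # maximizing MI = h_marginal - h_conditional <=> minimizing its negative
    return ce - alpha * (h_marginal - h_conditional)
```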
Inverse Reinforcement Learning from a Gradient-based Learner
https://papers.nips.cc/paper_files/paper/2020/hash/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Abstract.html
Giorgia Ramponi, Gianluca Drappo, Marcello Restelli
https://papers.nips.cc/paper_files/paper/2020/hash/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9931-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/19aa6c6fb4ba9fcf39e893ff1fd5b5bd-Supplemental.pdf
Inverse Reinforcement Learning addresses the problem of inferring an expert's reward function from demonstrations. However, in many applications, we not only have access to the expert's near-optimal behaviour, but we also observe part of her learning process. In this paper, we propose a new algorithm for this setting, in which the goal is to recover the reward function being optimized by an agent, given a sequence of policies produced during learning. Our approach is based on the assumption that the observed agent is updating her policy parameters along the gradient direction. Then we extend our method to deal with the more realistic scenario where we only have access to a dataset of learning trajectories. For both settings, we provide theoretical insights into our algorithms' performance. Finally, we evaluate the approach in a simulated GridWorld environment and on the MuJoCo environments, comparing it with the state-of-the-art baseline.
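The core identifiability idea can be illustrated with a toy linear-reward example (assumptions mine: a reward linear in known features and exact gradient-ascent updates with a known step size); the reward weights are then recoverable from the observed parameter updates by least squares:

```python
# Toy sketch: if the observed learner follows
#   theta_{t+1} = theta_t + alpha * sum_k w_k * grad_k(theta_t),
# the weights w can be recovered from the observed updates by least squares.
import numpy as np

rng = np.random.default_rng(1)
d, K, T, alpha = 6, 3, 40, 0.05
w_true = np.array([1.0, -0.5, 2.0])

# grads[t, k] = hypothetical policy gradient of the k-th reward feature at step t
grads = rng.standard_normal((T, K, d))
deltas = alpha * np.einsum("k,tkd->td", w_true, grads)   # observed updates

# Stack one linear system per step: deltas[t] = alpha * grads[t].T @ w
A = alpha * grads.transpose(0, 2, 1).reshape(T * d, K)
b = deltas.reshape(T * d)
w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(w_hat)   # ≈ w_true in this noiseless toy setting
```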
Bayesian Multi-type Mean Field Multi-agent Imitation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/19eca5979ccbb752778e6c5f090dc9b6-Abstract.html
Fan Yang, Alina Vereshchaka, Changyou Chen, Wen Dong
https://papers.nips.cc/paper_files/paper/2020/hash/19eca5979ccbb752778e6c5f090dc9b6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9932-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/19eca5979ccbb752778e6c5f090dc9b6-Supplemental.zip
Multi-agent imitation learning (MAIL) refers to the problem in which agents learn to perform a task interactively in a multi-agent system by observing and mimicking expert demonstrations, without any knowledge of a reward function from the environment. MAIL has received a lot of attention due to promising results achieved on synthesized tasks, with the potential to be applied to complex real-world multi-agent tasks. Key challenges for MAIL include sample efficiency and scalability. In this paper, we propose Bayesian multi-type mean field multi-agent imitation learning (BM3IL). Our method improves sample efficiency by establishing a Bayesian formulation for MAIL, and enhances scalability by introducing a new multi-type mean field approximation. We demonstrate the performance of our algorithm through benchmarking against three state-of-the-art multi-agent imitation learning algorithms on several tasks, including solving a multi-agent traffic optimization problem in a real-world transportation network. Experimental results indicate that our algorithm significantly outperforms all other algorithms in all scenarios.
Bayesian Robust Optimization for Imitation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1a669e81c8093745261889539694be7f-Abstract.html
Daniel Brown, Scott Niekum, Marek Petrik
https://papers.nips.cc/paper_files/paper/2020/hash/1a669e81c8093745261889539694be7f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9933-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1a669e81c8093745261889539694be7f-Supplemental.pdf
One of the main challenges in imitation learning is determining what action an agent should take when outside the state distribution of the demonstrations. Inverse reinforcement learning (IRL) can enable generalization to new states by learning a parameterized reward function, but these approaches still face uncertainty over the true reward function and corresponding optimal policy. Existing safe imitation learning approaches based on IRL deal with this uncertainty using a maxmin framework that optimizes a policy under the assumption of an adversarial reward function, whereas risk-neutral IRL approaches optimize a policy for either the mean or the MAP reward function. While completely ignoring risk can lead to overly aggressive and unsafe policies, optimizing in a fully adversarial sense is also problematic as it can lead to overly conservative policies that perform poorly in practice. To provide a bridge between these two extremes, we propose Bayesian Robust Optimization for Imitation Learning (BROIL). BROIL leverages Bayesian reward function inference and a user-specific risk tolerance to efficiently optimize a robust policy that balances expected return and conditional value at risk. Our empirical results show that BROIL provides a natural way to interpolate between return-maximizing and risk-minimizing behaviors and outperforms existing risk-sensitive and risk-neutral inverse reinforcement learning algorithms.
Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance
https://papers.nips.cc/paper_files/paper/2020/hash/1a77befc3b608d6ed363567685f70e1e-Abstract.html
Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Basri Ronen, Yaron Lipman
https://papers.nips.cc/paper_files/paper/2020/hash/1a77befc3b608d6ed363567685f70e1e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9934-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1a77befc3b608d6ed363567685f70e1e-Supplemental.zip
In this work we address the challenging problem of multiview 3D surface reconstruction. We introduce a neural network architecture that simultaneously learns the unknown geometry, camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera. The geometry is represented as a zero level-set of a neural network, while the neural renderer, derived from the rendering equation, is capable of (implicitly) modeling a wide set of lighting conditions and materials. We trained our network on real world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset. We found our model to produce state of the art 3D surface reconstructions with high fidelity, resolution and detail.
Riemannian Continuous Normalizing Flows
https://papers.nips.cc/paper_files/paper/2020/hash/1aa3d9c6ce672447e1e5d0f1b5207e85-Abstract.html
Emile Mathieu, Maximilian Nickel
https://papers.nips.cc/paper_files/paper/2020/hash/1aa3d9c6ce672447e1e5d0f1b5207e85-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9935-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1aa3d9c6ce672447e1e5d0f1b5207e85-Supplemental.pdf
Normalizing flows have shown great promise for modelling flexible probability distributions in a computationally tractable way. However, whilst data is often naturally described on Riemannian manifolds such as spheres, tori, and hyperbolic spaces, most normalizing flows implicitly assume a flat geometry, making them either misspecified or ill-suited in these situations. To overcome this problem, we introduce Riemannian continuous normalizing flows, a model which admits the parametrization of flexible probability measures on smooth manifolds by defining flows as the solution to ordinary differential equations. We show that this approach can lead to substantial improvements on both synthetic and real-world data when compared to standard flows or previously introduced projected flows.
Attention-Gated Brain Propagation: How the brain can implement reward-based error backpropagation
https://papers.nips.cc/paper_files/paper/2020/hash/1abb1e1ea5f481b589da52303b091cbb-Abstract.html
Isabella Pozzi, Sander Bohte, Pieter Roelfsema
https://papers.nips.cc/paper_files/paper/2020/hash/1abb1e1ea5f481b589da52303b091cbb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9936-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1abb1e1ea5f481b589da52303b091cbb-Supplemental.zip
Much recent work has focused on biologically plausible variants of supervised learning algorithms. However, there is no teacher in the motor cortex that instructs the motor neurons, and learning in the brain depends on reward and punishment. We demonstrate a biologically plausible reinforcement learning scheme for deep networks with an arbitrary number of layers. The network chooses an action by selecting a unit in the output layer and uses feedback connections to assign credit to the units in successively lower layers that are responsible for this action. After the choice, the network receives reinforcement and there is no teacher correcting the errors. We show how the new learning scheme – Attention-Gated Brain Propagation (BrainProp) – is mathematically equivalent to error backpropagation, for one output unit at a time. We demonstrate successful learning of deep fully connected, convolutional and locally connected networks on classical and hard image-classification benchmarks: MNIST, CIFAR10, CIFAR100 and Tiny ImageNet. BrainProp achieves an accuracy that is equivalent to that of standard error-backpropagation, and better than state-of-the-art biologically inspired learning schemes. The trial-and-error nature of learning is associated with limited additional training time, so that BrainProp is a factor of 1-3.5 slower. Our results thereby provide new insights into how deep learning may be implemented in the brain.
Asymptotic Guarantees for Generative Modeling Based on the Smooth Wasserstein Distance
https://papers.nips.cc/paper_files/paper/2020/hash/1ac978c8020be6d7212aa71d4f040fc3-Abstract.html
Ziv Goldfeld, Kristjan Greenewald, Kengo Kato
https://papers.nips.cc/paper_files/paper/2020/hash/1ac978c8020be6d7212aa71d4f040fc3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9937-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ac978c8020be6d7212aa71d4f040fc3-Supplemental.pdf
Minimum distance estimation (MDE) gained recent attention as a formulation of (implicit) generative modeling. It considers minimizing, over model parameters, a statistical distance between the empirical data distribution and the model. This formulation lends itself well to theoretical analysis, but typical results are hindered by the curse of dimensionality. To overcome this and devise a scalable finite-sample statistical MDE theory, we adopt the framework of smooth 1-Wasserstein distance (SWD) $\mathsf{W}_1^{(\sigma)}$. The SWD was recently shown to preserve the metric and topological structure of classic Wasserstein distances, while enjoying dimension-free empirical convergence rates. In this work, we conduct a thorough statistical study of the minimum smooth Wasserstein estimators (MSWEs), first proving the estimator's measurability and asymptotic consistency. We then characterize the limit distribution of the optimal model parameters and their associated minimal SWD. These results imply an $O(n^{-1/2})$ generalization bound for generative modeling based on MSWE, which holds in arbitrary dimension. Our main technical tool is a novel high-dimensional limit distribution result for empirical $\mathsf{W}_1^{(\sigma)}$. The characterization of a nondegenerate limit stands in sharp contrast with the classic empirical 1-Wasserstein distance, for which a similar result is known only in the one-dimensional case. The validity of our theory is supported by empirical results, posing the SWD as a potent tool for learning and inference in high dimensions.
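For intuition about the distance itself (a one-dimensional Monte-Carlo toy of mine, not the paper's estimator or theory), the smooth Wasserstein distance is the ordinary W1 distance after convolving both distributions with a Gaussian of scale sigma, which can be approximated by adding Gaussian noise to the samples:

```python
# Rough 1-D sketch of W1^(sigma): W1 between the two distributions after
# each is convolved with N(0, sigma^2).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
sigma, n = 0.5, 5_000

p_samples = rng.standard_normal(n)               # "data" distribution
q_samples = rng.standard_normal(n) * 1.3 + 0.2   # hypothetical model distribution

smooth_p = p_samples + sigma * rng.standard_normal(n)
smooth_q = q_samples + sigma * rng.standard_normal(n)

print(wasserstein_distance(p_samples, q_samples))   # classic empirical W1
print(wasserstein_distance(smooth_p, smooth_q))     # empirical W1^(sigma)
```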
Online Robust Regression via SGD on the l1 loss
https://papers.nips.cc/paper_files/paper/2020/hash/1ae6464c6b5d51b363d7d96f97132c75-Abstract.html
Scott Pesme, Nicolas Flammarion
https://papers.nips.cc/paper_files/paper/2020/hash/1ae6464c6b5d51b363d7d96f97132c75-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9938-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ae6464c6b5d51b363d7d96f97132c75-Supplemental.pdf
We consider the robust linear regression problem in the online setting where we have access to the data in a streaming manner, one data point after the other. More specifically, for a true parameter $ \theta^* $, we consider the corrupted Gaussian linear model $y = \langle x, \theta^* \rangle + \varepsilon + b$ where the adversarial noise $b$ can take any value with probability $\eta$ and equals zero otherwise. We consider this adversary to be oblivious (i.e., $b$ independent of the data) since this is the only contamination model under which consistency is possible. Current algorithms rely on having the whole data at hand in order to identify and remove the outliers. In contrast, we show in this work that stochastic gradient descent on the l1 loss converges to the true parameter vector at a $\tilde{O}( 1 / (1 - \eta)^2 n )$ rate which is independent of the values of the contaminated measurements. Our proof relies on the elegant smoothing of the l1 loss by the Gaussian data and a classical non-asymptotic analysis of Polyak-Ruppert averaged SGD. In addition, we provide experimental evidence of the efficiency of this simple and highly scalable algorithm.
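A minimal streaming sketch of this setting (my own step-size schedule and toy corruption model, not the paper's implementation): subgradient SGD on the absolute loss with Polyak-Ruppert averaging under oblivious corruptions:

```python
# Streaming robust regression: one SGD step on |<x, theta> - y| per data point.
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 10, 50_000, 0.2                      # eta = corruption probability
theta_star = rng.standard_normal(d)

theta = np.zeros(d)
theta_bar = np.zeros(d)
for t in range(1, n + 1):
    x = rng.standard_normal(d)
    y = x @ theta_star + 0.1 * rng.standard_normal()
    if rng.random() < eta:                        # oblivious adversarial noise
        y += rng.uniform(-100, 100)
    step = 1.0 / np.sqrt(t)                       # hypothetical step-size schedule
    theta -= step * np.sign(x @ theta - y) * x    # subgradient of |<x, theta> - y|
    theta_bar += (theta - theta_bar) / t          # running Polyak-Ruppert average

print(np.linalg.norm(theta_bar - theta_star))     # stays small despite the outliers
```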
PRANK: motion Prediction based on RANKing
https://papers.nips.cc/paper_files/paper/2020/hash/1b0251ccb8bd5f9ccf444e4bda7713e3-Abstract.html
Yuriy Biktairov, Maxim Stebelev, Irina Rudenko, Oleh Shliazhko, Boris Yangel
https://papers.nips.cc/paper_files/paper/2020/hash/1b0251ccb8bd5f9ccf444e4bda7713e3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9939-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b0251ccb8bd5f9ccf444e4bda7713e3-Supplemental.pdf
Predicting the motion of agents such as pedestrians or human-driven vehicles is one of the most critical problems in the autonomous driving domain. The overall safety of driving and the comfort of a passenger directly depend on its successful solution. The motion prediction problem also remains one of the most challenging problems in autonomous driving engineering, mainly due to the high variance of the possible agent’s future behavior given a situation. The two phenomena responsible for the said variance are the multimodality caused by the uncertainty of the agent’s intent (e.g., turn right or move forward) and uncertainty in the realization of a given intent (e.g., which lane to turn into). To be useful within a real-time autonomous driving pipeline, a motion prediction system must provide efficient ways to describe and quantify this uncertainty, such as computing posterior modes and their probabilities or estimating density at the point corresponding to a given trajectory. It also should not put substantial density on physically impossible trajectories, as they can confuse the system processing the predictions. In this paper, we introduce the PRANK method, which satisfies these requirements. PRANK takes rasterized bird’s-eye images of the agent’s surroundings as input and extracts features of the scene with a convolutional neural network. It then produces the conditional distribution of the agent’s trajectories plausible in the given scene. The key contribution of PRANK is a way to represent that distribution using nearest-neighbor methods in latent trajectory space, which allows for efficient inference in real time. We evaluate PRANK on the in-house and Argoverse datasets, where it shows competitive results.
Fighting Copycat Agents in Behavioral Cloning from Observation Histories
https://papers.nips.cc/paper_files/paper/2020/hash/1b113258af3968aaf3969ca67e744ff8-Abstract.html
Chuan Wen, Jierui Lin, Trevor Darrell, Dinesh Jayaraman, Yang Gao
https://papers.nips.cc/paper_files/paper/2020/hash/1b113258af3968aaf3969ca67e744ff8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9940-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b113258af3968aaf3969ca67e744ff8-Supplemental.pdf
Imitation learning trains policies to map from input observations to the actions that an expert would choose. In this setting, distribution shift frequently exacerbates the effect of misattributing expert actions to nuisance correlates among the observed variables. We observe that a common instance of this causal confusion occurs in partially observed settings when expert actions are strongly correlated over time: the imitator learns to cheat by predicting the expert's previous action, rather than the next action. To combat this "copycat problem", we propose an adversarial approach to learn a feature representation that removes excess information about the previous expert action nuisance correlate, while retaining the information necessary to predict the next action. In our experiments, our approach improves performance significantly across a variety of partially observed imitation learning tasks.
Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model
https://papers.nips.cc/paper_files/paper/2020/hash/1b33d16fc562464579b7199ca3114982-Abstract.html
Raphaël Berthier, Francis Bach, Pierre Gaillard
https://papers.nips.cc/paper_files/paper/2020/hash/1b33d16fc562464579b7199ca3114982-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9941-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b33d16fc562464579b7199ca3114982-Supplemental.pdf
In the context of statistical supervised learning, the noiseless linear model assumes that there exists a deterministic linear relation $Y = \langle \theta_*, \Phi(U) \rangle$ between the random output $Y$ and the random feature vector $\Phi(U)$, a potentially non-linear transformation of the inputs~$U$. We analyze the convergence of single-pass, fixed step-size stochastic gradient descent on the least-square risk under this model. The convergence of the iterates to the optimum $\theta_*$ and the decay of the generalization error follow polynomial convergence rates with exponents that both depend on the regularities of the optimum $\theta_*$ and of the feature vectors $\Phi(U)$. We interpret our result in the reproducing kernel Hilbert space framework. As a special case, we analyze an online algorithm for estimating a real function on the unit hypercube from the noiseless observation of its value at randomly sampled points; the convergence depends on the Sobolev smoothness of the function and of a chosen kernel. Finally, we apply our analysis beyond the supervised learning setting to obtain convergence rates for the averaging process (a.k.a. gossip algorithm) on a graph depending on its spectral dimension.
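The setting is easy to reproduce in a few lines (the feature map and step size below are my own toy choices, not the paper's experiments): single-pass, fixed-step SGD on the least-squares risk when the responses are exactly linear in the features:

```python
# Noiseless linear model: y = <theta_*, phi(u)> exactly, single-pass SGD.
import numpy as np

rng = np.random.default_rng(0)

def phi(u):                       # hypothetical nonlinear feature map
    return np.array([1.0, u, u**2, np.sin(np.pi * u)])

theta_star = np.array([0.3, -1.0, 0.5, 2.0])
theta, gamma = np.zeros(4), 0.1   # fixed step size

for _ in range(200_000):          # one pass over a stream of fresh samples
    u = rng.uniform(-1, 1)
    x = phi(u)
    y = theta_star @ x            # noiseless observation
    theta -= gamma * (theta @ x - y) * x

print(np.linalg.norm(theta - theta_star))   # error shrinks with the pass length
```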
Structured Prediction for Conditional Meta-Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1b69ebedb522700034547abc5652ffac-Abstract.html
Ruohan Wang, Yiannis Demiris, Carlo Ciliberto
https://papers.nips.cc/paper_files/paper/2020/hash/1b69ebedb522700034547abc5652ffac-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9942-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b69ebedb522700034547abc5652ffac-Supplemental.pdf
The goal of optimization-based meta-learning is to find a single initialization shared across a distribution of tasks to speed up the process of learning new tasks. Conditional meta-learning seeks task-specific initialization to better capture complex task distributions and improve performance. However, many existing conditional methods are difficult to generalize and lack theoretical guarantees. In this work, we propose a new perspective on conditional meta-learning via structured prediction. We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions by weighing meta-training data on target tasks. Our non-parametric approach is model-agnostic and can be combined with existing meta-learning methods to achieve conditioning. Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
Optimal Lottery Tickets via Subset Sum: Logarithmic Over-Parameterization is Sufficient
https://papers.nips.cc/paper_files/paper/2020/hash/1b742ae215adf18b75449c6e272fd92d-Abstract.html
Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos
https://papers.nips.cc/paper_files/paper/2020/hash/1b742ae215adf18b75449c6e272fd92d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9943-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b742ae215adf18b75449c6e272fd92d-Supplemental.pdf
The strong lottery ticket hypothesis (LTH) postulates that one can approximate any target neural network by only pruning the weights of a sufficiently over-parameterized random network. A recent work by Malach et al. [MYSS20] establishes the first theoretical analysis for the strong LTH: one can provably approximate a neural network of width $d$ and depth $l$, by pruning a random one that is a factor $O(d^4 l^2)$ wider and twice as deep. This polynomial over-parameterization requirement is at odds with recent experimental research that achieves good approximation with networks that are a small factor wider than the target. In this work, we close the gap and offer an exponential improvement to the over-parameterization requirement for the existence of lottery tickets. We show that any target network of width $d$ and depth $l$ can be approximated by pruning a random network that is a factor $O(\log(dl))$ wider and twice as deep. Our analysis heavily relies on connecting pruning random ReLU networks to random instances of the Subset Sum problem. We then show that this logarithmic over-parameterization is essentially optimal for constant depth networks. Finally, we verify several of our theoretical insights with experiments.
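The subset-sum connection mentioned in the abstract can be illustrated on a single weight (a brute-force toy of mine, not the paper's construction): a target value is approximated by pruning a small pool of random weights, i.e. keeping the subset whose sum is closest:

```python
# Toy subset-sum view of pruning: approximate one target weight by the
# best-summing subset of a small pool of random candidate weights.
import itertools
import numpy as np

rng = np.random.default_rng(0)
target = 0.37
pool = rng.uniform(-1, 1, size=16)        # random candidate weights

best_subset, best_err = (), abs(target)   # empty subset = prune everything
for r in range(1, len(pool) + 1):
    for subset in itertools.combinations(range(len(pool)), r):
        err = abs(pool[list(subset)].sum() - target)
        if err < best_err:
            best_subset, best_err = subset, err

print(best_subset, best_err)              # the achievable error is typically tiny
```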
The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes
https://papers.nips.cc/paper_files/paper/2020/hash/1b84c4cee2b8b3d823b30e2d604b1878-Abstract.html
Douwe Kiela, Hamed Firooz, Aravind Mohan, Vedanuj Goswami, Amanpreet Singh, Pratik Ringshia, Davide Testuggine
https://papers.nips.cc/paper_files/paper/2020/hash/1b84c4cee2b8b3d823b30e2d604b1878-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9944-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b84c4cee2b8b3d823b30e2d604b1878-Supplemental.pdf
This work proposes a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes. It is constructed such that unimodal models struggle and only multimodal models can succeed: difficult examples (“benign confounders”) are added to the dataset to make it hard to rely on unimodal signals. The task requires subtle reasoning, yet is straightforward to evaluate as a binary classification problem. We provide baseline performance numbers for unimodal models, as well as for multimodal models with various degrees of sophistication. We find that state-of-the-art methods perform poorly compared to humans, illustrating the difficulty of the task and highlighting the challenge that this important problem poses to the community.
Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
https://papers.nips.cc/paper_files/paper/2020/hash/1b9a80606d74d3da6db2f1274557e644-Abstract.html
Lingkai Kong, Molei Tao
https://papers.nips.cc/paper_files/paper/2020/hash/1b9a80606d74d3da6db2f1274557e644-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9945-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1b9a80606d74d3da6db2f1274557e644-Supplemental.zip
This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors. In particular, it shows that if the objective function exhibits multiscale behaviors, then in a large learning rate regime which only resolves the macroscopic but not the microscopic details of the objective, the deterministic GD dynamics can become chaotic and converge not to a local minimizer but to a statistical distribution. In this sense, deterministic GD resembles stochastic GD even though no stochasticity is injected. A sufficient condition is also established for approximating this long-time statistical limit by a rescaled Gibbs distribution, which for example allows escapes from local minima to be quantified. Both theoretical and numerical demonstrations are provided, and the theoretical part relies on the construction of a stochastic map that uses bounded noise (as opposed to Gaussian noise).
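A small numerical illustration of the claim (toy objective and learning rate chosen by me, not taken from the paper): on a two-scale objective, GD with a step size that only resolves the macroscopic bowl keeps wandering instead of settling into one of the many microscopic minima:

```python
# Deterministic GD on a multiscale objective with a "large" learning rate.
import numpy as np

def grad(x, eps=0.01):
    # f(x) = x^2 / 2 + eps * cos(x / eps): macroscopic bowl + fast wiggles
    return x - np.sin(x / eps)

x, lr = 1.3, 0.5          # lr resolves the bowl but not the eps-scale wiggles
iterates = []
for _ in range(10_000):
    x -= lr * grad(x)
    iterates.append(x)

tail = np.array(iterates[5_000:])
print(tail.std())          # the late iterates stay spread out (a statistical
                           # distribution) rather than collapsing to one minimizer
```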
Identifying Learning Rules From Neural Network Observables
https://papers.nips.cc/paper_files/paper/2020/hash/1ba922ac006a8e5f2b123684c2f4d65f-Abstract.html
Aran Nayebi, Sanjana Srivastava, Surya Ganguli, Daniel L. Yamins
https://papers.nips.cc/paper_files/paper/2020/hash/1ba922ac006a8e5f2b123684c2f4d65f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9946-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ba922ac006a8e5f2b123684c2f4d65f-Supplemental.pdf
The brain modifies its synaptic strengths during learning in order to better adapt to its environment. However, the underlying plasticity rules that govern learning are unknown. Many proposals have been suggested, including Hebbian mechanisms, explicit error backpropagation, and a variety of alternatives. It is an open question as to what specific experimental measurements would need to be made to determine whether any given learning rule is operative in a real biological system. In this work, we take a "virtual experimental" approach to this problem. Simulating idealized neuroscience experiments with artificial neural networks, we generate a large-scale dataset of learning trajectories of aggregate statistics measured in a variety of neural network architectures, loss functions, learning rule hyperparameters, and parameter initializations. We then take a discriminative approach, training linear and simple non-linear classifiers to identify learning rules from features based on these observables. We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes, and that these results generalize to limited access to the trajectory and held-out architectures and learning curricula. We identify the statistics of each observable that are most relevant for rule identification, finding that statistics from network activities across training are more robust to unit undersampling and measurement noise than those obtained from the synaptic strengths. Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities on the order of several hundred units, frequently measured at wider intervals over the course of learning, may provide a good basis on which to identify learning rules.
Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions
https://papers.nips.cc/paper_files/paper/2020/hash/1bd413de70f32142f4a33a94134c5690-Abstract.html
Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Emmanouil Zampetakis
https://papers.nips.cc/paper_files/paper/2020/hash/1bd413de70f32142f4a33a94134c5690-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9947-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd413de70f32142f4a33a94134c5690-Supplemental.pdf
respect to the Renyi Divergence, which provides improved theoretical and practical results in differentially private submodular optimization.
Weakly-Supervised Reinforcement Learning for Controllable Behavior
https://papers.nips.cc/paper_files/paper/2020/hash/1bd69c7df3112fb9a584fbd9edfc6c90-Abstract.html
Lisa Lee, Ben Eysenbach, Russ R. Salakhutdinov, Shixiang (Shane) Gu, Chelsea Finn
https://papers.nips.cc/paper_files/paper/2020/hash/1bd69c7df3112fb9a584fbd9edfc6c90-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9948-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1bd69c7df3112fb9a584fbd9edfc6c90-Supplemental.zip
Reinforcement learning (RL) is a powerful framework for learning to take actions to solve tasks. However, in many settings, an agent must winnow down the inconceivably large space of all possible tasks to the single task that it is currently being asked to solve. Can we instead constrain the space of tasks to those that are semantically meaningful? In this work, we introduce a framework for using weak supervision to automatically disentangle this semantically meaningful subspace of tasks from the enormous space of nonsensical "chaff" tasks. We show that this learned subspace enables efficient exploration and provides a representation that captures distance between states. On a variety of challenging, vision-based continuous control problems, our approach leads to substantial performance gains, particularly as the complexity of the environment grows.
Improving Policy-Constrained Kidney Exchange via Pre-Screening
https://papers.nips.cc/paper_files/paper/2020/hash/1bda4c789c38754f639a376716c5859f-Abstract.html
Duncan McElfresh, Michael Curry, Tuomas Sandholm, John Dickerson
https://papers.nips.cc/paper_files/paper/2020/hash/1bda4c789c38754f639a376716c5859f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9949-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1bda4c789c38754f639a376716c5859f-Supplemental.zip
In barter exchanges, participants swap goods with one another without exchanging money; these exchanges are often facilitated by a central clearinghouse, with the goal of maximizing the aggregate quality (or number) of swaps. Barter exchanges are subject to many forms of uncertainty--in participant preferences, the feasibility and quality of various swaps, and so on. Our work is motivated by kidney exchange, a real-world barter market in which patients in need of a kidney transplant swap their willing living donors, in order to find a better match. Modern exchanges include 2- and 3-way swaps, making the kidney exchange clearing problem NP-hard. Planned transplants often \emph{fail} for a variety of reasons--if the donor organ is rejected by the recipient's medical team, or if the donor and recipient are found to be medically incompatible. Due to 2- and 3-way swaps, failed transplants can ``cascade'' through an exchange; one US-based exchange estimated that about $85\%$ of planned transplants failed in 2019. Many optimization-based approaches have been designed to avoid these failures; however, most exchanges cannot implement these methods due to legal and policy constraints. Instead, we consider a setting where exchanges can \emph{query} the preferences of certain donors and recipients--asking whether they would accept a particular transplant. We characterize this as a two-stage decision problem, in which the exchange program (a) queries a small number of transplants before committing to a matching, and (b) constructs a matching according to a fixed policy. We show that selecting these edges is a challenging combinatorial problem, which is non-monotonic and non-submodular, in addition to being NP-hard. We propose both a greedy heuristic and a Monte Carlo tree search, which outperforms previous approaches, using experiments on both synthetic data and real kidney exchange data from the United Network for Organ Sharing.
Learning abstract structure for drawing by efficient motor program induction
https://papers.nips.cc/paper_files/paper/2020/hash/1c104b9c0accfca52ef21728eaf01453-Abstract.html
Lucas Tian, Kevin Ellis, Marta Kryven, Josh Tenenbaum
https://papers.nips.cc/paper_files/paper/2020/hash/1c104b9c0accfca52ef21728eaf01453-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9950-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1c104b9c0accfca52ef21728eaf01453-Supplemental.pdf
Humans flexibly solve new problems that differ from those previously practiced. This ability to flexibly generalize is supported by learned concepts that represent useful structure common across different problems. Here we develop a naturalistic drawing task to study how humans rapidly acquire structured prior knowledge. The task requires drawing visual figures that share underlying structure, based on a set of composable geometric rules and simple objects. We show that people spontaneously learn abstract drawing procedures that support generalization, and propose a model of how learners can discover these reusable drawing procedures. Trained in the same setting as humans, and constrained to produce efficient motor actions, this model discovers new drawing program subroutines that generalize to test figures and resemble learned features of human behavior. These results suggest that two principles guiding motor program induction in the model - abstraction (programs can reflect high-level structure that ignores figure-specific details) and compositionality (new programs are discovered by recombining previously learned programs) - are key for explaining how humans learn structured internal representations that guide flexible reasoning and learning.
Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? --- A Neural Tangent Kernel Perspective
https://papers.nips.cc/paper_files/paper/2020/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html
Kaixuan Huang, Yuqing Wang, Molei Tao, Tuo Zhao
https://papers.nips.cc/paper_files/paper/2020/hash/1c336b8080f82bcc2cd2499b4c57261d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9951-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1c336b8080f82bcc2cd2499b4c57261d-Supplemental.pdf
Deep residual networks (ResNets) have demonstrated better generalization performance than deep feedforward networks (FFNets). However, the theory behind such a phenomenon is still largely unknown. This paper studies this fundamental problem in deep learning from a so-called ``neural tangent kernel'' perspective. Specifically, we first show that under proper conditions, as the width goes to infinity, training deep ResNets can be viewed as learning reproducing kernel functions with some kernel function. We then compare the kernel of deep ResNets with that of deep FFNets and discover that the class of functions induced by the kernel of FFNets is asymptotically not learnable, as the depth goes to infinity. In contrast, the class of functions induced by the kernel of ResNets does not exhibit such degeneracy. Our discovery partially justifies the advantages of deep ResNets over deep FFNets in generalization abilities. Numerical results are provided to support our claim.
Dual Instrumental Variable Regression
https://papers.nips.cc/paper_files/paper/2020/hash/1c383cd30b7c298ab50293adfecb7b18-Abstract.html
Krikamol Muandet, Arash Mehrjou, Si Kai Lee, Anant Raj
https://papers.nips.cc/paper_files/paper/2020/hash/1c383cd30b7c298ab50293adfecb7b18-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9952-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1c383cd30b7c298ab50293adfecb7b18-Supplemental.pdf
We present a novel algorithm for non-linear instrumental variable (IV) regression, DualIV, which simplifies traditional two-stage methods via a dual formulation. Inspired by problems in stochastic programming, we show that two-stage procedures for non-linear IV regression can be reformulated as a convex-concave saddle-point problem. Our formulation enables us to circumvent the first-stage regression which is a potential bottleneck in real-world applications. We develop a simple kernel-based algorithm with an analytic solution based on this formulation. Empirical results show that we are competitive with existing, more complicated algorithms for non-linear instrumental variable regression.
Stochastic Gradient Descent in Correlated Settings: A Study on Gaussian Processes
https://papers.nips.cc/paper_files/paper/2020/hash/1cb524b5a3f3f82be4a7d954063c07e2-Abstract.html
Hao Chen, Lili Zheng, Raed AL Kontar, Garvesh Raskutti
https://papers.nips.cc/paper_files/paper/2020/hash/1cb524b5a3f3f82be4a7d954063c07e2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9953-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cb524b5a3f3f82be4a7d954063c07e2-Supplemental.pdf
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for large-scale machine learning problems with independent samples due to their generalization performance and intrinsic computational advantage. However, the fact that the stochastic gradient is a biased estimator of the full gradient with correlated samples has left a gap in the theoretical understanding of how SGD behaves in correlated settings and has hindered its use in such cases. In this paper, we focus on the Gaussian process (GP) and take a step towards breaking this barrier by proving that minibatch SGD converges to a critical point of the full loss function, and recovers model hyperparameters with rate $O(\frac{1}{K})$ up to a statistical error term depending on the minibatch size. Numerical studies on both simulated and real datasets demonstrate that minibatch SGD has better generalization than state-of-the-art GP methods while reducing the computational burden and opening a new, previously unexplored, data size regime for GPs.
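As a rough illustration of the setting (not the authors' code), the sketch below runs minibatch SGD on the negative log marginal likelihood of an RBF-kernel GP; the hyperparameter names, learning rate, and batch size are assumptions for the example.

```python
import torch

def nll_minibatch(x, y, log_ls, log_sf, log_noise):
    """Negative log marginal likelihood of an RBF-kernel GP on one minibatch.
    Hyperparameters are optimized in log space to keep them positive."""
    ls, sf, noise = log_ls.exp(), log_sf.exp(), log_noise.exp()
    d2 = (x[:, None] - x[None, :]) ** 2
    K = sf**2 * torch.exp(-0.5 * d2 / ls**2) + noise**2 * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y[:, None], L)
    # 0.5 * y^T K^{-1} y + 0.5 * log det K (additive constant dropped).
    return 0.5 * (y[:, None] * alpha).sum() + torch.log(torch.diag(L)).sum()

# Toy data from a noisy sine; minibatch SGD over the GP hyperparameters.
torch.manual_seed(0)
X = torch.linspace(0, 6, 2000)
Y = torch.sin(X) + 0.1 * torch.randn_like(X)
params = [torch.zeros((), requires_grad=True) for _ in range(3)]  # log ls, sf, noise
opt = torch.optim.SGD(params, lr=0.01)
for step in range(200):
    idx = torch.randint(0, len(X), (64,))          # minibatch of correlated samples
    loss = nll_minibatch(X[idx], Y[idx], *params)
    opt.zero_grad(); loss.backward(); opt.step()
print([p.exp().item() for p in params])
```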
Interventional Few-Shot Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html
Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
https://papers.nips.cc/paper_files/paper/2020/hash/1cc8a8ea51cd0adddf5dab504a285915-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9954-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cc8a8ea51cd0adddf5dab504a285915-Supplemental.pdf
We uncover an often-overlooked deficiency in prevailing Few-Shot Learning (FSL) methods: the pre-trained knowledge is in fact a confounder that limits performance. This finding is rooted in our causal assumption: a Structural Causal Model (SCM) for the causalities among the pre-trained knowledge, sample features, and labels. Based on this SCM, we propose a novel FSL paradigm: Interventional Few-Shot Learning (IFSL). Specifically, we develop three effective IFSL algorithmic implementations based on the backdoor adjustment, which is essentially a causal intervention towards the SCM of many-shot learning: the upper bound of FSL in a causal view. It is worth noting that the contribution of IFSL is orthogonal to existing fine-tuning and meta-learning based FSL methods, hence IFSL can improve all of them, achieving a new 1-/5-shot state-of-the-art on miniImageNet, tieredImageNet, and cross-domain CUB. Code is released at https://github.com/yue-zhongqi/ifsl.
Minimax Value Interval for Off-Policy Evaluation and Policy Optimization
https://papers.nips.cc/paper_files/paper/2020/hash/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html
Nan Jiang, Jiawei Huang
https://papers.nips.cc/paper_files/paper/2020/hash/1cd138d0499a68f4bb72bee04bbec2d7-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9955-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cd138d0499a68f4bb72bee04bbec2d7-Supplemental.pdf
We study minimax methods for off-policy evaluation (OPE) using value functions and marginalized importance weights. Although these methods hold the promise of overcoming the exponential variance of traditional importance sampling, several key problems remain: (1) They require function approximation and are generally biased. For the sake of trustworthy OPE, is there any way to quantify the biases? (2) They are split into two styles (“weight-learning” vs “value-learning”). Can we unify them? In this paper we answer both questions positively. By slightly altering the derivation of previous methods (one from each style), we unify them into a single value interval that comes with a special type of double robustness: when either the value-function or the importance-weight class is well specified, the interval is valid and its length quantifies the misspecification of the other class. Our interval also provides a unified view of, and new insights into, some recent methods, and we further explore the implications of our results for exploration and exploitation in off-policy policy optimization with insufficient data coverage.
Biased Stochastic First-Order Methods for Conditional Stochastic Optimization and Applications in Meta Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1cdf14d1e3699d61d237cf76ce1c2dca-Abstract.html
Yifan Hu, Siqi Zhang, Xin Chen, Niao He
https://papers.nips.cc/paper_files/paper/2020/hash/1cdf14d1e3699d61d237cf76ce1c2dca-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9956-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cdf14d1e3699d61d237cf76ce1c2dca-Supplemental.pdf
Conditional stochastic optimization covers a variety of applications ranging from invariant learning and causal inference to meta-learning. However, constructing unbiased gradient estimators for such problems is challenging due to the composition structure. As an alternative, we propose a biased stochastic gradient descent (BSGD) algorithm and study the bias-variance tradeoff under different structural assumptions. We establish the sample complexities of BSGD for strongly convex, convex, and weakly convex objectives under smooth and non-smooth conditions. Our lower bound analysis shows that the sample complexities of BSGD cannot be improved for general convex and nonconvex objectives, except for smooth nonconvex objectives with a Lipschitz continuous gradient estimator. For this special setting, we propose an accelerated algorithm called biased SpiderBoost (BSpiderBoost) that matches the lower bound complexity. We further conduct numerical experiments on invariant logistic regression and model-agnostic meta-learning to illustrate the performance of BSGD and BSpiderBoost.
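A minimal sketch of the biased estimator idea, under assumed toy functions `f` and `g`: the inner conditional expectation is replaced by an average over m samples, which yields a biased but usable gradient whose bias shrinks as m grows.

```python
import torch

def bsgd_step(x, sample_outer, sample_inner, f, g, m, lr):
    """One step of biased SGD for F(x) = E_xi[ f( E_{eta|xi}[ g(x, eta) ], xi ) ].
    The inner expectation is replaced by an average over m samples, making the
    gradient estimator biased; the bias shrinks as m grows."""
    xi = sample_outer()
    etas = [sample_inner(xi) for _ in range(m)]
    inner = torch.stack([g(x, eta) for eta in etas]).mean(0)  # plug-in inner mean
    loss = f(inner, xi)
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x -= lr * grad
    return x

# Toy instance: minimize E[(E[x + eta])^2] = x^2, so the minimizer is x = 0.
x = torch.tensor(3.0, requires_grad=True)
f = lambda inner, xi: inner ** 2
g = lambda x, eta: x + eta
for _ in range(500):
    x = bsgd_step(x, lambda: None, lambda xi: torch.randn(()), f, g, m=8, lr=0.05)
print(x.item())  # close to 0
```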
ShiftAddNet: A Hardware-Inspired Deep Network
https://papers.nips.cc/paper_files/paper/2020/hash/1cf44d7975e6c86cffa70cae95b5fbb2-Abstract.html
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
https://papers.nips.cc/paper_files/paper/2020/hash/1cf44d7975e6c86cffa70cae95b5fbb2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9957-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cf44d7975e6c86cffa70cae95b5fbb2-Supplemental.pdf
Multiplication (e.g., convolution) is arguably a cornerstone of modern deep neural networks (DNNs). However, intensive multiplications cause expensive resource costs that challenge DNNs' deployment on resource-constrained edge devices, driving several attempts for multiplication-less deep networks. This paper presents ShiftAddNet, whose main inspiration is drawn from a common practice in energy-efficient hardware implementation, that is, multiplication can instead be performed with additions and logical bit-shifts. We leverage this idea to explicitly parameterize deep networks in this way, yielding a new type of deep network that involves only bit-shift and additive weight layers. This hardware-inspired ShiftAddNet immediately leads to both energy-efficient inference and training, without compromising the expressive capacity compared to standard DNNs. The two complementary operation types (bit-shift and add) additionally enable finer-grained control of the model's learning capacity, leading to a more flexible trade-off between accuracy and (training) efficiency, as well as improved robustness to quantization and pruning. We conduct extensive experiments and ablation studies, all backed up by our FPGA-based ShiftAddNet implementation and energy measurements. Compared to existing DNNs or other multiplication-less models, ShiftAddNet aggressively reduces over 80% of the hardware-quantified energy cost of DNN training and inference, while offering comparable or better accuracies. Codes and pre-trained models are available at https://github.com/RICE-EIC/ShiftAddNet.
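A hedged illustration of the shift idea (not the released ShiftAddNet code): projecting weights onto signed powers of two so each multiply could in principle be realized with a sign flip plus a bit-shift; the helper name and tolerance are assumptions.

```python
import numpy as np

def to_shift_weights(w):
    """Project real-valued weights onto signed powers of two, so each multiply
    could be realized in hardware as a sign flip plus a bit-shift."""
    sign = np.sign(w)
    power = np.round(np.log2(np.abs(w) + 1e-12))
    return sign * (2.0 ** power)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))             # a batch of activations
W = rng.normal(scale=0.5, size=(8, 8))  # a dense "multiplicative" layer
W_shift = to_shift_weights(W)
# A shift layer's output: algebraically the same matmul, but every scalar
# product now needs only shifts and adds; an add layer would keep W as-is.
print(np.abs(x @ W - x @ W_shift).mean())
```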
Network-to-Network Translation with Conditional Invertible Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/1cfa81af29c6f2d8cacb44921722e753-Abstract.html
Robin Rombach, Patrick Esser, Bjorn Ommer
https://papers.nips.cc/paper_files/paper/2020/hash/1cfa81af29c6f2d8cacb44921722e753-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9958-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1cfa81af29c6f2d8cacb44921722e753-Supplemental.pdf
Given the ever-increasing computational costs of modern machine learning models, we need to find new ways to reuse such expert models and thus tap into the resources that have been invested in their creation. Recent work suggests that the power of these massive models is captured by the representations they learn. Therefore, we seek a model that can relate between different existing representations and propose to solve this task with a conditionally invertible network. This network demonstrates its capability by (i) providing generic transfer between diverse domains, (ii) enabling controlled content synthesis by allowing modification in other domains, and (iii) facilitating diagnosis of existing representations by translating them into interpretable domains such as images. Our domain transfer network can translate between fixed representations without having to learn or finetune them. This allows users to utilize various existing domain-specific expert models from the literature that had been trained with extensive computational resources. Experiments on diverse conditional image synthesis tasks, competitive image modification results and experiments on image-to-image and text-to-image generation demonstrate the generic applicability of our approach. For example, we translate between BERT and BigGAN, state-of-the-art text and image models, to provide text-to-image generation, which neither expert can perform on its own.
Intra-Processing Methods for Debiasing Neural Networks
https://papers.nips.cc/paper_files/paper/2020/hash/1d8d70dddf147d2d92a634817f01b239-Abstract.html
Yash Savani, Colin White, Naveen Sundar Govindarajulu
https://papers.nips.cc/paper_files/paper/2020/hash/1d8d70dddf147d2d92a634817f01b239-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9959-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1d8d70dddf147d2d92a634817f01b239-Supplemental.zip
In this work, we initiate the study of a new paradigm in debiasing research, intra-processing, which sits between in-processing and post-processing methods. Intra-processing methods are designed specifically to debias large models which have been trained on a generic dataset, and fine-tuned on a more specific task. We show how to repurpose existing in-processing methods for this use-case, and we also propose three baseline algorithms: random perturbation, layerwise optimization, and adversarial debiasing. We evaluate these methods across three popular datasets from the AIF360 toolkit, as well as on the CelebA faces dataset. Our code is available at https://github.com/abacusai/intraprocessing_debiasing.
Finding Second-Order Stationary Points Efficiently in Smooth Nonconvex Linearly Constrained Optimization Problems
https://papers.nips.cc/paper_files/paper/2020/hash/1da546f25222c1ee710cf7e2f7a3ff0c-Abstract.html
Songtao Lu, Meisam Razaviyayn, Bo Yang, Kejun Huang, Mingyi Hong
https://papers.nips.cc/paper_files/paper/2020/hash/1da546f25222c1ee710cf7e2f7a3ff0c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9960-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1da546f25222c1ee710cf7e2f7a3ff0c-Supplemental.pdf
This paper proposes two efficient algorithms for computing approximate second-order stationary points (SOSPs) of problems with generic smooth non-convex objective functions and generic linear constraints. While finding (approximate) SOSPs for the class of smooth non-convex linearly constrained problems is computationally intractable, we show that generic problem instances in this class can be solved efficiently. Specifically, for a generic problem instance, we show that a certain strict complementarity (SC) condition holds for all Karush-Kuhn-Tucker (KKT) solutions. Based on this condition, we design an algorithm named Successive Negative-curvature grAdient Projection (SNAP), which performs either conventional gradient projection steps or negative-curvature-based projection steps to find SOSPs. SNAP is a second-order algorithm that requires $\widetilde{\mathcal{O}}(\max\{1/\epsilon^2_G,1/\epsilon^3_H\})$ iterations to compute an $(\epsilon_G,\epsilon_H)$-SOSP, where $\widetilde{\mathcal{O}}$ hides the iteration complexity for eigenvalue-decomposition. Building on SNAP, we propose a first-order algorithm, named SNAP$^+$, that requires $\mathcal{O}(1/\epsilon^{2.5})$ iterations to compute an $(\epsilon, \sqrt{\epsilon})$-SOSP. The per-iteration computational complexities of our algorithms are polynomial in the number of constraints and problem dimension. To the best of our knowledge, this is the first time that first-order algorithms with polynomial per-iteration complexity and global sublinear rate have been designed to find SOSPs of the important class of non-convex problems with linear constraints (almost surely).
Model-based Policy Optimization with Unsupervised Model Adaptation
https://papers.nips.cc/paper_files/paper/2020/hash/1dc3a89d0d440ba31729b0ba74b93a33-Abstract.html
Jian Shen, Han Zhao, Weinan Zhang, Yong Yu
https://papers.nips.cc/paper_files/paper/2020/hash/1dc3a89d0d440ba31729b0ba74b93a33-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9961-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1dc3a89d0d440ba31729b0ba74b93a33-Supplemental.pdf
Model-based reinforcement learning methods learn a dynamics model with real data sampled from the environment and leverage it to generate simulated data to derive an agent. However, the potential distribution mismatch between simulated and real data can degrade performance. Despite much effort devoted to reducing this distribution mismatch, existing methods fail to address it explicitly. In this paper, we investigate how to bridge the gap between real and simulated data arising from inaccurate model estimation, for better policy optimization. We first derive a lower bound of the expected return, which naturally inspires a bound-maximization algorithm that aligns the simulated and real data distributions. To this end, we propose a novel model-based reinforcement learning framework, AMPO, which introduces unsupervised model adaptation to minimize the integral probability metric (IPM) between feature distributions from real and simulated data. Instantiating our framework with the Wasserstein-1 distance gives a practical model-based approach. Empirically, our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
Implicit Regularization and Convergence for Weight Normalization
https://papers.nips.cc/paper_files/paper/2020/hash/1de7d2b90d554be9f0db1c338e80197d-Abstract.html
Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel Ward, Qiang Liu
https://papers.nips.cc/paper_files/paper/2020/hash/1de7d2b90d554be9f0db1c338e80197d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9962-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1de7d2b90d554be9f0db1c338e80197d-Supplemental.pdf
Normalization methods such as batch, weight, instance, and layer normalization are commonly used in modern machine learning. Here, we study the weight normalization (WN) method \cite{salimans2016weight} and a variant called reparametrized projected gradient descent (rPGD) for overparametrized least squares regression and some more general loss functions. WN and rPGD reparametrize the weights with a scale $g$ and a unit vector such that the objective function becomes \emph{non-convex}. We show that this non-convex formulation has beneficial regularization effects compared to gradient descent on the original objective. These methods adaptively regularize the weights and \emph{converge linearly} close to the minimum $\ell_2$ norm solution even for initializations far from zero. For certain two-phase variants, they can converge to the min norm solution. This is different from the behavior of gradient descent, which converges to the min norm solution only when started at zero and is thus more sensitive to initialization.
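A minimal weight-normalization sketch on an overparametrized least-squares problem, assuming a PyTorch setup; the layer name, learning rate, and dimensions are illustrative choices, not from the paper.

```python
import torch

class WeightNormLinear(torch.nn.Module):
    """Minimal weight-normalized linear layer: w = g * v / ||v||, the WN
    reparameterization studied in the paper (a sketch, not the authors' code)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.v = torch.nn.Parameter(torch.randn(d_out, d_in))
        self.g = torch.nn.Parameter(torch.ones(d_out))
    def forward(self, x):
        w = self.g[:, None] * self.v / self.v.norm(dim=1, keepdim=True)
        return x @ w.T

# Overparameterized least squares: more features (50) than samples (20).
torch.manual_seed(0)
X, y = torch.randn(20, 50), torch.randn(20, 1)
layer = WeightNormLinear(50, 1)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
for _ in range(2000):
    loss = ((layer(X) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
w = (layer.g[:, None] * layer.v / layer.v.norm(dim=1, keepdim=True)).detach()
print(loss.item(), w.norm().item())  # near-zero training loss; norm of the learned weights
```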
Geometric All-way Boolean Tensor Decomposition
https://papers.nips.cc/paper_files/paper/2020/hash/1def1713ebf17722cbe300cfc1c88558-Abstract.html
Changlin Wan, Wennan Chang, Tong Zhao, Sha Cao, Chi Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/1def1713ebf17722cbe300cfc1c88558-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9963-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1def1713ebf17722cbe300cfc1c88558-Supplemental.pdf
Boolean tensors have been broadly utilized in representing high dimensional logical data collected on spatial, temporal and/or other relational domains. Boolean Tensor Decomposition (BTD) factorizes a binary tensor into the Boolean sum of multiple rank-1 tensors, which is an NP-hard problem. Existing BTD methods have been limited by their high computational cost in applications to large-scale or higher-order tensors. In this work, we present a computationally efficient BTD algorithm, namely Geometric Expansion for all-order Tensor Factorization (GETF), that sequentially identifies the rank-1 basis components for a tensor from a geometric perspective. We conduct rigorous theoretical analysis on the validity as well as the algorithmic efficiency of GETF in decomposing tensors of any order. Experiments on both synthetic and real-world data demonstrate that GETF has significantly improved performance in reconstruction accuracy and extraction of latent structures, and is an order of magnitude faster than other state-of-the-art methods.
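To make the decomposition model concrete, here is a small sketch (not GETF itself) that reconstructs a Boolean tensor of any order as the logical OR of rank-1 tensors built from binary factor matrices; sizes and density are assumptions.

```python
import numpy as np

def boolean_reconstruct(factors):
    """Boolean sum (logical OR) of rank-1 binary tensors defined by the columns
    of the factor matrices; works for any tensor order."""
    rank = factors[0].shape[1]
    out = None
    for r in range(rank):
        rank1 = factors[0][:, r].astype(bool)
        for F in factors[1:]:
            rank1 = np.logical_and.outer(rank1, F[:, r].astype(bool))
        out = rank1 if out is None else (out | rank1)
    return out

rng = np.random.default_rng(0)
A, B, C = (rng.random((n, 3)) < 0.3 for n in (10, 12, 8))  # random binary factors
T = boolean_reconstruct([A, B, C])        # order-3 Boolean tensor of rank <= 3
print(T.shape, T.mean())                  # (10, 12, 8) and the density of ones
```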
Modular Meta-Learning with Shrinkage
https://papers.nips.cc/paper_files/paper/2020/hash/1e04b969bf040acd252e1faafb51f829-Abstract.html
Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew Hoffman, Nando de Freitas
https://papers.nips.cc/paper_files/paper/2020/hash/1e04b969bf040acd252e1faafb51f829-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9964-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e04b969bf040acd252e1faafb51f829-Supplemental.pdf
Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components. Updating only these task-specific modules then allows the model to be adapted to low-data tasks for as many steps as necessary without risking overfitting. Unfortunately, existing meta-learning methods either do not scale to long adaptation or else rely on handcrafted task-specific architectures. Here, we propose a meta-learning approach that obviates the need for this often sub-optimal hand-selection. In particular, we develop general techniques based on Bayesian shrinkage to automatically discover and learn both task-specific and general reusable modules. Empirically, we demonstrate that our method discovers a small set of meaningful task-specific modules and outperforms existing meta-learning approaches in domains like few-shot text-to-speech that have little task data and long adaptation horizons. We also show that existing meta-learning methods including MAML, iMAML, and Reptile emerge as special cases of our method.
A/B Testing in Dense Large-Scale Networks: Design and Inference
https://papers.nips.cc/paper_files/paper/2020/hash/1e0b802d5c0e1e8434a771ba7ff2c301-Abstract.html
Preetam Nandy, Kinjal Basu, Shaunak Chatterjee, Ye Tu
https://papers.nips.cc/paper_files/paper/2020/hash/1e0b802d5c0e1e8434a771ba7ff2c301-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9965-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e0b802d5c0e1e8434a771ba7ff2c301-Supplemental.zip
Design of experiments and estimation of treatment effects in large-scale networks, in the presence of strong interference, is a challenging and important problem. Most existing methods' performance deteriorates as the density of the network increases. In this paper, we present a novel strategy for accurately estimating the causal effects of a class of treatments in a dense large-scale network. First, we design an approximate randomized controlled experiment by solving an optimization problem to allocate treatments in the presence of competition among neighboring nodes. Then we apply an importance sampling adjustment to correct for any leftover bias (from the approximation) in estimating average treatment effects. We provide theoretical guarantees, verify robustness in a simulation study, and validate the scalability and usefulness of our procedure in a real-world experiment on a large social network.
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/1e14bfe2714193e7af5abc64ecbd6b46-Abstract.html
Vitaly Feldman, Chiyuan Zhang
https://papers.nips.cc/paper_files/paper/2020/hash/1e14bfe2714193e7af5abc64ecbd6b46-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9966-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e14bfe2714193e7af5abc64ecbd6b46-Supplemental.pdf
In this work we design experiments to test the key ideas of the long-tail theory of memorization (Feldman, 2019). The experiments require estimation of the influence of each training example on the accuracy at each test example, as well as memorization values of training examples. Estimating these quantities directly is computationally prohibitive, but we show that closely-related subsampled influence and memorization values can be estimated much more efficiently. Our experiments demonstrate the significant benefits of memorization for generalization on several standard benchmarks. They also provide quantitative and visually compelling evidence for the theory put forth in Feldman (2019).
Partially View-aligned Clustering
https://papers.nips.cc/paper_files/paper/2020/hash/1e591403ff232de0f0f139ac51d99295-Abstract.html
Zhenyu Huang, Peng Hu, Joey Tianyi Zhou, Jiancheng Lv, Xi Peng
https://papers.nips.cc/paper_files/paper/2020/hash/1e591403ff232de0f0f139ac51d99295-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9967-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e591403ff232de0f0f139ac51d99295-Supplemental.pdf
In this paper, we study one challenging issue in multi-view data clustering. To be specific, for two data matrices $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ corresponding to two views, we do not assume that $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ are fully aligned row-wise. Instead, we assume that only a small portion of the matrices has established correspondence in advance. Such a partially view-aligned problem (PVP) arises because capturing or establishing fully aligned multi-view data requires intensive labor, and, to the best of our knowledge, it has received little attention so far. To solve this practical and challenging problem, we propose a novel multi-view clustering method termed partially view-aligned clustering (PVC). To be specific, PVC proposes to use a differentiable surrogate of the non-differentiable Hungarian algorithm and recasts it as a pluggable module. As a result, the category-level correspondence of the unaligned data can be established in a latent space learned by a neural network, while a common space across different views is learned using the ``aligned'' data. Extensive experimental results show promising results of our method in clustering partially view-aligned data.
Partial Optimal Tranport with applications on Positive-Unlabeled Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1e6e25d952a0d639b676ee20d0519ee2-Abstract.html
Laetitia Chapel, Mokhtar Z. Alaya, Gilles Gasso
https://papers.nips.cc/paper_files/paper/2020/hash/1e6e25d952a0d639b676ee20d0519ee2-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9968-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e6e25d952a0d639b676ee20d0519ee2-Supplemental.pdf
The classical optimal transport problem seeks a transportation map that preserves the total mass between two probability distributions, requiring their masses to be equal. This may be too restrictive in some applications such as color or shape matching, since the distributions may have arbitrary masses and/or only a fraction of the total mass has to be transported. In this paper, we address the partial Wasserstein and Gromov-Wasserstein problems and propose exact algorithms to solve them. We showcase the new formulation in a positive-unlabeled (PU) learning application. To the best of our knowledge, this is the first application of optimal transport in this context, and we first highlight that partial Wasserstein-based metrics prove effective in usual PU learning settings. We then demonstrate that partial Gromov-Wasserstein metrics are efficient in scenarios in which the samples from the positive and the unlabeled datasets come from different domains or have different features.
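A hedged sketch of the standard dummy-point reduction for partial OT (assuming the POT library, `pip install pot`, and its `ot.emd` solver are available); this illustrates the problem being solved, not the authors' exact algorithms, and the helper name and cost choices are assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def partial_wasserstein_plan(a, b, C, s):
    """Partial OT via the dummy-point reduction: append one dummy bin to each
    marginal that absorbs the untransported mass, solve balanced OT, and read
    off the top-left block. `s` is the amount of mass actually transported."""
    big = 2 * C.max() + 1.0                      # discourage dummy-to-dummy transport
    C_ext = np.zeros((len(a) + 1, len(b) + 1))
    C_ext[:-1, :-1] = C
    C_ext[-1, -1] = big
    a_ext = np.append(a, b.sum() - s)
    b_ext = np.append(b, a.sum() - s)
    plan = ot.emd(a_ext, b_ext, C_ext)
    return plan[:-1, :-1]                        # transported mass sums to s

rng = np.random.default_rng(0)
x, y = rng.normal(size=(30, 2)), rng.normal(loc=1.0, size=(40, 2))
a, b = np.full(30, 1 / 30), np.full(40, 1 / 40)
C = ot.dist(x, y)                                # squared Euclidean cost
plan = partial_wasserstein_plan(a, b, C, s=0.5)  # move only half of the mass
print(plan.sum(), (plan * C).sum())
```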
Toward the Fundamental Limits of Imitation Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1e7875cf32d306989d80c14308f3a099-Abstract.html
Nived Rajaraman, Lin Yang, Jiantao Jiao, Kannan Ramchandran
https://papers.nips.cc/paper_files/paper/2020/hash/1e7875cf32d306989d80c14308f3a099-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9969-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e7875cf32d306989d80c14308f3a099-Supplemental.pdf
Imitation learning (IL) aims to mimic the behavior of an expert policy in a sequential decision-making problem given only demonstrations. In this paper, we focus on understanding the minimax statistical limits of IL in episodic Markov Decision Processes (MDPs). We first consider the setting where the learner is provided a dataset of $N$ expert trajectories ahead of time, and cannot interact with the MDP. Here, we show that the policy which mimics the expert whenever possible is in expectation $\lesssim \frac{|\mathcal{S}| H^2 \log (N)}{N}$ suboptimal compared to the value of the expert, even when the expert plays a stochastic policy. Here $\mathcal{S}$ is the state space and $H$ is the length of the episode. Furthermore, we establish a suboptimality lower bound of $\gtrsim |\mathcal{S}| H^2 / N$ which applies even if the expert is constrained to be deterministic, or if the learner is allowed to actively query the expert at visited states while interacting with the MDP for $N$ episodes. To our knowledge, this is the first algorithm with suboptimality having no dependence on the number of actions, under no additional assumptions. We then propose a novel algorithm based on minimum-distance functionals in the setting where the transition model is given and the expert is deterministic. The algorithm is suboptimal by $\lesssim |\mathcal{S}| H^{3/2} / N$, matching our lower bound up to a $\sqrt{H}$ factor, and breaks the $\mathcal{O}(H^2)$ error compounding barrier of IL.
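A toy sketch of the "mimic the expert whenever possible" policy analyzed in the no-interaction setting; the data format and the random fallback rule are assumptions made for illustration.

```python
import random
from collections import defaultdict

def fit_mimic_policy(trajectories):
    """'Mimic the expert whenever possible': at states that appear in the expert
    dataset, replay the empirical distribution of expert actions; at unseen
    states, fall back to an arbitrary (here: random) action."""
    seen = defaultdict(list)
    for traj in trajectories:
        for state, action in traj:
            seen[state].append(action)
    def policy(state, action_space):
        if state in seen:
            return random.choice(seen[state])   # empirical expert behavior
        return random.choice(action_space)      # unvisited state: no guarantee
    return policy

# Toy usage: 3 expert trajectories over integer states and actions {0, 1}.
expert_data = [[(0, 1), (1, 0), (2, 1)], [(0, 1), (2, 1), (3, 0)], [(0, 1), (1, 0)]]
pi = fit_mimic_policy(expert_data)
print(pi(0, [0, 1]), pi(99, [0, 1]))  # mimics at state 0, arbitrary at state 99
```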
Logarithmic Pruning is All You Need
https://papers.nips.cc/paper_files/paper/2020/hash/1e9491470749d5b0e361ce4f0b24d037-Abstract.html
Laurent Orseau, Marcus Hutter, Omar Rivasplata
https://papers.nips.cc/paper_files/paper/2020/hash/1e9491470749d5b0e361ce4f0b24d037-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9970-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1e9491470749d5b0e361ce4f0b24d037-Supplemental.pdf
The Lottery Ticket Hypothesis is a conjecture that every large neural network contains a subnetwork that, when trained in isolation, achieves comparable performance to the large network. An even stronger conjecture has been proven recently: Every sufficiently overparameterized network contains a subnetwork that, even without training, achieves comparable accuracy to the trained large network. This theorem, however, relies on a number of strong assumptions and guarantees a polynomial factor on the size of the large network compared to the target function. In this work, we remove the most limiting assumptions of this previous work while providing significantly tighter bounds: the overparameterized network only needs a logarithmic factor (in all variables but depth) number of neurons per weight of the target subnetwork.
Hold me tight! Influence of discriminative features on deep network boundaries
https://papers.nips.cc/paper_files/paper/2020/hash/1ea97de85eb634d580161c603422437f-Abstract.html
Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi, Pascal Frossard
https://papers.nips.cc/paper_files/paper/2020/hash/1ea97de85eb634d580161c603422437f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9971-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ea97de85eb634d580161c603422437f-Supplemental.pdf
Important insights towards the explainability of neural networks reside in the characteristics of their decision boundaries. In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary. This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets. We use this framework to reveal some intriguing properties of CNNs. Specifically, we rigorously confirm that neural networks exhibit a high invariance to non-discriminative features, and show that the decision boundaries of a DNN can only exist as long as the classifier is trained with some features that hold them together. Finally, we show that the construction of the decision boundary is extremely sensitive to small perturbations of the training samples, and that changes in certain directions can lead to sudden invariances in the orthogonal ones. This is precisely the mechanism that adversarial training uses to achieve robustness.
Learning from Mixtures of Private and Public Populations
https://papers.nips.cc/paper_files/paper/2020/hash/1ee942c6b182d0f041a2312947385b23-Abstract.html
Raef Bassily, Shay Moran, Anupama Nandi
https://papers.nips.cc/paper_files/paper/2020/hash/1ee942c6b182d0f041a2312947385b23-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9972-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ee942c6b182d0f041a2312947385b23-Supplemental.pdf
We initiate the study of a new model of supervised learning under privacy constraints. Imagine a medical study where a dataset is sampled from a population of both healthy and unhealthy individuals. Suppose healthy individuals have no privacy concerns (in such case, we call their data ``public'') while the unhealthy individuals desire stringent privacy protection for their data. In this example, the population (data distribution) is a mixture of private (unhealthy) and public (healthy) sub-populations that could be very different. Inspired by the above example, we consider a model in which the population $\mathcal{D}$ is a mixture of two possibly distinct sub-populations: a private sub-population $\mathcal{D}_{\mathrm{priv}}$ of private and sensitive data, and a public sub-population $\mathcal{D}_{\mathrm{pub}}$ of data with no privacy concerns. Each example drawn from $\mathcal{D}$ is assumed to contain a privacy-status bit that indicates whether the example is private or public. The goal is to design a learning algorithm that satisfies differential privacy only with respect to the private examples. Prior works in this context assumed a homogeneous population where private and public data arise from the same distribution, and in particular designed solutions which exploit this assumption. We demonstrate how to circumvent this assumption by considering, as a case study, the problem of learning linear classifiers in $\mathbb{R}^d$. We show that in the case where the privacy status is correlated with the target label (as in the above example), linear classifiers in $\mathbb{R}^d$ can be learned, in the agnostic as well as the realizable setting, with sample complexity which is comparable to that of the classical (non-private) PAC-learning. It is known that this task is impossible if all the data is considered private.
Adversarial Weight Perturbation Helps Robust Generalization
https://papers.nips.cc/paper_files/paper/2020/hash/1ef91c212e30e14bf125e9374262401f-Abstract.html
Dongxian Wu, Shu-Tao Xia, Yisen Wang
https://papers.nips.cc/paper_files/paper/2020/hash/1ef91c212e30e14bf125e9374262401f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9973-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Supplemental.pdf
The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years. Among these efforts, adversarial training is the most promising one, which flattens the \textit{input loss landscape} (loss change with respect to input) via training on adversarially perturbed examples. However, how the widely used \textit{weight loss landscape} (loss change with respect to weight) behaves in adversarial training is rarely explored. In this paper, we investigate the weight loss landscape from a new perspective, and identify a clear correlation between the flatness of the weight loss landscape and the robust generalization gap. Several well-recognized adversarial training improvements, such as early stopping, designing new objective functions, or leveraging unlabeled data, all implicitly flatten the weight loss landscape. Based on these observations, we propose a simple yet effective \textit{Adversarial Weight Perturbation (AWP)} to explicitly regularize the flatness of the weight loss landscape, forming a \textit{double-perturbation} mechanism in the adversarial training framework that adversarially perturbs both inputs and weights. Extensive experiments demonstrate that AWP indeed brings a flatter weight loss landscape and can be easily incorporated into various existing adversarial training methods to further boost their adversarial robustness.
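A simplified, single-step sketch of the double-perturbation idea (assuming PyTorch; `gamma` and the relative scaling are illustrative choices, and the input attack that produces `x_adv` is omitted).

```python
import torch

def awp_step(model, loss_fn, x_adv, y, opt, gamma=0.01):
    """Simplified Adversarial Weight Perturbation step (sketch, not the authors'
    code): move each weight in the loss-ascent direction by a step proportional
    to its own norm, take the training gradient at the perturbed weights,
    restore the weights, then apply the optimizer update."""
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()                  # ascent direction w.r.t. weights
    perturb = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            v = gamma * p.norm() * p.grad / (p.grad.norm() + 1e-12)
            p.add_(v)
            perturb[name] = v
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()                  # gradient at perturbed weights
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in perturb:
                p.sub_(perturb[name])                    # undo the weight perturbation
    opt.step()

# Toy usage; x_adv would normally come from a PGD attack on the inputs (not shown).
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x_adv, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
awp_step(model, torch.nn.functional.cross_entropy, x_adv, y, opt)
```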
Stateful Posted Pricing with Vanishing Regret via Dynamic Deterministic Markov Decision Processes
https://papers.nips.cc/paper_files/paper/2020/hash/1f10c3650a3aa5912dccc5789fd515e8-Abstract.html
Yuval Emek, Ron Lavi, Rad Niazadeh, Yangguang Shi
https://papers.nips.cc/paper_files/paper/2020/hash/1f10c3650a3aa5912dccc5789fd515e8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9974-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f10c3650a3aa5912dccc5789fd515e8-Supplemental.pdf
In this paper, a rather general online problem called \emph{dynamic resource allocation with capacity constraints (DRACC)} is introduced and studied in the realm of posted price mechanisms. This problem subsumes several applications of stateful pricing, including but not limited to posted prices for online job scheduling and matching over a dynamic bipartite graph. As the existing online learning techniques do not yield vanishing-regret mechanisms for this problem, we develop a novel online learning framework defined over deterministic Markov decision processes with \emph{dynamic} state transition and reward functions. We then prove that if the Markov decision process is guaranteed to admit an oracle that can simulate any given policy from any initial state with bounded loss --- a condition that is satisfied in the DRACC problem --- then the online learning problem can be solved with vanishing regret. Our proof technique is based on a reduction to online learning with \emph{switching cost}, in which an online decision maker incurs an extra cost every time she switches from one arm to another. We formally demonstrate this connection and further show how DRACC can be used in our proposed applications of stateful pricing.
Adversarial Self-Supervised Contrastive Learning
https://papers.nips.cc/paper_files/paper/2020/hash/1f1baa5b8edac74eb4eaa329f14a0361-Abstract.html
Minseon Kim, Jihoon Tack, Sung Ju Hwang
https://papers.nips.cc/paper_files/paper/2020/hash/1f1baa5b8edac74eb4eaa329f14a0361-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9975-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f1baa5b8edac74eb4eaa329f14a0361-Supplemental.zip
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data, they still require class labels. However, do we really need class labels at all for adversarially robust training of deep neural networks? In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data, which aims to maximize the similarity between a random augmentation of a data sample and its instance-wise adversarial perturbation. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy to state-of-the-art supervised adversarial learning methods, and significantly improved robustness against \emph{black box} and unseen types of attacks. Moreover, with further joint fine-tuning with a supervised adversarial loss, RoCL obtains even higher robust accuracy than using self-supervised learning alone. Notably, RoCL also demonstrates impressive results in robust transfer learning.
Normalizing Kalman Filters for Multivariate Time Series Analysis
https://papers.nips.cc/paper_files/paper/2020/hash/1f47cef5e38c952f94c5d61726027439-Abstract.html
Emmanuel de Bézenac, Syama Sundar Rangapuram, Konstantinos Benidis, Michael Bohlke-Schneider, Richard Kurle, Lorenzo Stella, Hilaf Hasson, Patrick Gallinari, Tim Januschowski
https://papers.nips.cc/paper_files/paper/2020/hash/1f47cef5e38c952f94c5d61726027439-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9976-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f47cef5e38c952f94c5d61726027439-Supplemental.pdf
This paper tackles the modelling of large, complex and multivariate time series panels in a probabilistic setting. To this end, we present a novel approach reconciling classical state space models with deep learning methods. By augmenting state space models with normalizing flows, we mitigate imprecisions stemming from idealized assumptions in state space models. The resulting model is highly flexible while still retaining many of the attractive properties of state space models, e.g., uncertainty and observation errors are properly accounted for, inference is tractable, sampling is efficient, and good generalization performance is observed, even in low data regimes. We demonstrate competitiveness against state-of-the-art deep learning methods on the tasks of forecasting real world data and handling varying levels of missing data.
Learning to summarize with human feedback
https://papers.nips.cc/paper_files/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul F. Christiano
https://papers.nips.cc/paper_files/paper/2020/hash/1f89885d556929e98d3ef9b86448f951-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9977-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f89885d556929e98d3ef9b86448f951-Supplemental.pdf
As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.
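A hedged sketch of the pairwise reward-model objective described above: the human-preferred summary should receive a higher scalar reward than the rejected one. The toy bag-of-tokens "reward model" is purely illustrative and not the paper's architecture.

```python
import torch

def preference_loss(reward_model, tokens_chosen, tokens_rejected):
    """Pairwise reward-model loss: train the model so that the human-preferred
    summary gets a higher scalar reward than the rejected one."""
    r_chosen = reward_model(tokens_chosen)
    r_rejected = reward_model(tokens_rejected)
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Toy stand-in reward model over bag-of-token-id features, purely illustrative.
torch.manual_seed(0)
vocab, dim = 100, 16
emb = torch.nn.EmbeddingBag(vocab, dim)
head = torch.nn.Linear(dim, 1)
reward = lambda tok: head(emb(tok)).squeeze(-1)
chosen = torch.randint(0, vocab, (8, 20))    # 8 preferred summaries, 20 tokens each
rejected = torch.randint(0, vocab, (8, 20))  # 8 dispreferred summaries
print(preference_loss(reward, chosen, rejected).item())
```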
Fourier Spectrum Discrepancies in Deep Network Generated Images
https://papers.nips.cc/paper_files/paper/2020/hash/1f8d87e1161af68b81bace188a1ec624-Abstract.html
Tarik Dzanic, Karan Shah, Freddie Witherden
https://papers.nips.cc/paper_files/paper/2020/hash/1f8d87e1161af68b81bace188a1ec624-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9978-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1f8d87e1161af68b81bace188a1ec624-Supplemental.zip
Advancements in deep generative models such as generative adversarial networks and variational autoencoders have resulted in the ability to generate realistic images that are visually indistinguishable from real images, which raises concerns about their potential malicious usage. In this paper, we present an analysis of the high-frequency Fourier modes of real and deep network generated images and show that deep network generated images share an observable, systematic shortcoming in replicating the attributes of these high-frequency modes. Using this, we propose a novel detection method based on the frequency spectrum of the images which is able to achieve an accuracy of up to 99.2% in classifying real and deep network generated images from various GAN and VAE architectures on a dataset of 5000 images with as few as 8 training examples. Furthermore, we show the impact of image transformations such as compression, cropping, and resolution reduction on the classification accuracy and suggest a method for modifying the high-frequency attributes of deep network generated images to mimic real images.
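A small sketch of the kind of high-frequency Fourier statistic such a detector could use (not the paper's method or threshold); the cutoff value and the low-pass toy example are assumptions.

```python
import numpy as np

def high_freq_energy(img, cutoff=0.75):
    """Fraction of an image's Fourier power that lies above `cutoff` of the
    Nyquist radius; a detector could be trained on statistics of this kind."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)   # normalized radius
    return power[r >= cutoff].sum() / power.sum()

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 64))
fy, fx = np.fft.fftfreq(64)[:, None], np.fft.fftfreq(64)[None, :]
smooth = np.real(np.fft.ifft2(np.fft.fft2(noise) * (np.hypot(fy, fx) < 0.1)))  # low-pass image
print(high_freq_energy(smooth), high_freq_energy(rng.normal(size=(64, 64))))   # small vs. large
```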
Lamina-specific neuronal properties promote robust, stable signal propagation in feedforward networks
https://papers.nips.cc/paper_files/paper/2020/hash/1fc214004c9481e4c8073e85323bfd4b-Abstract.html
Dongqi Han, Erik De Schutter, Sungho Hong
https://papers.nips.cc/paper_files/paper/2020/hash/1fc214004c9481e4c8073e85323bfd4b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9979-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc214004c9481e4c8073e85323bfd4b-Supplemental.pdf
Feedforward networks (FFN) are ubiquitous structures in neural systems and have been studied to understand mechanisms of reliable signal and information transmission. In many FFNs, neurons in one layer have intrinsic properties that are distinct from those in their pre-/postsynaptic layers, but how this affects network-level information processing remains unexplored. Here we show that layer-to-layer heterogeneity arising from lamina-specific cellular properties facilitates signal and information transmission in FFNs. Specifically, we found that signal transformations, made by each layer of neurons on an input-driven spike signal, demodulate signal distortions introduced by preceding layers. This mechanism boosts information transfer carried by a propagating spike signal, and thereby supports reliable spike signal and information transmission in a deep FFN. Our study suggests that distinct cell types in neural circuits, performing different computational functions, facilitate information processing on the whole.
Learning Dynamic Belief Graphs to Generalize on Text-Based Games
https://papers.nips.cc/paper_files/paper/2020/hash/1fc30b9d4319760b04fab735fbfed9a9-Abstract.html
Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Côté, Mikuláš Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, Will Hamilton
https://papers.nips.cc/paper_files/paper/2020/hash/1fc30b9d4319760b04fab735fbfed9a9-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9980-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fc30b9d4319760b04fab735fbfed9a9-Supplemental.pdf
Playing text-based games requires skills in processing natural language and sequential decision making. Achieving human-level performance on text-based games remains an open challenge, and prior research has largely relied on hand-crafted structured representations and heuristics. In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text. We propose a novel graph-aided transformer agent (GATA) that infers and updates latent belief graphs during planning to enable effective action selection by capturing the underlying game dynamics. GATA is trained using a combination of reinforcement and self-supervised learning. Our work demonstrates that the learned graph-based representations help agents converge to better policies than their text-only counterparts and facilitate effective generalization across game configurations. Experiments on 500+ unique games from the TextWorld suite show that our best agent outperforms text-based baselines by an average of 24.2%.
Triple descent and the two kinds of overfitting: where & why do they appear?
https://papers.nips.cc/paper_files/paper/2020/hash/1fd09c5f59a8ff35d499c0ee25a1d47e-Abstract.html
Stéphane d'Ascoli, Levent Sagun, Giulio Biroli
https://papers.nips.cc/paper_files/paper/2020/hash/1fd09c5f59a8ff35d499c0ee25a1d47e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9981-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd09c5f59a8ff35d499c0ee25a1d47e-Supplemental.zip
A recent line of research has highlighted the existence of a ``double descent'' phenomenon in deep learning, whereby increasing the number of training examples N causes the generalization error of neural networks to peak when N is of the same order as the number of parameters P. In earlier works, a similar phenomenon was shown to exist in simpler models such as linear regression, where the peak instead occurs when N is equal to the input dimension D. Since both peaks coincide with the interpolation threshold, they are often conflated in the literature. In this paper, we show that despite their apparent similarity, these two scenarios are inherently different. In fact, both peaks can co-exist when neural networks are applied to noisy regression tasks. The relative size of the peaks is then governed by the degree of nonlinearity of the activation function. Building on recent developments in the analysis of random feature models, we provide a theoretical ground for this sample-wise triple descent. As shown previously, the nonlinear peak at N=P is a true divergence caused by the extreme sensitivity of the output function to both the noise corrupting the labels and the initialization of the random features (or the weights in neural networks). This peak survives in the absence of noise, but can be suppressed by regularization. In contrast, the linear peak at N=D is solely due to overfitting the noise in the labels, and forms earlier during training. We show that this peak is implicitly regularized by the nonlinearity, which is why it only becomes salient at high noise and is weakly affected by explicit regularization. Throughout the paper, we compare the analytical results obtained in the random feature model with the outcomes of numerical experiments involving realistic neural networks.
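As a minimal illustration of the setting analyzed in the abstract above, the following sketch (not the authors' code; the teacher model, ReLU feature map, and all sizes are assumptions of this sketch) fits a ridgeless random feature regression while sweeping the number of samples N against the input dimension D and the number of features P, which is where the two sample-wise peaks appear:

    import numpy as np

    def random_feature_test_error(N, D=50, P=100, noise=0.1, n_test=2000, seed=0):
        # Teacher: noisy linear target; student: ReLU random features, min-norm fit.
        rng = np.random.default_rng(seed)
        w_star = rng.standard_normal(D) / np.sqrt(D)
        W = rng.standard_normal((D, P)) / np.sqrt(D)       # fixed random projection
        feats = lambda X: np.maximum(X @ W, 0.0)           # ReLU random features
        X = rng.standard_normal((N, D))
        y = X @ w_star + noise * rng.standard_normal(N)
        a = np.linalg.pinv(feats(X)) @ y                   # ridgeless least squares
        X_test = rng.standard_normal((n_test, D))
        y_test = X_test @ w_star
        return np.mean((feats(X_test) @ a - y_test) ** 2)

    # The test error typically peaks near N = D (linear peak) and N = P (nonlinear peak).
    for N in (25, 50, 100, 200, 400):
        print(N, random_feature_test_error(N))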
Multimodal Graph Networks for Compositional Generalization in Visual Question Answering
https://papers.nips.cc/paper_files/paper/2020/hash/1fd6c4e41e2c6a6b092eb13ee72bce95-Abstract.html
Raeid Saqur, Karthik Narasimhan
https://papers.nips.cc/paper_files/paper/2020/hash/1fd6c4e41e2c6a6b092eb13ee72bce95-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9982-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fd6c4e41e2c6a6b092eb13ee72bce95-Supplemental.zip
Compositional generalization is a key challenge in grounding natural language to visual perception. While deep learning models have achieved great success in multimodal tasks like visual question answering, recent studies have shown that they fail to generalize to new inputs that are simply an unseen combination of those seen in the training distribution. In this paper, we propose to tackle this challenge by employing neural factor graphs to induce a tighter coupling between concepts in different modalities (e.g. images and text). Graph representations are inherently compositional in nature and allow us to capture entities, attributes and relations in a scalable manner. Our model first creates a multimodal graph, processes it with a graph neural network to induce a factor correspondence matrix, and then outputs a symbolic program to predict answers to questions. Empirically, our model achieves close to perfect scores on a caption truth prediction problem and state-of-the-art results on the recently introduced CLOSURE dataset, improving on the mean overall accuracy across seven compositional templates by 4.77\% over previous approaches.
Learning Graph Structure With A Finite-State Automaton Layer
https://papers.nips.cc/paper_files/paper/2020/hash/1fdc0ee9d95c71d73df82ac8f0721459-Abstract.html
Daniel Johnson, Hugo Larochelle, Daniel Tarlow
https://papers.nips.cc/paper_files/paper/2020/hash/1fdc0ee9d95c71d73df82ac8f0721459-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9983-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/1fdc0ee9d95c71d73df82ac8f0721459-Supplemental.zip
Graph-based neural network models are producing strong results in a number of domains, in part because graphs provide flexibility to encode domain knowledge in the form of relational structure (edges) between nodes in the graph. In practice, edges are used both to represent intrinsic structure (e.g., abstract syntax trees of programs) and more abstract relations that aid reasoning for a downstream task (e.g., results of relevant program analyses). In this work, we study the problem of learning to derive abstract relations from the intrinsic graph structure. Motivated by their power in program analyses, we consider relations defined by paths on the base graph accepted by a finite-state automaton. We show how to learn these relations end-to-end by relaxing the problem into learning finite-state automata policies on a graph-based POMDP and then training these policies using implicit differentiation. The result is a differentiable Graph Finite-State Automaton (GFSA) layer that adds a new edge type (expressed as a weighted adjacency matrix) to a base graph. We demonstrate that this layer can find shortcuts in grid-world graphs and reproduce simple static analyses on Python programs. Additionally, we combine the GFSA layer with a larger graph-based model trained end-to-end on the variable misuse program understanding task, and find that using the GFSA layer leads to better performance than using hand-engineered semantic edges or other baseline methods for adding learned edge types.
A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions
https://papers.nips.cc/paper_files/paper/2020/hash/2000f6325dfc4fc3201fc45ed01c7a5d-Abstract.html
Yulong Lu, Jianfeng Lu
https://papers.nips.cc/paper_files/paper/2020/hash/2000f6325dfc4fc3201fc45ed01c7a5d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9984-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2000f6325dfc4fc3201fc45ed01c7a5d-Supplemental.pdf
This paper studies the universal approximation property of deep neural networks for representing probability distributions. Given a target distribution $\pi$ and a source distribution $p_z$ both defined on $\mathbb{R}^d$, we prove under some assumptions that there exists a deep neural network $g:\mathbb{R}^d\to \mathbb{R}$ with ReLU activation such that the push-forward measure $(\nabla g)_\# p_z$ of $p_z$ under the map $\nabla g$ is arbitrarily close to the target measure $\pi$. The closeness is measured by three classes of integral probability metrics between probability distributions: the $1$-Wasserstein distance, maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD). We prove upper bounds for the size (width and depth) of the deep neural network in terms of the dimension $d$ and the approximation error $\varepsilon$ with respect to the three discrepancies. In particular, the size of the neural network can grow exponentially in $d$ when the $1$-Wasserstein distance is used as the discrepancy, whereas for both MMD and KSD the size of the neural network depends on $d$ at most polynomially. Our proof relies on convergence estimates of empirical measures under the aforementioned discrepancies and semi-discrete optimal transport.
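For orientation, the push-forward measure referred to in the abstract above and the Wasserstein form of the approximation statement can be written out as follows (a restatement in the abstract's own notation, not an additional result from the paper):

    \[
    \big((\nabla g)_{\#} p_z\big)(A) \;=\; p_z\big((\nabla g)^{-1}(A)\big)
    \quad \text{for Borel sets } A \subseteq \mathbb{R}^d,
    \qquad
    W_1\big((\nabla g)_{\#} p_z,\, \pi\big) \;\le\; \varepsilon .
    \]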
Unsupervised object-centric video generation and decomposition in 3D
https://papers.nips.cc/paper_files/paper/2020/hash/20125fd9b2d43e340a35fb0278da235d-Abstract.html
Paul Henderson, Christoph H. Lampert
https://papers.nips.cc/paper_files/paper/2020/hash/20125fd9b2d43e340a35fb0278da235d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9985-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20125fd9b2d43e340a35fb0278da235d-Supplemental.zip
A natural approach to generative modeling of videos is to represent them as a composition of moving objects. Recent works model a set of 2D sprites over a slowly-varying background, but without considering the underlying 3D scene that gives rise to them. We instead propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background. Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects. We conduct detailed experiments on two datasets, going beyond the visual complexity supported by state-of-the-art generative approaches. We evaluate our method on depth-prediction and 3D object detection---tasks which cannot be addressed by those earlier works---and show it outperforms them even on 2D instance segmentation and tracking.
Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization
https://papers.nips.cc/paper_files/paper/2020/hash/201d7288b4c18a679e48b31c72c30ded-Abstract.html
Haoliang Li, Yufei Wang, Renjie Wan, Shiqi Wang, Tie-Qiang Li, Alex Kot
https://papers.nips.cc/paper_files/paper/2020/hash/201d7288b4c18a679e48b31c72c30ded-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9986-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/201d7288b4c18a679e48b31c72c30ded-Supplemental.pdf
Recently, we have witnessed great progress in the field of medical imaging classification by adopting deep neural networks. However, recent advanced models still require access to sufficiently large and representative datasets for training, which is often infeasible in clinically realistic environments. When trained on limited datasets, deep neural networks lack generalization capability, as a network trained on data from a certain distribution (e.g., data captured by a certain device vendor or patient population) may not generalize to data from another distribution. In this paper, we introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification. Motivated by the observation that the domain variability of medical images is to some extent compact, we propose to learn a representative feature space through variational encoding with a novel linear-dependency regularization term to capture the shareable information among medical data collected from different domains. As a result, the trained neural network is expected to be equipped with better generalization capability on ``unseen'' medical data. Experimental results on two challenging medical imaging classification tasks indicate that our method can achieve better cross-domain generalization capability compared with state-of-the-art baselines.
Multi-label classification: do Hamming loss and subset accuracy really conflict with each other?
https://papers.nips.cc/paper_files/paper/2020/hash/20479c788fb27378c2c99eadcf207e7f-Abstract.html
Guoqiang Wu, Jun Zhu
https://papers.nips.cc/paper_files/paper/2020/hash/20479c788fb27378c2c99eadcf207e7f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9987-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20479c788fb27378c2c99eadcf207e7f-Supplemental.pdf
Various evaluation measures have been developed for multi-label classification, including Hamming Loss (HL), Subset Accuracy (SA) and Ranking Loss (RL). However, there is a gap between empirical results and the existing theory: 1) an algorithm often empirically performs well on some measure(s) while poorly on others, yet a formal theoretical analysis is lacking; and 2) in small label space cases, the algorithms optimizing HL often have comparable or even better performance on the SA measure than those optimizing SA directly, while existing theoretical results show that SA and HL are conflicting measures. This paper attempts to fill this gap by analyzing the learning guarantees of the corresponding learning algorithms on both the SA and HL measures. We show that when a learning algorithm optimizes HL with its surrogate loss, it enjoys an error bound for the HL measure independent of $c$ (the number of labels), while the bound for the SA measure depends on at most $O(c)$. On the other hand, when directly optimizing SA with its surrogate loss, it has learning guarantees that depend on at most $O(\sqrt{c})$ for both the HL and SA measures. This explains the observation that when the label space is not large, optimizing HL with its surrogate loss can have promising performance for SA. We further show that our techniques are applicable to analyzing the learning guarantees of algorithms on other measures, such as RL. Finally, the theoretical analyses are supported by experimental results.
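A minimal sketch of the two measures compared in the abstract above, assuming binary label matrices of shape (n_samples, c); the variable names are hypothetical and not from the paper:

    import numpy as np

    def hamming_loss(Y, Y_hat):
        # Fraction of individual label positions predicted incorrectly.
        return np.mean(Y != Y_hat)

    def subset_accuracy(Y, Y_hat):
        # Fraction of examples whose entire label vector is predicted exactly.
        return np.mean(np.all(Y == Y_hat, axis=1))

    Y     = np.array([[1, 0, 1], [0, 1, 0]])
    Y_hat = np.array([[1, 0, 0], [0, 1, 0]])
    print(hamming_loss(Y, Y_hat))     # 1/6: one wrong position out of six
    print(subset_accuracy(Y, Y_hat))  # 1/2: only the second label set is exact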
A Novel Automated Curriculum Strategy to Solve Hard Sokoban Planning Instances
https://papers.nips.cc/paper_files/paper/2020/hash/2051bd70fc110a2208bdbd4a743e7f79-Abstract.html
Dieqiao Feng, Carla P. Gomes, Bart Selman
https://papers.nips.cc/paper_files/paper/2020/hash/2051bd70fc110a2208bdbd4a743e7f79-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9988-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2051bd70fc110a2208bdbd4a743e7f79-Supplemental.pdf
In recent years, we have witnessed tremendous progress in deep reinforcement learning (RL) for tasks such as Go, Chess, video games, and robot control. Nevertheless, other combinatorial domains, such as AI planning, still pose considerable challenges for RL approaches. The key difficulty in those domains is that a positive reward signal becomes {\em exponentially rare} as the minimal solution length increases. So, an RL approach loses its training signal. There has been promising recent progress by using a curriculum-driven learning approach that is designed to solve a single hard instance. We present a novel {\em automated} curriculum approach that dynamically selects from a pool of unlabeled training instances of varying task complexity guided by our {\em difficulty quantum momentum} strategy. We show how the smoothness of the task hardness impacts the final learning results. In particular, as the size of the instance pool increases, the ``hardness gap'' decreases, which facilitates a smoother automated curriculum based learning process. Our automated curriculum approach dramatically improves upon the previous approaches. We show our results on Sokoban, which is a traditional PSPACE-complete planning problem and presents a great challenge even for specialized solvers. Our RL agent can solve hard instances that are far out of reach for any previous state-of-the-art Sokoban solver. In particular, our approach can uncover plans that require hundreds of steps, while the best previous search methods would take many years of computing time to solve such instances. In addition, we show that we can further boost the RL performance with an intricate coupling of our automated curriculum approach with a curiosity-driven search strategy and a graph neural net representation.
Causal analysis of Covid-19 Spread in Germany
https://papers.nips.cc/paper_files/paper/2020/hash/205e73579f21c2ed134dbd6ce7e4a1ea-Abstract.html
Atalanti Mastakouri, Bernhard Schölkopf
https://papers.nips.cc/paper_files/paper/2020/hash/205e73579f21c2ed134dbd6ce7e4a1ea-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9989-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/205e73579f21c2ed134dbd6ce7e4a1ea-Supplemental.pdf
In this work, we study the causal relations among German regions in terms of the spread of Covid-19 since the beginning of the pandemic, taking into account the restriction policies that were applied by the different federal states. We loosen a strictly formulated assumption of a causal feature selection method for time series data, robust to latent confounders, which we subsequently apply to Covid-19 case numbers. We present findings about the spread of the virus in Germany and the causal impact of restriction measures, discussing the role of various policies in containing the spread. Since our results are based on rather limited target time series (only the numbers of reported cases), care should be exercised in interpreting them. However, it is encouraging that already such limited data seems to contain causal signals. This suggests that as more data becomes available, our causal approach may contribute towards meaningful causal analysis of political interventions on the development of Covid-19, and thus also towards the development of rational and data-driven methodologies for choosing interventions.
Locally private non-asymptotic testing of discrete distributions is faster using interactive mechanisms
https://papers.nips.cc/paper_files/paper/2020/hash/20b02dc95171540bc52912baf3aa709d-Abstract.html
Thomas Berrett, Cristina Butucea
https://papers.nips.cc/paper_files/paper/2020/hash/20b02dc95171540bc52912baf3aa709d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9990-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20b02dc95171540bc52912baf3aa709d-Supplemental.pdf
We find separation rates for testing multinomial or more general discrete distributions under the constraint of alpha-local differential privacy. We construct efficient randomized algorithms and test procedures, in both the case where only non-interactive privacy mechanisms are allowed and also in the case where all sequentially interactive privacy mechanisms are allowed. The separation rates are faster in the latter case. We prove general information theoretical bounds that allow us to establish the optimality of our algorithms among all pairs of privacy mechanisms and test procedures, in most usual cases. Considered examples include testing uniform, polynomially and exponentially decreasing distributions.
Adaptive Gradient Quantization for Data-Parallel SGD
https://papers.nips.cc/paper_files/paper/2020/hash/20b5e1cf8694af7a3c1ba4a87f073021-Abstract.html
Fartash Faghri, Iman Tabrizian, Ilia Markov, Dan Alistarh, Daniel M. Roy, Ali Ramezani-Kebrya
https://papers.nips.cc/paper_files/paper/2020/hash/20b5e1cf8694af7a3c1ba4a87f073021-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9991-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20b5e1cf8694af7a3c1ba4a87f073021-Supplemental.zip
Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters.
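To make the object being adapted concrete, here is a minimal sketch of a normalized, stochastically rounded gradient quantizer with fixed uniform levels; the fixed levels are a stand-in assumption of this sketch, since ALQ and AMQ instead fit the levels to the evolving gradient statistics:

    import numpy as np

    def quantize(g, num_levels=4, rng=np.random.default_rng(0)):
        # Scale by the gradient norm, stochastically round |g_i|/||g|| to one of
        # `num_levels` uniform levels in [0, 1], and keep the signs and the norm.
        norm = np.linalg.norm(g)
        if norm == 0:
            return g
        r = np.abs(g) / norm * (num_levels - 1)
        lower = np.floor(r)
        level = lower + (rng.random(g.shape) < (r - lower))   # unbiased rounding
        return np.sign(g) * norm * level / (num_levels - 1)

    g = np.array([0.3, -1.2, 0.05, 0.7])
    print(quantize(g))   # E[quantize(g)] = g, so the SGD update stays unbiased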
Finite Continuum-Armed Bandits
https://papers.nips.cc/paper_files/paper/2020/hash/20c86a628232a67e7bd46f76fba7ce12-Abstract.html
Solenne Gaucher
https://papers.nips.cc/paper_files/paper/2020/hash/20c86a628232a67e7bd46f76fba7ce12-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9992-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20c86a628232a67e7bd46f76fba7ce12-Supplemental.pdf
We consider a situation where an agent has $T$ resources to be allocated to a larger number $N$ of actions. Each action can be completed at most once and results in a stochastic reward with unknown mean. The goal of the agent is to maximize her cumulative reward. Non-trivial strategies are possible when side information on the actions is available, for example in the form of covariates. Focusing on a nonparametric setting, where the mean reward is an unknown function of a one-dimensional covariate, we propose an optimal strategy for this problem. Under natural assumptions on the reward function, we prove that the optimal regret scales as $O(T^{1/3})$ up to poly-logarithmic factors when the budget $T$ is proportional to the number of actions $N$. When $T$ becomes small compared to $N$, a smooth transition occurs. When the ratio $T/N$ decreases from a constant to $N^{-1/3}$, the regret increases progressively up to the $O(T^{1/2})$ rate encountered in continuum-armed bandits.
Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies
https://papers.nips.cc/paper_files/paper/2020/hash/20d749bc05f47d2bd3026ce457dcfd8e-Abstract.html
Itai Gat, Idan Schwartz, Alexander Schwing, Tamir Hazan
https://papers.nips.cc/paper_files/paper/2020/hash/20d749bc05f47d2bd3026ce457dcfd8e-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9993-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/20d749bc05f47d2bd3026ce457dcfd8e-Supplemental.pdf
Many recent datasets contain a variety of different data modalities, for instance, image, question, and answer data in visual question answering (VQA). When training deep net classifiers on those multi-modal datasets, the modalities get exploited at different scales, i.e., some modalities can more easily contribute to the classification results than others. This is suboptimal because the classifier is inherently biased towards a subset of the modalities. To alleviate this shortcoming, we propose a novel regularization term based on the functional entropy. Intuitively, this term encourages balancing the contribution of each modality to the classification result. However, regularization with the functional entropy is challenging. To address this, we develop a method based on the log-Sobolev inequality, which bounds the functional entropy with the functional Fisher information. Intuitively, this maximizes the amount of information that the modalities contribute. On the two challenging multi-modal datasets VQA-CPv2 and SocialIQ, we obtain state-of-the-art results while more uniformly exploiting the modalities. In addition, we demonstrate the efficacy of our method on Colored MNIST.
Compact task representations as a normative model for higher-order brain activity
https://papers.nips.cc/paper_files/paper/2020/hash/2109737282d2c2de4fc5534be26c9bb6-Abstract.html
Severin Berger, Christian K. Machens
https://papers.nips.cc/paper_files/paper/2020/hash/2109737282d2c2de4fc5534be26c9bb6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9994-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2109737282d2c2de4fc5534be26c9bb6-Supplemental.pdf
Higher-order brain areas such as the frontal cortices are considered essential for the flexible solution of tasks. However, the precise computational role of these areas is still debated. Indeed, even for the simplest of tasks, we cannot really explain how the measured brain activity, which evolves over time in complicated ways, relates to the task structure. Here, we follow a normative approach, based on integrating the principle of efficient coding with the framework of Markov decision processes (MDP). More specifically, we focus on MDPs whose state is based on action-observation histories, and we show how to compress the state space such that unnecessary redundancy is eliminated, while task-relevant information is preserved. We show that the efficiency of a state space representation depends on the (long-term) behavioural goal of the agent, and we distinguish between model-based and habitual agents. We apply our approach to simple tasks that require short-term memory, and we show that the efficient state space representations reproduce the key dynamical features of recorded neural activity in frontal areas (such as ramping, sequentiality, persistence). If we additionally assume that neural systems are subject to accuracy-cost tradeoffs, we find a surprising match to neural data on a population level.
Robust-Adaptive Control of Linear Systems: beyond Quadratic Costs
https://papers.nips.cc/paper_files/paper/2020/hash/211b39255232ab59ce78f2e28cd0292b-Abstract.html
Edouard Leurent, Odalric-Ambrym Maillard, Denis Efimov
https://papers.nips.cc/paper_files/paper/2020/hash/211b39255232ab59ce78f2e28cd0292b-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9995-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/211b39255232ab59ce78f2e28cd0292b-Supplemental.pdf
We consider the problem of robust and adaptive model predictive control (MPC) of a linear system, with unknown parameters that are learned along the way (adaptive), in a critical setting where failures must be prevented (robust). This problem has been studied from different perspectives by different communities. However, the existing theory deals only with the case of quadratic costs (the LQ problem), which limits applications to stabilisation and tracking tasks only. In order to handle more general (non-convex) costs that naturally arise in many practical problems, we carefully select and bring together several tools from different communities, namely non-asymptotic linear regression, recent results in interval prediction, and tree-based planning. Combining and adapting the theoretical guarantees at each layer is non-trivial, and we provide the first end-to-end suboptimality analysis for this setting. Interestingly, our analysis naturally adapts to handle many models and combines with a data-driven robust model selection strategy, which enables us to relax the modelling assumptions. Last, we strive to preserve tractability at every stage of the method, which we illustrate on two challenging simulated environments.
Co-exposure Maximization in Online Social Networks
https://papers.nips.cc/paper_files/paper/2020/hash/212ab20dbdf4191cbcdcf015511783f4-Abstract.html
Sijing Tu, Cigdem Aslay, Aristides Gionis
https://papers.nips.cc/paper_files/paper/2020/hash/212ab20dbdf4191cbcdcf015511783f4-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9996-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/212ab20dbdf4191cbcdcf015511783f4-Supplemental.zip
We show that the problem of maximizing co-exposure is NP-hard and its objective function is neither submodular nor supermodular. However, by exploiting a connection to a submodular function that acts as a lower bound to the objective, we are able to devise a greedy algorithm with provable approximation guarantee. We further provide a scalable instantiation of our approximation algorithm by introducing a novel extension to the notion of random reverse-reachable sets for efficiently estimating the expected co-exposure. We experimentally demonstrate the quality of our proposal on real-world social networks.
UCLID-Net: Single View Reconstruction in Object Space
https://papers.nips.cc/paper_files/paper/2020/hash/21327ba33b3689e713cdff1641128004-Abstract.html
Benoit Guillard, Edoardo Remelli, Pascal Fua
https://papers.nips.cc/paper_files/paper/2020/hash/21327ba33b3689e713cdff1641128004-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9997-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/21327ba33b3689e713cdff1641128004-Supplemental.pdf
We demonstrate both on ShapeNet synthetic images, which are often used for benchmarking purposes, and on real-world images that our approach outperforms state-of-the-art ones. Furthermore, the single-view pipeline naturally extends to multi-view reconstruction, which we also show.
Reinforcement Learning for Control with Multiple Frequencies
https://papers.nips.cc/paper_files/paper/2020/hash/216f44e2d28d4e175a194492bde9148f-Abstract.html
Jongmin Lee, Byung-Jun Lee, Kee-Eung Kim
https://papers.nips.cc/paper_files/paper/2020/hash/216f44e2d28d4e175a194492bde9148f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9998-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Supplemental.zip
Many real-world sequential decision problems involve multiple action variables whose control frequencies are different, such that actions take their effects at different periods. While these problems can be formulated with the notion of multiple action persistences in factored-action MDP (FA-MDP), it is non-trivial to solve them efficiently since an action-persistent policy constructed from a stationary policy can be arbitrarily suboptimal, rendering solution methods for the standard FA-MDPs hardly applicable. In this paper, we formalize the problem of multiple control frequencies in RL and provide its efficient solution method. Our proposed method, Action-Persistent Policy Iteration (AP-PI), provides a theoretical guarantee on the convergence to an optimal solution while incurring only a factor of $|A|$ increase in time complexity during policy improvement step, compared to the standard policy iteration for FA-MDPs. Extending this result, we present Action-Persistent Actor-Critic (AP-AC), a scalable RL algorithm for high-dimensional control tasks. In the experiments, we demonstrate that AP-AC significantly outperforms the baselines on several continuous control tasks and a traffic control simulation, which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies.
Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval
https://papers.nips.cc/paper_files/paper/2020/hash/2172fde49301047270b2897085e4319d-Abstract.html
Stefano Sarao Mannelli, Giulio Biroli, Chiara Cammarota, Florent Krzakala, Pierfrancesco Urbani, Lenka Zdeborová
https://papers.nips.cc/paper_files/paper/2020/hash/2172fde49301047270b2897085e4319d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/9999-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2172fde49301047270b2897085e4319d-Supplemental.pdf
Despite the widespread use of gradient-based algorithms for optimising high-dimensional non-convex functions, understanding their ability to find good minima instead of being trapped in spurious ones remains to a large extent an open problem. Here we focus on gradient flow dynamics for phase retrieval from random measurements. When the ratio of the number of measurements over the input dimension is small, the dynamics remains trapped in spurious minima with large basins of attraction. We find analytically that above a critical ratio those critical points become unstable, developing a negative direction toward the signal. By numerical experiments we show that in this regime the gradient flow algorithm is not trapped; it drifts away from the spurious critical points along the unstable direction and succeeds in finding the global minimum. Using tools from statistical physics we characterise this phenomenon, which is related to a BBP-type transition in the Hessian of the spurious minima.
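For orientation, the objective whose gradient flow is analyzed in the abstract above is commonly written as the following phase retrieval loss; this is the standard formulation of the problem, stated here for context rather than quoted from the paper:

    \[
    \mathcal{L}(\mathbf{w}) \;=\; \frac{1}{4m} \sum_{\mu=1}^{m}
    \Big( \big(\mathbf{a}_\mu^{\top}\mathbf{w}\big)^{2} - y_\mu \Big)^{2},
    \qquad
    y_\mu = \big(\mathbf{a}_\mu^{\top}\mathbf{w}^{*}\big)^{2},
    \qquad
    \dot{\mathbf{w}}(t) = -\nabla \mathcal{L}\big(\mathbf{w}(t)\big),
    \]

with the ratio of measurements to input dimension playing the role of the control parameter described above.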
Neural Message Passing for Multi-Relational Ordered and Recursive Hypergraphs
https://papers.nips.cc/paper_files/paper/2020/hash/217eedd1ba8c592db97d0dbe54c7adfc-Abstract.html
Naganand Yadati
https://papers.nips.cc/paper_files/paper/2020/hash/217eedd1ba8c592db97d0dbe54c7adfc-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10000-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Supplemental.pdf
Message passing neural network (MPNN) has recently emerged as a successful framework by achieving state-of-the-art performance on many graph-based learning tasks. MPNN has also recently been extended to multi-relational graphs (each edge is labelled), and hypergraphs (each edge can connect any number of vertices). However, in real-world datasets involving text and knowledge, relationships are much more complex, with hyperedges that can be multi-relational, recursive, and ordered. Such structures present several unique challenges because it is not clear how to adapt MPNN to variable-sized hyperedges in them. In this work, we first unify existing MPNNs on different structures into the G-MPNN (Generalised MPNN) framework. Motivated by real-world datasets, we then propose a novel extension of the framework, MPNN-R (MPNN-Recursive), to handle recursively-structured data. Experimental results demonstrate the effectiveness of the proposed G-MPNN and MPNN-R.
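For reference, the generic message passing template that the abstract above generalizes can be written as follows (the standard MPNN update, not the paper's G-MPNN or MPNN-R operators):

    \[
    \mathbf{m}_v^{(t+1)} \;=\; \sum_{u \in \mathcal{N}(v)} M_t\!\left(\mathbf{h}_v^{(t)}, \mathbf{h}_u^{(t)}, \mathbf{e}_{uv}\right),
    \qquad
    \mathbf{h}_v^{(t+1)} \;=\; U_t\!\left(\mathbf{h}_v^{(t)}, \mathbf{m}_v^{(t+1)}\right),
    \]

where $\mathcal{N}(v)$ is the neighbourhood of vertex $v$; the challenge described above is replacing $\mathcal{N}(v)$ with variable-sized, possibly ordered and recursive, hyperedges.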
A Unified View of Label Shift Estimation
https://papers.nips.cc/paper_files/paper/2020/hash/219e052492f4008818b8adb6366c7ed6-Abstract.html
Saurabh Garg, Yifan Wu, Sivaraman Balakrishnan, Zachary Lipton
https://papers.nips.cc/paper_files/paper/2020/hash/219e052492f4008818b8adb6366c7ed6-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10001-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/219e052492f4008818b8adb6366c7ed6-Supplemental.pdf
Under label shift, the label distribution $p(y)$ might change but the class-conditional distributions $p(x|y)$ do not. There are two dominant approaches for estimating the label marginal. BBSE, a moment-matching approach based on confusion matrices, is provably consistent and provides interpretable error bounds. However, a maximum likelihood estimation approach, which we call MLLS, dominates empirically. In this paper, we present a unified view of the two methods and the first theoretical characterization of MLLS. Our contributions include (i) consistency conditions for MLLS, which include calibration of the classifier and a confusion matrix invertibility condition that BBSE also requires; (ii) a unified framework, casting BBSE as roughly equivalent to MLLS for a particular choice of calibration method; and (iii) a decomposition of MLLS's finite-sample error into terms reflecting miscalibration and estimation error. Our analysis attributes BBSE's statistical inefficiency to a loss of information due to coarse calibration. Experiments on synthetic data, MNIST, and CIFAR10 support our findings.
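A minimal numpy sketch of the moment-matching idea behind BBSE as described above, assuming hard predictions from a fixed black-box classifier f; the helper names are hypothetical, and the calibrated soft-prediction MLLS variant is not shown:

    import numpy as np

    def bbse_weights(y_src, yhat_src, yhat_tgt, num_classes):
        # Joint "confusion" matrix on held-out source data: C[i, j] = P(f(x) = i, y = j).
        C = np.zeros((num_classes, num_classes))
        for yh, y in zip(yhat_src, y_src):
            C[yh, y] += 1.0 / len(y_src)
        # Marginal of predictions on unlabeled target data: mu[i] = P_target(f(x) = i).
        mu = np.bincount(yhat_tgt, minlength=num_classes) / len(yhat_tgt)
        # Moment matching: solve C w = mu, so w[j] estimates q(y = j) / p(y = j).
        return np.linalg.solve(C, mu)

    # The target label marginal is then estimated as w * p_source(y).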
Optimal Private Median Estimation under Minimal Distributional Assumptions
https://papers.nips.cc/paper_files/paper/2020/hash/21d144c75af2c3a1cb90441bbb7d8b40-Abstract.html
Christos Tzamos, Emmanouil-Vasileios Vlatakis-Gkaragkounis, Ilias Zadik
https://papers.nips.cc/paper_files/paper/2020/hash/21d144c75af2c3a1cb90441bbb7d8b40-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10002-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/21d144c75af2c3a1cb90441bbb7d8b40-Supplemental.pdf
We study the fundamental task of estimating the median of an underlying distribution from a finite number of samples, under pure differential privacy constraints. We focus on distributions satisfying the minimal assumption that they have a positive density at a small neighborhood around the median. In particular, the distribution is allowed to output unbounded values and is not required to have finite moments. We compute the exact, up-to-constant terms, statistical rate of estimation for the median by providing nearly-tight upper and lower bounds. Furthermore, we design a polynomial-time differentially private algorithm which provably achieves the optimal performance. At a technical level, our results leverage a Lipschitz Extension Lemma which allows us to design and analyze differentially private algorithms solely on appropriately defined ``typical" instances of the samples.
Breaking the Communication-Privacy-Accuracy Trilemma
https://papers.nips.cc/paper_files/paper/2020/hash/222afbe0d68c61de60374b96f1d86715-Abstract.html
Wei-Ning Chen, Peter Kairouz, Ayfer Ozgur
https://papers.nips.cc/paper_files/paper/2020/hash/222afbe0d68c61de60374b96f1d86715-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10003-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/222afbe0d68c61de60374b96f1d86715-Supplemental.pdf
In particular, we consider the problems of mean estimation and frequency estimation under epsilon-local differential privacy and b-bit communication constraints. For mean estimation, we propose a scheme based on Kashin’s representation and random sampling, with order-optimal estimation error under both constraints. For frequency estimation, we present a mechanism that leverages the recursive structure of Walsh-Hadamard matrices and achieves order-optimal estimation error for all privacy levels and communication budgets. As a by-product, we also construct a distribution estimation mechanism that is rate-optimal for all privacy regimes and communication constraints, extending recent work that is limited to b = 1 and epsilon = O(1). Our results demonstrate that intelligent encoding under joint privacy and communication constraints can yield a performance that matches the optimal accuracy achievable under either constraint alone.
Audeo: Audio Generation for a Silent Performance Video
https://papers.nips.cc/paper_files/paper/2020/hash/227f6afd3b7f89b96c4bb91f95d50f6d-Abstract.html
Kun Su, Xiulong Liu, Eli Shlizerman
https://papers.nips.cc/paper_files/paper/2020/hash/227f6afd3b7f89b96c4bb91f95d50f6d-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10004-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/227f6afd3b7f89b96c4bb91f95d50f6d-Supplemental.zip
We present a novel system that gets as an input, video frames of a musician playing the piano, and generates the music for that video. The generation of music from visual cues is a challenging problem and it is not clear whether it is an attainable goal at all. Our main aim in this work is to explore the plausibility of such a transformation and to identify cues and components able to carry the association of sounds with visual events. To achieve the transformation we built a full pipeline named 'Audeo' containing three components. We first translate the video frames of the keyboard and the musician hand movements into raw mechanical musical symbolic representation Piano-Roll (Roll) for each video frame which represents the keys pressed at each time step. We then adapt the Roll to be amenable for audio synthesis by including temporal correlations. This step turns out to be critical for meaningful audio generation. In the last step, we implement Midi synthesizers to generate realistic music. Audeo converts video to audio smoothly and clearly with only a few setup constraints. We evaluate Audeo on piano performance videos collected from Youtube and obtain that their generated music is of reasonable audio quality and can be successfully recognized with high precision by popular music identification software.
Ode to an ODE
https://papers.nips.cc/paper_files/paper/2020/hash/228669109aa3ab1b4ec06b7722efb105-Abstract.html
Krzysztof M. Choromanski, Jared Quincy Davis, Valerii Likhosherstov, Xingyou Song, Jean-Jacques Slotine, Jacob Varley, Honglak Lee, Adrian Weller, Vikas Sindhwani
https://papers.nips.cc/paper_files/paper/2020/hash/228669109aa3ab1b4ec06b7722efb105-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10005-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/228669109aa3ab1b4ec06b7722efb105-Supplemental.pdf
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d). This nested system of two flows, where the parameter-flow is constrained to lie on the compact manifold, provides stability and effectiveness of training and solves the gradient vanishing-explosion problem which is intrinsically related to training deep neural network architectures such as Neural ODEs. Consequently, it leads to better downstream models, as we show on the example of training reinforcement learning policies with evolution strategies, and in the supervised learning setting, by comparing with previous SOTA baselines. We provide strong convergence results for our proposed mechanism that are independent of the width of the network, supporting our empirical studies. Our results show an intriguing connection between the theory of deep neural networks and the field of matrix flows on compact manifolds.
Self-Distillation Amplifies Regularization in Hilbert Space
https://papers.nips.cc/paper_files/paper/2020/hash/2288f691b58edecadcc9a8691762b4fd-Abstract.html
Hossein Mobahi, Mehrdad Farajtabar, Peter Bartlett
https://papers.nips.cc/paper_files/paper/2020/hash/2288f691b58edecadcc9a8691762b4fd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10006-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2288f691b58edecadcc9a8691762b4fd-Supplemental.zip
Knowledge distillation introduced in the deep learning context is a method to transfer knowledge from one architecture to another. In particular, when the architectures are identical, this is called self-distillation. The idea is to feed in predictions of the trained model as new target values for retraining (and iterate this loop possibly a few times). It has been empirically observed that the self-distilled model often achieves higher accuracy on held out data. Why this happens, however, has been a mystery: the self-distillation dynamics does not receive any new information about the task and solely evolves by looping over training. To the best of our knowledge, there is no rigorous understanding of why this happens. This work provides the first theoretical analysis of self-distillation. We focus on fitting a nonlinear function to training data, where the model space is Hilbert space and fitting is subject to L2 regularization in this function space. We show that self-distillation iterations modify regularization by progressively limiting the number of basis functions that can be used to represent the solution. This implies (as we also verify empirically) that while a few rounds of self-distillation may reduce over-fitting, further rounds may lead to under-fitting and thus worse performance.
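A minimal sketch of the self-distillation loop analyzed above, using kernel ridge regression as a stand-in for L2-regularized fitting in a Hilbert space; the data, kernel, and regularization strength are assumptions of this sketch:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(40, 1))
    y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(40)   # noisy training targets

    targets = y
    for step in range(5):
        model = KernelRidge(alpha=0.1, kernel="rbf", gamma=1.0)
        model.fit(X, targets)            # regularized fit in the RKHS
        targets = model.predict(X)       # next round trains on the model's own predictions
        print(step, np.mean((targets - y) ** 2))   # distance to the original labels grows
                                                   # as repeated distillation keeps smoothing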
Coupling-based Invertible Neural Networks Are Universal Diffeomorphism Approximators
https://papers.nips.cc/paper_files/paper/2020/hash/2290a7385ed77cc5592dc2153229f082-Abstract.html
Takeshi Teshima, Isao Ishikawa, Koichi Tojo, Kenta Oono, Masahiro Ikeda, Masashi Sugiyama
https://papers.nips.cc/paper_files/paper/2020/hash/2290a7385ed77cc5592dc2153229f082-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10007-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/2290a7385ed77cc5592dc2153229f082-Supplemental.zip
Invertible neural networks based on coupling flows (CF-INNs) have various machine learning applications such as image synthesis and representation learning. However, their desirable characteristics such as analytic invertibility come at the cost of restricting the functional forms. This poses a question on their representation power: are CF-INNs universal approximators for invertible functions? Without universality, there could be a well-behaved invertible transformation that a CF-INN can never approximate, which would render the model class unreliable. We answer this question by showing a convenient criterion: a CF-INN is universal if its layers contain affine coupling and invertible linear functions as special cases. As a corollary, we can affirmatively resolve a previously unsolved problem: whether normalizing flow models based on affine coupling can be universal distributional approximators. In the course of proving the universality, we prove a general theorem to show the equivalence of the universality for certain diffeomorphism classes, a theoretical insight that is of interest by itself.
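A minimal sketch of the affine coupling layer named in the criterion above, together with its analytic inverse; the scale and shift maps here are plain tanh/linear functions standing in for learned networks:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 2
    W_s, W_t = rng.standard_normal((d, 2)), rng.standard_normal((d, 2))
    s = lambda x1: np.tanh(x1 @ W_s)          # stand-ins for learned scale/shift nets
    t = lambda x1: x1 @ W_t

    def coupling_forward(x):
        x1, x2 = x[:, :d], x[:, d:]
        y2 = x2 * np.exp(s(x1)) + t(x1)       # transform half, conditioned on the rest
        return np.concatenate([x1, y2], axis=1)

    def coupling_inverse(y):
        y1, y2 = y[:, :d], y[:, d:]
        x2 = (y2 - t(y1)) * np.exp(-s(y1))    # analytic inverse, no iterative solve
        return np.concatenate([y1, x2], axis=1)

    x = rng.standard_normal((4, 4))
    assert np.allclose(coupling_inverse(coupling_forward(x)), x)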
Community detection using fast low-cardinality semidefinite programming
https://papers.nips.cc/paper_files/paper/2020/hash/229aeb9e2ae66f2fac1149e5240b2fdd-Abstract.html
Po-Wei Wang, J. Zico Kolter
https://papers.nips.cc/paper_files/paper/2020/hash/229aeb9e2ae66f2fac1149e5240b2fdd-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10008-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/229aeb9e2ae66f2fac1149e5240b2fdd-Supplemental.pdf
Modularity maximization has been a fundamental tool for understanding the community structure of a network, but the underlying optimization problem is nonconvex and NP-hard to solve. State-of-the-art algorithms like the Louvain or Leiden methods focus on different heuristics to help escape local optima, but they still depend on a greedy step that moves node assignments locally and is prone to getting trapped. In this paper, we propose a new class of low-cardinality algorithms that generalizes the local update to maximize a semidefinite relaxation derived from max-k-cut. This proposed algorithm is scalable, empirically achieves the global semidefinite optimality for small cases, and outperforms the state-of-the-art algorithms on real-world datasets with little additional time cost. From the algorithmic perspective, it also opens a new avenue for scaling up semidefinite programming when the solutions are sparse instead of low-rank.
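A minimal sketch of the modularity objective being maximized above, for an undirected graph given as a dense adjacency matrix; this only scores an assignment, whereas the paper's contribution is the low-cardinality semidefinite update that searches over assignments:

    import numpy as np

    def modularity(A, communities):
        # Q = (1 / 2m) * sum_ij (A_ij - k_i k_j / 2m) * [c_i == c_j]
        k = A.sum(axis=1)                  # (weighted) degrees
        two_m = k.sum()                    # 2m: total edge weight, counted both directions
        same = np.equal.outer(communities, communities)
        return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

    # Two triangles joined by a single bridge edge; splitting at the bridge scores well.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1
    print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # 5/14, about 0.357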
Modeling Noisy Annotations for Crowd Counting
https://papers.nips.cc/paper_files/paper/2020/hash/22bb543b251c39ccdad8063d486987bb-Abstract.html
Jia Wan, Antoni Chan
https://papers.nips.cc/paper_files/paper/2020/hash/22bb543b251c39ccdad8063d486987bb-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10009-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Supplemental.pdf
Traditional crowd counting algorithms based on crowd density maps do not model the annotation noise present in the training data. In this paper, we first model the annotation noise using a random variable with Gaussian distribution, and derive the pdf of the crowd density value for each spatial location in the image. We then approximate the joint distribution of the density values (i.e., the distribution of density maps) with a full covariance multivariate Gaussian density, and derive a low-rank approximation for tractable implementation. We use our loss function to train a crowd density map estimator and achieve state-of-the-art performance on three large-scale crowd counting datasets, which confirms its effectiveness. Examination of the predictions of the trained model shows that it can correctly predict the locations of people in spite of the noisy training data, which demonstrates the robustness of our loss function to annotation noise.
An operator view of policy gradient methods
https://papers.nips.cc/paper_files/paper/2020/hash/22eda830d1051274a2581d6466c06e6c-Abstract.html
Dibya Ghosh, Marlos C. Machado, Nicolas Le Roux
https://papers.nips.cc/paper_files/paper/2020/hash/22eda830d1051274a2581d6466c06e6c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10010-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/22eda830d1051274a2581d6466c06e6c-Supplemental.pdf
We cast policy gradient methods as the repeated application of two operators: a policy improvement operator $\mathcal{I}$, which maps any policy $\pi$ to a better one $\mathcal{I}\pi$, and a projection operator $\mathcal{P}$, which finds the best approximation of $\mathcal{I}\pi$ in the set of realizable policies. We use this framework to introduce operator-based versions of well-known policy gradient methods such as REINFORCE and PPO, which leads to a better understanding of their original counterparts. We also use the understanding we develop of the role of $\mathcal{I}$ and $\mathcal{P}$ to propose a new global lower bound of the expected return. This new perspective allows us to further bridge the gap between policy-based and value-based methods, showing how REINFORCE and the Bellman optimality operator, for example, can be seen as two sides of the same coin.
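For readers unfamiliar with the baseline being reinterpreted, the conventional REINFORCE estimator that the operator view decomposes into an improvement step followed by a projection can be sketched as follows (hypothetical helper signatures; this is the standard gradient form, not the paper's operator construction).

```python
import numpy as np

def reinforce_step(theta, trajectories, grad_log_pi, lr=0.01):
    """Standard REINFORCE update: theta <- theta + lr * E[R(tau) * grad log pi_theta(tau)].
    trajectories: list of (tau, R) pairs with R the return of trajectory tau;
    grad_log_pi: callable(theta, tau) -> gradient of log pi_theta(tau)."""
    grad = np.zeros_like(theta)
    for tau, R in trajectories:
        grad += R * grad_log_pi(theta, tau)
    return theta + lr * grad / len(trajectories)
```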
Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases
https://papers.nips.cc/paper_files/paper/2020/hash/22f791da07b0d8a2504c2537c560001c-Abstract.html
Senthil Purushwalkam, Abhinav Gupta
https://papers.nips.cc/paper_files/paper/2020/hash/22f791da07b0d8a2504c2537c560001c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10011-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/22f791da07b0d8a2504c2537c560001c-Supplemental.pdf
Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image classification. Somewhat mysteriously, the recent gains in performance come from training instance classification models, treating each image and its augmented versions as samples of a single class. In this work, we first present quantitative experiments to demystify these gains. We demonstrate that approaches like MOCO and PIRL learn occlusion-invariant representations. However, they fail to capture viewpoint and category-instance invariance, which are crucial components for object recognition. Second, we demonstrate that these approaches obtain further gains from access to a clean, object-centric training dataset like ImageNet. Finally, we propose an approach to leverage unstructured videos to learn representations that possess higher viewpoint invariance. Our results show that the learned representations outperform MOCOv2 trained on the same data in terms of the invariances encoded and performance on downstream image classification and semantic segmentation tasks.
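A simplified proxy for the invariance measurements discussed above is the mean cosine similarity between an image's embedding and the embedding of its transformed version; the paper uses a more careful measure, so the sketch below (PyTorch, names ours) is illustrative only.

```python
import torch
import torch.nn.functional as F

def invariance_score(encoder, images, transform):
    """Mean cosine similarity between embeddings of images and their transformed
    versions (e.g., occlusion or viewpoint change); higher means more invariant.
    encoder: maps a batch of images to feature vectors, transform: batch -> batch."""
    with torch.no_grad():
        z = F.normalize(encoder(images), dim=1)
        z_t = F.normalize(encoder(transform(images)), dim=1)
    return (z * z_t).sum(dim=1).mean().item()   # value in [-1, 1]
```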
Online MAP Inference of Determinantal Point Processes
https://papers.nips.cc/paper_files/paper/2020/hash/23378a2d0a25c6ade2c1da1c06c5213f-Abstract.html
Aditya Bhaskara, Amin Karbasi, Silvio Lattanzi, Morteza Zadimoghaddam
https://papers.nips.cc/paper_files/paper/2020/hash/23378a2d0a25c6ade2c1da1c06c5213f-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10012-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23378a2d0a25c6ade2c1da1c06c5213f-Supplemental.pdf
In this paper, we provide an efficient approximation algorithm for finding the maximum a posteriori (MAP) configuration of size $k$ for Determinantal Point Processes (DPPs) in the online setting, where the data points arrive in an arbitrary order and the algorithm cannot discard the selected elements from its local memory. Given an additive error tolerance $\eta$, our online algorithm achieves a $k^{O(k)}$ multiplicative approximation guarantee with an additive error $\eta$, using a memory footprint independent of the size of the data stream. We note that the exponential dependence on $k$ in the approximation factor is unavoidable even in the offline setting. Our result readily implies a streaming algorithm with an improved memory bound compared to existing results.
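To make the online setting concrete, the sketch below shows a naive greedy rule in the same spirit: keep at most k selected items and add an arriving item only if it multiplies the determinant of the selected kernel submatrix by at least a threshold. This is an illustration under our own simplifications, not the paper's algorithm or its $k^{O(k)}$ guarantee.

```python
import numpy as np

def naive_online_selection(stream, L, k, min_gain=1.0):
    """stream: item indices arriving in arbitrary order; L: PSD kernel matrix.
    Selected items are never discarded, mirroring the paper's memory model."""
    S = []
    for i in stream:
        if len(S) >= k:
            break
        cand = S + [i]
        new_det = np.linalg.det(L[np.ix_(cand, cand)])
        old_det = np.linalg.det(L[np.ix_(S, S)]) if S else 1.0
        if new_det >= min_gain * old_det:   # only keep items that grow det(L_S) enough
            S = cand
    return S
```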
Video Object Segmentation with Adaptive Feature Bank and Uncertain-Region Refinement
https://papers.nips.cc/paper_files/paper/2020/hash/234833147b97bb6aed53a8f4f1c7a7d8-Abstract.html
Yongqing Liang, Xin Li, Navid Jafari, Jim Chen
https://papers.nips.cc/paper_files/paper/2020/hash/234833147b97bb6aed53a8f4f1c7a7d8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10013-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/234833147b97bb6aed53a8f4f1c7a7d8-Supplemental.zip
This paper presents a new matching-based framework for semi-supervised video object segmentation (VOS). Recently, state-of-the-art VOS performance has been achieved by matching-based algorithms, in which feature banks are created to store features for region matching and classification. However, how to effectively organize information in the continuously growing feature bank remains under-explored, and this leads to an inefficient design of the bank. We introduce an adaptive feature bank update scheme to dynamically absorb new features and discard obsolete ones. We also design a new confidence loss and a fine-grained segmentation module to enhance segmentation accuracy in uncertain regions. On public benchmarks, our algorithm outperforms existing state-of-the-art methods.
Inferring learning rules from animal decision-making
https://papers.nips.cc/paper_files/paper/2020/hash/234b941e88b755b7a72a1c1dd5022f30-Abstract.html
Zoe Ashwood, Nicholas A. Roy, Ji Hyun Bak, Jonathan W. Pillow
https://papers.nips.cc/paper_files/paper/2020/hash/234b941e88b755b7a72a1c1dd5022f30-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10014-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/234b941e88b755b7a72a1c1dd5022f30-Supplemental.pdf
How do animals learn? This remains an elusive question in neuroscience. Whereas reinforcement learning often focuses on the design of algorithms that enable artificial agents to efficiently learn new tasks, here we develop a modeling framework to directly infer the empirical learning rules that animals use to acquire new behaviors. Our method efficiently infers the trial-to-trial changes in an animal’s policy, and decomposes those changes into a learning component and a noise component. Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal’s policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules. After validating our framework on simulated choice data, we applied our model to data from rats and mice learning perceptual decision-making tasks. We found that certain learning rules were far more capable of explaining trial-to-trial changes in an animal's policy. Whereas the average contribution of the conventional REINFORCE learning rule to the policy update for mice learning the International Brain Laboratory's task was just 30%, we found that adding baseline parameters allowed the learning rule to explain 92% of the animals' policy updates under our model. Intriguingly, the best-fitting learning rates and baseline values indicate that an animal's policy update, at each trial, does not occur in the direction that maximizes expected reward. Understanding how an animal transitions from chance-level to high-accuracy performance when learning a new task not only provides neuroscientists with insight into their animals, but also provides concrete examples of biological learning algorithms to the machine learning community.
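As an example of the kind of candidate learning rule the framework compares, a per-trial REINFORCE-with-baseline update for a logistic choice policy can be sketched as follows (the toy task and names are ours; the framework estimates the learning rates and baselines from behavioral data rather than fixing them).

```python
import numpy as np

def reinforce_baseline_trial(w, b, x, alpha=0.1, baseline_lr=0.1):
    """One trial: sample a binary choice from a logistic policy, receive reward,
    and update the policy weights w with a baseline-subtracted REINFORCE step.
    x: stimulus regressors for this trial; b: running baseline estimate."""
    p_right = 1.0 / (1.0 + np.exp(-(w @ x)))           # P(choose right)
    choice = np.random.rand() < p_right
    reward = float(choice == (x.sum() > 0))             # toy task: 'right' is correct if sum > 0
    grad_logp = (1.0 - p_right) * x if choice else -p_right * x
    w = w + alpha * (reward - b) * grad_logp             # baseline-subtracted policy update
    b = b + baseline_lr * (reward - b)                    # move baseline toward recent reward
    return w, b
```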
Input-Aware Dynamic Backdoor Attack
https://papers.nips.cc/paper_files/paper/2020/hash/234e691320c0ad5b45ee3c96d0d7b8f8-Abstract.html
Tuan Anh Nguyen, Anh Tran
https://papers.nips.cc/paper_files/paper/2020/hash/234e691320c0ad5b45ee3c96d0d7b8f8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10015-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Supplemental.zip
In recent years, neural backdoor attacks have been considered a potential security threat to deep learning systems. Such systems, while achieving state-of-the-art performance on clean data, perform abnormally on inputs with predefined triggers. Current backdoor techniques, however, rely on uniform trigger patterns, which are easily detected and mitigated by current defense methods. In this work, we propose a novel backdoor attack technique in which the triggers vary from input to input. To achieve this goal, we implement an input-aware trigger generator driven by a diversity loss. A novel cross-trigger test is applied to enforce trigger non-reusability, making backdoor verification impossible. Experiments show that our method is efficient in various attack scenarios as well as on multiple datasets. We further demonstrate that our backdoor can bypass state-of-the-art defense methods. An analysis with a well-known neural network inspector further demonstrates the stealthiness of the proposed attack. Our code is publicly available.
How hard is to distinguish graphs with graph neural networks?
https://papers.nips.cc/paper_files/paper/2020/hash/23685a2431acad7789c1e3d43ea1522c-Abstract.html
Andreas Loukas
https://papers.nips.cc/paper_files/paper/2020/hash/23685a2431acad7789c1e3d43ea1522c-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10016-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23685a2431acad7789c1e3d43ea1522c-Supplemental.zip
A hallmark of graph neural networks is their ability to distinguish the isomorphism class of their inputs. This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN). MPNN encompasses the majority of graph neural networks used today and is universal when nodes are given unique features. The analysis relies on a newly introduced measure of communication capacity. Capacity measures how much information the nodes of a network can exchange during the forward pass and depends on the depth, message size, global state, and width of the architecture. It is shown that the capacity of an MPNN needs to grow linearly with the number of nodes so that the network can distinguish trees, and quadratically for general connected graphs. The derived bounds concern both worst- and average-case behavior and apply to networks with/without unique features and adaptive architecture---they are also up to two orders of magnitude tighter than those given by simpler arguments. An empirical study involving 12 graph classification tasks and 420 networks reveals strong alignment between actual performance and theoretical predictions.
Minimax Regret of Switching-Constrained Online Convex Optimization: No Phase Transition
https://papers.nips.cc/paper_files/paper/2020/hash/236f119f58f5fd102c5a2ca609fdcbd8-Abstract.html
Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
https://papers.nips.cc/paper_files/paper/2020/hash/236f119f58f5fd102c5a2ca609fdcbd8-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10017-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/236f119f58f5fd102c5a2ca609fdcbd8-Supplemental.pdf
We study the problem of switching-constrained online convex optimization (OCO), where the player has a limited number of opportunities to change her action. While the discrete analog of this online learning task has been studied extensively, previous work in the continuous setting has neither established the minimax rate nor algorithmically achieved it. In this paper, we show that $ T $-round switching-constrained OCO with fewer than $ K $ switches has a minimax regret of $ \Theta(\frac{T}{\sqrt{K}}) $. In particular, it is at least $ \frac{T}{\sqrt{2K}} $ for one dimension and at least $ \frac{T}{\sqrt{K}} $ for higher dimensions. The lower bound in higher dimensions is attained by an orthogonal subspace argument. In one dimension, a novel adversarial strategy yields a lower bound of $\Omega(\frac{T}{\sqrt{K}})$, but a precise minimax analysis including constants is more involved. To establish the tighter one-dimensional result, we introduce the \emph{fugal game} relaxation, whose minimax regret lower bounds that of switching-constrained OCO. We show that the minimax regret of the fugal game is at least $ \frac{T}{\sqrt{2K}} $ and thereby establish the optimal minimax lower bound in one dimension. To establish the dimension-independent upper bound, we next show that a mini-batching algorithm provides an $ O(\frac{T}{\sqrt{K}}) $ upper bound, and therefore conclude that the minimax regret of switching-constrained OCO is $ \Theta(\frac{T}{\sqrt{K}}) $ for any $K$. This is in sharp contrast to its discrete counterpart, the switching-constrained prediction-from-experts problem, which exhibits a phase transition in minimax regret between the low-switching and high-switching regimes.
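The dimension-independent upper bound mentioned above comes from a simple mini-batching strategy, sketched below under our own simplifications: keep the action fixed within each of K blocks (so at most K - 1 switches occur) and take one projected gradient step per block on the block-averaged gradient.

```python
import numpy as np

def minibatched_ogd(grad, T, K, x0, lr, project=lambda x: x):
    """grad: callable(t, x) -> subgradient of the round-t loss at x;
    project: projection onto the feasible set (identity by default)."""
    x = np.asarray(x0, dtype=float)
    block = int(np.ceil(T / K))
    actions = []
    for start in range(0, T, block):
        end = min(start + block, T)
        g_sum = np.zeros_like(x)
        for t in range(start, end):
            actions.append(x)              # play the same action for the whole block
            g_sum += grad(t, x)
        x = project(x - lr * g_sum / (end - start))
    return actions                          # at most K distinct actions are played
```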
Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks
https://papers.nips.cc/paper_files/paper/2020/hash/23937b42f9273974570fb5a56a6652ee-Abstract.html
Wei-An Lin, Chun Pong Lau, Alexander Levine, Rama Chellappa, Soheil Feizi
https://papers.nips.cc/paper_files/paper/2020/hash/23937b42f9273974570fb5a56a6652ee-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10018-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23937b42f9273974570fb5a56a6652ee-Supplemental.pdf
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms. However, it often degrades the model performance on normal images and more importantly, the defense does not generalize well to novel attacks. Given the success of deep generative models such as GANs and VAEs in characterizing the underlying manifold of images, we investigate whether or not the aforementioned deficiencies of adversarial training can be remedied by exploiting the underlying manifold information. To partially answer this question, we consider the scenario when the manifold information of the underlying data is available. We use a subset of ImageNet natural images where an approximate underlying manifold is learned using StyleGAN. We also construct an ``On-Manifold ImageNet'' (OM-ImageNet) dataset by projecting the ImageNet samples onto the learned manifold. For OM-ImageNet, the underlying manifold information is exact. Using OM-ImageNet, we first show that on-manifold adversarial training improves both standard accuracy and robustness to on-manifold attacks. However, since no out-of-manifold perturbations are realized, the defense can be broken by Lp adversarial attacks. We further propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model. Our DMAT improves performance on normal images, and achieves comparable robustness to the standard adversarial training against Lp attacks. In addition, we observe that models defended by DMAT achieve improved robustness against novel attacks which manipulate images by global color shifts or various types of image filtering. Interestingly, similar improvements are also achieved when the defended models are tested on (out-of-manifold) natural images. These results demonstrate the potential benefits of using manifold information in enhancing robustness of deep learning models against various types of novel adversarial attacks.
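To illustrate the latent-space half of the dual-manifold training loop, a standard PGD-style attack on the latent code of a fixed generator might look like the sketch below (PyTorch; the generator/classifier handles and hyperparameters are placeholders, and image-space PGD is applied analogously).

```python
import torch

def latent_pgd(generator, classifier, z, y, loss_fn, eps=0.05, steps=10, step_size=0.01):
    """Untargeted L-inf PGD on the latent code z: ascend the classification loss
    of the generated image while keeping the perturbation inside an eps-ball."""
    delta = torch.zeros_like(z, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(classifier(generator(z + delta)), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # gradient-ascent step on the loss
            delta.clamp_(-eps, eps)                   # project back into the eps-ball
        delta.grad.zero_()
    return (z + delta).detach()
```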
Cross-Scale Internal Graph Neural Network for Image Super-Resolution
https://papers.nips.cc/paper_files/paper/2020/hash/23ad3e314e2a2b43b4c720507cec0723-Abstract.html
Shangchen Zhou, Jiawei Zhang, Wangmeng Zuo, Chen Change Loy
https://papers.nips.cc/paper_files/paper/2020/hash/23ad3e314e2a2b43b4c720507cec0723-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10019-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23ad3e314e2a2b43b4c720507cec0723-Supplemental.pdf
Non-local self-similarity in natural images has been well studied as an effective prior in image restoration. However, for single image super-resolution (SISR), most existing deep non-local methods (e.g., non-local neural networks) only exploit similar patches within the same scale of the low-resolution (LR) input image. Consequently, the restoration is limited to using the same-scale information while neglecting potential high-resolution (HR) cues from other scales. In this paper, we explore the cross-scale patch recurrence property of a natural image, i.e., similar patches tend to recur many times across different scales. This is achieved using a novel cross-scale internal graph neural network (IGNN). Specifically, we dynamically construct a cross-scale graph by searching k-nearest neighboring patches in the downsampled LR image for each query patch in the LR image. We then obtain the corresponding k HR neighboring patches in the LR image and aggregate them adaptively in accordance with the edge label of the constructed graph. In this way, the HR information can be passed from k HR neighboring patches to the LR query patch to help it recover more detailed textures. Moreover, these internal image-specific LR/HR exemplars are also significant complements to the external information learned from the training dataset. Extensive experiments demonstrate the effectiveness of IGNN against state-of-the-art SISR methods, including existing non-local networks, on standard benchmarks.
Unsupervised Representation Learning by Invariance Propagation
https://papers.nips.cc/paper_files/paper/2020/hash/23af4b45f1e166141a790d1a3126e77a-Abstract.html
Feng Wang, Huaping Liu, Di Guo, Sun Fuchun
https://papers.nips.cc/paper_files/paper/2020/hash/23af4b45f1e166141a790d1a3126e77a-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10020-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/23af4b45f1e166141a790d1a3126e77a-Supplemental.pdf
Unsupervised learning methods based on contrastive learning have drawn increasing attention and achieved promising results. Most of them aim to learn representations invariant to instance-level variations, which are provided by different views of the same instance. In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations, which are provided by different instances from the same category. Our method recursively discovers semantically consistent samples residing in the same high-density regions of the representation space. We further employ a hard sampling strategy that concentrates on maximizing the agreement between the anchor sample and its hard positive samples, which provide more intra-class variation and help capture more abstract invariance. As a result, with a ResNet-50 backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear classification and 78.2% top-5 accuracy when fine-tuning on only 1% of the labels, surpassing previous results. We also achieve state-of-the-art performance on other downstream tasks, including linear classification on Places205 and Pascal VOC, and transfer learning on small-scale datasets.
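The hard-positive idea can be illustrated with a small InfoNCE-style loss that keeps only the least-similar candidate positives for the anchor; the sketch below (PyTorch, names and shapes ours) is a simplified stand-in for the paper's objective.

```python
import torch
import torch.nn.functional as F

def hard_positive_loss(anchor, positives, negatives, tau=0.07, n_hard=4):
    """anchor: (d,), positives: (P, d), negatives: (N, d); all L2-normalized.
    Keep the n_hard least-similar positives and maximize agreement with them
    against the negatives."""
    pos_sim = positives @ anchor
    hard_idx = torch.topk(-pos_sim, k=min(n_hard, pos_sim.numel())).indices
    logits = torch.cat([pos_sim[hard_idx], negatives @ anchor]) / tau
    log_prob = F.log_softmax(logits, dim=0)
    return -log_prob[: hard_idx.numel()].mean()   # average over the hard positives
```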
Restoring Negative Information in Few-Shot Object Detection
https://papers.nips.cc/paper_files/paper/2020/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html
Yukuan Yang, Fangyun Wei, Miaojing Shi, Guoqi Li
https://papers.nips.cc/paper_files/paper/2020/hash/240ac9371ec2671ae99847c3ae2e6384-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10021-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/240ac9371ec2671ae99847c3ae2e6384-Review.html
null
Few-shot learning has recently emerged as a new challenge in the deep learning field: unlike conventional methods that train deep neural networks (DNNs) with a large number of labeled data, it asks for the generalization of DNNs to new classes with few annotated samples. Recent advances in few-shot learning mainly focus on image classification, while in this paper we focus on object detection. The initial explorations in few-shot object detection tend to simulate a classification scenario by using the positive proposals in images with respect to a certain object class while discarding the negative proposals of that class. Negatives, especially hard negatives, however, are essential to embedding-space learning in few-shot object detection. In this paper, we restore the negative information in few-shot object detection by introducing a new negative- and positive-representative based metric learning framework and a new inference scheme with negative and positive representatives. We build our work on the recent few-shot pipeline RepMet with several new modules to encode negative information for both training and testing. Extensive experiments on ImageNet-LOC and PASCAL VOC show that our method substantially improves the state-of-the-art few-shot object detection solutions. Our code is available at https://github.com/yang-yk/NP-RepMet.
Do Adversarially Robust ImageNet Models Transfer Better?
https://papers.nips.cc/paper_files/paper/2020/hash/24357dd085d2c4b1a88a7e0692e60294-Abstract.html
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry
https://papers.nips.cc/paper_files/paper/2020/hash/24357dd085d2c4b1a88a7e0692e60294-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10022-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/24357dd085d2c4b1a88a7e0692e60294-Supplemental.pdf
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on standard datasets can be efficiently adapted to downstream tasks. Typically, better pre-trained models yield better transfer results, suggesting that initial accuracy is a key aspect of transfer learning performance. In this work, we identify another such aspect: we find that adversarially robust models, while less accurate, often perform better than their standard-trained counterparts when used for transfer learning. Specifically, we focus on adversarially robust ImageNet classifiers, and show that they yield improved accuracy on a standard suite of downstream classification tasks. Further analysis uncovers more differences between robust and standard models in the context of transfer learning. Our results are consistent with (and in fact, add to) recent hypotheses stating that robustness leads to improved feature representations. Code and models are available in the supplementary material.
Robust Correction of Sampling Bias using Cumulative Distribution Functions
https://papers.nips.cc/paper_files/paper/2020/hash/24368c745de15b3d2d6279667debcba3-Abstract.html
Bijan Mazaheri, Siddharth Jain, Jehoshua Bruck
https://papers.nips.cc/paper_files/paper/2020/hash/24368c745de15b3d2d6279667debcba3-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10023-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/24368c745de15b3d2d6279667debcba3-Supplemental.zip
Varying domains and biased datasets can lead to differences between the training and target distributions, known as covariate shift. Current approaches for alleviating this often rely on estimating the ratio of training and target probability density functions. These techniques require parameter tuning and can be unstable across different datasets. We present a new method for handling covariate shift that uses empirical cumulative distribution function estimates of the target distribution, based on a rigorous generalization of a recent idea proposed by Vapnik and Izmailov. Further, we show experimentally that our method is more robust in its predictions, does not rely on parameter tuning, and achieves classification performance comparable to current state-of-the-art techniques on synthetic and real datasets.
Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach
https://papers.nips.cc/paper_files/paper/2020/hash/24389bfe4fe2eba8bf9aa9203a44cdad-Abstract.html
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
https://papers.nips.cc/paper_files/paper/2020/hash/24389bfe4fe2eba8bf9aa9203a44cdad-Abstract.html
NIPS 2020
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-AuthorFeedback.pdf
https://papers.nips.cc/paper_files/paper/10024-/bibtex
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-MetaReview.html
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Paper.pdf
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Review.html
https://papers.nips.cc/paper_files/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Supplemental.pdf
In federated learning, we aim to train models across multiple computing units (users), while users can only communicate with a common central server, without exchanging their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model, as their models are trained over a larger set of data points. However, this scheme only develops a common output for all users and therefore does not adapt the model to each user. This is an important missing feature, especially given the heterogeneity of the underlying data distribution across users. In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data. This approach keeps all the benefits of the federated learning architecture and, by structure, leads to a more personalized model for each user. We show that this problem can be studied within the Model-Agnostic Meta-Learning (MAML) framework. Inspired by this connection, we study a personalized variant of the well-known Federated Averaging algorithm and evaluate its performance in terms of gradient norm for non-convex loss functions. Further, we characterize how this performance is affected by the closeness of the underlying distributions of user data, measured in terms of distribution distances such as the total variation distance and the 1-Wasserstein metric.
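A first-order sketch of the MAML-flavored Federated Averaging variant described above: each user takes one local adaptation step from the shared model, and the server averages the gradients evaluated at the adapted points. This is our simplification; it omits the Hessian term of the exact MAML gradient, and the callables and step sizes are placeholders.

```python
import numpy as np

def personalized_fedavg_round(w, user_grads, alpha=0.01, beta=0.1):
    """w: shared model parameters; user_grads: list of callables g_i(w) returning
    a stochastic gradient of user i's loss; alpha: local adaptation step size;
    beta: server (meta) step size."""
    meta_grads = []
    for g in user_grads:
        w_adapted = w - alpha * g(w)       # one step of per-user adaptation
        meta_grads.append(g(w_adapted))    # first-order meta-gradient approximation
    return w - beta * np.mean(meta_grads, axis=0)
```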