Dataset fields (per record): title (string, 19–143 chars), url (string, 41–43 chars), detail_url (string, 41–43 chars), authors (string, 9–347 chars), tags (string, 3 classes), abstract (string, 457–2.38k chars), pdf (string, 71 chars).
Self-Supervised Policy Adaptation during Deployment
https://openreview.net/forum?id=o_V-MjyyGV_
https://openreview.net/forum?id=o_V-MjyyGV_
Nicklas Hansen,Rishabh Jangir,Yu Sun,Guillem Alenyà,Pieter Abbeel,Alexei A Efros,Lerrel Pinto,Xiaolong Wang
ICLR 2021,Spotlight
In most real-world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal. Our work explores the use of self-supervision to allow the policy to continue training after deployment without using any rewards. While previous methods explicitly anticipate changes in the new environment, we assume no prior knowledge of those changes yet still obtain significant improvements. Empirical evaluations are performed on diverse simulation environments from the DeepMind Control suite and ViZDoom, as well as real robotic manipulation tasks in continuously changing environments, taking observations from an uncalibrated camera. Our method improves generalization in 31 out of 36 environments across various tasks and outperforms domain randomization on a majority of environments. Webpage and implementation: https://nicklashansen.github.io/PAD/.
https://openreview.net/pdf/6949f5e82ffd2bd635a6de802a733540b19b9cc3.pdf
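As a concrete illustration of the reward-free adaptation idea in the abstract above, here is a minimal sketch of a test-time update driven by an inverse-dynamics auxiliary task. The module names (`encoder`, `inv_dynamics`) and the MSE action loss are assumptions for a continuous-control setting, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def adapt_step(encoder, inv_dynamics, optimizer, obs, next_obs, action):
    """One reward-free adaptation update at deployment: predict the action
    that connects two consecutive observations from their embeddings, and
    backpropagate only this self-supervised loss (no reward needed)."""
    z, z_next = encoder(obs), encoder(next_obs)
    pred_action = inv_dynamics(torch.cat([z, z_next], dim=-1))
    loss = F.mse_loss(pred_action, action)  # continuous-action case (assumed)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```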
Differentially Private Learning Needs Better Features (or Much More Data)
https://openreview.net/forum?id=YTWGvpFOQD-
https://openreview.net/forum?id=YTWGvpFOQD-
Florian Tramer,Dan Boneh
ICLR 2021,Spotlight
We demonstrate that differentially private machine learning has not yet reached its ``AlexNet moment'' on many canonical vision tasks: linear models trained on handcrafted features significantly outperform end-to-end deep neural networks for moderate privacy budgets. To exceed the performance of handcrafted features, we show that private learning requires either much more private data, or access to features learned on public data from a similar domain. Our work introduces simple yet strong baselines for differentially private learning that can inform the evaluation of future progress in this area.
https://openreview.net/pdf/63107901e325896b18874aad193314befc47c7ae.pdf
Data-Efficient Reinforcement Learning with Self-Predictive Representations
https://openreview.net/forum?id=uCQfPZwRaUu
https://openreview.net/forum?id=uCQfPZwRaUu
Max Schwarzer,Ankesh Anand,Rishab Goel,R Devon Hjelm,Aaron Courville,Philip Bachman
ICLR 2021,Spotlight
While deep reinforcement learning excels at solving tasks where large amounts of data can be collected through virtually unlimited interaction with the environment, learning from limited interaction remains a key challenge. We posit that an agent can learn more efficiently if we augment reward maximization with self-supervised objectives based on structure in its visual input and sequential interaction with the environment. Our method, Self-Predictive Representations (SPR), trains an agent to predict its own latent state representations multiple steps into the future. We compute target representations for future states using an encoder which is an exponential moving average of the agent’s parameters and we make predictions using a learned transition model. On its own, this future prediction objective outperforms prior methods for sample-efficient deep RL from pixels. We further improve performance by adding data augmentation to the future prediction loss, which forces the agent’s representations to be consistent across multiple views of an observation. Our full self-supervised objective, which combines future prediction and data augmentation, achieves a median human-normalized score of 0.415 on Atari in a setting limited to 100k steps of environment interaction, which represents a 55% relative improvement over the previous state-of-the-art. Notably, even in this limited data regime, SPR exceeds expert human scores on 7 out of 26 games. We’ve made the code associated with this work available at https://github.com/mila-iqia/spr.
https://openreview.net/pdf/1332dd3bfd157968abcdfda3acf4d4a7499d6143.pdf
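The two mechanisms named in the abstract above, an EMA target encoder and a learned transition model that predicts latents several steps ahead, can be sketched as follows; the cosine-similarity objective and the `projector`/`transition` interfaces are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(online, target, tau=0.99):
    # Target encoder parameters track an exponential moving average
    # of the online encoder's parameters.
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(tau).add_(p_o, alpha=1 - tau)

def spr_loss(online_enc, target_enc, transition, projector, obs_seq, actions):
    """Roll the latent state forward with the transition model and score each
    prediction against the EMA-target embedding of the observed frame."""
    z = online_enc(obs_seq[0])
    loss = 0.0
    for k in range(1, len(obs_seq)):
        z = transition(z, actions[k - 1])   # predicted latent at step k
        with torch.no_grad():
            y = target_enc(obs_seq[k])      # target representation at step k
        loss = loss - F.cosine_similarity(projector(z), y, dim=-1).mean()
    return loss / (len(obs_seq) - 1)
```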
Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
https://openreview.net/forum?id=wS0UFjsNYjn
https://openreview.net/forum?id=wS0UFjsNYjn
Dong Bok Lee,Dongchan Min,Seanie Lee,Sung Ju Hwang
ICLR 2021,Spotlight
Unsupervised learning aims to learn meaningful representations from unlabeled data that capture its intrinsic structure and can be transferred to downstream tasks. Meta-learning, whose objective is to learn to generalize across tasks so that the learned model can rapidly adapt to a novel task, shares the spirit of unsupervised learning in that both seek a more effective and efficient learning procedure than learning from scratch. The fundamental difference between the two is that most meta-learning approaches are supervised, assuming full access to labels. However, acquiring a labeled dataset for meta-training is not only costly, as it requires human labeling effort, but also limits applications to pre-defined task distributions. In this paper, we propose a principled unsupervised meta-learning model, namely Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference. Moreover, we introduce a mixture of Gaussians (GMM) prior, assuming that each modality represents a class concept in a randomly sampled episode, which we optimize with Expectation-Maximization (EM). The learned model can then be used for downstream few-shot classification tasks, where we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query sets, and predict labels of the query set by computing aggregated posteriors. We validate our model on the Omniglot and Mini-ImageNet datasets by evaluating its performance on downstream few-shot classification tasks. The results show that our model obtains impressive performance gains over existing unsupervised meta-learning baselines, even outperforming supervised MAML in a certain setting.
https://openreview.net/pdf/7b58adedb02a73d26b32a949a08c9238409022a5.pdf
Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
https://openreview.net/forum?id=0N8jUH4JMv6
https://openreview.net/forum?id=0N8jUH4JMv6
Tolga Ergen,Mert Pilanci
ICLR 2021,Spotlight
We study training of Convolutional Neural Networks (CNNs) with ReLU activations and introduce exact convex optimization formulations with a polynomial complexity with respect to the number of data samples, the number of neurons, and data dimension. More specifically, we develop a convex analytic framework utilizing semi-infinite duality to obtain equivalent convex optimization problems for several two- and three-layer CNN architectures. We first prove that two-layer CNNs can be globally optimized via an $\ell_2$ norm regularized convex program. We then show that multi-layer circular CNN training problems with a single ReLU layer are equivalent to an $\ell_1$ regularized convex program that encourages sparsity in the spectral domain. We also extend these results to three-layer CNNs with two ReLU layers. Furthermore, we present extensions of our approach to different pooling methods, which elucidates the implicit architectural bias as convex regularizers.
https://openreview.net/pdf/dba1d25e1354e478235ccc68af0dd34e0cf91c79.pdf
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
https://openreview.net/forum?id=GY6-6sTvGaf
https://openreview.net/forum?id=GY6-6sTvGaf
Denis Yarats,Ilya Kostrikov,Rob Fergus
ICLR 2021,Spotlight
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training. The approach leverages input perturbations commonly used in computer vision tasks to transform input examples, as well as regularizing the value function and policy. Existing model-free approaches, such as Soft Actor-Critic (SAC), are not able to train deep networks effectively from image pixels. However, the addition of our augmentation method dramatically improves SAC’s performance, enabling it to reach state-of-the-art performance on the DeepMind control suite, surpassing model-based (Hafner et al., 2019; Lee et al., 2019; Hafner et al., 2018) methods and recently proposed contrastive learning (Srinivas et al., 2020). Our approach, which we dub DrQ: Data-regularized Q, can be combined with any model-free reinforcement learning algorithm. We further demonstrate this by applying it to DQN and significantly improve its data-efficiency on the Atari 100k benchmark.
https://openreview.net/pdf/b8b967965ff52b2eb545d1a7d4284f59f0fc181f.pdf
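A rough sketch of the augmentation-based regularization described above, for a discrete-action DQN-style agent: the TD target is averaged over several randomly shifted views of the next observation. The pad-and-crop shift and the averaging constant `K` are assumptions consistent with, but not copied from, the paper.

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    # Pad-and-crop shift: replicate-pad the image, then crop at a random offset.
    n, _, h, w = imgs.shape
    padded = F.pad(imgs, (pad,) * 4, mode='replicate')
    offsets = torch.randint(0, 2 * pad + 1, (n, 2)).tolist()
    return torch.stack([padded[i, :, y:y + h, x:x + w]
                        for i, (y, x) in enumerate(offsets)])

def regularized_target(q_net, next_obs, reward, gamma=0.99, K=2):
    """Average the bootstrap target over K augmented views of the next state,
    which regularizes the value function across input perturbations."""
    with torch.no_grad():
        targets = [q_net(random_shift(next_obs)).max(dim=1).values
                   for _ in range(K)]
        return reward + gamma * torch.stack(targets).mean(dim=0)
```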
Dynamic Tensor Rematerialization
https://openreview.net/forum?id=Vfs_2RnOD0H
https://openreview.net/forum?id=Vfs_2RnOD0H
Marisa Kirisame,Steven Lyubomirsky,Altan Haan,Jennifer Brennan,Mike He,Jared Roesch,Tianqi Chen,Zachary Tatlock
ICLR 2021,Spotlight
Checkpointing enables the training of deep learning models under restricted memory budgets by freeing intermediate activations from memory and recomputing them on demand. Current checkpointing techniques statically plan these recomputations offline and assume static computation graphs. We demonstrate that a simple online algorithm can achieve comparable performance by introducing Dynamic Tensor Rematerialization (DTR), a greedy online algorithm for checkpointing that is extensible and general, is parameterized by eviction policy, and supports dynamic models. We prove that DTR can train an $N$-layer linear feedforward network on an $\Omega(\sqrt{N})$ memory budget with only $\mathcal{O}(N)$ tensor operations. DTR closely matches the performance of optimal static checkpointing in simulated experiments. We incorporate a DTR prototype into PyTorch merely by interposing on tensor allocations and operator calls and collecting lightweight metadata on tensors.
https://openreview.net/pdf/241e988e3953566bc4fe0e6a974d29ff78dfcc2e.pdf
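To make the greedy, eviction-policy-parameterized idea above concrete, here is a toy cache that evicts by a DTR-style heuristic (recompute cost per byte, discounted by staleness) and rematerializes on access. Real DTR interposes on framework tensor allocations and operator calls; this standalone sketch only illustrates the greedy policy.

```python
import time

class RematCache:
    """Toy rematerialization cache: when over budget, greedily evict the
    cached tensor with the lowest heuristic score and rebuild evicted
    tensors on demand from their recorded compute functions."""
    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.entries = {}   # tensor id -> metadata

    def put(self, tid, size, compute_fn):
        start = time.perf_counter()
        value = compute_fn()
        self.entries[tid] = dict(size=size, fn=compute_fn, value=value,
                                 cost=time.perf_counter() - start,
                                 last_use=time.perf_counter())
        while self._live_bytes() > self.budget and self._evict_one():
            pass

    def get(self, tid):
        entry = self.entries[tid]
        if entry['value'] is None:
            entry['value'] = entry['fn']()   # rematerialize on demand
        entry['last_use'] = time.perf_counter()
        return entry['value']

    def _live_bytes(self):
        return sum(e['size'] for e in self.entries.values()
                   if e['value'] is not None)

    def _evict_one(self):
        live = [(t, e) for t, e in self.entries.items()
                if e['value'] is not None]
        if len(live) <= 1:
            return False   # keep at least the newest tensor resident
        # DTR-style heuristic: evict where recompute cost per byte,
        # discounted by time since last use, is smallest.
        def score(e):
            staleness = time.perf_counter() - e['last_use'] + 1e-9
            return e['cost'] / (e['size'] * staleness)
        tid = min(live, key=lambda te: score(te[1]))[0]
        self.entries[tid]['value'] = None
        return True
```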
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
https://openreview.net/forum?id=VqzVhqxkjH1
https://openreview.net/forum?id=VqzVhqxkjH1
Nils Lukas,Yuxuan Zhang,Florian Kerschbaum
ICLR 2021,Spotlight
In Machine Learning as a Service, a provider trains a deep neural network and gives many users access. The hosted (source) model is susceptible to model stealing attacks, where an adversary derives a surrogate model from API access to the source model. For post hoc detection of such attacks, the provider needs a robust method to determine whether a suspect model is a surrogate of their model. We propose a fingerprinting method for deep neural network classifiers that extracts a set of inputs from the source model so that only surrogates agree with the source model on the classification of such inputs. These inputs are a subclass of transferable adversarial examples which we call conferrable adversarial examples that exclusively transfer with a target label from a source model to its surrogates. We propose a new method to generate these conferrable adversarial examples. We present an extensive study on the irremovability of our fingerprint against fine-tuning, weight pruning, retraining, retraining with different architectures, three model extraction attacks from related work, transfer learning, adversarial training, and two new adaptive attacks. Our fingerprint is robust against distillation, related model extraction attacks, and even transfer learning when the attacker has no access to the model provider's dataset. Our fingerprint is the first method that reaches a ROC AUC of 1.0 in verifying surrogates, compared to a ROC AUC of 0.63 by previous fingerprints.
https://openreview.net/pdf/4c8c09b36e5485077c542426e9f254160401a43c.pdf
Model-Based Visual Planning with Self-Supervised Functional Distances
https://openreview.net/forum?id=UcoXdfrORC
https://openreview.net/forum?id=UcoXdfrORC
Stephen Tian,Suraj Nair,Frederik Ebert,Sudeep Dasari,Benjamin Eysenbach,Chelsea Finn,Sergey Levine
ICLR 2021,Spotlight
A generalist robot must be able to complete a variety of tasks in its environment. One appealing way to specify each task is in terms of a goal observation. However, learning goal-reaching policies with reinforcement learning remains a challenging problem, particularly when hand-engineered reward functions are not available. Learned dynamics models are a promising approach for learning about the environment without rewards or task-directed data, but planning to reach goals with such a model requires a notion of functional similarity between observations and goal states. We present a self-supervised method for model-based visual goal reaching, which uses both a visual dynamics model as well as a dynamical distance function learned using model-free reinforcement learning. Our approach learns entirely using offline, unlabeled data, making it practical to scale to large and diverse datasets. In our experiments, we find that our method can successfully learn models that perform a variety of tasks at test-time, moving objects amid distractors with a simulated robotic arm and even learning to open and close a drawer using a real-world robot. In comparisons, we find that this approach substantially outperforms both model-free and model-based prior methods.
https://openreview.net/pdf/e7d9842e2ee26ac0242d6efcf7a863c669541594.pdf
Mathematical Reasoning via Self-supervised Skip-tree Training
https://openreview.net/forum?id=YmqAnY0CMEy
https://openreview.net/forum?id=YmqAnY0CMEy
Markus Norman Rabe,Dennis Lee,Kshitij Bansal,Christian Szegedy
ICLR 2021,Spotlight
We demonstrate that self-supervised language modeling applied to mathematical formulas enables logical reasoning. To measure the logical reasoning abilities of language models, we formulate several evaluation (downstream) tasks, such as inferring types, suggesting missing assumptions and completing equalities. For training language models for formal mathematics, we propose a novel skip-tree task. We find that models trained on the skip-tree task show surprisingly strong mathematical reasoning abilities, and outperform models trained on standard skip-sequence tasks. We also analyze the models' ability to formulate new conjectures by measuring how often the predictions are provable and useful in other proofs.
https://openreview.net/pdf/405aeadddeb5c223426f15f57b0e520aeb2ce585.pdf
DeepAveragers: Offline Reinforcement Learning By Solving Derived Non-Parametric MDPs
https://openreview.net/forum?id=eMP1j9efXtX
https://openreview.net/forum?id=eMP1j9efXtX
Aayam Kumar Shrestha,Stefan Lee,Prasad Tadepalli,Alan Fern
ICLR 2021,Spotlight
We study an approach to offline reinforcement learning (RL) based on optimally solving finitely-represented MDPs derived from a static dataset of experience. This approach can be applied on top of any learned representation and has the potential to easily support multiple solution objectives as well as zero-shot adjustment to changing environments and goals. Our main contribution is to introduce the Deep Averagers with Costs MDP (DAC-MDP) and to investigate its solutions for offline RL. DAC-MDPs are non-parametric models that can leverage deep representations and account for limited data by introducing costs for exploiting under-represented parts of the model. In theory, we show conditions that allow for lower-bounding the performance of DAC-MDP solutions. We also investigate the empirical behavior in a number of environments, including those with image-based observations. Overall, the experiments demonstrate that the framework can work in practice and scale to large, complex offline RL problems.
https://openreview.net/pdf/41ec2c7a3d80d8e07956f446e858586b83aa7620.pdf
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
https://openreview.net/forum?id=p-NZIuwqhI4
https://openreview.net/forum?id=p-NZIuwqhI4
Kenji Kawaguchi
ICLR 2021,Spotlight
A deep equilibrium model uses implicit layers, which are implicitly defined through an equilibrium point of an infinite sequence of computation. It avoids any explicit computation of the infinite sequence by finding an equilibrium point directly via root-finding and by computing gradients via implicit differentiation. In this paper, we analyze the gradient dynamics of deep equilibrium models with nonlinearity only on weight matrices and non-convex objective functions of weights for regression and classification. Despite non-convexity, convergence to a global optimum at a linear rate is guaranteed without any assumption on the width of the models, allowing the width to be smaller than the output dimension and the number of data points. Moreover, we prove a relation between the gradient dynamics of the deep implicit layer and the dynamics of the trust-region Newton method for a shallow explicit layer. This mathematically proven relation, along with our numerical observations, suggests the importance of understanding the implicit bias of implicit layers and poses an open problem on the topic. Our proofs deal with implicit layers, weight tying and nonlinearity on weights, and differ from those in the related literature.
https://openreview.net/pdf/2d0baf2a17b567711b0bc3085000c41372e8c2d8.pdf
BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
https://openreview.net/forum?id=yHeg4PbFHh
https://openreview.net/forum?id=yHeg4PbFHh
Augustus Odena,Kensen Shi,David Bieber,Rishabh Singh,Charles Sutton,Hanjun Dai
ICLR 2021,Spotlight
Program synthesis is challenging largely because of the difficulty of search in a large space of programs. Human programmers routinely tackle the task of writing complex programs by writing sub-programs and then analyzing their intermediate results to compose them in appropriate ways. Motivated by this intuition, we present a new synthesis approach that leverages learning to guide a bottom-up search over programs. In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a given set of input-output examples. This is a powerful combination because of several emergent properties. First, in bottom-up search, intermediate programs can be executed, providing semantic information to the neural network. Second, given the concrete values from those executions, we can exploit rich features based on recent work on property signatures. Finally, bottom-up search allows the system substantial flexibility in what order to generate the solution, allowing the synthesizer to build up a program from multiple smaller sub-programs. Overall, our empirical evaluation finds that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches. We demonstrate the effectiveness of our technique on two datasets, one from the SyGuS competition and one of our own creation.
https://openreview.net/pdf/2a5e6446f3e44243b64f41369e186a582fb55a63.pdf
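A compact sketch of the learning-guided bottom-up search described above: intermediate expressions are executed on the examples, and a priority queue expands the highest-scoring values first. Here `score_fn` stands in for the trained model (and property-signature features), and the plain-string expression encoding is an assumption for illustration.

```python
import heapq
import itertools

def bottom_up_search(examples, terminals, ops, score_fn, max_expansions=10000):
    """examples: list of (input, output) pairs; terminals: (expr, fn) seeds;
    ops: list of (name, binary_fn). score_fn stands in for the learned model,
    scoring a tuple of intermediate values against the target outputs."""
    target = tuple(out for _, out in examples)
    values_of = lambda fn: tuple(fn(inp) for inp, _ in examples)
    tie = itertools.count()
    heap, seen = [], {}
    for expr, fn in terminals:
        vals = values_of(fn)
        seen[vals] = expr
        heapq.heappush(heap, (-score_fn(vals, target), next(tie), expr, vals))
    for _ in range(max_expansions):
        if not heap:
            return None
        _, _, expr_a, vals_a = heapq.heappop(heap)
        for name, op in ops:
            for vals_b, expr_b in list(seen.items()):
                try:
                    new_vals = tuple(op(a, b) for a, b in zip(vals_a, vals_b))
                except Exception:
                    continue          # ill-typed composition; skip it
                if new_vals == target:
                    return f"{name}({expr_a}, {expr_b})"
                if new_vals not in seen:
                    new_expr = f"{name}({expr_a}, {expr_b})"
                    seen[new_vals] = new_expr
                    heapq.heappush(heap, (-score_fn(new_vals, target),
                                          next(tie), new_expr, new_vals))
    return None
```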
The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings
https://openreview.net/forum?id=qYda4oLEc1
https://openreview.net/forum?id=qYda4oLEc1
Elliot Meyerson,Risto Miikkulainen
ICLR 2021,Spotlight
This paper frames a general prediction system as an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. This perspective leads to a machine learning framework in which seemingly unrelated tasks can be solved by a single model, by embedding their input and output variables into a shared space. An implementation of the framework is developed in which these variable embeddings are learned jointly with internal model parameters. In experiments, the approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. The results suggest that even seemingly unrelated tasks may originate from similar underlying processes, a fact that the traveling observer model can use to make better predictions.
https://openreview.net/pdf/467ef145eb7edf951ce11daf694cfa5593a44e89.pdf
Fidelity-based Deep Adiabatic Scheduling
https://openreview.net/forum?id=NECTfffOvn1
https://openreview.net/forum?id=NECTfffOvn1
Eli Ovits,Lior Wolf
ICLR 2021,Spotlight
Adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy-to-prepare initial state and a final state that represents a solution to a given computational problem. The choice of the interpolation schedule is critical to the performance: if at a certain time point, the evolution is too rapid, the system has a high probability to transfer to a higher energy state, which does not represent a solution to the problem. On the other hand, an evolution that is too slow leads to a loss of computation time and increases the probability of failure due to decoherence. In this work, we train deep neural models to produce optimal schedules that are conditioned on the problem at hand. We consider two types of problem representation: the Hamiltonian form, and the Quadratic Unconstrained Binary Optimization (QUBO) form. A novel loss function that scores schedules according to their approximated success probability is introduced. We benchmark our approach on random QUBO problems, Grover search, 3-SAT, and MAX-CUT problems and show that our approach outperforms, by a sizable margin, the linear schedules as well as alternative approaches that were very recently proposed.
https://openreview.net/pdf/864d26c496d060fce5f6a17f3e6edd74aaead783.pdf
Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach
https://openreview.net/forum?id=Cri3xz59ga
https://openreview.net/forum?id=Cri3xz59ga
Malik Tiomoko,Hafiz Tiomoko Ali,Romain Couillet
ICLR 2021,Spotlight
This article provides theoretical insights into the inner workings of multi-task and transfer learning methods, by studying the tractable least-squares support vector machine multi-task learning (LS-SVM MTL) method, in the limit of large ($p$) and numerous ($n$) data. By a random matrix analysis applied to a Gaussian mixture data model, the performance of MTL LS-SVM is shown to converge, as $n,p\to\infty$, to a deterministic limit involving simple (small-dimensional) statistics of the data. We prove (i) that the standard MTL LS-SVM algorithm is in general strongly biased and may dramatically fail, to the point that individual single-task LS-SVMs may outperform the MTL approach even for quite similar tasks; our analysis provides a simple method to correct these biases. We further reveal (ii) the sufficient statistics at play in the method, which can be efficiently estimated even from quite small datasets. The latter result is exploited to automatically optimize the hyperparameters without resorting to any cross-validation procedure. Experiments on popular datasets demonstrate that our improved MTL LS-SVM method is computationally efficient and outperforms state-of-the-art multi-task and transfer learning techniques that are sometimes much more elaborate.
https://openreview.net/pdf/4484f1a7adf7cb2152f913693079ef4764c69462.pdf
Learning-based Support Estimation in Sublinear Time
https://openreview.net/forum?id=tilovEHA3YS
https://openreview.net/forum?id=tilovEHA3YS
Talya Eden,Piotr Indyk,Shyam Narayanan,Ronitt Rubinfeld,Sandeep Silwal,Tal Wagner
ICLR 2021,Spotlight
We consider the problem of estimating the number of distinct elements in a large data set (or, equivalently, the support size of the distribution induced by the data set) from a random sample of its elements. The problem occurs in many applications, including biology, genomics, computer systems and linguistics. A line of research spanning the last decade resulted in algorithms that estimate the support up to $ \pm \varepsilon n$ from a sample of size $O(\log^2(1/\varepsilon) \cdot n/\log n)$, where $n$ is the data set size. Unfortunately, this bound is known to be tight, limiting further improvements to the complexity of this problem. In this paper we consider estimation algorithms augmented with a machine-learning-based predictor that, given any element, returns an estimation of its frequency. We show that if the predictor is correct up to a constant approximation factor, then the sample complexity can be reduced significantly, to $$\log(1/\varepsilon) \cdot n^{1-\Theta(1/\log(1/\varepsilon))}.$$ We evaluate the proposed algorithms on a collection of data sets, using the neural-network based estimators from (Hsu et al., ICLR'19) as predictors. Our experiments demonstrate substantial (up to 3x) improvements in the estimation accuracy compared to the state-of-the-art algorithm.
https://openreview.net/pdf/5febcda9d574f8ade6a7ee98fde88e0c8e140481.pdf
Unlearnable Examples: Making Personal Data Unexploitable
https://openreview.net/forum?id=iAmZUo0DxC0
https://openreview.net/forum?id=iAmZUo0DxC0
Hanxun Huang,Xingjun Ma,Sarah Monazam Erfani,James Bailey,Yisen Wang
ICLR 2021,Spotlight
The volume of "free" data on the internet has been key to the current success of deep learning. However, it also raises privacy concerns about the unauthorized exploitation of personal data for training commercial models. It is thus crucial to develop methods to prevent unauthorized data exploitation. This paper raises the question: can data be made unlearnable for deep learning models? We present a type of error-minimizing noise that can indeed make training examples unlearnable. Error-minimizing noise is intentionally generated to reduce the error on one or more training examples to close to zero, which can trick the model into believing there is "nothing" to learn from these examples. The noise is restricted to be imperceptible to human eyes, and thus does not affect normal data utility. We empirically verify the effectiveness of error-minimizing noise in both sample-wise and class-wise forms. We also demonstrate its flexibility under extensive experimental settings and its practicability in a case study of face recognition. Our work establishes an important first step towards making personal data unexploitable to deep learning models.
https://openreview.net/pdf/eb123b0f1c20d0c5d47b33fa7feca81748e02666.pdf
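The sample-wise error-minimizing noise described above is essentially projected gradient descent run with the opposite sign: descend the training loss within an imperceptible L-infinity ball. The step size, radius, and iteration count below are illustrative values, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Perturb inputs to *reduce* the model's loss (the opposite of an
    adversarial attack), so the examples appear to contain nothing to learn."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()              # descend, not ascend
            delta.clamp_(-eps, eps)                   # imperceptibility budget
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the image valid
    return delta.detach()
```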
How Benign is Benign Overfitting?
https://openreview.net/forum?id=g-wu9TMPODo
https://openreview.net/forum?id=g-wu9TMPODo
Amartya Sanyal,Puneet K. Dokania,Varun Kanade,Philip Torr
ICLR 2021,Spotlight
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models. When trained with SGD, deep neural networks essentially achieve zero training error, even in the presence of label noise, while also exhibiting good generalization on natural test data, something referred to as benign overfitting (Bartlett et al., 2020; Chatterji & Long, 2020). However, these models are vulnerable to adversarial attacks. We identify label noise as one of the causes for adversarial vulnerability, and provide theoretical and empirical evidence in support of this. Surprisingly, we find several instances of label noise in datasets such as MNIST and CIFAR, and observe that robustly trained models incur training error on some of these, i.e. they don’t fit the noise. However, removing noisy labels alone does not suffice to achieve adversarial robustness. We conjecture that sub-optimal representation learning is also in part responsible for adversarial vulnerability. By means of simple theoretical setups, we show how the choice of representation can drastically affect adversarial robustness.
https://openreview.net/pdf/b7f336ebe5df354fdcbb88c27b978a6581289ca8.pdf
Autoregressive Entity Retrieval
https://openreview.net/forum?id=5k8F6UU39V
https://openreview.net/forum?id=5k8F6UU39V
Nicola De Cao,Gautier Izacard,Sebastian Riedel,Fabio Petroni
ICLR 2021,Spotlight
Entities are at the center of how we represent and aggregate knowledge. For instance, encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. One way to understand current approaches is as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta information such as their descriptions. This approach leads to several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions between the two; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token-by-token in an autoregressive fashion and conditioned on the context. This enables us to mitigate the aforementioned technical issues since: (i) the autoregressive formulation allows us to directly capture relations between context and entity name, effectively cross encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the exact softmax loss can be efficiently computed without the need to subsample negative data. We show the efficacy of the approach, experimenting with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their unambiguous name. Code and pre-trained models at https://github.com/facebookresearch/GENRE.
https://openreview.net/pdf/921ba67c80871fda61a4c0cf8f889b1c381a2a78.pdf
Neural Approximate Sufficient Statistics for Implicit Models
https://openreview.net/forum?id=SRDuJssQud
https://openreview.net/forum?id=SRDuJssQud
Yanzhi Chen,Dinghuai Zhang,Michael U. Gutmann,Aaron Courville,Zhanxing Zhu
ICLR 2021,Spotlight
We consider the fundamental problem of how to automatically construct summary statistics for implicit generative models where the evaluation of the likelihood function is intractable but sampling data from the model is possible. The idea is to frame the task of constructing sufficient statistics as learning mutual information maximizing representations of the data with the help of deep neural networks. The infomax learning procedure does not need to estimate any density or density ratio. We apply our approach to both traditional approximate Bayesian computation and recent neural likelihood methods, boosting their performance on a range of tasks.
https://openreview.net/pdf/dcb75e787b368e0cd6057205f63497c6fa17f9cb.pdf
Large Scale Image Completion via Co-Modulated Generative Adversarial Networks
https://openreview.net/forum?id=sSjqmfsk95O
https://openreview.net/forum?id=sSjqmfsk95O
Shengyu Zhao,Jonathan Cui,Yilun Sheng,Yue Dong,Xiao Liang,Eric I-Chao Chang,Yan Xu
ICLR 2021,Spotlight
Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. Also, due to the lack of good quantitative metrics for image completion, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images compared to real images via linear separability in a feature space. Experiments demonstrate superior performance in terms of both quality and diversity over state-of-the-art methods in free-form image completion and easy generalization to image-to-image translation. Code is available at https://github.com/zsyzzsoft/co-mod-gan.
https://openreview.net/pdf/9a3cfa3a1710ee23378772a3be3070ef32a29e17.pdf
DDPNOpt: Differential Dynamic Programming Neural Optimizer
https://openreview.net/forum?id=6s7ME_X5_Un
https://openreview.net/forum?id=6s7ME_X5_Un
Guan-Horng Liu,Tianrong Chen,Evangelos Theodorou
ICLR 2021,Spotlight
The interpretation of Deep Neural Network (DNN) training as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from the trajectory optimization perspective. We first show that most widely-used algorithms for training DNNs can be linked to Differential Dynamic Programming (DDP), a celebrated second-order method rooted in Approximate Dynamic Programming. In this vein, we propose a new class of optimizers, the DDP Neural Optimizer (DDPNOpt), for training feedforward and convolution networks. DDPNOpt features layer-wise feedback policies which improve convergence and reduce sensitivity to hyperparameters compared with existing methods. It outperforms other optimal-control-inspired training methods in both convergence and complexity, and is competitive against state-of-the-art first- and second-order methods. We also observe that DDPNOpt has a surprising benefit in preventing gradient vanishing. Our work opens up new avenues for principled algorithmic design built upon optimal control theory.
https://openreview.net/pdf/73e56442dda6b1e73bab62c8ca8c2dac7d319003.pdf
Geometry-Aware Gradient Algorithms for Neural Architecture Search
https://openreview.net/forum?id=MuSYkd1hxRP
https://openreview.net/forum?id=MuSYkd1hxRP
Liam Li,Mikhail Khodak,Nina Balcan,Ameet Talwalkar
ICLR 2021,Spotlight
Recent state-of-the-art methods for neural architecture search (NAS) exploit gradient-based optimization by relaxing the problem into continuous optimization over architectures and shared-weights, a noisy process that remains poorly understood. We argue for the study of single-level empirical risk minimization to understand NAS with weight-sharing, reducing the design of NAS methods to devising optimizers and regularizers that can quickly obtain high-quality solutions to this problem. Invoking the theory of mirror descent, we present a geometry-aware framework that exploits the underlying structure of this optimization to return sparse architectural parameters, leading to simple yet novel algorithms that enjoy fast convergence guarantees and achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision. Notably, we exceed the best published results for both CIFAR and ImageNet on both the DARTS search space and NAS-Bench-201; on the latter we achieve near-oracle-optimal performance on CIFAR-10 and CIFAR-100. Together, our theory and experiments demonstrate a principled way to co-design optimizers and continuous relaxations of discrete NAS search spaces.
https://openreview.net/pdf/110552d41d9f40c3d50988fde09b3b5038c2bebd.pdf
Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with $1/n$ Parameters
https://openreview.net/forum?id=rcQdycl0zyk
https://openreview.net/forum?id=rcQdycl0zyk
Aston Zhang,Yi Tay,SHUAI Zhang,Alvin Chan,Anh Tuan Luu,Siu Hui,Jie Fu
ICLR 2021,Spotlight
Recent works have demonstrated reasonable success of representation learning in hypercomplex space. Specifically, “fully-connected layers with quaternions” (quaternions are 4D hypercomplex numbers), which replace real-valued matrix multiplications in fully-connected layers with Hamilton products of quaternions, enjoy parameter savings with only 1/4 of the learnable parameters while achieving comparable performance in various applications. However, one key caveat is that hypercomplex space only exists at very few predefined dimensions (4D, 8D, and 16D). This restricts the flexibility of models that leverage hypercomplex multiplications. To this end, we propose parameterizing hypercomplex multiplications, allowing models to learn multiplication rules from data regardless of whether such rules are predefined. As a result, our method not only subsumes the Hamilton product, but also learns to operate on any arbitrary $n$D hypercomplex space, providing more architectural flexibility with arbitrarily $1/n$ learnable parameters compared with the fully-connected layer counterpart. Experiments applying the approach to LSTM and transformer models on natural language inference, machine translation, text style transfer, and subject-verb agreement demonstrate the architectural flexibility and effectiveness of the proposed method.
https://openreview.net/pdf/98639a764ded8e038fa188dc104694519947e67c.pdf
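The parameterized hypercomplex multiplication above amounts to building the weight matrix as a learned sum of $n$ Kronecker products, which yields roughly $1/n$ of the parameters of a dense layer. A minimal PyTorch rendering follows; the initialization scale and class name are arbitrary choices for this sketch.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """W = sum_i A_i (Kronecker product) S_i: the n-by-n 'rule' factors A
    are learned from data, subsuming the fixed Hamilton product when n = 4."""
    def __init__(self, n, in_features, out_features):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)
        self.S = nn.Parameter(torch.randn(n, out_features // n,
                                          in_features // n) * 0.1)

    def forward(self, x):
        # Sum of Kronecker products, assembled with einsum and reshaped
        # into a (out_features, in_features) weight matrix.
        W = torch.einsum('nij,nkl->ikjl', self.A, self.S)
        W = W.reshape(self.A.size(1) * self.S.size(1),
                      self.A.size(2) * self.S.size(2))
        return x @ W.t()
```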
Tent: Fully Test-Time Adaptation by Entropy Minimization
https://openreview.net/forum?id=uXl3bZLkr3c
https://openreview.net/forum?id=uXl3bZLkr3c
Dequan Wang,Evan Shelhamer,Shaoteng Liu,Bruno Olshausen,Trevor Darrell
ICLR 2021,Spotlight
A model must adapt itself to generalize to new and different data during testing. In this setting of fully test-time adaptation the model has only the test data and its own parameters. We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions. Our method estimates normalization statistics and optimizes channel-wise affine transformations to update online on each batch. Tent reduces generalization error for image classification on corrupted ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on ImageNet-C. Tent handles source-free domain adaptation on digit recognition from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to Cityscapes, and on the VisDA-C benchmark. These results are achieved in one epoch of test-time optimization without altering training.
https://openreview.net/pdf/4de0af9691a5dcc52de7de756676fded33d037ef.pdf
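The recipe in the abstract above is small enough to sketch directly: freeze everything except the normalization layers' affine parameters, estimate statistics from each test batch, and take gradient steps on prediction entropy. This mirrors the released code in spirit but is a simplified sketch.

```python
import torch
import torch.nn as nn

def configure_tent(model):
    """Collect only the batch-norm affine parameters for adaptation; use
    statistics computed from each test batch rather than stored ones."""
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                       # normalize with batch statistics
            m.track_running_stats = False
            m.running_mean = None
            m.running_var = None
            params += [m.weight, m.bias]
    return params

def tent_step(model, optimizer, x):
    """One adaptation step: minimize the mean prediction entropy of an
    unlabeled test batch, updating only the parameters collected above."""
    logits = model(x)
    entropy = -(logits.softmax(1) * logits.log_softmax(1)).sum(1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```

A typical usage would be `optimizer = torch.optim.Adam(configure_tent(model), lr=1e-3)` followed by `tent_step` on each incoming test batch.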
Neural Topic Model via Optimal Transport
https://openreview.net/forum?id=Oos98K9Lv-k
https://openreview.net/forum?id=Oos98K9Lv-k
He Zhao,Dinh Phung,Viet Huynh,Trung Le,Wray Buntine
ICLR 2021,Spotlight
Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have attracted increasing research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coherent/diverse topics at the same time. Moreover, their performance often degrades severely on short documents. The requirement of reparameterisation could also compromise their training quality and model flexibility. To address these shortcomings, we present a new neural topic model via the theory of optimal transport (OT). Specifically, we propose to learn the topic distribution of a document by directly minimising its OT distance to the document's word distributions. Importantly, the cost matrix of the OT distance models the weights between topics and words, which is constructed by the distances between topics and words in an embedding space. Our proposed model can be trained efficiently with a differentiable loss. Extensive experiments show that our framework significantly outperforms the state-of-the-art NTMs on discovering more coherent and diverse topics and deriving better document representations for both regular and short texts.
https://openreview.net/pdf/7be7e3b207a273ccbe61f42c2358cc4fb090748f.pdf
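The training signal described above, the OT distance between a document's topic distribution and its word distribution under an embedding-based cost matrix, can be approximated with a standard entropic-regularized Sinkhorn iteration. The sketch below is generic Sinkhorn rather than the authors' training code; `a`, `b`, and `M` are the per-document topic distribution, empirical word distribution, and topic-word cost matrix.

```python
import torch

def sinkhorn_ot(a, b, M, reg=0.1, iters=50):
    """Entropic-regularized OT distance between distributions a (num_topics,)
    and b (vocab,) under cost matrix M (num_topics, vocab); differentiable,
    so it can serve directly as a training loss."""
    K = torch.exp(-M / reg)
    u = torch.ones_like(a)
    for _ in range(iters):
        v = b / (K.t() @ u)      # scale column marginals toward b
        u = a / (K @ v)          # scale row marginals toward a
    P = u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan
    return (P * M).sum()
```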
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
https://openreview.net/forum?id=9xC2tWEwBD
https://openreview.net/forum?id=9xC2tWEwBD
Sanghyun Hong,Yigitcan Kaya,Ionuț-Vlad Modoranu,Tudor Dumitras
ICLR 2021,Spotlight
Recent increases in the computational demands of deep neural networks (DNNs), combined with the observation that most input samples require only simple models, have sparked interest in input-adaptive multi-exit architectures, such as MSDNets or Shallow-Deep Networks. These architectures enable faster inferences and could bring DNNs to low-power devices, e.g., in the Internet of Things (IoT). However, it is unknown if the computational savings provided by this approach are robust against adversarial pressure. In particular, an adversary may aim to slow down adaptive DNNs by increasing their average inference time—a threat analogous to denial-of-service attacks on the Internet. In this paper, we conduct a systematic evaluation of this threat by experimenting with three generic multi-exit DNNs (based on VGG16, MobileNet, and ResNet56) and a custom multi-exit architecture, on two popular image classification benchmarks (CIFAR-10 and Tiny ImageNet). To this end, we show that adversarial example-crafting techniques can be modified to cause slowdown, and we propose a metric for comparing their impact on different architectures. We show that a slowdown attack reduces the efficacy of multi-exit DNNs by 90–100%, and it amplifies the latency by 1.5–5× in a typical IoT deployment. We also show that it is possible to craft universal, reusable perturbations and that the attack can be effective in realistic black-box scenarios, where the attacker has limited knowledge about the victim. Finally, we show that adversarial training provides limited protection against slowdowns. These results suggest that further research is needed for defending multi-exit architectures against this emerging threat. Our code is available at https://github.com/sanghyun-hong/deepsloth.
https://openreview.net/pdf/c7b1e1ec7f160d09cea4ae461b498ee701297eb3.pdf
Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?
https://openreview.net/forum?id=Ut1vF_q_vC
https://openreview.net/forum?id=Ut1vF_q_vC
Zhen Qin,Le Yan,Honglei Zhuang,Yi Tay,Rama Kumar Pasumarthi,Xuanhui Wang,Michael Bendersky,Marc Najork
ICLR 2021,Spotlight
Despite the success of neural models on many major machine learning problems, their effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely acknowledged. We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available Gradient Boosted Decision Trees (GBDT) in terms of their reported ranking accuracy on benchmark datasets. This has unfortunately been overlooked in recent neural LTR papers. We then investigate why existing neural LTR models under-perform and identify several of their weaknesses. Furthermore, we propose a unified framework comprising counter-strategies to ameliorate the existing weaknesses of neural models. Our models are the first to perform on par with the best tree-based baseline, while outperforming recently published neural LTR models by a large margin. Our results can also serve as a benchmark to facilitate future improvement of neural LTR models.
https://openreview.net/pdf/ad3fca583fdc23233f81a4e1b068afdb9ccb877f.pdf
Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning
https://openreview.net/forum?id=qda7-sVg84
https://openreview.net/forum?id=qda7-sVg84
Rishabh Agarwal,Marlos C. Machado,Pablo Samuel Castro,Marc G Bellemare
ICLR 2021,Spotlight
Reinforcement learning methods trained on few environments rarely learn policies that generalize to unseen environments. To improve generalization, we incorporate the inherent sequential structure in reinforcement learning into the representation learning process. This approach is orthogonal to recent approaches, which rarely exploit this structure explicitly. Specifically, we introduce a theoretically motivated policy similarity metric (PSM) for measuring behavioral similarity between states. PSM assigns high similarity to states for which the optimal policies in those states as well as in future states are similar. We also present a contrastive representation learning procedure to embed any state similarity metric, which we instantiate with PSM to obtain policy similarity embeddings (PSEs). We demonstrate that PSEs improve generalization on diverse benchmarks, including LQR with spurious correlations, a jumping task from pixels, and Distracting DM Control Suite.
https://openreview.net/pdf/18d8a7a260105accf754ef2ec331bcf48e817b1a.pdf
Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images
https://openreview.net/forum?id=RLRXCV6DbEJ
https://openreview.net/forum?id=RLRXCV6DbEJ
Rewon Child
ICLR 2021,Spotlight
We present a hierarchical VAE that, for the first time, generates samples quickly $\textit{and}$ outperforms the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test if insufficient depth explains why by scaling a VAE to greater stochastic depth than previously explored and evaluating it on CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at https://github.com/openai/vdvae.
https://openreview.net/pdf/e63933fc98cb52a55d96ffe8bb28d87410c6438e.pdf
Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors
https://openreview.net/forum?id=9EsrXMzlFQY
https://openreview.net/forum?id=9EsrXMzlFQY
Yu Sun,Jiaming Liu,Yiran Sun,Brendt Wohlberg,Ulugbek Kamilov
ICLR 2021,Spotlight
Regularization by denoising (RED) is a recently developed framework for solving inverse problems by integrating advanced denoisers as image priors. Recent work has shown its state-of-the-art performance when combined with pre-trained deep denoisers. However, current RED algorithms are inadequate for parallel processing on multicore systems. We address this issue by proposing a new asynchronous RED (Async-RED) algorithm that enables asynchronous parallel processing of data, making it significantly faster than its serial counterparts for large-scale inverse problems. The computational complexity of Async-RED is further reduced by using a random subset of measurements at every iteration. We present a complete theoretical analysis of the algorithm by establishing its convergence under explicit assumptions on the data-fidelity and the denoiser. We validate Async-RED on image recovery using pre-trained deep denoisers as priors.
https://openreview.net/pdf/42abafb63caa1b6ddc6bda1b8e8337b1c2a9db91.pdf
A Good Image Generator Is What You Need for High-Resolution Video Synthesis
https://openreview.net/forum?id=6puCSjH3hwA
https://openreview.net/forum?id=6puCSjH3hwA
Yu Tian,Jian Ren,Menglei Chai,Kyle Olszewski,Xi Peng,Dimitris N. Metaxas,Sergey Tulyakov
ICLR 2021,Spotlight
Image and video synthesis are closely related areas aiming at generating content from noise. While rapid progress has been demonstrated in improving image-based models to handle large resolutions, high-quality renderings, and wide variations in image content, achieving comparable video generation results remains problematic. We present a framework that leverages contemporary image generators to render high-resolution videos. We frame the video synthesis problem as discovering a trajectory in the latent space of a pre-trained and fixed image generator. Not only does such a framework render high-resolution videos, but it also is an order of magnitude more computationally efficient. We introduce a motion generator that discovers the desired trajectory, in which content and motion are disentangled. With such a representation, our framework allows for a broad range of applications, including content and motion manipulation. Furthermore, we introduce a new task, which we call cross-domain video synthesis, in which the image and motion generators are trained on disjoint datasets belonging to different domains. This allows for generating moving objects for which the desired video data is not available. Extensive experiments on various datasets demonstrate the advantages of our methods over existing video generation techniques. Code will be released at https://github.com/snap-research/MoCoGAN-HD.
https://openreview.net/pdf/08bf1c319723defae9a4e04ca258811da08d2ed3.pdf
Undistillable: Making A Nasty Teacher That CANNOT teach students
https://openreview.net/forum?id=0zvfm-nZqQs
https://openreview.net/forum?id=0zvfm-nZqQs
Haoyu Ma,Tianlong Chen,Ting-Kuei Hu,Chenyu You,Xiaohui Xie,Zhangyang Wang
ICLR 2021,Spotlight
Knowledge Distillation (KD) is a widely used technique to transfer knowledge from pre-trained teacher models to (usually more lightweight) student models. However, in certain situations, this technique is more of a curse than a blessing. For instance, KD poses a potential risk of exposing intellectual properties (IPs): even if a trained machine learning model is released in ``black boxes'' (e.g., as executable software or APIs without open-sourcing code), it can still be replicated by KD through imitating input-output behaviors. To prevent this unwanted effect of KD, this paper introduces and investigates a concept called $\textit{Nasty Teacher}$: a specially trained teacher network that yields nearly the same performance as a normal one, but would significantly degrade the performance of student models learned by imitating it. We propose a simple yet effective algorithm to build the nasty teacher, called $\textit{self-undermining knowledge distillation}$. Specifically, we aim to maximize the difference between the output of the nasty teacher and a normal pre-trained network. Extensive experiments on several datasets demonstrate that our method is effective on both standard KD and data-free KD, providing the desirable KD-immunity to model owners for the first time. We hope our preliminary study can draw more awareness and interest in this new practical problem of both social and legal importance. Our codes and pre-trained models can be found at: $\url{https://github.com/VITA-Group/Nasty-Teacher}$.
https://openreview.net/pdf/42f6ff4cc0e85c1f3a226c56205d2f78953cdc7c.pdf
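Self-undermining knowledge distillation, as described above, reduces to a two-term objective: stay accurate while maximizing divergence from a normal pre-trained network's softened outputs. The weighting `alpha` and temperature `tau` below are illustrative values, not the paper's exact settings.

```python
import torch.nn.functional as F

def self_undermining_loss(nasty_logits, normal_logits, labels,
                          alpha=0.04, tau=4.0):
    """Train the 'nasty' teacher to stay accurate (cross-entropy term)
    while pushing its softened outputs *away* from a normal pre-trained
    network's (negative KL term)."""
    ce = F.cross_entropy(nasty_logits, labels)
    kl = F.kl_div(F.log_softmax(nasty_logits / tau, dim=1),
                  F.softmax(normal_logits / tau, dim=1),
                  reduction='batchmean') * tau * tau
    return ce - alpha * kl   # maximize divergence from the normal network
```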
Support-set bottlenecks for video-text representation learning
https://openreview.net/forum?id=EqoXe2zmhrh
https://openreview.net/forum?id=EqoXe2zmhrh
Mandela Patrick,Po-Yao Huang,Yuki Asano,Florian Metze,Alexander G Hauptmann,Joao F. Henriques,Andrea Vedaldi
ICLR 2021,Spotlight
The dominant paradigm for learning video-text representations – noise contrastive learning – increases the similarity of the representations of pairs of samples that are known to be related, such as text and video from the same sample, and pushes away the representations of all other pairs. We posit that this last behaviour is too strict, enforcing dissimilar representations even for samples that are semantically-related – for example, visually similar videos or ones that share the same depicted action. In this paper, we propose a novel method that alleviates this by leveraging a generative model to naturally push these related samples together: each sample’s caption must be reconstructed as a weighted combination of a support set of visual representations. This simple idea ensures that representations are not overly-specialized to individual samples, are reusable across the dataset, and results in representations that explicitly encode semantics shared between samples, unlike noise contrastive learning. Our proposed method outperforms others by a large margin on MSR-VTT, VATEX, ActivityNet, and MSVD for video-to-text and text-to-video retrieval.
https://openreview.net/pdf/a650da3e5bc4bc919f69887e2a9264dc61a58c94.pdf
Grounded Language Learning Fast and Slow
https://openreview.net/forum?id=wpSWuz_hyqA
https://openreview.net/forum?id=wpSWuz_hyqA
Felix Hill,Olivier Tieleman,Tamara von Glehn,Nathaniel Wong,Hamza Merzic,Stephen Clark
ICLR 2021,Spotlight
Recent work has shown that large text-based neural language models acquire a surprising propensity for one-shot learning. Here, we show that an agent situated in a simulated 3D world, and endowed with a novel dual-coding external memory, can exhibit similar one-shot word learning when trained with conventional RL algorithms. After a single introduction to a novel object via visual perception and language ("This is a dax"), the agent can manipulate the object as instructed ("Put the dax on the bed"), combining short-term, within-episode knowledge of the nonsense word with long-term lexical and motor knowledge. We find that, under certain training conditions and with a particular memory writing mechanism, the agent's one-shot word-object binding generalizes to novel exemplars within the same ShapeNet category, and is effective in settings with unfamiliar numbers of objects. We further show how dual-coding memory can be exploited as a signal for intrinsic motivation, stimulating the agent to seek names for objects that may be useful later. Together, the results demonstrate that deep neural networks can exploit meta-learning, episodic memory and an explicitly multi-modal environment to account for 'fast-mapping', a fundamental pillar of human cognitive development and a potentially transformative capacity for artificial agents.
https://openreview.net/pdf/e357c41d68e8a24bfdaba368a3b2baa867fa25e2.pdf
GAN "Steerability" without optimization
https://openreview.net/forum?id=zDy_nQCXiIj
https://openreview.net/forum?id=zDy_nQCXiIj
Nurit Spingarn,Ron Banner,Tomer Michaeli
ICLR 2021,Spotlight
Recent research has shown remarkable success in revealing "steering" directions in the latent spaces of pre-trained GANs. These directions correspond to semantically meaningful image transformations (e.g., shift, zoom, color manipulations), and have the same interpretable effect across all categories that the GAN can generate. Some methods focus on user-specified transformations, while others discover transformations in an unsupervised manner. However, all existing techniques rely on an optimization procedure to expose those directions, and offer no control over the degree of allowed interaction between different transformations. In this paper, we show that "steering" trajectories can be computed in closed form directly from the generator's weights without any form of training or optimization. This applies to user-prescribed geometric transformations, as well as to unsupervised discovery of more complex effects. Our approach allows determining both linear and nonlinear trajectories, and has many advantages over previous methods. In particular, we can control whether one transformation is allowed to come at the expense of another (e.g., zoom-in with or without allowing translation to keep the object centered). Moreover, we can determine the natural end-point of the trajectory, which corresponds to the largest extent to which a transformation can be applied without incurring degradation. Finally, we show how transferring attributes between images can be achieved without optimization, even across different categories.
https://openreview.net/pdf/78417c13154fe1e724c34ef2fcfef9f5a84707a0.pdf
Noise against noise: stochastic label noise helps combat inherent label noise
https://openreview.net/forum?id=80FMcTSZ6J0
https://openreview.net/forum?id=80FMcTSZ6J0
Pengfei Chen,Guangyong Chen,Junjie Ye,jingwei zhao,Pheng-Ann Heng
ICLR 2021,Spotlight
The noise in stochastic gradient descent (SGD) provides a crucial implicit regularization effect, previously studied in optimization by analyzing the dynamics of parameter updates. In this paper, we are interested in learning with noisy labels, where we have a collection of samples with potential mislabeling. We show that a previously rarely discussed SGD noise, induced by stochastic label noise (SLN), mitigates the effects of inherent label noise. In contrast, the common SGD noise directly applied to model parameters does not. We formalize the differences and connections of SGD noise variants, showing that SLN induces SGD noise dependent on the sharpness of output landscape and the confidence of output probability, which may help escape from sharp minima and prevent overconfidence. SLN not only improves generalization in its simplest form but also boosts popular robust training methods, including sample selection and label correction. Specifically, we present an enhanced algorithm by applying SLN to label correction. Our code is released.
https://openreview.net/pdf/cb07afb92c9402f5b191a438058b6a911ae61ba1.pdf
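The simplest form of SLN lends itself to a short sketch: at every SGD step, perturb the one-hot targets with fresh zero-mean Gaussian noise and train with a soft cross-entropy. A minimal PyTorch sketch, with sigma and the surrounding training-step names as assumptions:

```python
import torch
import torch.nn.functional as F

def sln_loss(logits, targets, num_classes, sigma=0.5):
    one_hot = F.one_hot(targets, num_classes).float()
    noisy = one_hot + sigma * torch.randn_like(one_hot)  # fresh noise every step
    log_probs = F.log_softmax(logits, dim=1)
    return -(noisy * log_probs).sum(dim=1).mean()        # soft cross-entropy

# usage inside a training step (model/optimizer are placeholders):
# loss = sln_loss(model(x), y, num_classes=10)
# loss.backward(); optimizer.step()
```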
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
https://openreview.net/forum?id=5m3SEczOV8L
https://openreview.net/forum?id=5m3SEczOV8L
Zhisheng Xiao,Karsten Kreis,Jan Kautz,Arash Vahdat
ICLR 2021,Spotlight
Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE's latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256$\times$256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.
https://openreview.net/pdf/e45436941ac36b0258992469d1932909c6cbed5e.pdf
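A hedged sketch of the reparameterized-MCMC idea above: run Langevin dynamics in the VAE's latent space on the energy of the decoded sample plus the standard-normal latent prior. `decoder` and `energy` are placeholder modules; step sizes and chain lengths are illustrative.

```python
import torch

def latent_langevin(z, decoder, energy, steps=20, step_size=1e-2):
    # negative log density (up to a constant): E(decode(z)) - log N(z; 0, I)
    z = z.clone().requires_grad_(True)
    for _ in range(steps):
        neg_log_p = energy(decoder(z)) + 0.5 * (z ** 2).sum(dim=1)
        grad = torch.autograd.grad(neg_log_p.sum(), z)[0]
        z = (z - step_size * grad
             + torch.sqrt(torch.tensor(2 * step_size)) * torch.randn_like(z))
        z = z.detach().requires_grad_(True)
    return decoder(z).detach()                 # refined image samples
```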
Graph-Based Continual Learning
https://openreview.net/forum?id=HHSEKOnPvaO
https://openreview.net/forum?id=HHSEKOnPvaO
Binh Tang,David S. Matteson
ICLR 2021,Spotlight
Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting. Empirical results on several benchmark datasets show that our model consistently outperforms recently proposed baselines for task-free continual learning.
https://openreview.net/pdf/39a91d5348ba8489817ac3ee4a93637e12b23c4b.pdf
Sparse Quantized Spectral Clustering
https://openreview.net/forum?id=pBqLS-7KYAF
https://openreview.net/forum?id=pBqLS-7KYAF
Zhenyu Liao,Romain Couillet,Michael W. Mahoney
ICLR 2021,Spotlight
Given a large data matrix, sparsifying, quantizing, and/or performing other entry-wise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models. Here, we exploit tools from random matrix theory to make precise statements about how the eigenspectrum of a matrix changes under such nonlinear transformations. In particular, we show that very little change occurs in the informative eigenstructure, even under drastic sparsification/quantization, and consequently that very little downstream performance loss occurs when working with very aggressively sparsified or quantized spectral clustering problems. We illustrate how these results depend on the nonlinearity, we characterize a phase transition beyond which spectral clustering becomes possible, and we show when such nonlinear transformations can introduce spurious non-informative eigenvectors.
https://openreview.net/pdf/26efa7df2ba6ab8a833b21a2c4c741e420ba7584.pdf
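A minimal sketch of the setting studied above: build a similarity matrix, apply a drastic entry-wise nonlinearity (here, 1-bit sign quantization of the centered entries), and cluster the top eigenvectors; per the result above, the informative eigenstructure typically survives. Kernel width and thresholds are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])

D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-D2 / 5.0)                  # Gaussian kernel similarity matrix
Kq = np.sign(K - K.mean())             # aggressive 1-bit quantization

vals, vecs = np.linalg.eigh(Kq)
top = vecs[:, -2:]                     # top informative eigenvectors
labels = KMeans(n_clusters=2, n_init=10).fit_predict(top)
```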
LambdaNetworks: Modeling long-range Interactions without Attention
https://openreview.net/forum?id=xTJEN-ggl1b
https://openreview.net/forum?id=xTJEN-ggl1b
Irwan Bello
ICLR 2021,Spotlight
We present lambda layers -- an alternative framework to self-attention -- for capturing long-range interactions between an input and structured contextual information (e.g. a pixel surrounded by other pixels). Lambda layers capture such interactions by transforming available contexts into linear functions, termed lambdas, and applying these linear functions to each input separately. Similar to linear attention, lambda layers bypass expensive attention maps, but in contrast, they model both content and position-based interactions which enables their application to large structured inputs such as images. The resulting neural network architectures, LambdaNetworks, significantly outperform their convolutional and attentional counterparts on ImageNet classification, COCO object detection and instance segmentation, while being more computationally efficient. Additionally, we design LambdaResNets, a family of hybrid architectures across different scales, that considerably improves the speed-accuracy tradeoff of image classification models. LambdaResNets reach excellent accuracies on ImageNet while being 3.2 - 4.4x faster than the popular EfficientNets on modern machine learning accelerators. In large-scale semi-supervised training with an additional 130M pseudo-labeled images, LambdaResNets achieve up to 86.7% ImageNet accuracy while being 9.5x faster than EfficientNet NoisyStudent and 9x faster than a Vision Transformer with comparable accuracies.
https://openreview.net/pdf/811ba70b99e04f0d84a07c0c93d21c805e4466ff.pdf
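The content half of a lambda layer fits in a few lines: normalized keys and values from the context are summarized into a single linear function (the lambda), which is then applied to each query independently, with no attention map. A minimal PyTorch sketch with position lambdas omitted for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentLambdaLayer(nn.Module):
    def __init__(self, dim, dim_k=16, dim_v=None):
        super().__init__()
        dim_v = dim_v or dim
        self.to_q = nn.Linear(dim, dim_k, bias=False)
        self.to_k = nn.Linear(dim, dim_k, bias=False)
        self.to_v = nn.Linear(dim, dim_v, bias=False)

    def forward(self, x, context=None):
        c = x if context is None else context         # x: (b, n, dim)
        q = self.to_q(x)                              # (b, n, k)
        k = F.softmax(self.to_k(c), dim=1)            # normalize over positions
        v = self.to_v(c)                              # (b, m, v)
        lam = torch.einsum('bmk,bmv->bkv', k, v)      # content lambda: (b, k, v)
        return torch.einsum('bnk,bkv->bnv', q, lam)   # apply to each query
```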
Contrastive Divergence Learning is a Time Reversal Adversarial Game
https://openreview.net/forum?id=MLSvqIHRidA
https://openreview.net/forum?id=MLSvqIHRidA
Omer Yair,Tomer Michaeli
ICLR 2021,Spotlight
Contrastive divergence (CD) learning is a classical method for fitting unnormalized statistical models to data samples. Despite its widespread use, the convergence properties of this algorithm are still not well understood. The main source of difficulty is an unjustified approximation which has been used to derive the gradient of the loss. In this paper, we present an alternative derivation of CD that does not require any approximation and sheds new light on the objective that is actually being optimized by the algorithm. Specifically, we show that CD is an adversarial learning procedure, where a discriminator attempts to classify whether a Markov chain generated from the model has been time-reversed. Thus, although predating generative adversarial networks (GANs) by more than a decade, CD is, in fact, closely related to these techniques. Our derivation settles well with previous observations, which have concluded that CD's update steps cannot be expressed as the gradients of any fixed objective function. In addition, as a byproduct, our derivation reveals a simple correction that can be used as an alternative to Metropolis-Hastings rejection, which is required when the underlying Markov chain is inexact (e.g., when using Langevin dynamics with a large step).
https://openreview.net/pdf/03d95d33dbce2d626edf50ba2d01876c374f7049.pdf
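For reference, a minimal sketch of the classical CD update that the paper reinterprets as a time-reversal adversarial game: contrast the energy on data with the energy on negatives produced by a short (possibly inexact) chain such as Langevin dynamics. `energy` is a placeholder scalar-output network; step sizes are illustrative.

```python
import torch

def cd_gradient_step(energy, x_data, optimizer, k=10, step_size=1e-2):
    x_neg = x_data.clone().detach().requires_grad_(True)
    for _ in range(k):                          # short chain started at the data
        g = torch.autograd.grad(energy(x_neg).sum(), x_neg)[0]
        x_neg = (x_neg - step_size * g
                 + torch.sqrt(torch.tensor(2 * step_size)) * torch.randn_like(x_neg))
        x_neg = x_neg.detach().requires_grad_(True)
    optimizer.zero_grad()
    # lower the energy of data, raise the energy of the chain's negatives
    loss = energy(x_data).mean() - energy(x_neg.detach()).mean()
    loss.backward()
    optimizer.step()
```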
Quantifying Differences in Reward Functions
https://openreview.net/forum?id=LwEQnp6CYev
https://openreview.net/forum?id=LwEQnp6CYev
Adam Gleave,Michael D Dennis,Shane Legg,Stuart Russell,Jan Leike
ICLR 2021,Spotlight
For many tasks, the reward function is inaccessible to introspection or too complex to be specified procedurally, and must instead be learned from user data. Prior work has evaluated learned reward functions by evaluating policies optimized for the learned reward. However, this method cannot distinguish between the learned reward function failing to reflect user preferences and the policy optimization process failing to optimize the learned reward. Moreover, this method can only tell us about behavior in the evaluation environment, but the reward may incentivize very different behavior in even a slightly different deployment environment. To address these problems, we introduce the Equivalent-Policy Invariant Comparison (EPIC) distance to quantify the difference between two reward functions directly, without a policy optimization step. We prove EPIC is invariant on an equivalence class of reward functions that always induce the same optimal policy. Furthermore, we find EPIC can be efficiently approximated and is more robust than baselines to the choice of coverage distribution. Finally, we show that EPIC distance bounds the regret of optimal policies even under different transition dynamics, and we confirm empirically that it predicts policy training success. Our source code is available at https://github.com/HumanCompatibleAI/evaluating-rewards.
https://openreview.net/pdf/c9babbffccc1b8e389a2e8de1c7aac4cee00f966.pdf
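A hedged NumPy sketch of EPIC as described above: canonicalize each reward with Monte Carlo estimates over coverage distributions, then take the Pearson distance between the canonicalized rewards. The scalar state space, coverage distributions, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
Ds = lambda n: rng.normal(size=n)       # coverage distribution over states
Da = lambda n: rng.normal(size=n)       # coverage distribution over actions

def canonicalize(R, s, a, s2, gamma=0.99, n=256):
    S, A, S2 = Ds(n), Da(n), Ds(n)
    avg_from = lambda states: np.array(
        [R(np.full(n, x), A, S2).mean() for x in states])
    const = R(S, A, S2).mean()
    return R(s, a, s2) + gamma * avg_from(s2) - avg_from(s) - gamma * const

def epic_distance(R1, R2, n=256):
    s, a, s2 = Ds(n), Da(n), Ds(n)      # samples at which to compare rewards
    c1, c2 = canonicalize(R1, s, a, s2), canonicalize(R2, s, a, s2)
    return np.sqrt((1 - np.corrcoef(c1, c2)[0, 1]) / 2)

# rewards differing only by potential shaping should be ~0 apart:
R1 = lambda s, a, s2: -s2 ** 2
R2 = lambda s, a, s2: -s2 ** 2 + 0.99 * np.cos(s2) - np.cos(s)
print(epic_distance(R1, R2))            # ~0 up to Monte Carlo error
```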
Long-tail learning via logit adjustment
https://openreview.net/forum?id=37nvvqkCo5
https://openreview.net/forum?id=37nvvqkCo5
Aditya Krishna Menon,Sadeep Jayasumana,Ankit Singh Rawat,Himanshu Jain,Andreas Veit,Sanjiv Kumar
ICLR 2021,Spotlight
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels have only a few associated samples. This poses a challenge for generalisation on such labels, and also makes naive learning biased towards dominant labels. In this paper, we present a statistical framework that unifies and generalises several recent proposals to cope with these challenges. Our framework revisits the classic idea of logit adjustment based on the label frequencies, which encourages a large relative margin between logits of rare positive versus dominant negative labels. This yields two techniques for long-tail learning, where such adjustment is either applied post-hoc to a trained model, or enforced in the loss during training. These techniques are statistically grounded, and practically effective on four real-world datasets with long-tailed label distributions.
https://openreview.net/pdf/7b399c4dfe989810af6c9881d1716bd2ae07b903.pdf
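Both logit-adjustment variants described above reduce to one-liners, assuming `priors` holds the empirical class frequencies and tau is a scaling hyperparameter:

```python
import torch
import torch.nn.functional as F

def post_hoc_adjusted_predict(logits, priors, tau=1.0):
    # apply the adjustment after training, at prediction time
    return (logits - tau * torch.log(priors)).argmax(dim=1)

def logit_adjusted_loss(logits, targets, priors, tau=1.0):
    # enforce the adjustment in the training loss instead
    return F.cross_entropy(logits + tau * torch.log(priors), targets)
```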
Locally Free Weight Sharing for Network Width Search
https://openreview.net/forum?id=S0UdquAnr9k
https://openreview.net/forum?id=S0UdquAnr9k
Xiu Su,Shan You,Tao Huang,Fei Wang,Chen Qian,Changshui Zhang,Chang Xu
ICLR 2021,Spotlight
Searching for network width is an effective way to slim deep neural networks with hardware budgets. With this aim, a one-shot supernet is usually leveraged as a performance evaluator to rank the performance with respect to different widths. Nevertheless, current methods mainly follow a manually fixed weight sharing pattern, which limits their ability to distinguish the performance gap between different widths. In this paper, to better evaluate each width, we propose a locally free weight sharing strategy (CafeNet). In CafeNet, weights are more freely shared, and each width is jointly indicated by its base channels and free channels, where free channels may be located freely within a local zone to better represent each width. Besides, we propose to further reduce the search space by leveraging our introduced FLOPs-sensitive bins. As a result, our CafeNet can be trained stochastically and optimized with a min-min strategy. Extensive experiments on the ImageNet, CIFAR-10, CelebA and MS COCO datasets verify our superiority compared to other state-of-the-art baselines. For example, our method can further boost the benchmark NAS network EfficientNet-B0 by 0.41% by searching its width more delicately.
https://openreview.net/pdf/72deb2e0a363d37cf758dfbccea8fada27ebf7a8.pdf
Mutual Information State Intrinsic Control
https://openreview.net/forum?id=OthEq8I5v1
https://openreview.net/forum?id=OthEq8I5v1
Rui Zhao,Yang Gao,Pieter Abbeel,Volker Tresp,Wei Xu
ICLR 2021,Spotlight
Reinforcement learning has been shown to be highly successful at many challenging tasks. However, success heavily relies on well-shaped rewards. Intrinsically motivated RL attempts to remove this constraint by defining an intrinsic reward function. Motivated by the self-consciousness concept in psychology, we make a natural assumption that the agent knows what constitutes itself, and propose a new intrinsic objective that encourages the agent to have maximum control over the environment. We mathematically formalize this reward as the mutual information between the agent state and the surrounding state under the current agent policy. With this new intrinsic motivation, we are able to outperform previous methods, including being able to complete the pick-and-place task for the first time without using any task reward. A video showing experimental results is available at https://youtu.be/AUCwc9RThpk.
https://openreview.net/pdf/6dac086e50a2341e09a0b7d6c417b5cdfd9ed47a.pdf
Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
https://openreview.net/forum?id=WiGQBFuVRv
https://openreview.net/forum?id=WiGQBFuVRv
Kashif Rasul,Abdul-Saboor Sheikh,Ingmar Schuster,Urs M Bergmann,Roland Vollgraf
ICLR 2021,Spotlight
Time series forecasting is often fundamental to scientific and engineering problems and enables decision making. With ever-increasing data set sizes, a trivial solution to scale up predictions is to assume independence between interacting time series. However, modeling statistical dependencies can improve accuracy and enable analysis of interaction effects. Deep learning methods are well suited for this problem, but multivariate models often assume a simple parametric distribution and do not scale to high dimensions. In this work we model the multivariate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow. This combination retains the power of autoregressive models, such as good performance in extrapolation into the future, with the flexibility of flows as a general purpose high-dimensional distribution model, while remaining computationally tractable. We show that it improves over the state-of-the-art for standard metrics on many real-world data sets with several thousand interacting time series.
https://openreview.net/pdf/d83950d8eebdd224b7c8b0eb72ca044ccead7fb6.pdf
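A hedged sketch of the overall recipe: an RNN summarizes the past into a condition, and a conditional flow models the next observation's density by change of variables. A single affine layer stands in for the paper's much richer flow; all sizes and names are illustrative.

```python
import math
import torch
import torch.nn as nn

class ConditionalAffine(nn.Module):
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(nn.Linear(cond_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 2 * dim))

    def log_prob(self, x, h):
        log_s, t = self.net(h).chunk(2, dim=-1)
        z = (x - t) * torch.exp(-log_s)                  # invertible affine map
        base = -0.5 * (z ** 2).sum(-1) - 0.5 * self.dim * math.log(2 * math.pi)
        return base - log_s.sum(-1)                      # + log|det dz/dx|

dim, hidden = 4, 32
rnn = nn.GRU(input_size=dim, hidden_size=hidden, batch_first=True)
flow = ConditionalAffine(dim, hidden)
x = torch.randn(8, 20, dim)                              # (batch, time, series)
h, _ = rnn(x[:, :-1])                                    # summarize the past
loss = -flow.log_prob(x[:, 1:], h).mean()                # conditional likelihood
```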
Information Laundering for Model Privacy
https://openreview.net/forum?id=dyaIRud1zXg
https://openreview.net/forum?id=dyaIRud1zXg
Xinran Wang,Yu Xiang,Jun Gao,Jie Ding
ICLR 2021,Spotlight
In this work, we propose information laundering, a novel framework for enhancing model privacy. Unlike data privacy that concerns the protection of raw data information, model privacy aims to protect an already-learned model that is to be deployed for public use. The private model can be obtained from general learning methods, and its deployment means that it will return a deterministic or random response for a given input query. An information-laundered model consists of probabilistic components that deliberately maneuver the intended input and output for queries of the model, so the model's adversarial acquisition is less likely. Under the proposed framework, we develop an information-theoretic principle to quantify the fundamental tradeoffs between model utility and privacy leakage and derive the optimal design.
https://openreview.net/pdf/1ad035bf98810a860bec5ef38d3032842170c0e5.pdf
UPDeT: Universal Multi-agent RL via Policy Decoupling with Transformers
https://openreview.net/forum?id=v9c7hr9ADKx
https://openreview.net/forum?id=v9c7hr9ADKx
Siyi Hu,Fengda Zhu,Xiaojun Chang,Xiaodan Liang
ICLR 2021,Spotlight
Recent advances in multi-agent reinforcement learning have largely been limited to training one model from scratch for every new task. This limitation stems from the restricted model architecture tied to fixed input and output dimensions, which hinders the accumulation and transfer of learned experience across tasks with diverse levels of difficulty (e.g. 3 vs 3 or 5 vs 6 multi-agent games). In this paper, we make the first attempt to explore a universal multi-agent reinforcement learning pipeline, designing one single architecture to fit tasks with different observation and action configurations. Unlike previous RNN-based models, we utilize a transformer-based model to generate a flexible policy by decoupling the policy distribution from the intertwined input observation, with an importance weight measured by the merits of the self-attention mechanism. Compared to a standard transformer block, the proposed model, named Universal Policy Decoupling Transformer (UPDeT), further relaxes the action restriction and makes the multi-agent task's decision process more explainable. UPDeT is general enough to be plugged into any multi-agent reinforcement learning pipeline and equips it with strong generalization abilities, enabling the handling of multiple tasks at a time. Extensive experiments on large-scale SMAC multi-agent competitive games demonstrate that the proposed UPDeT-based multi-agent reinforcement learning achieves significant results relative to state-of-the-art approaches, showing advantageous transfer capability in terms of both performance and training speed (10 times faster).
https://openreview.net/pdf/1f24b0b3a09ad8484d3887053d6c4c6a87d96ba1.pdf
Correcting experience replay for multi-agent communication
https://openreview.net/forum?id=xvxPuCkCNPO
https://openreview.net/forum?id=xvxPuCkCNPO
Sanjeevan Ahilan,Peter Dayan
ICLR 2021,Spotlight
We consider the problem of learning to communicate using multi-agent reinforcement learning (MARL). A common approach is to learn off-policy, using data sampled from a replay buffer. However, messages received in the past may not accurately reflect the current communication policy of each agent, and this complicates learning. We therefore introduce a 'communication correction' which accounts for the non-stationarity of observed communication induced by multi-agent learning. It works by relabelling the received message to make it likely under the communicator's current policy, and thus be a better reflection of the receiver's current environment. To account for cases in which agents are both senders and receivers, we introduce an ordered relabelling scheme. Our correction is computationally efficient and can be integrated with a range of off-policy algorithms. We find in our experiments that it substantially improves the ability of communicating MARL systems to learn across a variety of cooperative and competitive tasks.
https://openreview.net/pdf/85eff27bc850ea7f5ee060f1c1d0156c4703f81b.pdf
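The communication correction itself is simple to sketch: when a batch is drawn from the replay buffer, relabel the stale stored message with the sender's current policy applied to the sender's stored observation. A minimal sketch with placeholder names:

```python
def relabel_batch(batch, sender_policy):
    # make each stored message a sample from the sender's *current* policy,
    # so the receiver trains against the present communication behaviour
    for transition in batch:
        transition["message"] = sender_policy(transition["sender_obs"])
    return batch

# for agents that are both senders and receivers, relabel in a fixed order so
# later messages can depend on earlier relabelled ones (the ordered scheme).
```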
Improving Adversarial Robustness via Channel-wise Activation Suppressing
https://openreview.net/forum?id=zQTezqCCtNx
https://openreview.net/forum?id=zQTezqCCtNx
Yang Bai,Yuyuan Zeng,Yong Jiang,Shu-Tao Xia,Xingjun Ma,Yisen Wang
ICLR 2021,Spotlight
The study of adversarial examples and their activations has attracted significant attention for secure and robust learning with deep neural networks (DNNs). Different from existing works, in this paper, we highlight two new characteristics of adversarial examples from the channel-wise activation perspective: 1) the activation magnitudes of adversarial examples are higher than those of natural examples; and 2) the channels are activated more uniformly by adversarial examples than by natural examples. We find that, while the state-of-the-art defense adversarial training has addressed the first issue of high activation magnitude via training on adversarial examples, the second issue of uniform activation remains. This motivates us to suppress redundant activations from being activated by adversarial perturbations during the adversarial training process, via a Channel-wise Activation Suppressing (CAS) training strategy. We show that CAS can train a model that inherently suppresses adversarial activations, and can be easily applied to existing defense methods to further improve their robustness. Our work provides a simple but generic training strategy for robustifying the intermediate layer activations of DNNs.
https://openreview.net/pdf/199e4955d4d5c6f552177bff197e00df7b1a3432.pdf
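A hedged diagnostic sketch for the two characteristics noted above: compare per-channel activation magnitudes at an intermediate layer on natural versus adversarial inputs. `features` and `pgd_attack` are assumed placeholders, not the paper's code.

```python
import torch

def channel_profile(features, x):
    act = features(x)                          # (batch, channels, h, w)
    return act.abs().mean(dim=(0, 2, 3))       # per-channel activation magnitude

# profile_nat = channel_profile(features, x_natural)
# profile_adv = channel_profile(features, pgd_attack(model, x_natural, y))
# adversarial profiles tend to be larger and flatter (more uniform) per channel
```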
Long-tailed Recognition by Routing Diverse Distribution-Aware Experts
https://openreview.net/forum?id=D9I3drBz4UC
https://openreview.net/forum?id=D9I3drBz4UC
Xudong Wang,Long Lian,Zhongqi Miao,Ziwei Liu,Stella Yu
ICLR 2021,Spotlight
Natural data are often long-tail distributed over semantic classes. Existing recognition methods tackle this imbalanced classification by placing more emphasis on the tail data, through class re-balancing/re-weighting or ensembling over different data groups, resulting in increased tail accuracies but reduced head accuracies. We take a dynamic view of the training data and provide a principled model bias and variance analysis as the training data fluctuates: Existing long-tail classifiers invariably increase the model variance and the head-tail model bias gap remains large, due to more and larger confusion with hard negatives for the tail. We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE). It reduces the model variance with multiple experts, reduces the model bias with a distribution-aware diversity loss, reduces the computational cost with a dynamic expert routing module. RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks. It is also a universal framework that is applicable to various backbone networks, long-tailed algorithms and training mechanisms for consistent performance gains. Our code is available at: https://github.com/frank-xwang/RIDE-LongTailRecognition.
https://openreview.net/pdf/a53160f4df3e5d7b2d13f02d20579f6dd0460010.pdf
Generalization in data-driven models of primary visual cortex
https://openreview.net/forum?id=Tp7kI90Htd
https://openreview.net/forum?id=Tp7kI90Htd
Konstantin-Klemens Lurz,Mohammad Bashiri,Konstantin Willeke,Akshay Jagadish,Eric Wang,Edgar Y. Walker,Santiago A Cadena,Taliah Muhammad,Erick Cobos,Andreas S. Tolias,Alexander S Ecker,Fabian H. Sinz
ICLR 2021,Spotlight
Deep neural networks (DNN) have set new standards at predicting responses of neural populations to visual input. Most such DNNs consist of a convolutional network (core) shared across all neurons which learns a representation of neural computation in visual cortex and a neuron-specific readout that linearly combines the relevant features in this representation. The goal of this paper is to test whether such a representation is indeed generally characteristic for visual cortex, i.e. generalizes between animals of a species, and what factors contribute to obtaining such a generalizing core. To push all non-linear computations into the core where the generalizing cortical features should be learned, we devise a novel readout that reduces the number of parameters per neuron in the readout by up to two orders of magnitude compared to the previous state-of-the-art. It does so by taking advantage of retinotopy and learns a Gaussian distribution over the neuron’s receptive field position. With this new readout we train our network on neural responses from mouse primary visual cortex (V1) and obtain a gain in performance of 7% compared to the previous state-of-the-art network. We then investigate whether the convolutional core indeed captures general cortical features by using the core in transfer learning to a different animal. When transferring a core trained on thousands of neurons from various animals and scans we exceed the performance of training directly on that animal by 12%, and outperform a commonly used VGG16 core pre-trained on ImageNet by 33%. In addition, transfer learning with our data-driven core is more data-efficient than direct training, achieving the same performance with only 40% of the data. Our model with its novel readout thus sets a new state-of-the-art for neural response prediction in mouse visual cortex from natural images, generalizes between animals, and better captures characteristic cortical features than current task-driven pre-training approaches such as VGG16.
https://openreview.net/pdf/a10fba1a4a77e56503923a67cf4e95e82d6f9b59.pdf
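A hedged PyTorch sketch of the Gaussian-readout idea: each neuron learns a mean and spread over its receptive-field position, a position is sampled during training (the mean at test time), the core's feature map is bilinearly sampled there, and per-neuron channel weights produce the response. Shapes and initializations are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianReadout(nn.Module):
    def __init__(self, channels, n_neurons):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_neurons, 2))        # position in [-1, 1]^2
        self.log_sigma = nn.Parameter(torch.zeros(n_neurons, 2) - 2.0)
        self.w = nn.Parameter(torch.randn(n_neurons, channels) * 0.01)
        self.bias = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, feats):                                    # feats: (b, c, h, w)
        b = feats.shape[0]
        pos = self.mu
        if self.training:                                        # sample while training
            pos = pos + self.log_sigma.exp() * torch.randn_like(pos)
        grid = pos.clamp(-1, 1).view(1, -1, 1, 2).expand(b, -1, -1, -1)
        sampled = F.grid_sample(feats, grid, align_corners=False)  # (b, c, n, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)           # (b, n, c)
        return (sampled * self.w).sum(-1) + self.bias            # (b, n) responses
```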
Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
https://openreview.net/forum?id=Rhsu5qD36cL
https://openreview.net/forum?id=Rhsu5qD36cL
Akinori F Ebihara,Taiki Miyagawa,Kazuyuki Sakurai,Hitoshi Imaoka
ICLR 2021,Spotlight
Classifying sequential data as early and as accurately as possible is a challenging yet critical problem, especially when a sampling cost is high. One algorithm that achieves this goal is the sequential probability ratio test (SPRT), which is known as Bayes-optimal: it can keep the expected number of data samples as small as possible, given the desired error upper-bound. However, the original SPRT makes two critical assumptions that limit its application in real-world scenarios: (i) samples are independently and identically distributed, and (ii) the likelihood of the data being derived from each class can be calculated precisely. Here, we propose the SPRT-TANDEM, a deep neural network-based SPRT algorithm that overcomes the above two obstacles. The SPRT-TANDEM sequentially estimates the log-likelihood ratio of two alternative hypotheses by leveraging a novel Loss function for Log-Likelihood Ratio estimation (LLLR) while allowing correlations up to $N (\in \mathbb{N})$ preceding samples. In tests on one original and two public video databases, Nosaic MNIST, UCF101, and SiW, the SPRT-TANDEM achieves statistically significantly better classification accuracy than other baseline classifiers, with a smaller number of data samples. The code and Nosaic MNIST are publicly available at https://github.com/TaikiMiyagawa/SPRT-TANDEM.
https://openreview.net/pdf/61baa81a79a2975a98aad96ab59d3ca65685492b.pdf
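The SPRT decision rule that the method builds on is short: accumulate per-frame log-likelihood-ratio estimates and stop as soon as the running sum crosses either threshold. A minimal sketch with `llr_model` as a placeholder for the learned estimator:

```python
def sprt_decide(frames, llr_model, a=2.0, b=-2.0):
    s = 0.0
    for t, frame in enumerate(frames):
        s += llr_model(frame)            # estimated log p(x_t|H1) - log p(x_t|H0)
        if s >= a:
            return 1, t                  # accept H1 early
        if s <= b:
            return 0, t                  # accept H0 early
    return int(s >= 0), len(frames) - 1  # forced decision at the horizon
```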
Uncertainty Sets for Image Classifiers using Conformal Prediction
https://openreview.net/forum?id=eNdiU_DbM9
https://openreview.net/forum?id=eNdiU_DbM9
Anastasios Nikolas Angelopoulos,Stephen Bates,Michael Jordan,Jitendra Malik
ICLR 2021,Spotlight
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network’s probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling. In experiments on both ImageNet and ImageNet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often factors of 5 to 10 smaller than a stand-alone Platt scaling baseline.
https://openreview.net/pdf/54ecc59706032f693269ac3a32a22051e5b97bbd.pdf
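A minimal sketch of split conformal prediction from softmax scores (the plain score method; the paper's contribution additionally regularizes the small scores of unlikely classes to stabilize the sets):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]       # conformity scores
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n,
                    method="higher")                          # calibrated cutoff
    # each set contains the true label with probability >= 1 - alpha
    return [np.where(p >= 1.0 - q)[0] for p in test_probs]
```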
Graph Convolution with Low-rank Learnable Local Filters
https://openreview.net/forum?id=9OHFhefeB86
https://openreview.net/forum?id=9OHFhefeB86
Xiuyuan Cheng,Zichen Miao,Qiang Qiu
ICLR 2021,Spotlight
Geometric variations like rotation, scaling, and viewpoint changes pose a significant challenge to visual understanding. One common solution is to directly model certain intrinsic structures, e.g., using landmarks. However, it then becomes non-trivial to build effective deep models, especially when the underlying non-Euclidean grid is irregular and coarse. Recent deep models using graph convolutions provide an appropriate framework to handle such non-Euclidean data, but many of them, particularly those based on global graph Laplacians, lack expressiveness to capture local features required for representation of signals lying on the non-Euclidean grid. The current paper introduces a new type of graph convolution with learnable low-rank local filters, which is provably more expressive than previous spectral graph convolution methods. The model also provides a unified framework for both spectral and spatial graph convolutions. To improve model robustness, regularization by local graph Laplacians is introduced. The representation stability against input graph data perturbation is theoretically proved, making use of the graph filter locality and the local graph regularization. Experiments on spherical mesh data, real-world facial expression recognition/skeleton-based action recognition data, and data with simulated graph noise show the empirical advantage of the proposed model.
https://openreview.net/pdf/8bd3676b06c9fadecea1934914c2d52aedf3b689.pdf
Mind the Pad -- CNNs Can Develop Blind Spots
https://openreview.net/forum?id=m1CD7tPubNy
https://openreview.net/forum?id=m1CD7tPubNy
Bilal Alsallakh,Narine Kokhlikyan,Vivek Miglani,Jun Yuan,Orion Reblitz-Richardson
ICLR 2021,Spotlight
We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the padding mechanism. Depending on several aspects of convolution arithmetic, this mechanism can apply the padding unevenly, leading to asymmetries in the learned weights. We demonstrate how such bias can be detrimental to certain tasks such as small object detection: the activation is suppressed if the stimulus lies in the impacted area, leading to blind spots and misdetection. We explore alternative padding methods and propose solutions for analyzing and mitigating spatial bias.
https://openreview.net/pdf/70ef163f0737eab414d51c5c352b8292272c77d4.pdf
Stabilized Medical Image Attacks
https://openreview.net/forum?id=QfTXQiGYudJ
https://openreview.net/forum?id=QfTXQiGYudJ
Gege Qi,Lijun GONG,Yibing Song,Kai Ma,Yefeng Zheng
ICLR 2021,Spotlight
Convolutional Neural Networks (CNNs) have advanced existing medical systems for automatic disease diagnosis. However, these systems face a threat: adversarial attacks can make CNNs vulnerable, and inaccurate diagnosis results negatively affect human healthcare. There is a need to investigate potential adversarial attacks to robustify deep medical diagnosis systems. On the other hand, there are several modalities of medical images (e.g., CT, fundus, and endoscopic images), each of which differs significantly from the others, making it more challenging to generate adversarial perturbations that work across different types of medical images. In this paper, we propose an image-based medical adversarial attack method that consistently produces adversarial perturbations on medical images. The objective function of our method consists of a loss deviation term and a loss stabilization term. The loss deviation term increases the divergence between the CNN prediction of an adversarial example and its ground-truth label. Meanwhile, the loss stabilization term ensures similar CNN predictions for this example and its smoothed input. Viewed over the whole sequence of iterations for perturbation generation, the proposed loss stabilization term exhaustively searches the perturbation space, smoothing out single spots to escape local optima. We further analyze the KL-divergence of the proposed loss function and find that the loss stabilization term drives the perturbations toward a fixed objective spot while deviating from the ground truth. This stabilization makes the proposed medical attack effective for different types of medical images while producing perturbations with small variance. Experiments on several medical image analysis benchmarks, including the recent COVID-19 dataset, show the stability of the proposed method.
https://openreview.net/pdf/537abf2d751c6e93209bde7c1e550fadad61af0f.pdf
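A hedged PyTorch sketch of the two-term objective described above: ascend a deviation term (cross-entropy against the label) while descending a stabilization term (divergence between predictions on the example and its smoothed input), using iterative sign steps projected to an epsilon-ball. The blur, weights, and step sizes are illustrative placeholders, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def stabilized_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10, lam=1.0):
    blur = torch.nn.AvgPool2d(3, stride=1, padding=1)     # simple smoothing stand-in
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        log_p = F.log_softmax(model(x_adv), dim=1)
        p_smooth = F.softmax(model(blur(x_adv)), dim=1)
        # ascend the deviation term, descend the stabilizing divergence term
        loss = (F.nll_loss(log_p, y)
                - lam * F.kl_div(log_p, p_smooth, reduction="batchmean"))
        g = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * g.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project to ball
    return x_adv.detach()
```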