title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Temporally-Extended ε-Greedy Exploration | https://openreview.net/forum?id=ONBPHFZ7zG4 | https://openreview.net/forum?id=ONBPHFZ7zG4 | Will Dabney,Georg Ostrovski,Andre Barreto | ICLR 2021,Poster | Recent work on exploration in reinforcement learning (RL) has led to a series of increasingly complex solutions to the problem. This increase in complexity often comes at the expense of generality. Recent empirical studies suggest that, when applied to a broader set of domains, some sophisticated exploration methods are outperformed by simpler counterparts, such as ε-greedy. In this paper we propose an exploration algorithm that retains the simplicity of ε-greedy while reducing dithering. We build on a simple hypothesis: the main limitation of ε-greedy exploration is its lack of temporal persistence, which limits its ability to escape local optima. We propose a temporally extended form of ε-greedy that simply repeats the sampled action for a random duration. It turns out that, for many duration distributions, this suffices to improve exploration on a large set of domains. Interestingly, a class of distributions inspired by ecological models of animal foraging behaviour yields particularly strong performance. | https://openreview.net/pdf/be288b1cdd527108548adea1d4d8319ce8a8eae8.pdf |
Learning Associative Inference Using Fast Weight Memory | https://openreview.net/forum?id=TuK6agbdt27 | https://openreview.net/forum?id=TuK6agbdt27 | Imanol Schlag,Tsendsuren Munkhdalai,Jürgen Schmidhuber | ICLR 2021,Poster | Humans can quickly associate stimuli to solve problems in novel contexts. Our novel neural network model learns state representations of facts that can be composed to perform such associative inference. To this end, we augment the LSTM model with an associative memory, dubbed \textit{Fast Weight Memory} (FWM). Through differentiable operations at every step of a given input sequence, the LSTM \textit{updates and maintains} compositional associations stored in the rapidly changing FWM weights. Our model is trained end-to-end by gradient descent and yields excellent performance on compositional language reasoning problems, meta-reinforcement-learning for POMDPs, and small-scale word-level language modelling. | https://openreview.net/pdf/96ccad214bb6dc5b347aa32436f14fdd5391d21b.pdf |
Multiscale Score Matching for Out-of-Distribution Detection | https://openreview.net/forum?id=xoHdgbQJohv | https://openreview.net/forum?id=xoHdgbQJohv | Ahsan Mahmood,Junier Oliva,Martin Andreas Styner | ICLR 2021,Poster | We present a new methodology for detecting out-of-distribution (OOD) images by utilizing norms of the score estimates at multiple noise scales. A score is defined to be the gradient of the log density with respect to the input data. Our methodology is completely unsupervised and follows a straightforward training scheme. First, we train a deep network to estimate scores for $L$ levels of noise. Once trained, we calculate the noisy score estimates for $N$ in-distribution samples and take the L2-norms across the input dimensions (resulting in an $N \times L$ matrix). Then we train an auxiliary model (such as a Gaussian Mixture Model) to learn the in-distribution spatial regions in this $L$-dimensional space. This auxiliary model can now be used to identify points that reside outside the learned space. Despite its simplicity, our experiments show that this methodology significantly outperforms the state-of-the-art in detecting out-of-distribution images. For example, our method can effectively separate CIFAR-10 (inlier) and SVHN (OOD) images, a setting which has been previously shown to be difficult for deep likelihood models. | https://openreview.net/pdf/639279c160eb93e79cf2ee33db8f9dc5b040f345.pdf |
Learning to Sample with Local and Global Contexts in Experience Replay Buffer | https://openreview.net/forum?id=gJYlaqL8i8 | https://openreview.net/forum?id=gJYlaqL8i8 | Youngmin Oh,Kimin Lee,Jinwoo Shin,Eunho Yang,Sung Ju Hwang | ICLR 2021,Poster | Experience replay, which enables agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL). To utilize experience replay efficiently, existing sampling methods allow selecting more meaningful experiences by imposing priorities on them based on certain metrics (e.g., TD-error). However, they may result in sampling highly biased, redundant transitions since they compute the sampling rate for each transition independently, without consideration of its importance in relation to other transitions. In this paper, we aim to address the issue by proposing a new learning-based sampling method that can compute the relative importance of transitions. To this end, we design a novel permutation-equivariant neural architecture that takes contexts from not only features of each transition (local) but also those of others (global) as inputs. We validate our framework, which we refer to as Neural Experience Replay Sampler (NERS), on multiple benchmarks for both continuous and discrete control tasks and show that it can significantly improve the performance of various off-policy RL methods. Further analysis confirms that the improvements in sample efficiency are indeed due to NERS sampling diverse and meaningful transitions by considering both local and global contexts. | https://openreview.net/pdf/92ef8e632b99778a17bd8e0187962812d2cd42c5.pdf |
Parameter-Based Value Functions | https://openreview.net/forum?id=tV6oBfuyLTQ | https://openreview.net/forum?id=tV6oBfuyLTQ | Francesco Faccio,Louis Kirsch,Jürgen Schmidhuber | ICLR 2021,Poster | Traditional off-policy actor-critic Reinforcement Learning (RL) algorithms learn value functions of a single target policy. However, when value functions are updated to track the learned policy, they forget potentially useful information about old policies. We introduce a class of value functions called Parameter-Based Value Functions (PBVFs) whose inputs include the policy parameters. They can generalize across different policies. PBVFs can evaluate the performance of any policy given a state, a state-action pair, or a distribution over the RL agent's initial states. First we show how PBVFs yield novel off-policy policy gradient theorems. Then we derive off-policy actor-critic algorithms based on PBVFs trained by Monte Carlo or Temporal Difference methods. We show how learned PBVFs can zero-shot learn new policies that outperform any policy seen during training. Finally our algorithms are evaluated on a selection of discrete and continuous control tasks using shallow policies and deep neural networks. Their performance is comparable to state-of-the-art methods. | https://openreview.net/pdf/c79ef13431e3c5decbd9f2ba989bc20a847b37be.pdf |
New Bounds For Distributed Mean Estimation and Variance Reduction | https://openreview.net/forum?id=t86MwoUCCNe | https://openreview.net/forum?id=t86MwoUCCNe | Peter Davies,Vijaykrishna Gurunanthan,Niusha Moshrefi,Saleh Ashkboos,Dan Alistarh | ICLR 2021,Poster | We consider the problem of distributed mean estimation (DME), in which $n$ machines are each given a local $d$-dimensional vector $\mathbf x_v \in \mathbb R^d$, and must cooperate to estimate the mean of their inputs $\mathbf \mu = \frac 1n\sum_{v = 1}^n \mathbf x_v$, while minimizing total communication cost. DME is a fundamental construct in distributed machine learning, and there has been considerable work on variants of this problem, especially in the context of distributed variance reduction for stochastic gradients in parallel SGD. Previous work typically assumes an upper bound on the norm of the input vectors, and achieves an error bound in terms of this norm. However, in many real applications, the input vectors are concentrated around the correct output $\mathbf \mu$, but $\mathbf \mu$ itself has large norm. In such cases, previous output error bounds perform poorly. In this paper, we show that output error bounds need not depend on input norm. We provide a method of quantization which allows distributed mean estimation to be performed with solution quality dependent only on the distance between inputs, not on input norm, and show an analogous result for distributed variance reduction. The technique is based on a new connection with lattice theory. We also provide lower bounds showing that the communication to error trade-off of our algorithms is asymptotically optimal. As the lattices achieving optimal bounds under $\ell_2$-norm can be computationally impractical, we also present an extension which leverages easy-to-use cubic lattices, and is loose only up to a logarithmic factor in $d$. We show experimentally that our method yields practical improvements for common applications, relative to prior approaches. | https://openreview.net/pdf/02618eb8b76a664b33780ceb32a0450c69a54d1c.pdf |
Learning to Set Waypoints for Audio-Visual Navigation | https://openreview.net/forum?id=cR91FAodFMe | https://openreview.net/forum?id=cR91FAodFMe | Changan Chen,Sagnik Majumder,Ziad Al-Halah,Ruohan Gao,Santhosh Kumar Ramakrishnan,Kristen Grauman | ICLR 2021,Poster | In audio-visual navigation, an agent intelligently travels through a complex, unmapped 3D environment using both sights and sounds to find a sound source (e.g., a phone ringing in another room). Existing models learn to act at a fixed granularity of agent motion and rely on simple recurrent aggregations of the audio observations. We introduce a reinforcement learning approach to audio-visual navigation with two key novel elements: 1) waypoints that are dynamically set and learned end-to-end within the navigation policy, and 2) an acoustic memory that provides a structured, spatially grounded record of what the agent has heard as it moves. Both new ideas capitalize on the synergy of audio and visual data for revealing the geometry of an unmapped space. We demonstrate our approach on two challenging datasets of real-world 3D scenes, Replica and Matterport3D. Our model improves the state of the art by a substantial margin, and our experiments reveal that learning the links between sights, sounds, and space is essential for audio-visual navigation. | https://openreview.net/pdf/fa0a991905ae30b2fa74ca7b101b3acabd532c13.pdf |
Disambiguating Symbolic Expressions in Informal Documents | https://openreview.net/forum?id=K5j7D81ABvt | https://openreview.net/forum?id=K5j7D81ABvt | Dennis Müller,Cezary Kaliszyk | ICLR 2021,Poster | We propose the task of \emph{disambiguating} symbolic expressions in informal STEM documents in the form of \LaTeX files -- that is, determining their precise semantics and abstract syntax tree -- as a neural machine translation task. We discuss the distinct challenges involved and present a dataset with roughly 33,000 entries. We evaluated several baseline models on this dataset, which failed to yield even syntactically valid \LaTeX before overfitting. Consequently, we describe a methodology using a \emph{transformer} language model pre-trained on sources obtained from \url{arxiv.org}, which yields promising results despite the small size of the dataset. We evaluate our model using a plurality of dedicated techniques, taking syntax and semantics of symbolic expressions into account. | https://openreview.net/pdf/006f5f9df1ed650389c8a89fd0087c3a9cb81605.pdf |
Colorization Transformer | https://openreview.net/forum?id=5NA1PinlGFu | https://openreview.net/forum?id=5NA1PinlGFu | Manoj Kumar,Dirk Weissenborn,Nal Kalchbrenner | ICLR 2021,Poster | We present the Colorization Transformer, a novel approach for diverse high fidelity image colorization based on self-attention. Given a grayscale image, the colorization proceeds in three steps. We first use a conditional autoregressive transformer to produce a low resolution coarse coloring of the grayscale image. Our architecture adopts conditional transformer layers to effectively condition grayscale input. Two subsequent fully parallel networks upsample the coarse colored low resolution image into a finely colored high resolution image. Sampling from the Colorization Transformer produces diverse colorings whose fidelity outperforms the previous state-of-the-art on colorising ImageNet based on FID results and based on a human evaluation in a Mechanical Turk test. Remarkably, in more than 60\% of cases human evaluators prefer the highest rated among three generated colorings over the ground truth. The code and pre-trained checkpoints for Colorization Transformer are publicly available at https://github.com/google-research/google-research/tree/master/coltran | https://openreview.net/pdf/f2f5d9057587995de8d113d1ba35dd7d8b98f48e.pdf |
Theoretical bounds on estimation error for meta-learning | https://openreview.net/forum?id=SZ3wtsXfzQR | https://openreview.net/forum?id=SZ3wtsXfzQR | James Lucas,Mengye Ren,Irene Raissa KAMENI KAMENI,Toniann Pitassi,Richard Zemel | ICLR 2021,Poster | Machine learning models have traditionally been developed under the assumption that the training and test distributions match exactly. However, recent successes in few-shot learning and related problems are encouraging signs that these models can be adapted to more realistic settings where train and test distributions differ. Unfortunately, there is severely limited theoretical support for these algorithms and little is known about the difficulty of these problems. In this work, we provide novel information-theoretic lower-bounds on minimax rates of convergence for algorithms that are trained on data from multiple sources and tested on novel data. Our bounds depend intuitively on the information shared between sources of data, and characterize the difficulty of learning in this setting for arbitrary algorithms. We demonstrate these bounds on a hierarchical Bayesian model of meta-learning, computing both upper and lower bounds on parameter estimation via maximum-a-posteriori inference. | https://openreview.net/pdf/f6e0a3923ea91b65f312ccf597276c427be18097.pdf |
Variational Information Bottleneck for Effective Low-Resource Fine-Tuning | https://openreview.net/forum?id=kvhzKz-_DMF | https://openreview.net/forum?id=kvhzKz-_DMF | Rabeeh Karimi mahabadi,Yonatan Belinkov,James Henderson | ICLR 2021,Poster | While large-scale pretrained language models have obtained impressive results when fine-tuned on a wide variety of tasks, they still often suffer from overfitting in low-resource scenarios. Since such models are general-purpose feature extractors, many of these features are inevitably irrelevant for a given target task. We propose to use Variational Information Bottleneck (VIB) to suppress irrelevant features when fine-tuning on low-resource target tasks, and show that our method successfully reduces overfitting. Moreover, we show that our VIB model finds sentence representations that are more robust to biases in natural language inference datasets, and thereby obtains better generalization to out-of-domain datasets. Evaluation on seven low-resource datasets in different tasks shows that our method significantly improves transfer learning in low-resource scenarios, surpassing prior work. Furthermore, it improves generalization on 13 out of 15 out-of-domain natural language inference benchmarks. Our code is publicly available at https://github.com/rabeehk/vibert. | https://openreview.net/pdf/62f3ae7c05e30f870e3a6435b704afbd5c5290ba.pdf |
TropEx: An Algorithm for Extracting Linear Terms in Deep Neural Networks | https://openreview.net/forum?id=IqtonxWI0V3 | https://openreview.net/forum?id=IqtonxWI0V3 | Martin Trimmel,Henning Petzka,Cristian Sminchisescu | ICLR 2021,Poster | Deep neural networks with rectified linear (ReLU) activations are piecewise linear functions, where hyperplanes partition the input space into an astronomically high number of linear regions. Previous work focused on counting linear regions to measure the network's expressive power and on analyzing geometric properties of the hyperplane configurations. In contrast, we aim to understand the impact of the linear terms on network performance, by examining the information encoded in their coefficients. To this end, we derive TropEx, a nontrivial tropical algebra-inspired algorithm to systematically extract linear terms based on data. Applied to convolutional and fully-connected networks, our algorithm uncovers significant differences in how the different networks utilize linear regions for generalization. This underlines the importance of systematic linear term exploration, to better understand generalization in neural networks trained with complex data sets. | https://openreview.net/pdf/6f5f94ea1f9082d97859b79f1358b2a25baa8fcd.pdf |
Seq2Tens: An Efficient Representation of Sequences by Low-Rank Tensor Projections | https://openreview.net/forum?id=dx4b7lm8jMM | https://openreview.net/forum?id=dx4b7lm8jMM | Csaba Toth,Patric Bonnier,Harald Oberhauser | ICLR 2021,Poster | Sequential data such as time series, video, or text can be challenging to analyse as the ordered structure gives rise to complex dependencies. At the heart of this is non-commutativity, in the sense that reordering the elements of a sequence can completely change its meaning. We use a classical mathematical object -- the free algebra -- to capture this non-commutativity. To address the innate computational complexity of this algebra, we use compositions of low-rank tensor projections. This yields modular and scalable building blocks that give state-of-the-art performance on standard benchmarks such as multivariate time series classification, mortality prediction and generative models for video. | https://openreview.net/pdf/bc313164adf3017b7e94a07aecbd830b43e5c49a.pdf |
Representation learning for improved interpretability and classification accuracy of clinical factors from EEG | https://openreview.net/forum?id=TVjLza1t4hI | https://openreview.net/forum?id=TVjLza1t4hI | Garrett Honke,Irina Higgins,Nina Thigpen,Vladimir Miskovic,Katie Link,Sunny Duan,Pramod Gupta,Julia Klawohn,Greg Hajcak | ICLR 2021,Poster | Despite extensive standardization, diagnostic interviews for mental health disorders encompass substantial subjective judgment. Previous studies have demonstrated that EEG-based neural measures can function as reliable objective correlates of depression, or even predictors of depression and its course. However, their clinical utility has not been fully realized because of 1) the lack of automated ways to deal with the inherent noise associated with EEG data at scale, and 2) the lack of knowledge of which aspects of the EEG signal may be markers of a clinical disorder. Here we adapt an unsupervised pipeline from the recent deep representation learning literature to address these problems by 1) learning a disentangled representation using $\beta$-VAE to denoise the signal, and 2) extracting interpretable features associated with a sparse set of clinical labels using a Symbol-Concept Association Network (SCAN). We demonstrate that our method is able to outperform the canonical hand-engineered baseline classification method on a number of factors, including participant age and depression diagnosis. Furthermore, our method recovers a representation that can be used to automatically extract denoised Event Related Potentials (ERPs) from novel, single EEG trajectories, and supports fast supervised re-mapping to various clinical labels, allowing clinicians to re-use a single EEG representation regardless of updates to the standardized diagnostic system. Finally, single factors of the learned disentangled representations often correspond to meaningful markers of clinical factors, as automatically detected by SCAN, allowing for human interpretability and post-hoc expert analysis of the recommendations made by the model. | https://openreview.net/pdf/2932815e2ab354b7f926e4803d4ba6847916d44d.pdf |
Language-Agnostic Representation Learning of Source Code from Structure and Context | https://openreview.net/forum?id=Xh5eMZVONGF | https://openreview.net/forum?id=Xh5eMZVONGF | Daniel Zügner,Tobias Kirschstein,Michele Catasta,Jure Leskovec,Stephan Günnemann | ICLR 2021,Poster | Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program. Traditionally, designers of machine learning models have relied predominantly either on Structure or Context. We propose a new model, which jointly learns on Context and Structure of source code. In contrast to previous approaches, our model uses only language-agnostic features, i.e., source code and features that can be computed directly from the AST. Besides obtaining state-of-the-art on monolingual code summarization on all five programming languages considered in this work, we propose the first multilingual code summarization model. We show that jointly training on non-parallel data from multiple programming languages improves results on all individual languages, where the strongest gains are on low-resource languages. Remarkably, multilingual training only from Context does not lead to the same improvements, highlighting the benefits of combining Structure and Context for representation learning on code. | https://openreview.net/pdf/69c9ae01f0f1b9a15ea1b21d87cdf95dff32a6f5.pdf |
Generalized Multimodal ELBO | https://openreview.net/forum?id=5Y21V0RDBV | https://openreview.net/forum?id=5Y21V0RDBV | Thomas M. Sutter,Imant Daunhawer,Julia E Vogt | ICLR 2021,Poster | Multiple data types naturally co-occur when describing real-world phenomena and learning from them is a long-standing goal in machine learning research. However, existing self-supervised generative models approximating an ELBO are not able to fulfill all desired requirements of multimodal models: their posterior approximation functions lead to a trade-off between the semantic coherence and the ability to learn the joint data distribution. We propose a new, generalized ELBO formulation for multimodal data that overcomes these limitations. The new objective encompasses two previous methods as special cases and combines their benefits without compromises. In extensive experiments, we demonstrate the advantage of the proposed method compared to state-of-the-art models in self-supervised, generative learning tasks. | https://openreview.net/pdf/2cfd5fea6a35d4586487da796743d75dacc7118c.pdf |
Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose? | https://openreview.net/forum?id=p5uylG94S68 | https://openreview.net/forum?id=p5uylG94S68 | Balázs Kégl,Gabriel Hurtado,Albert Thomas | ICLR 2021,Poster | We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models using a fixed (random shooting) control agent. We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin. When multimodality is not required, our surprising finding is that we do not need probabilistic posterior predictives: deterministic models are on par; in fact, they consistently (although non-significantly) outperform their probabilistic counterparts. We also found that heteroscedasticity at training time, perhaps acting as a regularizer, improves predictions at longer horizons. On the methodological side, we design metrics and an experimental protocol which can be used to evaluate the various models, predicting their asymptotic performance when using them on the control problem. Using this framework, we improve the state-of-the-art sample complexity of MBRL on Acrobot by a factor of two to four, using an aggressive training schedule which is outside of the hyperparameter interval usually considered. | https://openreview.net/pdf/04313ea0678f51bf6e97525219f5b92003b041b9.pdf |
Set Prediction without Imposing Structure as Conditional Density Estimation | https://openreview.net/forum?id=04ArenGOz3 | https://openreview.net/forum?id=04ArenGOz3 | David W Zhang,Gertjan J. Burghouts,Cees G. M. Snoek | ICLR 2021,Poster | Set prediction is about learning to predict a collection of unordered variables with unknown interrelations. Training such models with set losses imposes the structure of a metric space over sets. We focus on stochastic and underdefined cases, where an incorrectly chosen loss function leads to implausible predictions. Example tasks include conditional point-cloud reconstruction and predicting future states of molecules. In this paper we propose an alternative to training via set losses, by viewing learning as conditional density estimation. Our learning framework fits deep energy-based models and approximates the intractable likelihood with gradient-guided sampling. Furthermore, we propose a stochastically augmented prediction algorithm that enables multiple predictions, reflecting the possible variations in the target set. We empirically demonstrate on a variety of datasets the capability to learn multi-modal densities and produce different plausible predictions. Our approach is competitive with previous set prediction models on standard benchmarks. More importantly, it extends the family of addressable tasks beyond those that have unambiguous predictions. | https://openreview.net/pdf/04c489674227569994e57717321c907597b1355c.pdf |
Learning Value Functions in Deep Policy Gradients using Residual Variance | https://openreview.net/forum?id=NX1He-aFO_F | https://openreview.net/forum?id=NX1He-aFO_F | Yannis Flet-Berliac,reda ouhamma,odalric-ambrym maillard,Philippe Preux | ICLR 2021,Poster | Policy gradient algorithms have proven to be successful in diverse decision making and control tasks. However, these methods suffer from high sample complexity and instability issues. In this paper, we address these challenges by providing a different approach for training the critic in the actor-critic framework. Our work builds on recent studies indicating that traditional actor-critic algorithms do not succeed in fitting the true value function, calling for the need to identify a better objective for the critic. In our method, the critic uses a new state-value (resp. state-action-value) function approximation that learns the value of the states (resp. state-action pairs) relative to their mean value rather than the absolute value as in conventional actor-critic. We prove the theoretical consistency of the new gradient estimator and observe dramatic empirical improvement across a variety of continuous control tasks and algorithms. Furthermore, we validate our method in tasks with sparse rewards, where we provide experimental evidence and theoretical insights. | https://openreview.net/pdf/d19c38b4919b1481e2aa3972a928c866f4502b44.pdf |
IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression | https://openreview.net/forum?id=MBOyiNnYthd | https://openreview.net/forum?id=MBOyiNnYthd | Rianne van den Berg,Alexey A. Gritsenko,Mostafa Dehghani,Casper Kaae Sønderby,Tim Salimans | ICLR 2021,Poster | In this paper we analyse and improve integer discrete flows for lossless compression. Integer discrete flows are a recently proposed class of models that learn invertible transformations for integer-valued random variables. Their discrete nature makes them particularly suitable for lossless compression with entropy coding schemes. We start by investigating a recent theoretical claim that states that invertible flows for discrete random variables are less flexible than their continuous counterparts. We demonstrate with a proof that this claim does not hold for integer discrete flows due to the embedding of data with finite support into the countably infinite integer lattice. Furthermore, we zoom in on the effect of gradient bias due to the straight-through estimator in integer discrete flows, and demonstrate that its influence is highly dependent on architecture choices and less prominent than previously thought. Finally, we show how different architecture modifications improve the performance of this model class for lossless compression, and that they also enable more efficient compression: a model with half the number of flow layers performs on par with or better than the original integer discrete flow model. | https://openreview.net/pdf/049fd6f43de5700220bd49a24b2ae38e78c3782c.pdf |
Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders | https://openreview.net/forum?id=agHLCOBM5jP | https://openreview.net/forum?id=agHLCOBM5jP | Mangal Prakash,Alexander Krull,Florian Jug | ICLR 2021,Poster | Deep Learning based methods have emerged as the indisputable leaders for virtually all image restoration tasks. Especially in the domain of microscopy images, various content-aware image restoration (CARE) approaches are now used to improve the interpretability of acquired data. Naturally, there are limitations to what can be restored in corrupted images, and like for all inverse problems, many potential solutions exist, and one of them must be chosen. Here, we propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs), overcoming the problem of having to choose a single solution by predicting a whole distribution of denoised images. First we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder. Our approach is fully unsupervised, only requiring noisy images and a suitable description of the imaging noise distribution. We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training. If desired, consensus predictions can be inferred from a set of DivNoising predictions, leading to competitive results with other unsupervised methods and, on occasion, even with the supervised state-of-the-art. DivNoising samples from the posterior enable a plethora of useful applications. We are (i) showing denoising results for 13 datasets, (ii) discussing how optical character recognition (OCR) applications can benefit from diverse predictions, and are (iii) demonstrating how instance cell segmentation improves when using diverse DivNoising predictions. | https://openreview.net/pdf/2afe972808ebb66f3926468902039c366b274c59.pdf |
Is Attention Better Than Matrix Decomposition? | https://openreview.net/forum?id=1FvkSpWosOl | https://openreview.net/forum?id=1FvkSpWosOl | Zhengyang Geng,Meng-Hao Guo,Hongxu Chen,Xia Li,Ke Wei,Zhouchen Lin | ICLR 2021,Poster | As an essential ingredient of modern deep learning, attention mechanism, especially self-attention, plays a vital role in the global correlation discovery. However, is hand-crafted attention irreplaceable when modeling the global context? Our intriguing finding is that self-attention is not better than the matrix decomposition~(MD) model developed 20 years ago regarding the performance and computational cost for encoding the long-distance dependencies. We model the global context issue as a low-rank completion problem and show that its optimization algorithms can help design global information blocks. This paper then proposes a series of Hamburgers, in which we employ the optimization algorithms for solving MDs to factorize the input representations into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module self-attention when carefully coping with gradients back-propagated through MDs. Comprehensive experiments are conducted in the vision tasks where it is crucial to learn the global context, including semantic segmentation and image generation, demonstrating significant improvements over self-attention and its variants. Code is available at https://github.com/Gsunshine/Enjoy-Hamburger. | https://openreview.net/pdf/1cb5acc6fe475a215dd1192beec6158b8a4da5dc.pdf |
Improving Transformation Invariance in Contrastive Representation Learning | https://openreview.net/forum?id=NomEDgIEBwE | https://openreview.net/forum?id=NomEDgIEBwE | Adam Foster,Rattana Pukdee,Tom Rainforth | ICLR 2021,Poster | We propose methods to strengthen the invariance properties of representations obtained by contrastive learning. While existing approaches implicitly induce a degree of invariance as representations are learned, we look to more directly enforce invariance in the encoding process. To this end, we first introduce a training objective for contrastive learning that uses a novel regularizer to control how the representation changes under transformation. We show that representations trained with this objective perform better on downstream tasks and are more robust to the introduction of nuisance transformations at test time. Second, we propose a change to how test time representations are generated by introducing a feature averaging approach that combines encodings from multiple transformations of the original input, finding that this leads to across the board performance gains. Finally, we introduce the novel Spirograph dataset to explore our ideas in the context of a differentiable generative process with multiple downstream tasks, showing that our techniques for learning invariance are highly beneficial. | https://openreview.net/pdf/401efbc12f590198cf9a4094f6a0ce66e21be5e9.pdf |
On the Origin of Implicit Regularization in Stochastic Gradient Descent | https://openreview.net/forum?id=rq_Qr0c1Hyo | https://openreview.net/forum?id=rq_Qr0c1Hyo | Samuel L Smith,Benoit Dherin,David Barrett,Soham De | ICLR 2021,Poster | For infinitesimal learning rates, stochastic gradient descent (SGD) follows the path of gradient flow on the full batch loss function. However moderately large learning rates can achieve higher test accuracies, and this generalization benefit is not explained by convergence bounds, since the learning rate which maximizes test accuracy is often larger than the learning rate which minimizes training loss. To interpret this phenomenon we prove that for SGD with random shuffling, the mean SGD iterate also stays close to the path of gradient flow if the learning rate is small and finite, but on a modified loss. This modified loss is composed of the original loss function and an implicit regularizer, which penalizes the norms of the minibatch gradients. Under mild assumptions, when the batch size is small the scale of the implicit regularization term is proportional to the ratio of the learning rate to the batch size. We verify empirically that explicitly including the implicit regularizer in the loss can enhance the test accuracy when the learning rate is small. | https://openreview.net/pdf/e5f4bcf96d3ed905ac91e4ea6e3993321ecda830.pdf |
Transient Non-stationarity and Generalisation in Deep Reinforcement Learning | https://openreview.net/forum?id=Qun8fv4qSby | https://openreview.net/forum?id=Qun8fv4qSby | Maximilian Igl,Gregory Farquhar,Jelena Luketina,Wendelin Boehmer,Shimon Whiteson | ICLR 2021,Poster | Non-stationarity can arise in Reinforcement Learning (RL) even in stationary environments. For example, most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Due to the transience of this non-stationarity, it is often not explicitly addressed in deep RL and a single neural network is continually updated. However, we find evidence that neural networks exhibit a memory effect, where these transient non-stationarities can permanently impact the latent representation and adversely affect generalisation performance. Consequently, to improve generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeated knowledge transfer of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom. | https://openreview.net/pdf/ea444807010b334cd2b90645f1cfa31bd38f3ef7.pdf |
Lossless Compression of Structured Convolutional Models via Lifting | https://openreview.net/forum?id=oxnp2q-PGL4 | https://openreview.net/forum?id=oxnp2q-PGL4 | Gustav Sourek,Filip Zelezny,Ondrej Kuzelka | ICLR 2021,Poster | Lifting is an efficient technique to scale up graphical models generalized to relational domains by exploiting the underlying symmetries. Concurrently, neural models are continuously expanding from grid-like tensor data into structured representations, such as various attributed graphs and relational databases. To address the irregular structure of the data, the models typically extrapolate on the idea of convolution, effectively introducing parameter sharing in their, dynamically unfolded, computation graphs. The computation graphs themselves then reflect the symmetries of the underlying data, similarly to the lifted graphical models. Inspired by lifting, we introduce a simple and efficient technique to detect the symmetries and compress the neural models without loss of any information. We demonstrate through experiments that such compression can lead to significant speedups of structured convolutional models, such as various Graph Neural Networks, across various tasks, such as molecule classification and knowledge-base completion. | https://openreview.net/pdf/6ca46d0a2419236e20aac30bbf133f4c81154953.pdf |
Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective | https://openreview.net/forum?id=-qh0M9XWxnv | https://openreview.net/forum?id=-qh0M9XWxnv | Muhammet Balcilar,Guillaume Renton,Pierre Héroux,Benoit Gaüzère,Sébastien Adam,Paul Honeine | ICLR 2021,Poster | In the recent literature on Graph Neural Networks (GNNs), the expressive power of models has been studied through their capability to distinguish whether two given graphs are isomorphic or not. Since the graph isomorphism problem is NP-intermediate, and the Weisfeiler-Lehman (WL) test can give sufficient but not enough evidence in polynomial time, the theoretical power of GNNs is usually evaluated by the equivalence of WL-test order, followed by an empirical analysis of the models on some reference inductive and transductive datasets. However, such analysis does not account for the signal processing pipeline, whose capability is generally evaluated in the spectral domain. In this paper, we argue that a spectral analysis of GNN behavior can provide a complementary point of view to go one step further in the understanding of GNNs. By bridging the gap between the spectral and spatial design of graph convolutions, we theoretically demonstrate some equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. Using this connection, we re-formulate most of the state-of-the-art graph neural networks into one common framework. This general framework allows us to conduct a spectral analysis of the most popular GNNs, explaining their performance and showing their limits from a spectral point of view. Our theoretical spectral analysis is confirmed by experiments on various graph databases. Furthermore, we demonstrate the necessity of high-pass and/or band-pass filters on a graph dataset, while the majority of GNNs are limited to low-pass filtering and inevitably fail. | https://openreview.net/pdf/859c9ee357c81e0b9a1cb989b1e23b8b42d741f1.pdf |
A unifying view on implicit bias in training linear neural networks | https://openreview.net/forum?id=ZsZM-4iMQkH | https://openreview.net/forum?id=ZsZM-4iMQkH | Chulhee Yun,Shankar Krishnan,Hossein Mobahi | ICLR 2021,Poster | We study the implicit bias of gradient flow (i.e., gradient descent with infinitesimal step size) on linear neural network training. We propose a tensor formulation of neural networks that includes fully-connected, diagonal, and convolutional networks as special cases, and investigate the linear version of the formulation called linear tensor networks. With this formulation, we can characterize the convergence direction of the network parameters as singular vectors of a tensor defined by the network. For $L$-layer linear tensor networks that are orthogonally decomposable, we show that gradient flow on separable classification finds a stationary point of the $\ell_{2/L}$ max-margin problem in a "transformed" input space defined by the network. For underdetermined regression, we prove that gradient flow finds a global minimum which minimizes a norm-like function that interpolates between weighted $\ell_1$ and $\ell_2$ norms in the transformed input space. Our theorems subsume existing results in the literature while removing standard convergence assumptions. We also provide experiments that corroborate our analysis. | https://openreview.net/pdf/7592938b320208bd563349d1ea3385dd9e80cbe6.pdf |
Balancing Constraints and Rewards with Meta-Gradient D4PG | https://openreview.net/forum?id=TQt98Ya7UMP | https://openreview.net/forum?id=TQt98Ya7UMP | Dan A. Calian,Daniel J Mankowitz,Tom Zahavy,Zhongwen Xu,Junhyuk Oh,Nir Levine,Timothy Mann | ICLR 2021,Poster | Deploying Reinforcement Learning (RL) agents to solve real-world applications often requires satisfying complex system constraints. Often the constraint thresholds are incorrectly set due to the complex nature of a system or the inability to verify the thresholds offline (e.g., no simulator or reasonable offline evaluation procedure exists). This results in solutions where a task cannot be solved without violating the constraints. However, in many real-world cases, constraint violations are undesirable yet they are not catastrophic, motivating the need for soft-constrained RL approaches. We present two soft-constrained RL approaches that utilize meta-gradients to find a good trade-off between expected return and minimizing constraint violations. We demonstrate the effectiveness of these approaches by showing that they consistently outperform the baselines across four different Mujoco domains. | https://openreview.net/pdf/c2bc1eac3b05c897508a2b6cf4f096a98dbcc8e2.pdf |
Robust Curriculum Learning: from clean label detection to noisy label self-correction | https://openreview.net/forum?id=lmTWnm3coJJ | https://openreview.net/forum?id=lmTWnm3coJJ | Tianyi Zhou,Shengjie Wang,Jeff Bilmes | ICLR 2021,Poster | Neural network training can easily overfit noisy labels resulting in poor generalization performance. Existing methods address this problem by (1) filtering out the noisy data and only using the clean data for training or (2) relabeling the noisy data by the model during training or by another model trained only on a clean dataset. However, the former does not leverage the features' information of wrongly-labeled data, while the latter may produce wrong pseudo-labels for some data and introduce extra noises. In this paper, we propose a smooth transition and interplay between these two strategies as a curriculum that selects training samples dynamically. In particular, we start with learning from clean data and then gradually move to learn noisy-labeled data with pseudo labels produced by a time-ensemble of the model and data augmentations. Instead of using the instantaneous loss computed at the current step, our data selection is based on the dynamics of both the loss and output consistency for each sample across historical steps and different data augmentations, resulting in more precise detection of both clean labels and correct pseudo labels. On multiple benchmarks of noisy labels, we show that our curriculum learning strategy can significantly improve the test accuracy without any auxiliary model or extra clean data. | https://openreview.net/pdf/06ca7281bb3ba57d591dedb4b5127373e0c1d429.pdf |
Clairvoyance: A Pipeline Toolkit for Medical Time Series | https://openreview.net/forum?id=xnC8YwKUE3k | https://openreview.net/forum?id=xnC8YwKUE3k | Daniel Jarrett,Jinsung Yoon,Ioana Bica,Zhaozhi Qian,Ari Ercole,Mihaela van der Schaar | ICLR 2021,Poster | Time-series learning is the bread and butter of data-driven *clinical decision support*, and the recent explosion in ML research has demonstrated great potential in various healthcare settings. At the same time, medical time-series problems in the wild are challenging due to their highly *composite* nature: They entail design choices and interactions among components that preprocess data, impute missing values, select features, issue predictions, estimate uncertainty, and interpret models. Despite exponential growth in electronic patient data, there is a remarkable gap between the potential and realized utilization of ML for clinical research and decision support. In particular, orchestrating a real-world project lifecycle poses challenges in engineering (i.e. hard to build), evaluation (i.e. hard to assess), and efficiency (i.e. hard to optimize). Designed to address these issues simultaneously, Clairvoyance proposes a unified, end-to-end, autoML-friendly pipeline that serves as a (i) software toolkit, (ii) empirical standard, and (iii) interface for optimization. Our ultimate goal lies in facilitating transparent and reproducible experimentation with complex inference workflows, providing integrated pathways for (1) personalized prediction, (2) treatment-effect estimation, and (3) information acquisition. Through illustrative examples on real-world data in outpatient, general wards, and intensive-care settings, we illustrate the applicability of the pipeline paradigm on core tasks in the healthcare journey. To the best of our knowledge, Clairvoyance is the first to demonstrate viability of a comprehensive and automatable pipeline for clinical time-series ML. | https://openreview.net/pdf/c4f52313ee7aa37bb754ae2f6524cc0aeb47ce43.pdf |
Plan-Based Relaxed Reward Shaping for Goal-Directed Tasks | https://openreview.net/forum?id=w2Z2OwVNeK | https://openreview.net/forum?id=w2Z2OwVNeK | Ingmar Schubert,Ozgur S Oguz,Marc Toussaint | ICLR 2021,Poster | In high-dimensional state spaces, the usefulness of Reinforcement Learning (RL) is limited by the problem of exploration. This issue has been addressed using potential-based reward shaping (PB-RS) previously. In the present work, we introduce Final-Volume-Preserving Reward Shaping (FV-RS). FV-RS relaxes the strict optimality guarantees of PB-RS to a guarantee of preserved long-term behavior. Being less restrictive, FV-RS allows for reward shaping functions that are even better suited for improving the sample efficiency of RL algorithms. In particular, we consider settings in which the agent has access to an approximate plan. Here, we use examples of simulated robotic manipulation tasks to demonstrate that plan-based FV-RS can indeed significantly improve the sample efficiency of RL over plan-based PB-RS. | https://openreview.net/pdf/6ab6b9e3a9fe5a364f986aaff177de866990899b.pdf |
Improving VAEs' Robustness to Adversarial Attack | https://openreview.net/forum?id=-Hs_otp2RB | https://openreview.net/forum?id=-Hs_otp2RB | Matthew JF Willetts,Alexander Camuto,Tom Rainforth,S Roberts,Christopher C Holmes | ICLR 2021,Poster | Variational autoencoders (VAEs) have recently been shown to be vulnerable to adversarial attacks, wherein they are fooled into reconstructing a chosen target image. However, how to defend against such attacks remains an open problem. We make significant advances in addressing this issue by introducing methods for producing adversarially robust VAEs. Namely, we first demonstrate that methods proposed to obtain disentangled latent representations produce VAEs that are more robust to these attacks. However, this robustness comes at the cost of reducing the quality of the reconstructions. We ameliorate this by applying disentangling methods to hierarchical VAEs. The resulting models produce high-fidelity autoencoders that are also adversarially robust. We confirm their capabilities on several different datasets and with current state-of-the-art VAE adversarial attacks, and also show that they increase the robustness of downstream tasks to attack. | https://openreview.net/pdf/99d30d8f3d5b1463f05554f92526d389e651b1db.pdf |
Differentiable Segmentation of Sequences | https://openreview.net/forum?id=4T489T4yav | https://openreview.net/forum?id=4T489T4yav | Erik Scharwächter,Jonathan Lennartz,Emmanuel Müller | ICLR 2021,Poster | Segmented models are widely used to describe non-stationary sequential data with discrete change points. Their estimation usually requires solving a mixed discrete-continuous optimization problem, where the segmentation is the discrete part and all other model parameters are continuous. A number of estimation algorithms have been developed that are highly specialized for their specific model assumptions. The dependence on non-standard algorithms makes it hard to integrate segmented models in state-of-the-art deep learning architectures that critically depend on gradient-based optimization techniques. In this work, we formulate a relaxed variant of segmented models that enables joint estimation of all model parameters, including the segmentation, with gradient descent. We build on recent advances in learning continuous warping functions and propose a novel family of warping functions based on the two-sided power (TSP) distribution. TSP-based warping functions are differentiable, have simple closed-form expressions, and can represent segmentation functions exactly. Our formulation includes the important class of segmented generalized linear models as a special case, which makes it highly versatile. We use our approach to model the spread of COVID-19 with Poisson regression, apply it on a change point detection task, and learn classification models with concept drift. The experiments show that our approach effectively learns all these tasks with standard algorithms for gradient descent. | https://openreview.net/pdf/211648c2242f789fd76f662801f326094db7433d.pdf |
GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing | https://openreview.net/forum?id=kyaIeYj4zZ | https://openreview.net/forum?id=kyaIeYj4zZ | Tao Yu,Chien-Sheng Wu,Xi Victoria Lin,bailin wang,Yi Chern Tan,Xinyi Yang,Dragomir Radev,richard socher,Caiming Xiong | ICLR 2021,Poster | We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data. We construct synthetic question-SQL pairs over high-quality tables via a synchronous context-free grammar (SCFG). We pre-train our model on the synthetic data to inject important structural properties commonly found in semantic parsing into the pre-training language model. To maintain the model's ability to represent real-world data, we also include masked language modeling (MLM) on several existing table-related datasets to regularize our pre-training process. Our proposed pre-training strategy is highly data-efficient. When incorporated with strong base semantic parsers, GraPPa achieves new state-of-the-art results on four popular fully supervised and weakly supervised table semantic parsing tasks. | https://openreview.net/pdf/41a8d65642880c0853bfa9f37d81b4fc15cba53e.pdf |
Sliced Kernelized Stein Discrepancy | https://openreview.net/forum?id=t0TaKv0Gx6Z | https://openreview.net/forum?id=t0TaKv0Gx6Z | Wenbo Gong,Yingzhen Li,José Miguel Hernández-Lobato | ICLR 2021,Poster | Kernelized Stein discrepancy (KSD), though extensively used in goodness-of-fit tests and model learning, suffers from the curse of dimensionality. We address this issue by proposing the sliced Stein discrepancy and its scalable and kernelized variants, which employ kernel-based test functions defined on the optimal one-dimensional projections. When applied to goodness-of-fit tests, extensive experiments show the proposed discrepancy significantly outperforms KSD and various baselines in high dimensions. For model learning, we show its advantages by training an independent component analysis model when compared with existing Stein discrepancy baselines. We further propose a novel particle inference method called sliced Stein variational gradient descent (S-SVGD) which alleviates the mode-collapse issue of SVGD in training variational autoencoders. | https://openreview.net/pdf/39d9fa2661eb33fc05f7d9de6fddb979108767c4.pdf |
Variational Intrinsic Control Revisited | https://openreview.net/forum?id=P0p33rgyoE | https://openreview.net/forum?id=P0p33rgyoE | Taehwan Kwon | ICLR 2021,Poster | In this paper, we revisit variational intrinsic control (VIC), an unsupervised reinforcement learning method for finding the largest set of intrinsic options available to an agent. In the original work by Gregor et al. (2016), two VIC algorithms were proposed: one that represents the options explicitly, and the other that does it implicitly. We show that the intrinsic reward used in the latter is subject to bias in stochastic environments, causing convergence to suboptimal solutions. To correct this behavior, we propose two methods respectively based on the transitional probability model and Gaussian Mixture Model. We substantiate our claims through rigorous mathematical derivations and experimental analyses. | https://openreview.net/pdf/8841dcedad713be63398c9001418c334c7479b4e.pdf |
HyperDynamics: Meta-Learning Object and Agent Dynamics with Hypernetworks | https://openreview.net/forum?id=pHXfe1cOmA | https://openreview.net/forum?id=pHXfe1cOmA | Zhou Xian,Shamit Lal,Hsiao-Yu Tung,Emmanouil Antonios Platanios,Katerina Fragkiadaki | ICLR 2021,Poster | We propose HyperDynamics, a dynamics meta-learning framework that conditions on an agent’s interactions with the environment and optionally its visual observations, and generates the parameters of neural dynamics models based on inferred properties of the dynamical system. Physical and visual properties of the environment that are not part of the low-dimensional state yet affect its temporal dynamics are inferred from the interaction history and visual observations, and are implicitly captured in the generated parameters. We test HyperDynamics on a set of object pushing and locomotion tasks. It outperforms existing dynamics models in the literature that adapt to environment variations by learning dynamics over high dimensional visual observations, capturing the interactions of the agent in recurrent state representations, or using gradient-based meta-optimization. We also show our method matches the performance of an ensemble of separately trained experts, while also being able to generalize well to unseen environment variations at test time. We attribute its good performance to the multiplicative interactions between the inferred system properties—captured in the generated parameters—and the low-dimensional state representation of the dynamical system. | https://openreview.net/pdf/08774c9cdcc696092021d165f4b6e807b414198c.pdf |
Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning | https://openreview.net/forum?id=AHOs7Sm5H7R | https://openreview.net/forum?id=AHOs7Sm5H7R | Zhiyuan Li,Yuping Luo,Kaifeng Lyu | ICLR 2021,Poster | Matrix factorization is a simple and natural test-bed to investigate the implicit regularization of gradient descent. Gunasekar et al. (2017) conjectured that gradient flow with infinitesimal initialization converges to the solution that minimizes the nuclear norm, but a series of recent papers argued that the language of norm minimization is not sufficient to give a full characterization for the implicit regularization. In this work, we provide theoretical and empirical evidence that for depth-2 matrix factorization, gradient flow with infinitesimal initialization is mathematically equivalent to a simple heuristic rank minimization algorithm, Greedy Low-Rank Learning, under some reasonable assumptions. This generalizes the rank minimization view from previous works to a much broader setting and enables us to construct counter-examples to refute the conjecture from Gunasekar et al. (2017). We also extend the results to the case where depth >= 3, and we show that the benefit of being deeper is that the above convergence has a much weaker dependence over initialization magnitude so that this rank minimization is more likely to take effect for initialization with practical scale. | https://openreview.net/pdf/e29b53584bc9017cb15b9394735cd51b56c32446.pdf |
Private Post-GAN Boosting | https://openreview.net/forum?id=6isfR3JCbi | https://openreview.net/forum?id=6isfR3JCbi | Marcel Neunhoeffer,Steven Wu,Cynthia Dwork | ICLR 2021,Poster | Differentially private GANs have proven to be a promising approach for generating realistic synthetic data without compromising the privacy of individuals. Due to the privacy-protective noise introduced in the training, the convergence of GANs becomes even more elusive, which often leads to poor utility in the output generator at the end of training. We propose Private post-GAN boosting (Private PGB), a differentially private method that combines samples produced by the sequence of generators obtained during GAN training to create a high-quality synthetic dataset. To that end, our method leverages the Private Multiplicative Weights method (Hardt and Rothblum, 2010) to reweight generated samples. We evaluate Private PGB on two dimensional toy data, MNIST images, US Census data and a standard machine learning prediction task. Our experiments show that Private PGB improves upon a standard private GAN approach across a collection of quality measures. We also provide a non-private variant of PGB that improves the data quality of standard GAN training. | https://openreview.net/pdf/9af34be61229e9ded84048009befadeb57d1957d.pdf |
Characterizing signal propagation to close the performance gap in unnormalized ResNets | https://openreview.net/forum?id=IX3Nnir2omJ | https://openreview.net/forum?id=IX3Nnir2omJ | Andrew Brock,Soham De,Samuel L Smith | ICLR 2021,Poster | Batch Normalization is a key component in almost all state-of-the-art image classifiers, but it also introduces practical challenges: it breaks the independence between training examples within a batch, can incur compute and memory overhead, and often results in unexpected bugs. Building on recent theoretical analyses of deep ResNets at initialization, we propose a simple set of analysis tools to characterize signal propagation on the forward pass, and leverage these tools to design highly performant ResNets without activation normalization layers. Crucial to our success is an adapted version of the recently proposed Weight Standardization. Our analysis tools show how this technique preserves the signal in ReLU networks by ensuring that the per-channel activation means do not grow with depth. Across a range of FLOP budgets, our networks attain performance competitive with state-of-the-art EfficientNets on ImageNet. | https://openreview.net/pdf/796f0f646a7dc728f2d8d89bc6d55288c9457889.pdf |
Prototypical Contrastive Learning of Unsupervised Representations | https://openreview.net/forum?id=KmykpuSrjcq | https://openreview.net/forum?id=KmykpuSrjcq | Junnan Li,Pan Zhou,Caiming Xiong,Steven Hoi | ICLR 2021,Poster | This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that bridges contrastive learning with clustering. PCL not only learns low-level features for the task of instance discrimination, but more importantly, it implicitly encodes semantic structures of the data into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find the maximum-likelihood estimation of the network parameters in an Expectation-Maximization framework. We iteratively perform E-step as finding the distribution of prototypes via clustering and M-step as optimizing the network via contrastive learning. We propose ProtoNCE loss, a generalized version of the InfoNCE loss for contrastive learning, which encourages representations to be closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks with substantial improvement in low-resource transfer learning. Code and pretrained models are available at https://github.com/salesforce/PCL. | https://openreview.net/pdf/601011be0933cda056049e8fd0b25a10bcfd4515.pdf |
Hyperbolic Neural Networks++ | https://openreview.net/forum?id=Ec85b0tUwbA | https://openreview.net/forum?id=Ec85b0tUwbA | Ryohei Shimizu,YUSUKE Mukuta,Tatsuya Harada | ICLR 2021,Poster | Hyperbolic spaces, which have the capacity to embed tree structures without distortion owing to their exponential volume growth, have recently been applied to machine learning to better capture the hierarchical nature of data. In this study, we generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model. This novel methodology constructs a multinomial logistic regression, fully-connected layers, convolutional layers, and attention mechanisms under a unified mathematical interpretation, without increasing the parameters. Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and better performance than their Euclidean counterparts. | https://openreview.net/pdf/83447b5937824f2d585bcbca44769d242615f9f5.pdf |
Lipschitz Recurrent Neural Networks | https://openreview.net/forum?id=-N7PBXqOUJZ | https://openreview.net/forum?id=-N7PBXqOUJZ | N. Benjamin Erichson,Omri Azencot,Alejandro Queiruga,Liam Hodgkinson,Michael W. Mahoney | ICLR 2021,Poster | Viewing recurrent neural networks (RNNs) as continuous-time dynamical systems, we propose a recurrent unit that describes the hidden state's evolution with two parts: a well-understood linear component plus a Lipschitz nonlinearity. This particular functional form facilitates stability analysis of the long-term behavior of the recurrent unit using tools from nonlinear systems theory. In turn, this enables architectural design decisions before experimentation. Sufficient conditions for global stability of the recurrent unit are obtained, motivating a novel scheme for constructing hidden-to-hidden matrices. Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks, including computer vision, language modeling and speech prediction tasks. Finally, through Hessian-based analysis we demonstrate that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs. | https://openreview.net/pdf/fab880544ab1da571de32581b8939abf93ce475f.pdf |
A statistical theory of cold posteriors in deep neural networks | https://openreview.net/forum?id=Rd138pWXMvG | https://openreview.net/forum?id=Rd138pWXMvG | Laurence Aitchison | ICLR 2021,Poster | To get Bayesian neural networks to perform comparably to standard neural networks it is usually necessary to artificially reduce uncertainty using a tempered or cold posterior. This is extremely concerning: if the prior is accurate, Bayes inference/decision theory is optimal, and any artificial changes to the posterior should harm performance. While this suggests that the prior may be at fault, here we argue that in fact, BNNs for image classification use the wrong likelihood. In particular, standard image benchmark datasets such as CIFAR-10 are carefully curated. We develop a generative model describing curation which gives a principled Bayesian account of cold posteriors, because the likelihood under this new generative model closely matches the tempered likelihoods used in past work. | https://openreview.net/pdf/ad6b61823bafd130bfd5c821fd1ceb7913a54d2d.pdf |
Boost then Convolve: Gradient Boosting Meets Graph Neural Networks | https://openreview.net/forum?id=ebS5NUfoMKL | https://openreview.net/forum?id=ebS5NUfoMKL | Sergei Ivanov,Liudmila Prokhorenkova | ICLR 2021,Poster | Graph neural networks (GNNs) are powerful models that have been successful in various graph representation learning tasks, whereas gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous tabular data. But what approach should be used for graphs with tabular node features? Previous GNN models have mostly focused on networks with homogeneous sparse features and, as we show, are suboptimal in the heterogeneous setting. In this work, we propose a novel architecture that trains GBDT and GNN jointly to get the best of both worlds: the GBDT model deals with heterogeneous features, while GNN accounts for the graph structure. Our model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of GNN. With an extensive experimental comparison to the leading GBDT and GNN models, we demonstrate a significant increase in performance on a variety of graphs with tabular features. The code is available: https://github.com/nd7141/bgnn. | https://openreview.net/pdf/e8b53ad374bcf1f4207b1153a22ea94fb05e3311.pdf |
Genetic Soft Updates for Policy Evolution in Deep Reinforcement Learning | https://openreview.net/forum?id=TGFO0DbD_pk | https://openreview.net/forum?id=TGFO0DbD_pk | Enrico Marchesini,Davide Corsi,Alessandro Farinelli | ICLR 2021,Poster | The combination of Evolutionary Algorithms (EAs) and Deep Reinforcement Learning (DRL) has been recently proposed to merge the benefits of both solutions. Existing mixed approaches, however, have been successfully applied only to actor-critic methods and present significant overhead. We address these issues by introducing a novel mixed framework that exploits a periodical genetic evaluation to soft update the weights of a DRL agent. The resulting approach is applicable with any DRL method and, in a worst-case scenario, it does not exhibit detrimental behaviours. Experiments in robotic applications and continuous control benchmarks demonstrate the versatility of our approach that significantly outperforms prior DRL, EAs, and mixed approaches. Finally, we employ formal verification to confirm the policy improvement, mitigating the inefficient exploration and hyper-parameter sensitivity of DRL. | https://openreview.net/pdf/2a012533ff0b6880941f619b1e03b63abd1414c6.pdf |
Spatially Structured Recurrent Modules | https://openreview.net/forum?id=5l9zj5G7vDY | https://openreview.net/forum?id=5l9zj5G7vDY | Nasim Rahaman,Anirudh Goyal,Muhammad Waleed Gondal,Manuel Wuthrich,Stefan Bauer,Yash Sharma,Yoshua Bengio,Bernhard Schölkopf | ICLR 2021,Poster | Capturing the structure of a data-generating process by means of appropriate inductive biases can help in learning models that generalise well and are robust to changes in the input distribution. While methods that harness spatial and temporal structures find broad application, recent work has demonstrated the potential of models that leverage sparse and modular structure using an ensemble of sparingly interacting modules. In this work, we take a step towards dynamic models that are capable of simultaneously exploiting both modular and spatiotemporal structures. To this end, we model the dynamical system as a collection of autonomous but sparsely interacting sub-systems that interact according to a learned topology which is informed by the spatial structure of the underlying system. This gives rise to a class of models that are well suited for capturing the dynamics of systems that only offer local views into their state, along with corresponding spatial locations of those views. On the tasks of video prediction from cropped frames and multi-agent world modelling from partial observations in the challenging Starcraft2 domain, we find our models to be more robust to the number of available views and better capable of generalisation to novel tasks without additional training than strong baselines that perform equally well or better on the training distribution. | https://openreview.net/pdf/3590e3dd48376daa86d4fee6c6cb3c8b051d03b9.pdf |
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines | https://openreview.net/forum?id=nzpLWnVAyah | https://openreview.net/forum?id=nzpLWnVAyah | Marius Mosbach,Maksym Andriushchenko,Dietrich Klakow | ICLR 2021,Poster | Fine-tuning pre-trained transformer-based language models such as BERT has become a common practice dominating leaderboards across various NLP benchmarks. Despite the strong empirical performance of fine-tuned models, fine-tuning is an unstable process: training the same model with multiple random seeds can result in a large variance of the task performance. Previous literature (Devlin et al., 2019; Lee et al., 2020; Dodge et al., 2020) identified two potential reasons for the observed instability: catastrophic forgetting and small size of the fine-tuning datasets. In this paper, we show that both hypotheses fail to explain the fine-tuning instability. We analyze BERT, RoBERTa, and ALBERT, fine-tuned on commonly used datasets from the GLUE benchmark, and show that the observed instability is caused by optimization difficulties that lead to vanishing gradients. Additionally, we show that the remaining variance of the downstream task performance can be attributed to differences in generalization where fine-tuned models with the same training loss exhibit noticeably different test performance. Based on our analysis, we present a simple but strong baseline that makes fine-tuning BERT-based models significantly more stable than the previously proposed approaches. Code to reproduce our results is available online: https://github.com/uds-lsv/bert-stable-fine-tuning. | https://openreview.net/pdf/ecb1af8e8fc55b9e071db6ef6b56163a21f00a44.pdf |
End-to-End Egospheric Spatial Memory | https://openreview.net/forum?id=rRFIni1CYmy | https://openreview.net/forum?id=rRFIni1CYmy | Daniel James Lenton,Stephen James,Ronald Clark,Andrew Davison | ICLR 2021,Poster | Spatial memory, or the ability to remember and recall specific locations and objects, is central to autonomous agents' ability to carry out tasks in real environments. However, most existing artificial memory modules are not very adept at storing spatial information. We propose a parameter-free module, Egospheric Spatial Memory (ESM), which encodes the memory in an ego-sphere around the agent, enabling expressive 3D representations. ESM can be trained end-to-end via either imitation or reinforcement learning, and improves both training efficiency and final performance against other memory baselines on both drone and manipulator visuomotor control tasks. The explicit egocentric geometry also enables us to seamlessly combine the learned controller with other non-learned modalities, such as local obstacle avoidance. We further show applications to semantic segmentation on the ScanNet dataset, where ESM naturally combines image-level and map-level inference modalities. Through our broad set of experiments, we show that ESM provides a general computation graph for embodied spatial reasoning, and the module forms a bridge between real-time mapping systems and differentiable memory architectures. Implementation at: https://github.com/ivy-dl/memory. | https://openreview.net/pdf/5c9e921c94b83d510872e5e048479c56c66cad04.pdf |
LEAF: A Learnable Frontend for Audio Classification | https://openreview.net/forum?id=jM76BCb6F9m | https://openreview.net/forum?id=jM76BCb6F9m | Neil Zeghidour,Olivier Teboul,Félix de Chaumont Quitry,Marco Tagliasacchi | ICLR 2021,Poster | Mel-filterbanks are fixed, engineered audio features which emulate human perception and have been used through the history of audio understanding up to today. However, their undeniable qualities are counterbalanced by the fundamental limitations of handmade representations. In this work we show that we can train a single learnable frontend that outperforms mel-filterbanks on a wide range of audio signals, including speech, music, audio events and animal sounds, providing a general-purpose learned frontend for audio classification. To do so, we introduce a new principled, lightweight, fully learnable architecture that can be used as a drop-in replacement of mel-filterbanks. Our system learns all operations of audio features extraction, from filtering to pooling, compression and normalization, and can be integrated into any neural network at a negligible parameter cost. We perform multi-task training on eight diverse audio classification tasks, and show consistent improvements of our model over mel-filterbanks and previous learnable alternatives. Moreover, our system outperforms the current state-of-the-art learnable frontend on Audioset, with orders of magnitude fewer parameters. | https://openreview.net/pdf/426d58043e09ff47db27ab72f40e8db575a46f7b.pdf |
Simple Augmentation Goes a Long Way: ADRL for DNN Quantization | https://openreview.net/forum?id=Qr0aRliE_Hb | https://openreview.net/forum?id=Qr0aRliE_Hb | Lin Ning,Guoyang Chen,Weifeng Zhang,Xipeng Shen | ICLR 2021,Poster | Mixed precision quantization improves DNN performance by assigning different layers with different bit-width values. Searching for the optimal bit-width for each layer, however, remains a challenge. Deep Reinforcement Learning (DRL) shows some recent promise. It however suffers instability due to function approximation errors, causing large variances in the early training stages, slow convergence, and suboptimal policies in the mixed-precision quantization problem. This paper proposes augmented DRL (ADRL) as a way to alleviate these issues. This new strategy augments the neural networks in DRL with a complementary scheme to boost the performance of learning. The paper examines the effectiveness of ADRL both analytically and empirically, showing that it can produce more accurate quantized models than the state of the art DRL-based quantization while improving the learning speed by 4.5-64 times. | https://openreview.net/pdf/4f1af14f420632aa60f163e48701a935fae3a547.pdf |
The inductive bias of ReLU networks on orthogonally separable data | https://openreview.net/forum?id=krz7T0xU9Z_ | https://openreview.net/forum?id=krz7T0xU9Z_ | Mary Phuong,Christoph H Lampert | ICLR 2021,Poster | We study the inductive bias of two-layer ReLU networks trained by gradient flow. We identify a class of easy-to-learn (`orthogonally separable') datasets, and characterise the solution that ReLU networks trained on such datasets converge to. Irrespective of network width, the solution turns out to be a combination of two max-margin classifiers: one corresponding to the positive data subset and one corresponding to the negative data subset. The proof is based on the recently introduced concept of extremal sectors, for which we prove a number of properties in the context of orthogonal separability. In particular, we prove stationarity of activation patterns from some time $T$ onwards, which enables a reduction of the ReLU network to an ensemble of linear subnetworks. | https://openreview.net/pdf/a68e4ef7c465175fddb6ba540763c62f8708c9e3.pdf |
Monte-Carlo Planning and Learning with Language Action Value Estimates | https://openreview.net/forum?id=7_G8JySGecm | https://openreview.net/forum?id=7_G8JySGecm | Youngsoo Jang,Seokin Seo,Jongmin Lee,Kee-Eung Kim | ICLR 2021,Poster | Interactive Fiction (IF) games provide a useful testbed for language-based reinforcement learning agents, posing significant challenges of natural language understanding, commonsense reasoning, and non-myopic planning in the combinatorial search space. Agents based on standard planning algorithms struggle to play IF games due to the massive search space of language actions. Thus, language-grounded planning is a key ability of such agents, since inferring the consequence of language action based on semantic understanding can drastically improve search. In this paper, we introduce Monte-Carlo planning with Language Action Value Estimates (MC-LAVE) that combines a Monte-Carlo tree search with language-driven exploration. MC-LAVE invests more search effort into semantically promising language actions using locally optimistic language value estimates, yielding a significant reduction in the effective search space of language actions. We then present a reinforcement learning approach via MC-LAVE, which alternates between MC-LAVE planning and supervised learning of the self-generated language actions. In the experiments, we demonstrate that our method achieves new high scores in various IF games. | https://openreview.net/pdf/255385188b591f81f5ec4cb8c99ea2b92467f6be.pdf |
Learning Energy-Based Models by Diffusion Recovery Likelihood | https://openreview.net/forum?id=v_1Soh8QUNc | https://openreview.net/forum?id=v_1Soh8QUNc | Ruiqi Gao,Yang Song,Ben Poole,Ying Nian Wu,Diederik P Kingma | ICLR 2021,Poster | While energy-based models (EBMs) exhibit a number of desirable properties, training and sampling on high-dimensional datasets remains challenging. Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset. Each EBM is trained with recovery likelihood, which maximizes the conditional probability of the data at a certain noise level given their noisy versions at a higher noise level. Optimizing recovery likelihood is more tractable than marginal likelihood, as sampling from the conditional distributions is much easier than sampling from the marginal distributions. After training, synthesized images can be generated by the sampling process that initializes from Gaussian white noise distribution and progressively samples the conditional distributions at decreasingly lower noise levels. Our method generates high fidelity samples on various image datasets. On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs. Moreover, we demonstrate that unlike previous work on EBMs, our long-run MCMC samples from the conditional distributions do not diverge and still represent realistic images, allowing us to accurately estimate the normalized density of data even for high-dimensional datasets. Our implementation is available at \url{https://github.com/ruiqigao/recovery_likelihood}. | https://openreview.net/pdf/bb74e78ec73a15dcfd250d8dac827fa7009897b2.pdf |
Capturing Label Characteristics in VAEs | https://openreview.net/forum?id=wQRlSUZ5V7B | https://openreview.net/forum?id=wQRlSUZ5V7B | Tom Joy,Sebastian Schmon,Philip Torr,Siddharth N,Tom Rainforth | ICLR 2021,Poster | We present a principled approach to incorporating labels in variational autoencoders (VAEs) that captures the rich characteristic information associated with those labels. While prior work has typically conflated these by learning latent variables that directly correspond to label values, we argue this is contrary to the intended effect of supervision in VAEs—capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop a novel VAE model, the characteristic capturing VAE (CCVAE), which “reparameterizes” supervision through auxiliary variables and a concomitant variational objective. Through judicious structuring of mappings between latent and auxiliary variables, we show that the CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, we show that the CCVAE allows for more effective and more general interventions to be performed, such as smooth traversals within the characteristics for a given label, diverse conditional generation, and transferring characteristics across datapoints. | https://openreview.net/pdf/f58d5a4d19e174d578190ec9687a1904e52596b6.pdf |
Linear Mode Connectivity in Multitask and Continual Learning | https://openreview.net/forum?id=Fmg_fQYUejf | https://openreview.net/forum?id=Fmg_fQYUejf | Seyed Iman Mirzadeh,Mehrdad Farajtabar,Dilan Gorur,Razvan Pascanu,Hassan Ghasemzadeh | ICLR 2021,Poster | Continual (sequential) training and multitask (simultaneous) training are often attempting to solve the same overall objective: to find a solution that performs well on all considered tasks. The main difference is in the training regimes, where continual learning can only have access to one task at a time, which for neural networks typically leads to catastrophic forgetting. That is, the solution found for a subsequent task does not perform well on the previous ones anymore. However, the relationship between the different minima that the two training regimes arrive at is not well understood. What sets them apart? Is there a local structure that could explain the difference in performance achieved by the two different schemes? Motivated by recent work showing that different minima of the same task are typically connected by very simple curves of low error, we investigate whether multitask and continual solutions are similarly connected. We empirically find that indeed such connectivity can be reliably achieved and, more interestingly, it can be done by a linear path, conditioned on having the same initialization for both. We thoroughly analyze this observation and discuss its significance for the continual learning process. Furthermore, we exploit this finding to propose an effective algorithm that constrains the sequentially learned minima to behave as the multitask solution. We show that our method outperforms several state of the art continual learning algorithms on various vision benchmarks. | https://openreview.net/pdf/258e0f0ad7124932b50cc607ded20cd020bfccf8.pdf |
Computational Separation Between Convolutional and Fully-Connected Networks | https://openreview.net/forum?id=hkMoYYEkBoI | https://openreview.net/forum?id=hkMoYYEkBoI | eran malach,Shai Shalev-Shwartz | ICLR 2021,Poster | Convolutional neural networks (CNN) exhibit unmatched performance in a multitude of computer vision tasks. However, the advantage of using convolutional networks over fully-connected networks is not understood from a theoretical perspective. In this work, we show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks. Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent, but at the same time is hard to learn using a polynomial-size fully-connected network. | https://openreview.net/pdf/f6530436996abef24697ac8461be780c738d0b41.pdf |
Rethinking Embedding Coupling in Pre-trained Language Models | https://openreview.net/forum?id=xpFFI_NtgpW | https://openreview.net/forum?id=xpFFI_NtgpW | Hyung Won Chung,Thibault Fevry,Henry Tsai,Melvin Johnson,Sebastian Ruder | ICLR 2021,Poster | We re-evaluate the standard practice of sharing weights between input and output embeddings in state-of-the-art pre-trained language models. We show that decoupled embeddings provide increased modeling flexibility, allowing us to significantly improve the efficiency of parameter allocation in the input embedding of multilingual models. By reallocating the input embedding parameters in the Transformer layers, we achieve dramatically better performance on standard natural language understanding tasks with the same number of parameters during fine-tuning. We also show that allocating additional capacity to the output embedding provides benefits to the model that persist through the fine-tuning stage even though the output embedding is discarded after pre-training. Our analysis shows that larger output embeddings prevent the model's last layers from overspecializing to the pre-training task and encourage Transformer representations to be more general and more transferable to other tasks and languages. Harnessing these findings, we are able to train models that achieve strong performance on the XTREME benchmark without increasing the number of parameters at the fine-tuning stage. | https://openreview.net/pdf/adedfbb0966285d46a1b5e7fb42ed8f57385af9e.pdf |
Physics-aware, probabilistic model order reduction with guaranteed stability | https://openreview.net/forum?id=vyY0jnWG-tK | https://openreview.net/forum?id=vyY0jnWG-tK | Sebastian Kaltenbach,Phaedon Stelios Koutsourelakis | ICLR 2021,Poster | Given (small amounts of) time-series data from a high-dimensional, fine-grained, multiscale dynamical system, we propose a generative framework for learning an effective, lower-dimensional, coarse-grained dynamical model that is predictive of the fine-grained system's long-term evolution but also of its behavior under different initial conditions. We target fine-grained models as they arise in physical applications (e.g. molecular dynamics, agent-based models), the dynamics of which are strongly non-stationary but their transition to equilibrium is governed by unknown slow processes which are largely inaccessible by brute-force simulations. Approaches based on domain knowledge heavily rely on physical insight in identifying temporally slow features and fail to enforce the long-term stability of the learned dynamics. On the other hand, purely statistical frameworks lack interpretability and rely on large amounts of expensive simulation data (long and multiple trajectories) as they cannot infuse domain knowledge. The proposed generative framework achieves the aforementioned desiderata by employing a flexible prior on the complex plane for the latent, slow processes, and an intermediate layer of physics-motivated latent variables that reduces reliance on data and imbues inductive bias. In contrast to existing schemes, it does not require the a priori definition of projection operators from the fine-grained description and addresses simultaneously the tasks of dimensionality reduction and model estimation. We demonstrate its efficacy and accuracy in multiscale physical systems of particle dynamics where probabilistic, long-term predictions of phenomena not contained in the training data are produced. | https://openreview.net/pdf/0dbc13eb90ca0605840fb7ee708d76db95df9cbd.pdf |
Disentangling 3D Prototypical Networks for Few-Shot Concept Learning | https://openreview.net/forum?id=-Lr-u0b42he | https://openreview.net/forum?id=-Lr-u0b42he | Mihir Prabhudesai,Shamit Lal,Darshan Patil,Hsiao-Yu Tung,Adam W Harley,Katerina Fragkiadaki | ICLR 2021,Poster | We present neural architectures that disentangle RGB-D images into objects’ shapes and styles and a map of the background scene, and explore their applications for few-shot 3D object detection and few-shot concept classification. Our networks incorporate architectural biases that reflect the image formation process, 3D geometry of the world scene, and shape-style interplay. They are trained end-to-end self-supervised by predicting views in static scenes, alongside a small number of 3D object boxes. Objects and scenes are represented in terms of 3D feature grids in the bottleneck of the network. We show the proposed 3D neural representations are compositional: they can generate novel 3D scene feature maps by mixing object shapes and styles, resizing and adding the resulting object 3D feature maps over background scene feature maps. We show object detectors trained on hallucinated 3D neural scenes generalize better to novel environments. We show classifiers for object categories, color, materials, and spatial relationships trained over the disentangled 3D feature sub-spaces generalize better with dramatically fewer exemplars over the current state-of-the-art, and enable a visual question answering system that uses them as its modules to generalize one-shot to novel objects in the scene. | https://openreview.net/pdf/b42e4e31403f7d4fdb789fb870cace1f71e6bb86.pdf |
LiftPool: Bidirectional ConvNet Pooling | https://openreview.net/forum?id=kE3vd639uRW | https://openreview.net/forum?id=kE3vd639uRW | Jiaojiao Zhao,Cees G. M. Snoek | ICLR 2021,Poster | Pooling is a critical operation in convolutional neural networks for increasing receptive fields and improving robustness to input variations. Most existing pooling operations downsample the feature maps, which is a lossy process. Moreover, they are not invertible: upsampling a downscaled feature map can not recover the lost information in the downsampling. By adopting the philosophy of the classical Lifting Scheme from signal processing, we propose LiftPool for bidirectional pooling layers, including LiftDownPool and LiftUpPool. LiftDownPool decomposes a feature map into various downsized sub-bands, each of which contains information with different frequencies. As the pooling function in LiftDownPool is perfectly invertible, by performing LiftDownPool backward, a corresponding up-pooling layer LiftUpPool is able to generate a refined upsampled feature map using the detail subbands, which is useful for image-to-image translation challenges. Experiments show the proposed methods achieve better results on image classification and semantic segmentation, using various backbones. Moreover, LiftDownPool offers better robustness to input corruptions and perturbations. | https://openreview.net/pdf/723c52d5e33d391f50b4913e512241a208596a0c.pdf |
Latent Convergent Cross Mapping | https://openreview.net/forum?id=4TSiOTkKe5P | https://openreview.net/forum?id=4TSiOTkKe5P | Edward De Brouwer,Adam Arany,Jaak Simm,Yves Moreau | ICLR 2021,Poster | Discovering causal structures of temporal processes is a major tool of scientific inquiry because it helps us better understand and explain the mechanisms driving a phenomenon of interest, thereby facilitating analysis, reasoning, and synthesis for such systems. However, accurately inferring causal structures within a phenomenon based on observational data only is still an open problem. Indeed, this type of data usually consists of short time series with missing or noisy values for which causal inference is increasingly difficult. In this work, we propose a method to uncover causal relations in chaotic dynamical systems from short, noisy and sporadic time series (that is, incomplete observations at infrequent and irregular intervals) where the classical convergent cross mapping (CCM) fails. Our method works by learning a Neural ODE latent process modeling the state-space dynamics of the time series and by checking the existence of a continuous map between the resulting processes. We provide theoretical analysis and show empirically that Latent-CCM can reliably uncover the true causal pattern, unlike traditional methods. | https://openreview.net/pdf/973e4e487f91472cfee202c1353ca7932b83a942.pdf |
You Only Need Adversarial Supervision for Semantic Image Synthesis | https://openreview.net/forum?id=yvQKLaqNE6M | https://openreview.net/forum?id=yvQKLaqNE6M | Edgar Schönfeld,Vadim Sushko,Dan Zhang,Juergen Gall,Bernt Schiele,Anna Khoreva | ICLR 2021,Poster | Despite their recent successes, GAN models for semantic image synthesis still suffer from poor image quality when trained with only adversarial supervision. Historically, additionally employing the VGG-based perceptual loss has helped to overcome this issue, significantly improving the synthesis quality, but at the same time limiting the progress of GAN models for semantic image synthesis. In this work, we propose a novel, simplified GAN model, which needs only adversarial supervision to achieve high quality results. We re-design the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training. By providing stronger supervision to the discriminator as well as to the generator through spatially- and semantically-aware discriminator feedback, we are able to synthesize images of higher fidelity with better alignment to their input label maps, making the use of the perceptual loss superfluous. Moreover, we enable high-quality multi-modal image synthesis through global and local sampling of a 3D noise tensor injected into the generator, which allows complete or partial image change. We show that images synthesized by our model are more diverse and follow the color and texture distributions of real images more closely. We achieve an average improvement of $6$ FID and $5$ mIoU points over the state of the art across different datasets using only adversarial supervision. | https://openreview.net/pdf/296a08e6901d8e9191af10b50555200a0efb3fc4.pdf |
A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima | https://openreview.net/forum?id=wXgk_iCiYGo | https://openreview.net/forum?id=wXgk_iCiYGo | Zeke Xie,Issei Sato,Masashi Sugiyama | ICLR 2021,Poster | Stochastic Gradient Descent (SGD) and its variants are mainstream methods for training deep networks in practice. SGD is known to find a flat minimum that often generalizes well. However, it is mathematically unclear how deep learning can select a flat minimum among so many minima. To answer the question quantitatively, we develop a density diffusion theory to reveal how minima selection quantitatively depends on the minima sharpness and the hyperparameters. To the best of our knowledge, we are the first to theoretically and empirically prove that, benefiting from the Hessian-dependent covariance of stochastic gradient noise, SGD favors flat minima exponentially more than sharp minima, while Gradient Descent (GD) with injected white noise favors flat minima only polynomially more than sharp minima. We also reveal that either a small learning rate or large-batch training requires exponentially many iterations to escape from minima in terms of the ratio of the batch size and learning rate. Thus, large-batch training cannot search flat minima efficiently in a realistic computational time. | https://openreview.net/pdf/8d09cb383c404f3ef7a8782e7e20297845235b60.pdf |
Robust Learning of Fixed-Structure Bayesian Networks in Nearly-Linear Time | https://openreview.net/forum?id=euDnVs0Ynts | https://openreview.net/forum?id=euDnVs0Ynts | Yu Cheng,Honghao Lin | ICLR 2021,Poster | We study the problem of learning Bayesian networks where an $\epsilon$-fraction of the samples are adversarially corrupted. We focus on the fully-observable case where the underlying graph structure is known. In this work, we present the first nearly-linear time algorithm for this problem with a dimension-independent error guarantee. Previous robust algorithms with comparable error guarantees are slower by at least a factor of $(d/\epsilon)$, where $d$ is the number of variables in the Bayesian network and $\epsilon$ is the fraction of corrupted samples. Our algorithm and analysis are considerably simpler than those in previous work. We achieve this by establishing a direct connection between robust learning of Bayesian networks and robust mean estimation. As a subroutine in our algorithm, we develop a robust mean estimation algorithm whose runtime is nearly-linear in the number of nonzeros in the input samples, which may be of independent interest. | https://openreview.net/pdf/01c090bb63e775869f6bc2d003ebf3cd5e79df67.pdf |
Activation-level uncertainty in deep neural networks | https://openreview.net/forum?id=UvBPbpvHRj- | https://openreview.net/forum?id=UvBPbpvHRj- | Pablo Morales-Alvarez,Daniel Hernández-Lobato,Rafael Molina,José Miguel Hernández-Lobato | ICLR 2021,Poster | Current approaches for uncertainty estimation in deep learning often produce overconfident results. Bayesian Neural Networks (BNNs) model uncertainty in the space of weights, which is usually high-dimensional and limits the quality of variational approximations. The more recent functional BNNs (fBNNs) address this only partially because, although the prior is specified in the space of functions, the posterior approximation is still defined in terms of stochastic weights. In this work we propose to move uncertainty from the weights (which are deterministic) to the activation function. Specifically, the activations are modelled with simple 1D Gaussian Processes (GP), for which a triangular kernel inspired by the ReLU non-linearity is explored. Our experiments show that activation-level stochasticity provides more reliable uncertainty estimates than BNN and fBNN, whereas it performs competitively in standard prediction tasks. We also study the connection with deep GPs, both theoretically and empirically. More precisely, we show that activation-level uncertainty requires fewer inducing points and is better suited for deep architectures. | https://openreview.net/pdf/3675d798eb4cc1b53b84850025e0a9edaee1ddcb.pdf |
SkipW: Resource Adaptable RNN with Strict Upper Computational Limit | https://openreview.net/forum?id=2CjEVW-RGOJ | https://openreview.net/forum?id=2CjEVW-RGOJ | Tsiry Mayet,Anne Lambert,Pascal Leguyadec,Francoise Le Bolzer,François Schnitzler | ICLR 2021,Poster | We introduce Skip-Window, a method to allow recurrent neural networks (RNNs) to trade off accuracy for computational cost during the analysis of a sequence. Similarly to existing approaches, Skip-Window extends existing RNN cells by adding a mechanism to encourage the model to process fewer inputs. Unlike existing approaches, Skip-Window is able to respect a strict computational budget, making this model more suitable for limited hardware. We evaluate this approach on two datasets: a human activity recognition task and an adding task. Our results show that Skip-Window is able to exceed the accuracy of existing approaches for a lower computational cost while strictly limiting said cost. | https://openreview.net/pdf/6c45c14eaa50cfd7a61ea01da21211148f40eccf.pdf |
Wasserstein-2 Generative Networks | https://openreview.net/forum?id=bEoxzW_EXsa | https://openreview.net/forum?id=bEoxzW_EXsa | Alexander Korotin,Vage Egiazarian,Arip Asadulaev,Alexander Safin,Evgeny Burnaev | ICLR 2021,Poster | We propose a novel end-to-end non-minimax algorithm for training optimal transport mappings for the quadratic cost (Wasserstein-2 distance). The algorithm uses input convex neural networks and a cycle-consistency regularization to approximate Wasserstein-2 distance. In contrast to popular entropic and quadratic regularizers, cycle-consistency does not introduce bias and scales well to high dimensions. From the theoretical side, we estimate the properties of the generative mapping fitted by our algorithm. From the practical side, we evaluate our algorithm on a wide range of tasks: image-to-image color transfer, latent space optimal transport, image-to-image style transfer, and domain adaptation. | https://openreview.net/pdf/dbe3a9934dc8bb605cdc8c67d7e68c0a54cf4d38.pdf |
Group Equivariant Stand-Alone Self-Attention For Vision | https://openreview.net/forum?id=JkfYjnOEo6M | https://openreview.net/forum?id=JkfYjnOEo6M | David W. Romero,Jean-Baptiste Cordonnier | ICLR 2021,Poster | We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups. This is achieved by defining positional encodings that are invariant to the action of the group considered. Since the group acts on the positional encoding directly, group equivariant self-attention networks (GSA-Nets) are steerable by nature. Our experiments on vision benchmarks demonstrate consistent improvements of GSA-Nets over non-equivariant self-attention networks. | https://openreview.net/pdf/d8bac9d42bd7732afa503ae4fe5f83e1ace88bb2.pdf |
Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization | https://openreview.net/forum?id=3tFAs5E-Pe | https://openreview.net/forum?id=3tFAs5E-Pe | Alexander Korotin,Lingxiao Li,Justin Solomon,Evgeny Burnaev | ICLR 2021,Poster | Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport. In this paper, we present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures, which are not restricted to being discrete. While past approaches rely on entropic or quadratic regularization, we employ input convex neural networks and cycle-consistency regularization to avoid introducing bias. As a result, our approach does not resort to minimax optimization. We provide theoretical analysis on error bounds as well as empirical evidence of the effectiveness of the proposed approach in low-dimensional qualitative scenarios and high-dimensional quantitative experiments. | https://openreview.net/pdf/e0ff5cb89ad8da4cac3b85587213f35c465757fc.pdf |
RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs | https://openreview.net/forum?id=tGZu6DlbreV | https://openreview.net/forum?id=tGZu6DlbreV | Meng Qu,Junkun Chen,Louis-Pascal Xhonneux,Yoshua Bengio,Jian Tang | ICLR 2021,Poster | This paper studies learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks, and hence are critical to learn. Existing methods either suffer from the problem of searching in a large search space (e.g., neural logic programming) or ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, this paper proposes a probabilistic model called RNNLogic. RNNLogic treats logic rules as a latent variable, and simultaneously trains a rule generator as well as a reasoning predictor with logic rules. We develop an EM-based algorithm for optimization. In each iteration, the reasoning predictor is updated to explore some generated logic rules for reasoning. Then in the E-step, we select a set of high-quality rules from all generated rules with both the rule generator and reasoning predictor via posterior inference; and in the M-step, the rule generator is updated with the rules selected in the E-step. Experiments on four datasets prove the effectiveness of RNNLogic. | https://openreview.net/pdf/847ad1169fb024508870737fba6927e2e34b9271.pdf |
Selective Classification Can Magnify Disparities Across Groups | https://openreview.net/forum?id=N0M_4BkQ05i | https://openreview.net/forum?id=N0M_4BkQ05i | Erik Jones,Shiori Sagawa,Pang Wei Koh,Ananya Kumar,Percy Liang | ICLR 2021,Poster | Selective classification, in which models can abstain on uncertain predictions, is a natural approach to improving accuracy in settings where errors are costly but abstentions are manageable. In this paper, we find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities between various groups within a population, especially in the presence of spurious correlations. We observe this behavior consistently across five vision and NLP datasets. Surprisingly, increasing abstentions can even decrease accuracies on some groups. To better understand this phenomenon, we study the margin distribution, which captures the model’s confidences over all predictions. For symmetric margin distributions, we prove that whether selective classification monotonically improves or worsens accuracy is fully determined by the accuracy at full coverage (i.e., without any abstentions) and whether the distribution satisfies a property we call left-log-concavity. Our analysis also shows that selective classification tends to magnify full-coverage accuracy disparities. Motivated by our analysis, we train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group on these models. Altogether, our results suggest that selective classification should be used with care and underscore the importance of training models to perform equally well across groups at full coverage. | https://openreview.net/pdf/b9ac6534faf7141a9138e3cfcfed7dbada0a6f36.pdf |
FedMix: Approximation of Mixup under Mean Augmented Federated Learning | https://openreview.net/forum?id=Ogga20D2HO- | https://openreview.net/forum?id=Ogga20D2HO- | Tehrim Yoon,Sumin Shin,Sung Ju Hwang,Eunho Yang | ICLR 2021,Poster | Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device, thus preserving privacy and eliminating the need to store data globally. While there are promising results under the assumption of independent and identically distributed (iid) local data, current state-of-the-art algorithms suffer a performance degradation as the heterogeneity of local data across clients increases. To resolve this issue, we propose a simple framework, \emph{Mean Augmented Federated Learning (MAFL)}, where clients send and receive \emph{averaged} local data, subject to the privacy requirements of target applications. Under our framework, we propose a new augmentation algorithm, named \emph{FedMix}, which is inspired by a phenomenal yet simple data augmentation method, Mixup, but does not require local raw data to be directly shared among devices. Our method shows greatly improved performance in the standard benchmark datasets of FL, under highly non-iid federated settings, compared to conventional algorithms. | https://openreview.net/pdf/0258da18459084a22b881d20dbd411e7184bb3d3.pdf |
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness | https://openreview.net/forum?id=jznizqvr15J | https://openreview.net/forum?id=jznizqvr15J | Sang Michael Xie,Ananya Kumar,Robbie Jones,Fereshte Khani,Tengyu Ma,Percy Liang | ICLR 2021,Poster | Consider a prediction setting with few in-distribution labeled examples and many unlabeled examples both in- and out-of-distribution (OOD). The goal is to learn a model which performs well both in-distribution and OOD. In these settings, auxiliary information is often cheaply available for every input. How should we best leverage this auxiliary information for the prediction task? Empirically across three image and time-series datasets, and theoretically in a multi-task linear regression setting, we show that (i) using auxiliary information as input features improves in-distribution error but can hurt OOD error; but (ii) using auxiliary information as outputs of auxiliary pre-training tasks improves OOD error. To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary outputs and fine-tunes this model with the pseudolabels (self-training). We show both theoretically and empirically that In-N-Out outperforms auxiliary inputs or outputs alone on both in-distribution and OOD error. | https://openreview.net/pdf/b003dea7a8dcfbb18d462cb7ce96b56a1a484fc6.pdf |
Sample-Efficient Automated Deep Reinforcement Learning | https://openreview.net/forum?id=hSjxQ3B7GWq | https://openreview.net/forum?id=hSjxQ3B7GWq | Jörg K.H. Franke,Gregor Koehler,André Biedenkapp,Frank Hutter | ICLR 2021,Poster | Despite significant progress in challenging problems across various domains, applying state-of-the-art deep reinforcement learning (RL) algorithms remains challenging due to their sensitivity to the choice of hyperparameters. This sensitivity can partly be attributed to the non-stationarity of the RL problem, potentially requiring different hyperparameter settings at various stages of the learning process. Additionally, in the RL setting, hyperparameter optimization (HPO) requires a large number of environment interactions, hindering the transfer of the successes in RL to real-world applications. In this work, we tackle the issues of sample-efficient and dynamic HPO in RL. We propose a population-based automated RL (AutoRL) framework to meta-optimize arbitrary off-policy RL algorithms. In this framework, we optimize the hyperparameters and also the neural architecture while simultaneously training the agent. By sharing the collected experience across the population, we substantially increase the sample efficiency of the meta-optimization. We demonstrate the capabilities of our sample-efficient AutoRL approach in a case study with the popular TD3 algorithm in the MuJoCo benchmark suite, where we reduce the number of environment interactions needed for meta-optimization by up to an order of magnitude compared to population-based training. | https://openreview.net/pdf/50e735ee784190b4976fe22036a75b2ac2feee2b.pdf |
A Temporal Kernel Approach for Deep Learning with Continuous-time Information | https://openreview.net/forum?id=whE31dn74cL | https://openreview.net/forum?id=whE31dn74cL | Da Xu,Chuanwei Ruan,Evren Korpeoglu,Sushant Kumar,Kannan Achan | ICLR 2021,Poster | Sequential deep learning models such as RNN, causal CNN and attention mechanism do not readily consume continuous-time information. Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes. Current approaches often handle time in a heuristic manner to be consistent with the existing deep learning architectures and implementations. In this paper, we provide a principled way to characterize continuous-time systems using deep learning tools. Notably, the proposed approach applies to all the major deep learning architectures and requires little modifications to the implementation. The critical insight is to represent the continuous-time system by composing neural networks with a temporal kernel, where we gain our intuition from the recent advancements in understanding deep learning with Gaussian process and neural tangent kernel. To represent the temporal kernel, we introduce the random feature approach and convert the kernel learning problem to spectral density estimation under reparameterization. We further prove the convergence and consistency results even when the temporal kernel is non-stationary, and the spectral density is misspecified. The simulations and real-data experiments demonstrate the empirical effectiveness of our temporal kernel approach in a broad range of settings. | https://openreview.net/pdf/40fc3e707f1a7db2333d5459c3b472809d4e33c1.pdf |
Convex Regularization behind Neural Reconstruction | https://openreview.net/forum?id=VErQxgyrbfn | https://openreview.net/forum?id=VErQxgyrbfn | Arda Sahiner,Morteza Mardani,Batu Ozturkler,Mert Pilanci,John M. Pauly | ICLR 2021,Poster | Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems. The non-convex and opaque nature of neural networks, however, hinders their utility in sensitive applications such as medical imaging. To cope with this challenge, this paper advocates a convex duality framework that makes a two-layer fully-convolutional ReLU denoising network amenable to convex optimization. The convex dual network not only offers the optimum training with convex solvers, but also facilitates interpreting training and prediction. In particular, it implies training neural networks with weight decay regularization induces path sparsity while the prediction is piecewise linear filtering. A range of experiments with MNIST and fastMRI datasets confirm the efficacy of the dual network optimization problem. | https://openreview.net/pdf/cd9dfc05e045919a65b1eb93e132822e42d873e4.pdf |
Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms | https://openreview.net/forum?id=fGF8qAqpXXG | https://openreview.net/forum?id=fGF8qAqpXXG | Arda Sahiner,Tolga Ergen,John M. Pauly,Mert Pilanci | ICLR 2021,Poster | We describe the convex semi-infinite dual of the two-layer vector-output ReLU neural network training problem. This semi-infinite dual admits a finite dimensional representation, but its support is over a convex set which is difficult to characterize. In particular, we demonstrate that the non-convex neural network training problem is equivalent to a finite-dimensional convex copositive program. Our work is the first to identify this strong connection between the global optima of neural networks and those of copositive programs. We thus demonstrate how neural networks implicitly attempt to solve copositive programs via semi-nonnegative matrix factorization, and draw key insights from this formulation. We describe the first algorithms for provably finding the global minimum of the vector output neural network training problem, which are polynomial in the number of samples for a fixed data rank, yet exponential in the dimension. However, in the case of convolutional architectures, the computational complexity is exponential in only the filter size and polynomial in all other parameters. We describe the circumstances in which we can find the global optimum of this neural network training problem exactly with soft-thresholded SVD, and provide a copositive relaxation which is guaranteed to be exact for certain classes of problems, and which corresponds with the solution of Stochastic Gradient Descent in practice. | https://openreview.net/pdf/0222bcef2a87d75e3670c0707c8b848554ecbe31.pdf |
Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing | https://openreview.net/forum?id=5NsEIflpbSv | https://openreview.net/forum?id=5NsEIflpbSv | Asish Ghoshal,Xilun Chen,Sonal Gupta,Luke Zettlemoyer,Yashar Mehdad | ICLR 2021,Poster | Training with soft targets instead of hard targets has been shown to improve performance and calibration of deep neural networks. Label smoothing is a popular way of computing soft targets, where one-hot encoding of a class is smoothed with a uniform distribution. Owing to its simplicity, label smoothing has found wide-spread use for training deep neural networks on a wide variety of tasks, ranging from image and text classification to machine translation and semantic parsing. Complementing recent empirical justification for label smoothing, we obtain PAC-Bayesian generalization bounds for label smoothing and show that the generalization error depends on the choice of the noise (smoothing) distribution. Then we propose low-rank adaptive label smoothing (LORAS): a simple yet novel method for training with learned soft targets that generalizes label smoothing and adapts to the latent structure of the label space in structured prediction tasks. Specifically, we evaluate our method on semantic parsing tasks and show that training with appropriately smoothed soft targets can significantly improve accuracy and model calibration, especially in low-resource settings. Used in conjunction with pre-trained sequence-to-sequence models, our method achieves state of the art performance on four semantic parsing data sets. LORAS can be used with any model, improves performance and implicit model calibration without increasing the number of model parameters, and can be scaled to problems with large label spaces containing tens of thousands of labels. | https://openreview.net/pdf/4538feaf2c0ace4bc3472484186d4cda25dc7c01.pdf |
Training GANs with Stronger Augmentations via Contrastive Discriminator | https://openreview.net/forum?id=eo6U4CAwVmg | https://openreview.net/forum?id=eo6U4CAwVmg | Jongheon Jeong,Jinwoo Shin | ICLR 2021,Poster | Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting. It is still unclear, however, which augmentations could actually improve GANs, and in particular, how to apply a wider range of augmentations in training. In this paper, we propose a novel way to address these questions by incorporating a recent contrastive representation learning scheme into the GAN discriminator, coined ContraD. This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability, thereby preventing the discriminator overfitting issue in GANs more effectively. Even better, we observe that the contrastive learning itself also benefits from our GAN training, i.e., by maintaining discriminative features between real and fake samples, suggesting a strong coherence between the two worlds: good contrastive representations are also good for GAN discriminators, and vice versa. Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations, still maintaining highly discriminative features in the discriminator in terms of the linear evaluation. Finally, as a byproduct, we also show that our GANs trained in an unsupervised manner (without labels) can induce many conditional generative models via a simple latent sampling, leveraging the learned features of ContraD. Code is available at https://github.com/jh-jeong/ContraD. | https://openreview.net/pdf/2d308c93802630f8c000471788307eb87a9027fd.pdf |
Private Image Reconstruction from System Side Channels Using Generative Models | https://openreview.net/forum?id=y06VOYLcQXa | https://openreview.net/forum?id=y06VOYLcQXa | Yuanyuan Yuan,Shuai Wang,Junping Zhang | ICLR 2021,Poster | System side channels denote effects imposed on the underlying system and hardware when running a program, such as its accessed CPU cache lines. Side channel analysis (SCA) allows attackers to infer program secrets based on observed side channel signals. Given the ever-growing adoption of machine learning as a service (MLaaS), image analysis software on cloud platforms has been exploited by reconstructing private user images from system side channels. Nevertheless, to date, SCA is still highly challenging, requiring technical knowledge of victim software's internal operations. For existing SCA attacks, comprehending such internal operations requires heavyweight program analysis or manual efforts.
This research proposes an attack framework to reconstruct private user images processed by media software via system side channels. The framework forms an effective workflow by incorporating convolutional networks, variational autoencoders, and generative adversarial networks. Our evaluation of two popular side channels shows that the reconstructed images consistently match user inputs, making privacy leakage attacks more practical. We also show surprising results that even one-bit data read/write pattern side channels, which are deemed minimally informative, can be used to reconstruct quality images using our framework. | https://openreview.net/pdf/73fc8942e64baa03de7625e340fa3c6d84db3589.pdf |
Learning to Make Decisions via Submodular Regularization | https://openreview.net/forum?id=ac288vnG_7U | https://openreview.net/forum?id=ac288vnG_7U | Ayya Alieva,Aiden Aceves,Jialin Song,Stephen Mayo,Yisong Yue,Yuxin Chen | ICLR 2021,Poster | Many sequential decision making tasks can be viewed as combinatorial optimization problems over a large number of actions. When the cost of evaluating an action is high, even a greedy algorithm, which iteratively picks the best action given the history, is prohibitive to run. In this paper, we aim to learn a greedy heuristic for sequentially selecting actions as a surrogate for invoking the expensive oracle when evaluating an action. In particular, we focus on a class of combinatorial problems that can be solved via submodular maximization (either directly on the objective function or via submodular surrogates). We introduce a data-driven optimization framework based on the submodular-norm loss, a novel loss function that encourages the resulting objective to exhibit diminishing returns. Our framework outputs a surrogate objective that is efficient to train, approximately submodular, and can be made permutation-invariant. The latter two properties allow us to prove strong approximation guarantees for the learned greedy heuristic. Furthermore, we show that our model can be easily integrated with modern deep imitation learning pipelines for sequential prediction tasks. We demonstrate the performance of our algorithm on a variety of batched and sequential optimization tasks, including set cover, active learning, and Bayesian optimization for protein engineering. | https://openreview.net/pdf/1c1034956d2f523aa299974f4f639d1b8ecb0026.pdf |
The Recurrent Neural Tangent Kernel | https://openreview.net/forum?id=3T9iFICe0Y9 | https://openreview.net/forum?id=3T9iFICe0Y9 | Sina Alemohammad,Zichao Wang,Randall Balestriero,Richard Baraniuk | ICLR 2021,Poster | The study of deep neural networks (DNNs) in the infinite-width limit, via the so-called neural tangent kernel (NTK) approach, has provided new insights into the dynamics of learning, generalization, and the impact of initialization. One key DNN architecture remains to be kernelized, namely, the recurrent neural network (RNN). In this paper we introduce and study the Recurrent Neural Tangent Kernel (RNTK), which provides new insights into the behavior of overparametrized RNNs. A key property of the RNTK that should greatly benefit practitioners is its ability to compare inputs of different length. To this end, we characterize how the RNTK weights different time steps to form its output under different initialization parameters and nonlinearity choices. Experiments on a synthetic data set and 56 real-world data sets demonstrate that the RNTK offers significant performance gains over other kernels, including standard NTKs, across a wide array of data sets. | https://openreview.net/pdf/0ede6a7293a24c88d58e7542b3c44d97270a2a0c.pdf |
Evaluation of Similarity-based Explanations | https://openreview.net/forum?id=9uvhpyQwzM_ | https://openreview.net/forum?id=9uvhpyQwzM_ | Kazuaki Hanawa,Sho Yokoi,Satoshi Hara,Kentaro Inui | ICLR 2021,Poster | Explaining the predictions made by complex machine learning models helps users to understand and accept the predicted outputs with confidence. One promising way is to use similarity-based explanation that provides similar instances as evidence to support model predictions. Several relevance metrics are used for this purpose. In this study, we investigated relevance metrics that can provide reasonable explanations to users. Specifically, we adopted three tests to evaluate whether the relevance metrics satisfy the minimal requirements for similarity-based explanation. Our experiments revealed that the cosine similarity of the gradients of the loss performs best, which would be a recommended choice in practice. In addition, we showed that some metrics perform poorly in our tests and analyzed the reasons for their failure. We expect our insights to help practitioners in selecting appropriate relevance metrics and also to aid further research on designing better relevance metrics for explanations. | https://openreview.net/pdf/ede4daa61cd87856ebce2c047d94f9fdc6149edf.pdf |
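A minimal sketch of the relevance metric this abstract singles out (cosine similarity of loss gradients), using a logistic-regression stand-in so the per-example gradient has a closed form; the function names and the toy model are assumptions for illustration, not the authors' evaluation code.

```python
import numpy as np

def loss_grad(w, x, y):
    """Gradient of the logistic loss w.r.t. the weights of a linear model.
    (Stands in for the per-example loss gradient of any differentiable model.)"""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def grad_cos_relevance(w, x_test, y_test, x_train, y_train):
    """Relevance of a training instance to a test prediction, scored by the
    cosine similarity of their loss gradients (the metric reported to work best)."""
    g_test = loss_grad(w, x_test, y_test)
    g_train = loss_grad(w, x_train, y_train)
    denom = np.linalg.norm(g_test) * np.linalg.norm(g_train) + 1e-12
    return g_test @ g_train / denom
```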
Adaptive Procedural Task Generation for Hard-Exploration Problems | https://openreview.net/forum?id=8xLkv08d70T | https://openreview.net/forum?id=8xLkv08d70T | Kuan Fang,Yuke Zhu,Silvio Savarese,L. Fei-Fei | ICLR 2021,Poster | We introduce Adaptive Procedural Task Generation (APT-Gen), an approach to progressively generate a sequence of tasks as curricula to facilitate reinforcement learning in hard-exploration problems. At the heart of our approach, a task generator learns to create tasks from a parameterized task space via a black-box procedural generation module. To enable curriculum learning in the absence of a direct indicator of learning progress, we propose to train the task generator by balancing the agent's performance in the generated tasks and the similarity to the target tasks. Through adversarial training, the task similarity is adaptively estimated by a task discriminator defined on the agent's experiences, allowing the generated tasks to approximate target tasks of unknown parameterization or outside of the predefined task space. Our experiments on the grid world and robotic manipulation task domains show that APT-Gen achieves substantially better performance than various existing baselines by generating suitable tasks of rich variations. | https://openreview.net/pdf/24bbbe680bd44c907aab36d5e18bae82a7a5a48f.pdf |
Linear Last-iterate Convergence in Constrained Saddle-point Optimization | https://openreview.net/forum?id=dx11_7vm5_r | https://openreview.net/forum?id=dx11_7vm5_r | Chen-Yu Wei,Chung-Wei Lee,Mengxiao Zhang,Haipeng Luo | ICLR 2021,Poster | Optimistic Gradient Descent Ascent (OGDA) and Optimistic Multiplicative Weights Update (OMWU) for saddle-point optimization have received growing attention due to their favorable last-iterate convergence. However, their behaviors for simple bilinear games over the probability simplex are still not fully understood --- previous analysis lacks explicit convergence rates, only applies to an exponentially small learning rate, or requires additional assumptions such as the uniqueness of the optimal solution.
In this work, we significantly expand the understanding of last-iterate convergence for OGDA and OMWU in the constrained setting. Specifically, for OMWU in bilinear games over the simplex, we show that when the equilibrium is unique, linear last-iterate convergence is achievable with a constant learning rate, which improves the result of (Daskalakis & Panageas, 2019) under the same assumption. We then significantly extend the results to more general objectives and feasible sets for the projected OGDA algorithm, by introducing a sufficient condition under which OGDA exhibits concrete last-iterate convergence rates with a constant learning rate. We show that bilinear games over any polytope satisfy this condition and OGDA converges exponentially fast even without the unique equilibrium assumption. Our condition also holds for strongly-convex-strongly-concave functions, recovering the result of (Hsieh et al., 2019). Finally, we provide experimental results to further support our theory. | https://openreview.net/pdf/80ab11841a700c095d09408aebe0552dc6c2c21f.pdf |
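A minimal numpy sketch of the OGDA update discussed above, run on an unconstrained bilinear game $f(x, y) = x^\top A y$; the paper's setting additionally projects onto a constraint set (e.g., the simplex or a polytope), which this sketch omits, and the step size and iteration count are illustrative.

```python
import numpy as np

def ogda_bilinear(A, x0, y0, eta=0.1, steps=2000):
    """Optimistic Gradient Descent Ascent on f(x, y) = x^T A y.
    Each player takes a gradient step with an 'optimistic' correction that
    reuses the previous gradient. The projection step needed for constrained
    (e.g., simplex) play is omitted in this unconstrained sketch."""
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x           # grad_x f and grad_y f
        x = x - eta * (2 * gx - gx_prev)  # descent step for the min player
        y = y + eta * (2 * gy - gy_prev)  # ascent step for the max player
        gx_prev, gy_prev = gx, gy
    return x, y

# Tiny usage example on a 2x2 bilinear game.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = ogda_bilinear(A, np.array([0.8, 0.2]), np.array([0.3, 0.7]))
```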
On Graph Neural Networks versus Graph-Augmented MLPs | https://openreview.net/forum?id=tiqI7w64JG2 | https://openreview.net/forum?id=tiqI7w64JG2 | Lei Chen,Zhengdao Chen,Joan Bruna | ICLR 2021,Poster | From the perspectives of expressive power and learning, this work compares multi-layer Graph Neural Networks (GNNs) with a simplified alternative that we call Graph-Augmented Multi-Layer Perceptrons (GA-MLPs), which first augments node features with certain multi-hop operators on the graph and then applies learnable node-wise functions. From the perspective of graph isomorphism testing, we show both theoretically and numerically that GA-MLPs with suitable operators can distinguish almost all non-isomorphic graphs, just like the Weisfeiler-Lehman (WL) test and GNNs. However, by viewing them as node-level functions and examining the equivalence classes they induce on rooted graphs, we prove a separation in expressive power between GA-MLPs and GNNs that grows exponentially in depth. In particular, unlike GNNs, GA-MLPs are unable to count the number of attributed walks. We also demonstrate via community detection experiments that GA-MLPs can be limited by their choice of operator family, whereas GNNs have higher flexibility in learning. | https://openreview.net/pdf/974857db041de4f514814723ec84f8c39aa35126.pdf |
Solving Compositional Reinforcement Learning Problems via Task Reduction | https://openreview.net/forum?id=9SS69KwomAM | https://openreview.net/forum?id=9SS69KwomAM | Yunfei Li,Yilin Wu,Huazhe Xu,Xiaolong Wang,Yi Wu | ICLR 2021,Poster | We propose a novel learning paradigm, Self-Imitation via Reduction (SIR), for solving compositional reinforcement learning problems. SIR is based on two core ideas: task reduction and self-imitation. Task reduction tackles a hard-to-solve task by actively reducing it to an easier task whose solution is known by the RL agent. Once the original hard task is successfully solved by task reduction, the agent naturally obtains a self-generated solution trajectory to imitate. By continuously collecting and imitating such demonstrations, the agent is able to progressively expand the solved subspace in the entire task space. Experiment results show that SIR can significantly accelerate and improve learning on a variety of challenging sparse-reward continuous-control problems with compositional structures. Code and videos are available at https://sites.google.com/view/sir-compositional. | https://openreview.net/pdf/77f78b692f36356e5e5bbddd012a3367bd821b29.pdf |
Conditional Generative Modeling via Learning the Latent Space | https://openreview.net/forum?id=VJnrYcnRc6 | https://openreview.net/forum?id=VJnrYcnRc6 | Sameera Ramasinghe,Kanchana Nisal Ranasinghe,Salman Khan,Nick Barnes,Stephen Gould | ICLR 2021,Poster | Although deep learning has achieved appealing results on several machine learning tasks, most of the models are deterministic at inference, limiting their application to single-modal settings. We propose a novel general-purpose framework for conditional generation in multimodal spaces, that uses latent variables to model generalizable learning patterns while minimizing a family of regression cost functions. At inference, the latent variables are optimized to find solutions corresponding to multiple output modes. Compared to existing generative solutions, our approach demonstrates faster and more stable convergence, and can learn better representations for downstream tasks. Importantly, it provides a simple generic model that can perform better than highly engineered pipelines tailored using domain expertise on a variety of tasks, while generating diverse outputs. Code available at https://github.com/samgregoost/cGML. | https://openreview.net/pdf/ad10b1238b8c96783d156228bbe0a955123a991c.pdf |
DialoGraph: Incorporating Interpretable Strategy-Graph Networks into Negotiation Dialogues | https://openreview.net/forum?id=kDnal_bbb-E | https://openreview.net/forum?id=kDnal_bbb-E | Rishabh Joshi,Vidhisha Balachandran,Shikhar Vashishth,Alan Black,Yulia Tsvetkov | ICLR 2021,Poster | To successfully negotiate a deal, it is not enough to communicate fluently: pragmatic planning of persuasive negotiation strategies is essential. While modern dialogue agents excel at generating fluent sentences, they still lack pragmatic grounding and cannot reason strategically. We present DialoGraph, a negotiation system that incorporates pragmatic strategies in a negotiation dialogue using graph neural networks. DialoGraph explicitly incorporates dependencies between sequences of strategies to enable improved and interpretable prediction of next optimal strategies, given the dialogue context. Our graph-based method outperforms prior state-of-the-art negotiation models both in the accuracy of strategy/dialogue act prediction and in the quality of downstream dialogue response generation. We qualitatively show further benefits of learned strategy-graphs in providing explicit associations between effective negotiation strategies over the course of the dialogue, leading to interpretable and strategic dialogues. | https://openreview.net/pdf/1f09e2eb0a2962d022f2fc8411de57bb2f420a25.pdf |
WaNet - Imperceptible Warping-based Backdoor Attack | https://openreview.net/forum?id=eEn8KTtJOx | https://openreview.net/forum?id=eEn8KTtJOx | Tuan Anh Nguyen,Anh Tuan Tran | ICLR 2021,Poster | With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat drawing much research interest in recent years. A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears. However, the existing backdoor attacks are all built on noise perturbation triggers, making them noticeable to humans. In this paper, we instead propose using warping-based triggers. The proposed backdoor outperforms the previous methods in a human inspection test by a wide margin, proving its stealthiness. To make such models undetectable by machine defenders, we propose a novel training mode, called the "noise mode". The trained networks successfully attack and bypass the state-of-the-art defense methods on standard classification datasets, including MNIST, CIFAR-10, GTSRB, and CelebA. Behavior analyses show that our backdoors are transparent to network inspection, further proving this novel attack mechanism's efficiency. | https://openreview.net/pdf/db3277f5b47619abfe13880772b864960e98f643.pdf |
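To make the "warping-based trigger" idea concrete, here is a rough PyTorch sketch of warping an image batch with a fixed, smooth, low-magnitude flow field; the grid size, warp strength, and interpolation choices are assumptions for illustration and do not reproduce the authors' exact construction or their noise mode.

```python
import torch
import torch.nn.functional as F

def make_warp_grid(size, grid_k=4, strength=0.5):
    """Build a smooth, small-magnitude warping field: random offsets on a
    coarse grid_k x grid_k grid, upsampled to full resolution and added to
    the identity sampling grid. Parameter values here are illustrative."""
    flow = torch.rand(1, 2, grid_k, grid_k) * 2 - 1                  # offsets in [-1, 1)
    flow = F.interpolate(flow, size=(size, size), mode="bicubic",
                         align_corners=True) * (strength / size)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                            torch.linspace(-1, 1, size), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)            # (1, H, W, 2)
    return identity + flow.permute(0, 2, 3, 1)

def apply_trigger(images, grid):
    """Warp a batch of images (N, C, H, W) with the fixed trigger field."""
    return F.grid_sample(images, grid.expand(images.size(0), -1, -1, -1),
                         mode="bilinear", align_corners=True)
```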
Nonseparable Symplectic Neural Networks | https://openreview.net/forum?id=B5VvQrI49Pa | https://openreview.net/forum?id=B5VvQrI49Pa | Shiying Xiong,Yunjin Tong,Xingzhe He,Shuqi Yang,Cheng Yang,Bo Zhu | ICLR 2021,Poster | Predicting the behaviors of Hamiltonian systems has been drawing increasing attention in scientific machine learning. However, the vast majority of the literature has focused on predicting separable Hamiltonian systems with their kinetic and potential energy terms being explicitly decoupled, while building data-driven paradigms to predict nonseparable Hamiltonian systems that are ubiquitous in fluid dynamics and quantum mechanics has rarely been explored. The main computational challenge lies in the effective embedding of symplectic priors to describe the inherently coupled evolution of position and momentum, which typically exhibits intricate dynamics. To solve the problem, we propose a novel neural network architecture, Nonseparable Symplectic Neural Networks (NSSNNs), to uncover and embed the symplectic structure of a nonseparable Hamiltonian system from limited observation data. The enabling mechanism of our approach is an augmented symplectic time integrator to decouple the position and momentum energy terms and facilitate their evolution. We demonstrated the efficacy and versatility of our method by predicting a wide range of Hamiltonian systems, both separable and nonseparable, including chaotic vortical flows. We showed the unique computational merits of our approach in yielding long-term, accurate, and robust predictions for large-scale Hamiltonian systems by rigorously enforcing symplectomorphism. | https://openreview.net/pdf/c9ab2e0778f4de8dcfb0a34ffd1c09aa50ceb3b8.pdf |
Discovering Diverse Multi-Agent Strategic Behavior via Reward Randomization | https://openreview.net/forum?id=lvRTC669EY_ | https://openreview.net/forum?id=lvRTC669EY_ | Zhenggang Tang,Chao Yu,Boyuan Chen,Huazhe Xu,Xiaolong Wang,Fei Fang,Simon Shaolei Du,Yu Wang,Yi Wu | ICLR 2021,Poster | We propose a simple, general and effective technique, Reward Randomization for discovering diverse strategic policies in complex multi-agent games. Combining reward randomization and policy gradient, we derive a new algorithm, Reward-Randomized Policy Gradient (RPG). RPG is able to discover a set of multiple distinctive human-interpretable strategies in challenging temporal trust dilemmas, including grid-world games and a real-world game Agar.io, where multiple equilibria exist but standard multi-agent policy gradient algorithms always converge to a fixed one with a sub-optimal payoff for every player even using state-of-the-art exploration techniques. Furthermore, with the set of diverse strategies from RPG, we can (1) achieve higher payoffs by fine-tuning the best policy from the set; and (2) obtain an adaptive agent by using this set of strategies as its training opponents. | https://openreview.net/pdf/2062fdf1e8a1dbc3c1d293239ad291f853463ba8.pdf |
Multi-timescale Representation Learning in LSTM Language Models | https://openreview.net/forum?id=9ITXiTrAoT | https://openreview.net/forum?id=9ITXiTrAoT | Shivangi Mahto,Vy Ai Vo,Javier S. Turek,Alexander Huth | ICLR 2021,Poster | Language models must capture statistical dependencies between words at timescales ranging from very short to very long. Earlier work has demonstrated that dependencies in natural language tend to decay with distance between words according to a power law. However, it is unclear how this knowledge can be used for analyzing or designing neural network language models. In this work, we derived a theory for how the memory gating mechanism in long short-term memory (LSTM) language models can capture power law decay. We found that unit timescales within an LSTM, which are determined by the forget gate bias, should follow an Inverse Gamma distribution. Experiments then showed that LSTM language models trained on natural English text learn to approximate this theoretical distribution. Further, we found that explicitly imposing the theoretical distribution upon the model during training yielded better language model perplexity overall, with particular improvements for predicting low-frequency (rare) words. Moreover, the explicit multi-timescale model selectively routes information about different types of words through units with different timescales, potentially improving model interpretability. These results demonstrate the importance of careful, theoretically-motivated analysis of memory and timescale in language models. | https://openreview.net/pdf/6faff0f37219bcee41b257a3d80d7eeb3df0e2d6.pdf |
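A small sketch of the kind of initialization this abstract describes: sample per-unit timescales from an Inverse-Gamma distribution and convert them to LSTM forget-gate biases. The timescale-to-bias mapping and the distribution parameters below are common heuristics assumed for illustration; the paper derives its own distribution and values.

```python
import numpy as np

def forget_bias_from_timescales(n_units, alpha=1.0, beta=1.0, seed=0):
    """Sample per-unit timescales from Inverse-Gamma(alpha, beta) and map them
    to forget-gate biases. A unit whose forget gate sits at f retains
    information over roughly T ~ -1/log(f) steps, so we invert that relation:
    f = exp(-1/T), b = logit(f). alpha and beta are placeholders, not the
    values derived in the paper."""
    rng = np.random.default_rng(seed)
    # Inverse-Gamma(alpha, beta) sample = beta / Gamma(alpha, scale=1) sample.
    timescales = beta / rng.gamma(shape=alpha, scale=1.0, size=n_units)
    f = np.exp(-1.0 / np.maximum(timescales, 1e-6))
    return np.log((f + 1e-12) / (1.0 - f + 1e-12))
```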
Explaining the Efficacy of Counterfactually Augmented Data | https://openreview.net/forum?id=HHiiQKWsOcV | https://openreview.net/forum?id=HHiiQKWsOcV | Divyansh Kaushik,Amrith Setlur,Eduard H Hovy,Zachary Chase Lipton | ICLR 2021,Poster | In attempts to produce machine learning models less reliant on spurious patterns in NLP datasets, researchers have recently proposed curating counterfactually augmented data (CAD) via a human-in-the-loop process in which given some documents and their (initial) labels, humans must revise the text to make a counterfactual label applicable. Importantly, edits that are not necessary to flip the applicable label are prohibited. Models trained on the augmented (original and revised) data appear, empirically, to rely less on semantically irrelevant words and to generalize better out of domain. While this work draws loosely on causal thinking, the underlying causal model (even at an abstract level) and the principles underlying the observed out-of-domain improvements remain unclear. In this paper, we introduce a toy analog based on linear Gaussian models, observing interesting relationships between causal models, measurement noise, out-of-domain generalization, and reliance on spurious signals. Our analysis provides some insights that help to explain the efficacy of CAD. Moreover, we develop the hypothesis that while adding noise to causal features should degrade both in-domain and out-of-domain performance, adding noise to non-causal features should lead to relative improvements in out-of-domain performance. This idea inspires a speculative test for determining whether a feature attribution technique has identified the causal spans. If adding noise (e.g., by random word flips) to the highlighted spans degrades both in-domain and out-of-domain performance on a battery of challenge datasets, but adding noise to the complement gives improvements out-of-domain, this suggests we have identified causal spans. Thus, we present a large scale empirical study comparing spans edited to create CAD to those selected by attention and saliency maps. Across numerous challenge domains and models, we find that the hypothesized phenomenon is pronounced for CAD. | https://openreview.net/pdf/73361dc2c4d80cb501745448d7de1e3c99d2f2a8.pdf |
Revisiting Locally Supervised Learning: an Alternative to End-to-end Training | https://openreview.net/forum?id=fAbkE6ant2 | https://openreview.net/forum?id=fAbkE6ant2 | Yulin Wang,Zanlin Ni,Shiji Song,Le Yang,Gao Huang | ICLR 2021,Poster | Due to the need to store the intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from a high GPU memory footprint. This paper aims to address this problem by revisiting locally supervised learning, where a network is split into gradient-isolated modules and trained with local supervision. We experimentally show that simply training local modules with the E2E loss tends to collapse task-relevant information at early layers, and hence hurts the performance of the full model. To avoid this issue, we propose an information propagation (InfoPro) loss, which encourages local modules to preserve as much useful information as possible, while progressively discarding task-irrelevant information. As the InfoPro loss is difficult to compute in its original form, we derive a feasible upper bound as a surrogate optimization objective, yielding a simple but effective algorithm. In fact, we show that the proposed method boils down to minimizing the combination of a reconstruction loss and a normal cross-entropy/contrastive term. Extensive empirical results on five datasets (i.e., CIFAR, SVHN, STL-10, ImageNet and Cityscapes) validate that InfoPro is capable of achieving competitive performance with less than 40% of the memory footprint of E2E training, while allowing the use of higher-resolution training data or larger batch sizes under the same GPU memory constraint. Our method also enables training local modules asynchronously for potential training acceleration. | https://openreview.net/pdf/ae46b2e0daac3e1e7af2c0b30ca3ed05b9675f66.pdf |
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? | https://openreview.net/forum?id=fgd7we_uZa6 | https://openreview.net/forum?id=fgd7we_uZa6 | Zixiang Chen,Yuan Cao,Difan Zou,Quanquan Gu | ICLR 2021,Poster | A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high degree polynomial of the training sample size $n$ and the inverse of the target error $\epsilon^{-1}$, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it is shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. In specific, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in $n$ and $\epsilon^{-1}$. Our results push the study of over-parameterized deep neural networks towards more practical settings. | https://openreview.net/pdf/7d4b4fabf3654c85ec7bc9a41516a3fe17bbccd8.pdf |
Blending MPC & Value Function Approximation for Efficient Reinforcement Learning | https://openreview.net/forum?id=RqCC_00Bg7V | https://openreview.net/forum?id=RqCC_00Bg7V | Mohak Bhardwaj,Sanjiban Choudhury,Byron Boots | ICLR 2021,Poster | Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems that uses a model to make predictions about future behavior. For each state encountered, MPC solves an online optimization problem to choose a control action that will minimize future cost. This is a surprisingly effective strategy, but real-time performance requirements warrant the use of simple models. If the model is not sufficiently accurate, then the resulting controller can be biased, limiting performance. We present a framework for improving on MPC with model-free reinforcement learning (RL). The key insight is to view MPC as constructing a series of local Q-function approximations. We show that by using a parameter $\lambda$, similar to the trace decay parameter in TD($\lambda$), we can systematically trade off learned value estimates against the local Q-function approximations. We present a theoretical analysis that shows how error from inaccurate models in MPC and value function estimation in RL can be balanced. We further propose an algorithm that changes $\lambda$ over time to reduce the dependence on MPC as our estimates of the value function improve, and test the efficacy of our approach on challenging high-dimensional manipulation tasks with biased models in simulation. We demonstrate that our approach can obtain performance comparable with MPC with access to true dynamics even under severe model bias and is more sample efficient as compared to model-free RL. | https://openreview.net/pdf/50c99bb8be8ec7784b7ca8b4a8b59da987b66045.pdf |
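A hedged sketch of the kind of $\lambda$-weighted blending this abstract describes, mixing h-step MPC rollout estimates with a learned terminal value in TD($\lambda$) style; this illustrates the idea under assumed conventions and is not the paper's exact estimator or algorithm.

```python
import numpy as np

def blended_lambda_estimate(rewards, values, gamma=0.99, lam=0.9):
    """Blend h-step returns from an MPC model rollout with a learned value
    function, TD(lambda)-style. rewards[i] is the predicted reward at rollout
    step i; values[i] is the learned value of the state reached after i+1
    steps. lam -> 0 trusts the learned values; lam -> 1 trusts the full
    model rollout. A sketch of the idea, not the paper's exact estimator."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    H = len(rewards)
    disc = gamma ** np.arange(H)
    # h-step estimates: sum of the first h predicted rewards + bootstrapped value.
    q_h = np.array([np.sum(disc[:h] * rewards[:h]) + disc[h - 1] * gamma * values[h - 1]
                    for h in range(1, H + 1)])
    weights = (1 - lam) * lam ** np.arange(H)
    weights[-1] = lam ** (H - 1)   # the full-horizon estimate gets the remaining mass
    return np.sum(weights * q_h)
```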
Probabilistic Numeric Convolutional Neural Networks | https://openreview.net/forum?id=T1XmO8ScKim | https://openreview.net/forum?id=T1XmO8ScKim | Marc Anton Finzi,Roberto Bondesan,Max Welling | ICLR 2021,Poster | Continuous input signals like images and time series that are irregularly sampled or have missing values are challenging for existing deep learning methods. Coherently defined feature representations must depend on the values in unobserved regions of the input. Drawing from the work in probabilistic numerics, we propose Probabilistic Numeric Convolutional Neural Networks which represent features as Gaussian processes, providing a probabilistic description of discretization error. We then define a convolutional layer as the evolution of a PDE defined on this GP, followed by a nonlinearity. This approach also naturally admits steerable equivariant convolutions under e.g. the rotation group. In experiments we show that our approach yields a $3\times$ reduction of error from the previous state of the art on the SuperPixel-MNIST dataset and competitive performance on the medical time series dataset PhysioNet2012. | https://openreview.net/pdf/132819644044c301e530ea14a0a17e7e4d6756d7.pdf |