Columns: title (string, length 19-143), url (string, length 41-43), detail_url (string, length 41-43), authors (string, length 9-347), tags (string, 3 classes), abstract (string, length 457-2.38k), pdf (string, length 71)
Calibration of Neural Networks using Splines
https://openreview.net/forum?id=eQe8DEWNN2W
https://openreview.net/forum?id=eQe8DEWNN2W
Kartik Gupta,Amir Rahimi,Thalaiyasingam Ajanthan,Thomas Mensink,Cristian Sminchisescu,Richard Hartley
ICLR 2021,Poster
Calibrating neural networks is of utmost importance when employing them in safety-critical applications where the downstream decision making depends on the predicted probabilities. Measuring calibration error amounts to comparing two empirical distributions. In this work, we introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test, in which the main idea is to compare the respective cumulative probability distributions. From this, by approximating the empirical cumulative distribution using a differentiable function via splines, we obtain a recalibration function, which maps the network outputs to actual (calibrated) class assignment probabilities. The spline-fitting is performed using a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set. We test our method against existing calibration approaches on various image classification datasets, and our spline-based recalibration approach consistently outperforms existing methods on the KS error as well as other commonly used calibration measures. Code is available online at https://github.com/kartikgupta-at-anu/spline-calibration.
https://openreview.net/pdf/9b7a1091871efa36fec2c10f52c8cef289c89a5f.pdf
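The binning-free KS error above compares the cumulative distributions of predicted confidences and empirical correctness. A minimal NumPy sketch of that comparison, assuming top-1 confidences and 0/1 correctness labels (the spline-based recalibration step is omitted; function and variable names are illustrative, not the authors' code):

import numpy as np

def ks_calibration_error(confidences, correct):
    # Sort samples by predicted confidence, then compare the cumulative
    # predicted probability with the cumulative empirical accuracy.
    order = np.argsort(confidences)
    conf = np.asarray(confidences, dtype=float)[order]
    acc = np.asarray(correct, dtype=float)[order]
    n = conf.shape[0]
    cum_conf = np.cumsum(conf) / n
    cum_acc = np.cumsum(acc) / n
    # The KS-style error is the largest gap between the two cumulative curves.
    return np.max(np.abs(cum_conf - cum_acc))

# Toy usage on a held-out calibration set: a roughly calibrated model gives a small value.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = (rng.uniform(size=1000) < conf).astype(float)
print(ks_calibration_error(conf, correct))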
Probing BERT in Hyperbolic Spaces
https://openreview.net/forum?id=17VnwXYZyhH
https://openreview.net/forum?id=17VnwXYZyhH
Boli Chen,Yao Fu,Guangwei Xu,Pengjun Xie,Chuanqi Tan,Mosha Chen,Liping Jing
ICLR 2021,Poster
Recently, a variety of probing tasks have been proposed to discover linguistic properties learned in contextualized word embeddings. Many of these works implicitly assume these embeddings lie in certain metric spaces, typically the Euclidean space. This work considers a family of geometrically special spaces, the hyperbolic spaces, that exhibit better inductive biases for hierarchical structures and may better reveal linguistic hierarchies encoded in contextualized representations. We introduce a $\textit{Poincaré probe}$, a structural probe projecting these embeddings into a Poincaré subspace with explicitly defined hierarchies. We focus on two probing objectives: (a) dependency trees where the hierarchy is defined as head-dependent structures; (b) lexical sentiments where the hierarchy is defined as the polarity of words (positivity and negativity). We argue that a key desideratum of a probe is its sensitivity to the existence of linguistic structures. We apply our probes on BERT, a typical contextualized embedding model. In a syntactic subspace, our probe better recovers tree structures than Euclidean probes, revealing the possibility that the geometry of BERT syntax may not necessarily be Euclidean. In a sentiment subspace, we reveal two possible meta-embeddings for positive and negative sentiments and show how lexically-controlled contextualization would change the geometric localization of embeddings. We demonstrate the findings with our Poincaré probe via extensive experiments and visualization. Our results can be reproduced at https://github.com/FranxYao/PoincareProbe.
https://openreview.net/pdf/5787fd974583617cec6dae3f0a6f5eea632dad93.pdf
Refining Deep Generative Models via Discriminator Gradient Flow
https://openreview.net/forum?id=Zbc-ue9p_rE
https://openreview.net/forum?id=Zbc-ue9p_rE
Abdul Fatir Ansari,Ming Liang Ang,Harold Soh
ICLR 2021,Poster
Deep generative modeling has seen impressive advances in recent years, to the point where it is now commonplace to see simulated samples (e.g., images) that closely resemble real-world data. However, generation quality is generally inconsistent for any given model and can vary dramatically between samples. We introduce Discriminator Gradient $f$low (DG$f$low), a new technique that improves generated samples via the gradient flow of entropy-regularized $f$-divergences between the real and the generated data distributions. The gradient flow takes the form of a non-linear Fokker-Planck equation, which can be easily simulated by sampling from the equivalent McKean-Vlasov process. By refining inferior samples, our technique avoids wasteful sample rejection used by previous methods (DRS & MH-GAN). Compared to existing works that focus on specific GAN variants, we show our refinement approach can be applied to GANs with vector-valued critics and even other deep generative models such as VAEs and Normalizing Flows. Empirical results on multiple synthetic, image, and text datasets demonstrate that DG$f$low leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.
https://openreview.net/pdf/d6a7f895abe8734d66a2bfbc890b2b87e9d69fc7.pdf
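A rough sketch of the sample-refinement idea described above, for the KL-divergence case: simulate a discretized Langevin/McKean-Vlasov process whose drift is the gradient of the discriminator output. The generator/discriminator interfaces, step size, noise scale, and number of steps below are assumptions for illustration, not the authors' settings.

import torch

def refine_samples(z, generator, discriminator, steps=25, eta=0.01, gamma=0.01):
    # Start from ordinary generator samples.
    x = generator(z).detach()
    for _ in range(steps):
        x = x.clone().requires_grad_(True)
        # The discriminator output is treated as a log density-ratio estimate;
        # its gradient provides the drift of the (KL) gradient flow.
        drift = torch.autograd.grad(discriminator(x).sum(), x)[0]
        noise = torch.randn_like(x)
        x = (x + eta * drift + (2.0 * eta * gamma) ** 0.5 * noise).detach()
    return x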
Coping with Label Shift via Distributionally Robust Optimisation
https://openreview.net/forum?id=BtZhsSGNRNi
https://openreview.net/forum?id=BtZhsSGNRNi
Jingzhao Zhang,Aditya Krishna Menon,Andreas Veit,Srinadh Bhojanapalli,Sanjiv Kumar,Suvrit Sra
ICLR 2021,Poster
The label shift problem refers to the supervised learning setting where the train and test label distributions do not match. Existing work addressing label shift usually assumes access to an unlabelled test sample. This sample may be used to estimate the test label distribution, and to then train a suitably re-weighted classifier. While approaches using this idea have proven effective, their scope is limited as it is not always feasible to access the target domain; further, they require repeated retraining if the model is to be deployed in multiple test environments. Can one instead learn a single classifier that is robust to arbitrary label shifts from a broad family? In this paper, we answer this question by proposing a model that minimises an objective based on distributionally robust optimisation (DRO). We then design and analyse a gradient descent-proximal mirror ascent algorithm tailored for large-scale problems to optimise the proposed objective. Finally, through experiments on CIFAR-100 and ImageNet, we show that our technique can significantly improve performance over a number of baselines in settings where label shift is present.
https://openreview.net/pdf/ffdac428e94d17c00dfa31898389daf7d800270c.pdf
Variational State-Space Models for Localisation and Dense 3D Mapping in 6 DoF
https://openreview.net/forum?id=XAS3uKeFWj
https://openreview.net/forum?id=XAS3uKeFWj
Atanas Mirchev,Baris Kayalibay,Patrick van der Smagt,Justin Bayer
ICLR 2021,Poster
We solve the problem of 6-DoF localisation and 3D dense reconstruction in spatial environments as approximate Bayesian inference in a deep state-space model. Our approach leverages both learning and domain knowledge from multiple-view geometry and rigid-body dynamics. This results in an expressive predictive model of the world, often missing in current state-of-the-art visual SLAM solutions. The combination of variational inference, neural networks and a differentiable raycaster ensures that our model is amenable to end-to-end gradient-based optimisation. We evaluate our approach on realistic unmanned aerial vehicle flight data, nearing the performance of state-of-the-art visual-inertial odometry systems. We demonstrate the applicability of the model to generative prediction and planning.
https://openreview.net/pdf/019e5868a3827882821801a1b45527fe1f30d7de.pdf
Few-Shot Bayesian Optimization with Deep Kernel Surrogates
https://openreview.net/forum?id=bJxgv5C3sYc
https://openreview.net/forum?id=bJxgv5C3sYc
Martin Wistuba,Josif Grabocka
ICLR 2021,Poster
Hyperparameter optimization (HPO) is a central pillar in the automation of machine learning solutions and is mainly performed via Bayesian optimization, where a parametric surrogate is learned to approximate the black box response function (e.g. validation error). Unfortunately, evaluating the response function is computationally intensive. As a remedy, earlier work emphasizes the need for transfer learning surrogates which learn to optimize hyperparameters for an algorithm from other tasks. In contrast to previous work, we propose to rethink HPO as a few-shot learning problem in which we train a shared deep surrogate model to quickly adapt (with few response evaluations) to the response function of a new task. We propose the use of a deep kernel network for a Gaussian process surrogate that is meta-learned in an end-to-end fashion in order to jointly approximate the response functions of a collection of training data sets. As a result, the novel few-shot optimization of our deep kernel surrogate leads to new state-of-the-art results at HPO compared to several recent methods on diverse metadata sets.
https://openreview.net/pdf/252b5d1425d46a3495def0a2323048c2884c57db.pdf
$i$-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning
https://openreview.net/forum?id=T6AxtOaWydQ
https://openreview.net/forum?id=T6AxtOaWydQ
Kibok Lee,Yian Zhu,Kihyuk Sohn,Chun-Liang Li,Jinwoo Shin,Honglak Lee
ICLR 2021,Poster
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data. However, much progress has been made in vision domains by relying on data augmentations carefully designed using domain knowledge. In this work, we propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning. We cast contrastive learning as training a non-parametric classifier by assigning a unique virtual class to each data instance in a batch. Then, data instances are mixed in both the input and virtual label spaces, providing more augmented data during training. In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains, including image, speech, and tabular data. Furthermore, we confirm its regularization effect via extensive ablation studies across model and dataset sizes. The code is available at https://github.com/kibok90/imix.
https://openreview.net/pdf/c7fd4a731db5f1505f19ca2c4439421df41a8c7b.pdf
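A compact sketch of the mixing idea described above: each instance in a batch is treated as its own virtual class, inputs and virtual labels are mixed, and a non-parametric classifier over a second view's embeddings is trained with cross-entropy. The encoder interface, Beta parameter, and temperature are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def i_mix_style_loss(x_a, x_b, encoder, alpha=1.0, tau=0.2):
    # x_a, x_b: two augmented views of the same batch of inputs.
    n = x_a.size(0)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(n)
    # Mix in the input space: each sample is blended with another sample from the batch.
    x_mix = lam * x_a + (1.0 - lam) * x_a[perm]
    # Non-parametric classifier: logits are similarities to the second view's embeddings.
    z_mix = F.normalize(encoder(x_mix), dim=1)
    z_b = F.normalize(encoder(x_b), dim=1)
    logits = z_mix @ z_b.t() / tau
    targets = torch.arange(n, device=logits.device)
    # Mix in the virtual-label space: weight lam on the own class, (1 - lam) on the partner's class.
    return lam * F.cross_entropy(logits, targets) + (1.0 - lam) * F.cross_entropy(logits, targets[perm])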
Graph Information Bottleneck for Subgraph Recognition
https://openreview.net/forum?id=bM4Iqfg8M2k
https://openreview.net/forum?id=bM4Iqfg8M2k
Junchi Yu,Tingyang Xu,Yu Rong,Yatao Bian,Junzhou Huang,Ran He
ICLR 2021,Poster
Given the input graph and its label/property, several key problems of graph learning, such as finding interpretable subgraphs, graph denoising and graph compression, can be attributed to the fundamental problem of recognizing a subgraph of the original one. This subgraph should be as informative as possible, yet contain as little redundant and noisy structure as possible. This problem setting is closely related to the well-known information bottleneck (IB) principle, which, however, has been less studied for irregular graph data and graph neural networks (GNNs). In this paper, we propose a framework of Graph Information Bottleneck (GIB) for the subgraph recognition problem in deep graph learning. Under this framework, one can recognize the maximally informative yet compressive subgraph, named IB-subgraph. However, the GIB objective is notoriously hard to optimize, mostly due to the intractability of the mutual information of irregular graph data and the unstable optimization process. In order to tackle these challenges, we propose: i) a GIB objective based on a mutual information estimator for irregular graph data; ii) a bi-level optimization scheme to maximize the GIB objective; iii) a connectivity loss to stabilize the optimization process. We evaluate the properties of the IB-subgraph in three application scenarios: improvement of graph classification, graph interpretation and graph denoising. Extensive experiments demonstrate that the information-theoretic IB-subgraph enjoys superior graph properties.
https://openreview.net/pdf/45a07fde0c34644e0b294e4bb7bb3c045bc3429a.pdf
Rethinking Positional Encoding in Language Pre-training
https://openreview.net/forum?id=09-528y2Fgf
https://openreview.net/forum?id=09-528y2Fgf
Guolin Ke,Di He,Tie-Yan Liu
ICLR 2021,Poster
In this work, we investigate the positional encoding methods used in language pre-training (e.g., BERT) and identify several problems in the existing formulations. First, we show that in the absolute positional encoding, the addition operation applied on positional embeddings and word embeddings brings mixed correlations between the two heterogeneous information resources. It may bring unnecessary randomness into the attention and further limit the expressiveness of the model. Second, we question whether treating the position of the symbol \texttt{[CLS]} the same as other words is a reasonable design, considering its special role (the representation of the entire sentence) in the downstream tasks. Motivated by the above analysis, we propose a new positional encoding method called \textbf{T}ransformer with \textbf{U}ntied \textbf{P}ositional \textbf{E}ncoding (TUPE). In the self-attention module, TUPE computes the word contextual correlation and positional correlation separately with different parameterizations and then adds them together. This design removes the mixed and noisy correlations over heterogeneous embeddings and offers more expressiveness by using different projection matrices. Furthermore, TUPE unties the \texttt{[CLS]} symbol from other positions, making it easier to capture information from all positions. Extensive experiments and ablation studies on the GLUE benchmark demonstrate the effectiveness of the proposed method. Codes and models are released at \url{https://github.com/guolinke/TUPE}.
https://openreview.net/pdf/33fed0683748564aa65aa880cab67c6104dfd26a.pdf
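The untied attention computation described above can be sketched as follows: word-to-word and position-to-position correlations are computed with separate projection matrices and summed, rather than adding positional embeddings to word embeddings before a shared projection. The scaling factor is one reasonable choice rather than a confirmed detail, and the special untying of [CLS] is omitted.

import math
import torch

def untied_attention_logits(x, pos, w_q, w_k, u_q, u_k):
    # x: (seq_len, d_model) word embeddings; pos: (seq_len, d_model) positional embeddings.
    # Contextual (word-to-word) correlation with its own projections.
    content = (x @ w_q) @ (x @ w_k).transpose(-1, -2)
    # Positional (position-to-position) correlation with separate projections.
    position = (pos @ u_q) @ (pos @ u_k).transpose(-1, -2)
    d = w_q.size(-1)
    # Sum the two terms and scale; the two correlations never mix across embedding types.
    return (content + position) / math.sqrt(2 * d)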
Practical Massively Parallel Monte-Carlo Tree Search Applied to Molecular Design
https://openreview.net/forum?id=6k7VdojAIK
https://openreview.net/forum?id=6k7VdojAIK
Xiufeng Yang,Tanuj Aasawat,Kazuki Yoshizoe
ICLR 2021,Poster
It is common practice to use large computational resources to train neural networks, as is evident from many examples such as reinforcement learning applications. However, while massively parallel computing is often used for training models, it is rarely used to search for solutions to combinatorial optimization problems. This paper proposes a novel massively parallel Monte-Carlo Tree Search (MP-MCTS) algorithm that works efficiently at a 1,000-worker scale in a distributed-memory environment using multiple compute nodes and applies it to molecular design. This paper is the first work that applies distributed MCTS to a real-world and non-game problem. Existing works on large-scale parallel MCTS show efficient scalability in terms of the number of rollouts up to 100 workers, but they suffer from degradation in the quality of the solutions. MP-MCTS maintains the search quality at a larger scale. By running MP-MCTS on 256 CPU cores for only 10 minutes, we obtained candidate molecules with similar scores to non-parallel MCTS running for 42 hours. Moreover, our results based on parallel MCTS (combined with a simple RNN model) significantly outperform existing state-of-the-art work. Our method is generic and is expected to speed up other applications of MCTS.
https://openreview.net/pdf/87e21d56e100df25c3170406746c2f73d33dfc66.pdf
When does preconditioning help or hurt generalization?
https://openreview.net/forum?id=S724o4_WB3
https://openreview.net/forum?id=S724o4_WB3
Shun-ichi Amari,Jimmy Ba,Roger Baker Grosse,Xuechen Li,Atsushi Nitanda,Taiji Suzuki,Denny Wu,Ji Xu
ICLR 2021,Poster
While second order optimizers such as natural gradient descent (NGD) often speed up optimization, their effect on generalization has been called into question. This work presents a more nuanced view on how the \textit{implicit bias} of optimizers affects the comparison of generalization properties. We provide an exact asymptotic bias-variance decomposition of the generalization error of preconditioned ridgeless regression in the overparameterized regime, and consider the inverse population Fisher information matrix (used in NGD) as a particular example. We determine the optimal preconditioner $\boldsymbol{P}$ for both the bias and variance, and find that the relative generalization performance of different optimizers depends on label noise and ``shape'' of the signal (true parameters): when the labels are noisy, the model is misspecified, or the signal is misaligned with the features, NGD can achieve lower risk; conversely, GD generalizes better under clean labels, a well-specified model, or aligned signal. Based on this analysis, we discuss several approaches to manage the bias-variance tradeoff, and the potential benefit of interpolating between first- and second-order updates. We then extend our analysis to regression in the reproducing kernel Hilbert space and demonstrate that preconditioning can lead to more efficient decrease in the population risk. Lastly, we empirically compare the generalization error of first- and second-order optimizers in neural network experiments, and observe robust trends matching our theoretical analysis.
https://openreview.net/pdf/f3a56e608245c40252059cbf936c8e2ef23f8c8c.pdf
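For reference, the preconditioned update studied above takes the standard form

$$\theta_{t+1} \;=\; \theta_t \;-\; \eta\, \boldsymbol{P}\, \nabla_\theta L(\theta_t),$$

where $\eta$ is the learning rate; gradient descent corresponds to $\boldsymbol{P} = \boldsymbol{I}$ and natural gradient descent to $\boldsymbol{P} = \boldsymbol{F}^{-1}$, with $\boldsymbol{F}$ the (population) Fisher information matrix.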
ARMOURED: Adversarially Robust MOdels using Unlabeled data by REgularizing Diversity
https://openreview.net/forum?id=JoCR4h9O3Ew
https://openreview.net/forum?id=JoCR4h9O3Ew
Kangkang Lu,Cuong Manh Nguyen,Xun Xu,Kiran Krishnamachari,Yu Jing Goh,Chuan-Sheng Foo
ICLR 2021,Poster
Adversarial attacks pose a major challenge for modern deep neural networks. Recent advancements show that adversarially robust generalization requires a large amount of labeled data for training. If annotation becomes a burden, can unlabeled data help bridge the gap? In this paper, we propose ARMOURED, an adversarially robust training method based on semi-supervised learning that consists of two components. The first component applies multi-view learning to simultaneously optimize multiple independent networks and utilizes unlabeled data to enforce labeling consistency. The second component reduces adversarial transferability among the networks via diversity regularizers inspired by determinantal point processes and entropy maximization. Experimental results show that under small perturbation budgets, ARMOURED is robust against strong adaptive adversaries. Notably, ARMOURED does not rely on generating adversarial samples during training. When used in combination with adversarial training, ARMOURED yields competitive performance with the state-of-the-art adversarially-robust benchmarks on SVHN and outperforms them on CIFAR-10, while offering higher clean accuracy.
https://openreview.net/pdf/e402c5559bfaf850b8b9316d0ada923c0a35e9b6.pdf
Learning Energy-Based Generative Models via Coarse-to-Fine Expanding and Sampling
https://openreview.net/forum?id=aD1_5zowqV
https://openreview.net/forum?id=aD1_5zowqV
Yang Zhao,Jianwen Xie,Ping Li
ICLR 2021,Poster
Energy-based models (EBMs) parameterized by neural networks can be trained by Markov chain Monte Carlo (MCMC) sampling-based maximum likelihood estimation. Despite the recent significant success of EBMs in image generation, the current approaches to train EBMs are unstable and have difficulty synthesizing diverse and high-fidelity images. In this paper, we propose to train EBMs via a multistage coarse-to-fine expanding and sampling strategy, which starts with learning a coarse-level EBM from images at low resolution and then gradually transitions to learning a finer-level EBM from images at higher resolution by expanding the energy function as the learning progresses. The proposed framework is computationally efficient with smooth learning and sampling. It achieves the best performance on image generation amongst all EBMs and is the first successful EBM to synthesize high-fidelity images at $512\times512$ resolution. It can also be useful for image restoration and out-of-distribution detection. Lastly, the proposed framework is further generalized to one-sided unsupervised image-to-image translation and beats baseline methods in terms of model size and training budget. We also present a gradient-based generative saliency method to interpret the translation dynamics.
https://openreview.net/pdf/5748006768480684067398a25ba656f3ae383d26.pdf
Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
https://openreview.net/forum?id=U_mat0b9iv
https://openreview.net/forum?id=U_mat0b9iv
James Diffenderfer,Bhavya Kailkhura
ICLR 2021,Poster
Recently, Frankle & Carbin (2019) demonstrated that randomly-initialized dense networks contain subnetworks that, once found, can be trained to reach test accuracy comparable to the trained dense network. However, finding these high-performing trainable subnetworks is expensive, requiring an iterative process of training and pruning weights. In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3). This provides a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full-precision neural networks. We also propose an algorithm for finding multi-prize tickets (MPTs) and test it by performing a series of experiments on the CIFAR-10 and ImageNet datasets. Empirical results indicate that as models grow deeper and wider, multi-prize tickets start to reach similar (and sometimes even higher) test accuracy compared to their significantly larger and full-precision counterparts that have been weight-trained. Without ever updating the weight values, our MPTs-1/32 not only set new binary weight network state-of-the-art (SOTA) Top-1 accuracy -- 94.8% on CIFAR-10 and 74.03% on ImageNet -- but also outperform their full-precision counterparts by 1.78% and 0.76%, respectively. Further, our MPT-1/1 achieves SOTA Top-1 accuracy (91.9%) for binary neural networks on CIFAR-10. Code and pre-trained models are available at: https://github.com/chrundle/biprop.
https://openreview.net/pdf/a893180f3b4ea79ef89fca056e7170bcd084f923.pdf
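A simplified sketch of the prune-and-quantize step implied by the abstract above: keep only the highest-scoring fraction of the (never-trained) random weights and binarize the survivors. The score-learning loop (typically trained with a straight-through estimator) is omitted, and the scaling choice is an assumption for illustration.

import torch

def prune_and_binarize(weight, scores, keep_ratio=0.5):
    # Keep the top-scoring fraction of the random weights.
    k = max(1, int(keep_ratio * weight.numel()))
    threshold = torch.topk(scores.flatten(), k).values.min()
    mask = (scores >= threshold).float()
    # Binarize surviving weights to +/- alpha, with alpha the mean magnitude of kept weights.
    alpha = (weight.abs() * mask).sum() / mask.sum()
    return mask * torch.sign(weight) * alpha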
Learning Accurate Entropy Model with Global Reference for Image Compression
https://openreview.net/forum?id=cTbIjyrUVwJ
https://openreview.net/forum?id=cTbIjyrUVwJ
Yichen Qian,Zhiyu Tan,Xiuyu Sun,Ming Lin,Dongyang Li,Zhenhong Sun,Li Hao,Rong Jin
ICLR 2021,Poster
In recent deep image compression neural networks, the entropy model plays a critical role in estimating the prior distribution of deep image encodings. Existing methods combine the hyperprior with local context in the entropy estimation function. This greatly limits their performance due to the absence of a global view. In this work, we propose a novel Global Reference Model for image compression to effectively leverage both the local and the global context information, leading to an enhanced compression rate. The proposed method scans decoded latents and then finds the most relevant latent to assist in estimating the distribution of the current latent. A by-product of this work is a novel mean-shifting GDN module that further improves the performance. Experimental results demonstrate that the proposed model outperforms the rate-distortion performance of most of the state-of-the-art methods in the industry.
https://openreview.net/pdf/06784d9497e2c81c4f81b487b90f789b97d82af0.pdf
Convex Potential Flows: Universal Probability Distributions with Optimal Transport and Convex Optimization
https://openreview.net/forum?id=te7PVH1sPxJ
https://openreview.net/forum?id=te7PVH1sPxJ
Chin-Wei Huang,Ricky T. Q. Chen,Christos Tsirigotis,Aaron Courville
ICLR 2021,Poster
Flow-based models are powerful tools for designing probabilistic models with tractable density. This paper introduces Convex Potential Flows (CP-Flow), a natural and efficient parameterization of invertible models inspired by the optimal transport (OT) theory. CP-Flows are the gradient map of a strongly convex neural potential function. The convexity implies invertibility and allows us to resort to convex optimization to solve the convex conjugate for efficient inversion. To enable maximum likelihood training, we derive a new gradient estimator of the log-determinant of the Jacobian, which involves solving an inverse-Hessian vector product using the conjugate gradient method. The gradient estimator has constant-memory cost, and can be made effectively unbiased by reducing the error tolerance level of the convex optimization routine. Theoretically, we prove that CP-Flows are universal density approximators and are optimal in the OT sense. Our empirical results show that CP-Flow performs competitively on standard benchmarks of density estimation and variational inference.
https://openreview.net/pdf/434f61cddaaf58036729fa7ecb4dd5948ef13993.pdf
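The key identities behind the parameterization described above follow from taking the flow to be the gradient map of a strongly convex potential $F$ (written here as a generic change of variables, in our own notation):

$$f(x) = \nabla F(x), \qquad \log p_X(x) = \log p_Z\big(\nabla F(x)\big) + \log\det \nabla^2 F(x), \qquad f^{-1}(z) = \nabla F^*(z) = \arg\min_x \big\{ F(x) - \langle x, z\rangle \big\},$$

so inversion reduces to a convex optimization problem (the convex conjugate $F^*$) and the log-determinant term involves the positive-definite Hessian of $F$.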
Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity
https://openreview.net/forum?id=6t_dLShIUyZ
https://openreview.net/forum?id=6t_dLShIUyZ
Shaocong Ma,Ziyi Chen,Yi Zhou,Shaofeng Zou
ICLR 2021,Poster
Greedy-GQ is a value-based reinforcement learning (RL) algorithm for optimal control. Recently, the finite-time analysis of Greedy-GQ has been developed under linear function approximation and Markovian sampling, and the algorithm is shown to achieve an $\epsilon$-stationary point with a sample complexity in the order of $\mathcal{O}(\epsilon^{-3})$. Such a high sample complexity is due to the large variance induced by the Markovian samples. In this paper, we propose a variance-reduced Greedy-GQ (VR-Greedy-GQ) algorithm for off-policy optimal control. In particular, the algorithm applies the SVRG-based variance reduction scheme to reduce the stochastic variance of the two time-scale updates. We study the finite-time convergence of VR-Greedy-GQ under linear function approximation and Markovian sampling and show that the algorithm achieves a much smaller bias and variance error than the original Greedy-GQ. In particular, we prove that VR-Greedy-GQ achieves an improved sample complexity that is in the order of $\mathcal{O}(\epsilon^{-2})$. We further compare the performance of VR-Greedy-GQ with that of Greedy-GQ in various RL experiments to corroborate our theoretical findings.
https://openreview.net/pdf/70ad48e9c8d6cee0c23e74dba794453ea0ba809d.pdf
Large Batch Simulation for Deep Reinforcement Learning
https://openreview.net/forum?id=cP5IcoAkfKa
https://openreview.net/forum?id=cP5IcoAkfKa
Brennan Shacklett,Erik Wijmans,Aleksei Petrenko,Manolis Savva,Dhruv Batra,Vladlen Koltun,Kayvon Fatahalian
ICLR 2021,Poster
We accelerate deep reinforcement learning-based training in visually complex 3D environments by two orders of magnitude over prior work, realizing end-to-end training speeds of over 19,000 frames of experience per second on a single GPU and up to 72,000 frames per second on a single eight-GPU machine. The key idea of our approach is to design a 3D renderer and embodied navigation simulator around the principle of “batch simulation”: accepting and executing large batches of requests simultaneously. Beyond exposing large amounts of work at once, batch simulation allows implementations to amortize in-memory storage of scene assets, rendering work, data loading, and synchronization costs across many simulation requests, dramatically improving the number of simulated agents per GPU and overall simulation throughput. To balance DNN inference and training costs with faster simulation, we also build a computationally efficient policy DNN that maintains high task performance, and modify training algorithms to maintain sample efficiency when training with large mini-batches. By combining batch simulation and DNN performance optimizations, we demonstrate that PointGoal navigation agents can be trained in complex 3D environments on a single GPU in 1.5 days to 97% of the accuracy of agents trained on a prior state-of-the-art system using a 64-GPU cluster over three days. We provide open-source reference implementations of our batch 3D renderer and simulator to facilitate incorporation of these ideas into RL systems.
https://openreview.net/pdf/623f84dd47e44c85099947df02e289ec8005ddc3.pdf
Hopper: Multi-hop Transformer for Spatiotemporal Reasoning
https://openreview.net/forum?id=MaZFq7bJif7
https://openreview.net/forum?id=MaZFq7bJif7
Honglu Zhou,Asim Kadav,Farley Lai,Alexandru Niculescu-Mizil,Martin Renqiang Min,Mubbasir Kapadia,Hans Peter Graf
ICLR 2021,Poster
This paper considers the problem of spatiotemporal object-centric reasoning in videos. Central to our approach is the notion of object permanence, i.e., the ability to reason about the location of objects as they move through the video while being occluded, contained or carried by other objects. Existing deep learning based approaches often suffer from spatiotemporal biases when applied to video reasoning problems. We propose Hopper, which uses a Multi-hop Transformer for reasoning about object permanence in videos. Given a video and a localization query, Hopper reasons over image and object tracks to automatically hop over critical frames in an iterative fashion to predict the final position of the object of interest. We demonstrate the effectiveness of using a contrastive loss to reduce spatiotemporal biases. We evaluate on the CATER dataset and find that Hopper achieves 73.2% Top-1 accuracy using just 1 FPS by hopping through just a few critical frames. We also demonstrate Hopper can perform long-term reasoning by building a CATER-h dataset that requires multi-step reasoning to localize objects of interest correctly.
https://openreview.net/pdf/fd019a0d8646666b9443ec59fefbb6ec4c82233b.pdf
Efficient Reinforcement Learning in Factored MDPs with Application to Constrained RL
https://openreview.net/forum?id=fmtSg8591Q
https://openreview.net/forum?id=fmtSg8591Q
Xiaoyu Chen,Jiachen Hu,Lihong Li,Liwei Wang
ICLR 2021,Poster
Reinforcement learning (RL) in episodic, factored Markov decision processes (FMDPs) is studied. We propose an algorithm called FMDP-BF, which leverages the factorization structure of FMDPs. The regret of FMDP-BF is shown to be exponentially smaller than that of optimal algorithms designed for non-factored MDPs, and improves on the best previous result for FMDPs~\citep{osband2014near} by a factor of $\sqrt{nH|\mathcal{S}_i|}$, where $|\mathcal{S}_i|$ is the cardinality of the factored state subspace, $H$ is the planning horizon and $n$ is the number of factored transitions. To show the optimality of our bounds, we also provide a lower bound for FMDPs, which indicates that our algorithm is near-optimal w.r.t. timestep $T$, horizon $H$ and factored state-action subspace cardinality. Finally, as an application, we study a new formulation of constrained RL, known as RL with knapsack constraints (RLwK), and provide the first sample-efficient algorithm based on FMDP-BF.
https://openreview.net/pdf/a8950d55072da9151823a07d4c8c83043c445db5.pdf
Unbiased Teacher for Semi-Supervised Object Detection
https://openreview.net/forum?id=MJIve1zgR_
https://openreview.net/forum?id=MJIve1zgR_
Yen-Cheng Liu,Chih-Yao Ma,Zijian He,Chia-Wen Kuo,Kan Chen,Peizhao Zhang,Bichen Wu,Zsolt Kira,Peter Vajda
ICLR 2021,Poster
Semi-supervised learning, i.e., training networks with both labeled and unlabeled data, has made significant progress recently. However, existing works have primarily focused on image classification tasks and neglected object detection, which requires more annotation effort. In this work, we revisit Semi-Supervised Object Detection (SS-OD) and identify the pseudo-labeling bias issue in SS-OD. To address this, we introduce Unbiased Teacher, a simple yet effective approach that jointly trains a student and a gradually progressing teacher in a mutually-beneficial manner. Together with a class-balance loss to downweight overly confident pseudo-labels, Unbiased Teacher consistently improves upon state-of-the-art methods by significant margins on the COCO-standard, COCO-additional, and VOC datasets. Specifically, Unbiased Teacher achieves a 6.8 absolute mAP improvement over the state-of-the-art method when using 1% of labeled data on MS-COCO, and around a 10 mAP improvement over the supervised baseline when using only 0.5%, 1%, and 2% of labeled data on MS-COCO.
https://openreview.net/pdf/1dd575a67ecfb86555b7d78bf428003194aa1e8e.pdf
MELR: Meta-Learning via Modeling Episode-Level Relationships for Few-Shot Learning
https://openreview.net/forum?id=D3PcGLdMx0
https://openreview.net/forum?id=D3PcGLdMx0
Nanyi Fei,Zhiwu Lu,Tao Xiang,Songfang Huang
ICLR 2021,Poster
Most recent few-shot learning (FSL) approaches are based on episodic training whereby each episode samples few training instances (shots) per class to imitate the test condition. However, this strict adherence to the test condition has a negative side effect: the trained model is susceptible to the poor sampling of few shots. In this work, for the first time, this problem is addressed by exploiting inter-episode relationships. Specifically, a novel meta-learning via modeling episode-level relationships (MELR) framework is proposed. By sampling two episodes containing the same set of classes for meta-training, MELR is designed to ensure that the meta-learned model is robust against the presence of poorly-sampled shots in the meta-test stage. This is achieved through two key components: (1) a Cross-Episode Attention Module (CEAM) to improve the ability to alleviate the effects of poorly-sampled shots, and (2) a Cross-Episode Consistency Regularization (CECR) to enforce that the two classifiers learned from the two episodes are consistent even when there are unrepresentative instances. Extensive experiments for non-transductive standard FSL on two benchmarks show that our MELR achieves 1.0%-5.0% improvements over the baseline (i.e., ProtoNet) used for FSL in our model and outperforms the latest competitors under the same settings.
https://openreview.net/pdf/b13008d5731f5acac5931a7669386147ba3088da.pdf
Partitioned Learned Bloom Filters
https://openreview.net/forum?id=6BRLOfrMhW
https://openreview.net/forum?id=6BRLOfrMhW
Kapil Vaidya,Eric Knorr,Michael Mitzenmacher,Tim Kraska
ICLR 2021,Poster
Bloom filters are space-efficient probabilistic data structures that are used to test whether an element is a member of a set, and may return false positives. Recently, variations referred to as learned Bloom filters were developed that can provide improved performance in terms of the rate of false positives, by using a learned model for the represented set. However, previous methods for learned Bloom filters do not take full advantage of the learned model. Here we show how to frame the problem of optimal model utilization as an optimization problem, and using our framework derive algorithms that can achieve near-optimal performance in many cases.
https://openreview.net/pdf/9d3c15624a3da1883d53dc7d7e286e835c51a105.pdf
Wasserstein Embedding for Graph Learning
https://openreview.net/forum?id=AAes_3W-2z
https://openreview.net/forum?id=AAes_3W-2z
Soheil Kolouri,Navid Naderializadeh,Gustavo K. Rohde,Heiko Hoffmann
ICLR 2021,Poster
We present Wasserstein Embedding for Graph Learning (WEGL), a novel and fast framework for embedding entire graphs in a vector space, in which various machine learning models are applicable for graph-level prediction tasks. We leverage new insights on defining similarity between graphs as a function of the similarity between their node embedding distributions. Specifically, we use the Wasserstein distance to measure the dissimilarity between node embeddings of different graphs. Unlike prior work, we avoid pairwise calculation of distances between graphs and reduce the computational complexity from quadratic to linear in the number of graphs. WEGL calculates Monge maps from a reference distribution to each node embedding and, based on these maps, creates a fixed-sized vector representation of the graph. We evaluate our new graph embedding approach on various benchmark graph-property prediction tasks, showing state-of-the-art classification performance while having superior computational efficiency. The code is available at https://github.com/navid-naderi/WEGL.
https://openreview.net/pdf/91a2b065854f096c0ed827b88b9fc26dff36f359.pdf
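A sketch of the fixed-size embedding step described above, using the POT library: compute an optimal transport plan from a shared reference point cloud to a graph's node embeddings, take the barycentric projection as an approximate Monge map, and flatten the displacement. The choice of reference, weights, and cost here are illustrative assumptions, and the node-embedding computation itself is not shown.

import numpy as np
import ot  # Python Optimal Transport

def wasserstein_graph_embedding(node_emb, ref):
    # node_emb: (n, d) node embeddings of one graph; ref: (k, d) shared reference points.
    k, n = ref.shape[0], node_emb.shape[0]
    a = np.full(k, 1.0 / k)                 # uniform weights on reference points
    b = np.full(n, 1.0 / n)                 # uniform weights on graph nodes
    cost = ot.dist(ref, node_emb)           # squared Euclidean cost matrix
    plan = ot.emd(a, b, cost)               # exact optimal transport plan
    monge = (plan @ node_emb) / a[:, None]  # barycentric projection (approximate Monge map)
    return (monge - ref).ravel()            # fixed-size vector of length k * d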
High-Capacity Expert Binary Networks
https://openreview.net/forum?id=MxaY4FzOTa
https://openreview.net/forum?id=MxaY4FzOTa
Adrian Bulat,Brais Martinez,Georgios Tzimiropoulos
ICLR 2021,Poster
Network binarization is a promising hardware-aware direction for creating efficient deep models. Despite its memory and computational advantages, reducing the accuracy gap between binary models and their real-valued counterparts remains an unsolved challenging research problem. To this end, we make the following 3 contributions: (a) To increase model capacity, we propose Expert Binary Convolution, which, for the first time, tailors conditional computing to binary networks by learning to select one data-specific expert binary filter at a time conditioned on input features. (b) To increase representation capacity, we propose to address the inherent information bottleneck in binary networks by introducing an efficient width expansion mechanism which keeps the binary operations within the same budget. (c) To improve network design, we propose a principled binary network growth mechanism that unveils a set of network topologies of favorable properties. Overall, our method improves upon prior work, with no increase in computational cost, by $\sim6 \%$, reaching a groundbreaking $\sim 71\%$ on ImageNet classification. Code will be made available $\href{https://www.adrianbulat.com/binary-networks}{here}$.
https://openreview.net/pdf/b7f4b61804fcc0b13e0b45a6c8245bda3556c8ec.pdf
SAFENet: A Secure, Accurate and Fast Neural Network Inference
https://openreview.net/forum?id=Cz3dbFm5u-
https://openreview.net/forum?id=Cz3dbFm5u-
Qian Lou,Yilin Shen,Hongxia Jin,Lei Jiang
ICLR 2021,Poster
The advances in neural networks have driven many companies to provide prediction services to users in a wide range of applications. However, current prediction systems raise privacy concerns regarding the user's private data. A cryptographic neural network inference service is an efficient way to allow two parties to execute neural network inference without revealing either party’s data or model. Nevertheless, existing cryptographic neural network inference services suffer from huge running latency; in particular, the latency of a communication-expensive cryptographic activation function is 3 orders of magnitude higher than that of a plaintext-domain activation function. Since activations are necessary components of modern neural networks, slow cryptographic activation has become the primary obstacle to efficient cryptographic inference. In this paper, we propose a new technique, called SAFENet, to enable a Secure, Accurate and Fast nEural Network inference service. To speed up secure inference and guarantee inference accuracy, SAFENet includes channel-wise activation approximation with multiple-degree options. This is implemented by keeping the most useful activation channels and replacing the remaining, less useful, channels with various-degree polynomials. SAFENet also supports mixed-precision activation approximation by automatically assigning different replacement ratios to various layers, further increasing the approximation ratio and reducing inference latency. Our experimental results show SAFENet obtains state-of-the-art inference latency and performance, reducing latency by $38\% \sim 61\%$ or improving accuracy by $1.8\% \sim 4\%$ over prior techniques on various encrypted datasets.
https://openreview.net/pdf/97de03f7c5d063182fff81551e472cc846458389.pdf
Learning Manifold Patch-Based Representations of Man-Made Shapes
https://openreview.net/forum?id=Gu5WqN9J3Fn
https://openreview.net/forum?id=Gu5WqN9J3Fn
Dmitriy Smirnov,Mikhail Bessmeltsev,Justin Solomon
ICLR 2021,Poster
Choosing the right representation for geometry is crucial for making 3D models compatible with existing applications. Focusing on piecewise-smooth man-made shapes, we propose a new representation that is usable in conventional CAD modeling pipelines and can also be learned by deep neural networks. We demonstrate its benefits by applying it to the task of sketch-based modeling. Given a raster image, our system infers a set of parametric surfaces that realize the input in 3D. To capture piecewise smooth geometry, we learn a special shape representation: a deformable parametric template composed of Coons patches. Naively training such a system, however, is hampered by non-manifold artifacts in the parametric shapes and by a lack of data. To address this, we introduce loss functions that bias the network to output non-self-intersecting shapes and implement them as part of a fully self-supervised system, automatically generating both shape templates and synthetic training data. We develop a testbed for sketch-based modeling, demonstrate shape interpolation, and provide comparison to related work.
https://openreview.net/pdf/5eb29d9e7a2ab3fa7e22c9926fc3d1c04898b5f5.pdf
Universal approximation power of deep residual neural networks via nonlinear control theory
https://openreview.net/forum?id=-IXhmY16R3M
https://openreview.net/forum?id=-IXhmY16R3M
Paulo Tabuada,Bahman Gharesifard
ICLR 2021,Poster
In this paper, we explain the universal approximation capabilities of deep residual neural networks through geometric nonlinear control. Inspired by recent work establishing links between residual networks and control systems, we provide a general sufficient condition for a residual network to have the power of universal approximation by asking the activation function, or one of its derivatives, to satisfy a quadratic differential equation. Many activation functions used in practice satisfy this assumption, exactly or approximately, and we show this property to be sufficient for an adequately deep neural network with $n+1$ neurons per layer to approximate arbitrarily well, on a compact set and with respect to the supremum norm, any continuous function from $\mathbb{R}^n$ to $\mathbb{R}^n$. We further show this result to hold for very simple architectures for which the weights only need to assume two values. The first key technical contribution consists of relating the universal approximation problem to controllability of an ensemble of control systems corresponding to a residual network and to leverage classical Lie algebraic techniques to characterize controllability. The second technical contribution is to identify monotonicity as the bridge between controllability of finite ensembles and uniform approximability on compact sets.
https://openreview.net/pdf/dcf3b351ea496066a9c49cced5d389f002dc9caf.pdf
Learning Neural Event Functions for Ordinary Differential Equations
https://openreview.net/forum?id=kW_zpEmMLdP
https://openreview.net/forum?id=kW_zpEmMLdP
Ricky T. Q. Chen,Brandon Amos,Maximilian Nickel
ICLR 2021,Poster
The existing Neural ODE formulation relies on an explicit knowledge of the termination time. We extend Neural ODEs to implicitly defined termination criteria modeled by neural event functions, which can be chained together and differentiated through. Neural Event ODEs are capable of modeling discrete and instantaneous changes in a continuous-time system, without prior knowledge of when these changes should occur or how many such changes should exist. We test our approach in modeling hybrid discrete and continuous systems such as switching dynamical systems and collisions in multi-body systems, and we propose simulation-based training of point processes with applications in discrete control.
https://openreview.net/pdf/8a62ee7253a08f9f25eb894551c8d60ab24ab39a.pdf
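For intuition only, the event-termination concept can be illustrated with a classical (non-differentiable) ODE solver: integration stops when an event function crosses zero. Neural Event ODEs replace both the dynamics and the event function with neural networks and differentiate through the resulting event time; the bouncing-ball example below is ours, not the paper's.

from scipy.integrate import solve_ivp

def dynamics(t, state):
    # state = [height, velocity]; constant gravity.
    return [state[1], -9.81]

def event_fn(t, state):
    # The event is the root of this function: the ball reaches height 0.
    return state[0]

event_fn.terminal = True    # stop integration at the event
event_fn.direction = -1     # trigger only while the height is decreasing

sol = solve_ivp(dynamics, (0.0, 10.0), [1.0, 0.0], events=event_fn)
print("event time:", sol.t_events[0][0], "state at event:", sol.y_events[0][0])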
Neural Spatio-Temporal Point Processes
https://openreview.net/forum?id=XQQA6-So14
https://openreview.net/forum?id=XQQA6-So14
Ricky T. Q. Chen,Brandon Amos,Maximilian Nickel
ICLR 2021,Poster
We propose a new class of parameterizations for spatio-temporal point processes which leverage Neural ODEs as a computational method and enable flexible, high-fidelity models of discrete events that are localized in continuous time and space. Central to our approach is a combination of continuous-time neural networks with two novel neural architectures, i.e., Jump and Attentive Continuous-time Normalizing Flows. This approach allows us to learn complex distributions for both the spatial and temporal domain and to condition non-trivially on the observed event history. We validate our models on data sets from a wide variety of contexts such as seismology, epidemiology, urban mobility, and neuroscience.
https://openreview.net/pdf/668da7eb2c6955f36c010d76bb62d8a0cea81a06.pdf
Proximal Gradient Descent-Ascent: Variable Convergence under KŁ Geometry
https://openreview.net/forum?id=LVotkZmYyDi
https://openreview.net/forum?id=LVotkZmYyDi
Ziyi Chen,Yi Zhou,Tengyu Xu,Yingbin Liang
ICLR 2021,Poster
The gradient descent-ascent (GDA) algorithm has been widely applied to solve minimax optimization problems. In order to achieve convergent policy parameters for minimax optimization, it is important that GDA generates convergent variable sequences rather than convergent sequences of function value or gradient norm. However, the variable convergence of GDA has been proved only under convexity geometries, and it is not well understood in general nonconvex minimax optimization. This paper fills such a gap by studying the convergence of a more general proximal-GDA for regularized nonconvex-strongly-concave minimax optimization. Specifically, we show that proximal-GDA admits a novel Lyapunov function, which monotonically decreases in the minimax optimization process and drives the variable sequences to a critical point. By leveraging this Lyapunov function and the KL geometry that parameterizes the local geometries of general nonconvex functions, we formally establish the variable convergence of proximal-GDA to a certain critical point $x^*$, i.e., $x_t\to x^*, y_t\to y^*(x^*)$. Furthermore, over the full spectrum of the KL-parameterized geometry, we show that proximal-GDA achieves different types of convergence rates ranging from sublinear convergence up to finite-step convergence, depending on the geometry associated with the KL parameter. This is the first theoretical result on the variable convergence for nonconvex minimax optimization.
https://openreview.net/pdf/90bcff75034e5c5afa8e62699d3c10be70856cd1.pdf
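As a reference point, a generic single-loop proximal gradient descent-ascent step for a regularized objective $\min_x \max_y f(x, y) + g(x)$ reads (the step ordering and step sizes here are a generic template, not necessarily the exact scheme analyzed in the paper):

$$x_{t+1} = \mathrm{prox}_{\eta_x g}\big(x_t - \eta_x \nabla_x f(x_t, y_t)\big), \qquad y_{t+1} = y_t + \eta_y \nabla_y f(x_{t+1}, y_t),$$

where $\mathrm{prox}_{\eta g}(v) = \arg\min_u \{\, g(u) + \tfrac{1}{2\eta}\|u - v\|^2 \,\}$.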
Adaptive Universal Generalized PageRank Graph Neural Network
https://openreview.net/forum?id=n6jl7fLxrP
https://openreview.net/forum?id=n6jl7fLxrP
Eli Chien,Jianhao Peng,Pan Li,Olgica Milenkovic
ICLR 2021,Poster
In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence, but they do not optimally trade off their utility or integrate them in a manner that is also universal. Here, universality refers to independence from homophily or heterophily graph assumptions. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrespective of the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.
https://openreview.net/pdf/3fd51494885a4f0252dd144ae51025065fef2186.pdf
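A short sketch of the adaptive propagation described above: node features are first transformed by an MLP, then repeatedly propagated with a normalized adjacency matrix, and the intermediate states are combined with learnable Generalized PageRank weights. Tensor layouts and the normalization are assumptions for illustration, not the reference implementation.

import torch

def gpr_propagate(features, adj_norm, mlp, gammas):
    # features: (num_nodes, in_dim); adj_norm: normalized adjacency with self-loops (dense or sparse);
    # gammas: learnable GPR weights of length K + 1 (one per propagation depth).
    h = mlp(features)
    out = gammas[0] * h
    for k in range(1, gammas.numel()):
        h = torch.sparse.mm(adj_norm, h) if adj_norm.is_sparse else adj_norm @ h
        out = out + gammas[k] * h
    return out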
Open Question Answering over Tables and Text
https://openreview.net/forum?id=MmCRswl1UYl
https://openreview.net/forum?id=MmCRswl1UYl
Wenhu Chen,Ming-Wei Chang,Eva Schlinger,William Yang Wang,William W. Cohen
ICLR 2021,Poster
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question. Most open QA systems have considered only retrieving information from unstructured text. Here we consider for the first time open QA over {\em both} tabular and textual data and present a new large-scale dataset \emph{Open Table-and-Text Question Answering} (OTT-QA) to evaluate performance on this task. Most questions in OTT-QA require multi-hop inference across tabular data and unstructured text, and the evidence required to answer a question can be distributed in different ways over these two types of input, making evidence retrieval challenging---our baseline model using an iterative retriever and BERT-based reader achieves an exact match score less than 10\%. We then propose two novel techniques to address the challenge of retrieving and aggregating evidence for OTT-QA. The first technique is to use ``early fusion'' to group multiple highly relevant tabular and textual units into a fused block, which provides more context for the retriever to search for. The second technique is to use a cross-block reader to model the cross-dependency between multiple retrieved evidence with global-local sparse attention. Combining these two techniques improves the score significantly, to above 27\%.
https://openreview.net/pdf/6efd9eab0db73088a48f58c2b76aff5b828c7471.pdf
Text Generation by Learning from Demonstrations
https://openreview.net/forum?id=RovX-uQ1Hua
https://openreview.net/forum?id=RovX-uQ1Hua
Richard Yuanzhe Pang,He He
ICLR 2021,Poster
Current approaches to text generation largely rely on autoregressive models and maximum likelihood estimation. This paradigm leads to (i) diverse but low-quality samples due to mismatched learning objective and evaluation metric (likelihood vs. quality) and (ii) exposure bias due to mismatched history distributions (gold vs. model-generated). To alleviate these problems, we frame text generation as an offline reinforcement learning (RL) problem with expert demonstrations (i.e., the reference), where the goal is to maximize quality given model-generated histories. We propose GOLD (generation by off-policy learning from demonstrations): an easy-to-optimize algorithm that learns from the demonstrations by importance weighting. Intuitively, GOLD upweights confident tokens and downweights unconfident ones in the reference during training, avoiding optimization issues faced by prior RL approaches that rely on online data collection. According to both automatic and human evaluation, models trained by GOLD outperform those trained by MLE and policy gradient on summarization, question generation, and machine translation. Further, our models are less sensitive to decoding algorithms and alleviate exposure bias.
https://openreview.net/pdf/b85a24ba77b30aff76c1f56b4e90e23fea31f402.pdf
Tilted Empirical Risk Minimization
https://openreview.net/forum?id=K5YasWXZT3O
https://openreview.net/forum?id=K5YasWXZT3O
Tian Li,Ahmad Beirami,Maziar Sanjabi,Virginia Smith
ICLR 2021,Poster
Empirical risk minimization (ERM) is typically designed to perform well on the average loss, which can result in estimators that are sensitive to outliers, generalize poorly, or treat subgroups unfairly. While many methods aim to address these problems individually, in this work, we explore them through a unified framework---tilted empirical risk minimization (TERM). In particular, we show that it is possible to flexibly tune the impact of individual losses through a straightforward extension to ERM using a hyperparameter called the tilt. We provide several interpretations of the resulting framework: We show that TERM can increase or decrease the influence of outliers, respectively, to enable fairness or robustness; has variance-reduction properties that can benefit generalization; and can be viewed as a smooth approximation to a superquantile method. We develop batch and stochastic first-order optimization methods for solving TERM, and show that the problem can be efficiently solved relative to common alternatives. Finally, we demonstrate that TERM can be used for a multitude of applications, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance. TERM is not only competitive with existing solutions tailored to these individual problems, but can also enable entirely new applications, such as simultaneously addressing outliers and promoting fairness.
https://openreview.net/pdf/292aa65cc32390e3e8557ebe28fa70380977f416.pdf
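The tilted objective described above replaces the average loss with a log-sum-exp aggregate parameterized by the tilt $t$ (written here in our own notation):

$$\widetilde{R}(t; \theta) \;=\; \frac{1}{t}\,\log\!\Big(\frac{1}{N}\sum_{i=1}^{N} e^{\,t\,\ell(x_i;\,\theta)}\Big),$$

which recovers standard ERM as $t \to 0$, emphasizes the largest losses (e.g., worst-off subgroups) for $t > 0$, and suppresses outlier losses for $t < 0$.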
Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs
https://openreview.net/forum?id=pGIHq1m7PU
https://openreview.net/forum?id=pGIHq1m7PU
Zhen Han,Peng Chen,Yunpu Ma,Volker Tresp
ICLR 2021,Poster
Modeling time-evolving knowledge graphs (KGs) has recently gained increasing interest. Here, graph representation learning has become the dominant paradigm for link prediction on temporal KGs. However, the embedding-based approaches largely operate in a black-box fashion, lacking the ability to interpret their predictions. This paper provides a link forecasting framework that reasons over query-relevant subgraphs of temporal KGs and jointly models the structural dependencies and the temporal dynamics. Especially, we propose a temporal relational attention mechanism and a novel reverse representation update scheme to guide the extraction of an enclosing subgraph around the query. The subgraph is expanded by an iterative sampling of temporal neighbors and by attention propagation. Our approach provides human-understandable evidence explaining the forecast. We evaluate our model on four benchmark temporal knowledge graphs for the link forecasting task. While being more explainable, our model obtains a relative improvement of up to 20 $\%$ on Hits@1 compared to the previous best temporal KG forecasting method. We also conduct a survey with 53 respondents, and the results show that the evidence extracted by the model for link forecasting is aligned with human understanding.
https://openreview.net/pdf/0ab0ca1b52f6655da73e49f5bd22facb0665152b.pdf
Bayesian Context Aggregation for Neural Processes
https://openreview.net/forum?id=ufZN2-aehFa
https://openreview.net/forum?id=ufZN2-aehFa
Michael Volpp,Fabian Flürenbrock,Lukas Grossberger,Christian Daniel,Gerhard Neumann
ICLR 2021,Poster
Formulating scalable probabilistic regression models with reliable uncertainty estimates has been a long-standing challenge in machine learning research. Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has shown promising results. In this paper, we focus on context aggregation, a central component of such architectures, which fuses information from multiple context data points. So far, this aggregation operation has been treated separately from the inference of a latent representation of the target function in CLV models. Our key contribution is to combine these steps into one holistic mechanism by phrasing context aggregation as a Bayesian inference problem. The resulting Bayesian Aggregation (BA) mechanism enables principled handling of task ambiguity, which is key for efficiently processing context information. We demonstrate on a range of challenging experiments that BA consistently improves upon the performance of traditional mean aggregation while remaining computationally efficient and fully compatible with existing NP-based models.
https://openreview.net/pdf/e30c747da671c808211da380e77feb9735c74530.pdf
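As a rough illustration of phrasing context aggregation as Bayesian inference, the sketch below performs conjugate Gaussian updates over a factorized latent, assuming an encoder has already produced a per-context mean and variance. The diagonal-Gaussian parametrization and the interface are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def bayesian_aggregation(mu_c: torch.Tensor, var_c: torch.Tensor,
                         mu_0: float = 0.0, var_0: float = 1.0):
    """Aggregate per-context Gaussian 'observations' of the latent variable
    by conjugate Gaussian updates instead of mean pooling.

    mu_c, var_c: (num_context, latent_dim) factors from a context encoder.
    Returns posterior mean and variance, each of shape (latent_dim,).
    """
    prior_prec = 1.0 / var_0
    post_prec = prior_prec + (1.0 / var_c).sum(dim=0)   # precisions add up
    post_var = 1.0 / post_prec
    post_mean = post_var * (mu_0 * prior_prec + (mu_c / var_c).sum(dim=0))
    return post_mean, post_var
```

With more (or less noisy) context points the posterior variance shrinks, which is one way task ambiguity gets reflected in the aggregated representation.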
Conformation-Guided Molecular Representation with Hamiltonian Neural Networks
https://openreview.net/forum?id=q-cnWaaoUTH
https://openreview.net/forum?id=q-cnWaaoUTH
Ziyao Li,Shuwen Yang,Guojie Song,Lingsheng Cai
ICLR 2021,Poster
Well-designed molecular representations (fingerprints) are vital to combine medicinal chemistry and deep learning. Whereas incorporating the 3D geometry of molecules (i.e. conformations) in their representations seems beneficial, current 3D algorithms are still in their infancy. In this paper, we propose a novel molecular representation algorithm which preserves 3D conformations of molecules with a Molecular Hamiltonian Network (HamNet). In HamNet, implicit positions and momenta of atoms in a molecule interact in the Hamiltonian Engine following the discretized Hamiltonian equations. These implicit coordinates are supervised with real conformations using translation- and rotation-invariant losses, and further used as inputs to the Fingerprint Generator, a message-passing neural network. Experiments show that the Hamiltonian Engine can well preserve molecular conformations, and that the fingerprints generated by HamNet achieve state-of-the-art performance on MoleculeNet, a standard molecular machine learning benchmark.
https://openreview.net/pdf/7a5a25fdbe36c7b0286d17dafd233f47bf7dd30c.pdf
Learning with AMIGo: Adversarially Motivated Intrinsic Goals
https://openreview.net/forum?id=ETBc_MIMgoX
https://openreview.net/forum?id=ETBc_MIMgoX
Andres Campero,Roberta Raileanu,Heinrich Kuttler,Joshua B. Tenenbaum,Tim Rocktäschel,Edward Grefenstette
ICLR 2021,Poster
A key challenge for reinforcement learning (RL) consists of learning in environments with sparse extrinsic rewards. In contrast to current RL methods, humans are able to learn new skills with little or no reward by using various forms of intrinsic motivation. We propose AMIGo, a novel agent incorporating -- as a form of meta-learning -- a goal-generating teacher that proposes Adversarially Motivated Intrinsic Goals to train a goal-conditioned "student" policy in the absence of (or alongside) environment reward. Specifically, through a simple but effective "constructively adversarial" objective, the teacher learns to propose increasingly challenging -- yet achievable -- goals that allow the student to learn general skills for acting in a new environment, independent of the task to be solved. We show that our method generates a natural curriculum of self-proposed goals which ultimately allows the agent to solve challenging procedurally-generated tasks where other forms of intrinsic motivation and state-of-the-art RL methods fail.
https://openreview.net/pdf/2424de49f541b07d5245c6d926cf266b520cb248.pdf
Training with Quantization Noise for Extreme Model Compression
https://openreview.net/forum?id=dV19Yyi1fS3
https://openreview.net/forum?id=dV19Yyi1fS3
Pierre Stock,Angela Fan,Benjamin Graham,Edouard Grave,Rémi Gribonval,Herve Jegou,Armand Joulin
ICLR 2021,Poster
We tackle the problem of producing compact models, maximizing their accuracy for a given model size. A standard solution is to train networks with Quantization Aware Training, where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator (STE). In this paper, we extend this approach to work with extreme compression methods where the approximations introduced by STE are severe. Our proposal is to only quantize a different random subset of weights during each forward pass, allowing for unbiased gradients to flow through the other weights. Controlling the amount of noise and its form allows for extreme compression rates while maintaining the performance of the original model. As a result, we establish new state-of-the-art compromises between accuracy and model size both in natural language processing and image classification. For example, applying our method to state-of-the-art Transformer and ConvNet architectures, we can achieve 82.5% accuracy on MNLI by compressing RoBERTa to 14 MB and 80.0% top-1 accuracy on ImageNet by compressing an EfficientNet-B3 to 3.3 MB.
https://openreview.net/pdf/e7c435ae8b65ac9199efd2c6f55258018a8a229b.pdf
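The core trick, quantizing only a random subset of weights on each forward pass so that unbiased gradients still flow through the rest, can be sketched as below. Uniform scalar quantization stands in for the more elaborate quantizers used in the paper (e.g., product quantization), and the function interface is illustrative.

```python
import torch

def quant_noise(weight: torch.Tensor, p: float = 0.1, bits: int = 8) -> torch.Tensor:
    """Quantize a random fraction p of the weights (straight-through on that
    subset); the remaining weights stay full precision, so their gradients
    are unbiased. Intended to be called inside a layer's forward pass."""
    if p <= 0.0:
        return weight
    half = 2 ** (bits - 1)
    scale = weight.abs().max() / (half - 1) + 1e-12
    quantized = torch.clamp(torch.round(weight / scale), -half, half - 1) * scale
    mask = (torch.rand_like(weight) < p).to(weight.dtype)
    # weight + mask * (q - w).detach(): the forward pass sees quantized values
    # on the masked subset, the backward pass treats them as identity.
    return weight + mask * (quantized - weight).detach()
```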
Interpreting and Boosting Dropout from a Game-Theoretic View
https://openreview.net/forum?id=Jacdvfjicf7
https://openreview.net/forum?id=Jacdvfjicf7
Hao Zhang,Sen Li,YinChao Ma,Mingjie Li,Yichen Xie,Quanshi Zhang
ICLR 2021,Poster
This paper aims to understand and improve the utility of the dropout operation from the perspective of game-theoretical interactions. We prove that dropout can suppress the strength of interactions between input variables of deep neural networks (DNNs). The theoretical proof is also verified by various experiments. Furthermore, we find that such interactions are strongly related to the over-fitting problem in deep learning. Thus, the utility of dropout can be regarded as decreasing interactions, thereby alleviating over-fitting. Based on this understanding, we propose the interaction loss to further improve the utility of dropout. Experimental results on various DNNs and datasets show that the interaction loss can effectively improve the utility of dropout and boost the performance of DNNs.
https://openreview.net/pdf/21165b3f3948c92ac8a6a60e5de44f9411235f53.pdf
VTNet: Visual Transformer Network for Object Goal Navigation
https://openreview.net/forum?id=DILxQP08O3B
https://openreview.net/forum?id=DILxQP08O3B
Heming Du,Xin Yu,Liang Zheng
ICLR 2021,Poster
Object goal navigation aims to steer an agent towards a target object based on observations of the agent. It is of pivotal importance to design effective visual representations of the observed scene in determining navigation actions. In this paper, we introduce a Visual Transformer Network (VTNet) for learning informative visual representations in navigation. VTNet is a highly effective structure that embodies two key properties for visual representations: First, the relationships among all the object instances in a scene are exploited; Second, the spatial locations of objects and image regions are emphasized so that directional navigation signals can be learned. Furthermore, we also develop a pre-training scheme to associate the visual representations with navigation signals, and thus facilitate navigation policy learning. In a nutshell, VTNet embeds object and region features with their location cues as spatial-aware descriptors and then incorporates all the encoded descriptors through attention operations to achieve informative representation for navigation. Given such visual representations, agents are able to explore the correlations between visual observations and navigation actions. For example, an agent would prioritize ``turning right'' over ``turning left'' when the visual representation emphasizes the right side of the activation map. Experiments in the artificial environment AI2-Thor demonstrate that VTNet significantly outperforms state-of-the-art methods in unseen testing environments.
https://openreview.net/pdf/e1c5a2f2e9fd64005c3b944fd743140b5c02bc74.pdf
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
https://openreview.net/forum?id=QO9-y8also-
https://openreview.net/forum?id=QO9-y8also-
Judy Borowski,Roland Simon Zimmermann,Judith Schepers,Robert Geirhos,Thomas S. A. Wallis,Matthias Bethge,Wieland Brendel
ICLR 2021,Poster
Feature visualizations such as synthetic maximally activating images are a widely used explanation method to better understand the information processing of convolutional neural networks (CNNs). At the same time, there are concerns that these visualizations might not accurately represent CNNs' inner workings. Here, we measure how much extremely activating images help humans to predict CNN activations. Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. (2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map. Given either synthetic or natural reference images, human participants choose which of two query images leads to strong positive activation. The experiment is designed to maximize participants' performance, and is the first to probe intermediate instead of final layer representations. We find that synthetic images indeed provide helpful information about feature map activations ($82\pm4\%$ accuracy; chance would be $50\%$). However, natural images --- originally intended to be a baseline --- outperform these synthetic images by a wide margin ($92\pm2\%$). Additionally, participants are faster and more confident for natural images, whereas subjective impressions about the interpretability of the feature visualizations by Olah et al. (2017) are mixed. The higher informativeness of natural images holds across most layers, for both expert and lay participants as well as for hand- and randomly-picked feature visualizations. Even if only a single reference image is given, synthetic images provide less information than natural images ($65\pm5\%$ vs. $73\pm4\%$). In summary, synthetic images from a popular feature visualization method are significantly less informative for assessing CNN activations than natural images. We argue that visualization methods should improve over this simple baseline.
https://openreview.net/pdf/915f3d2aa2618205e86bad4d630fe286139f8796.pdf
Multi-Class Uncertainty Calibration via Mutual Information Maximization-based Binning
https://openreview.net/forum?id=AICNpd8ke-m
https://openreview.net/forum?id=AICNpd8ke-m
Kanil Patel,William H. Beluch,Bin Yang,Michael Pfeiffer,Dan Zhang
ICLR 2021,Poster
Post-hoc multi-class calibration is a common approach for providing high-quality confidence estimates of deep neural network predictions. Recent work has shown that widely used scaling methods underestimate their calibration error, while alternative Histogram Binning (HB) methods often fail to preserve classification accuracy. When classes have small prior probabilities, HB also faces the issue of severe sample-inefficiency after the conversion into K one-vs-rest class-wise calibration problems. The goal of this paper is to resolve the identified issues of HB in order to provide calibrated confidence estimates using only a small holdout calibration dataset for bin optimization while preserving multi-class ranking accuracy. From an information-theoretic perspective, we derive the I-Max concept for binning, which maximizes the mutual information between labels and quantized logits. This concept mitigates potential loss in ranking performance due to lossy quantization, and by disentangling the optimization of bin edges and representatives allows simultaneous improvement of ranking and calibration performance. To improve the sample efficiency and estimates from a small calibration set, we propose a shared class-wise (sCW) calibration strategy, sharing one calibrator among similar classes (e.g., with similar class priors) so that the training sets of their class-wise calibration problems can be merged to train the single calibrator. The combination of sCW and I-Max binning outperforms the state of the art calibration methods on various evaluation metrics across different benchmark datasets and models, using a small calibration set (e.g., 1k samples for ImageNet).
https://openreview.net/pdf/e278ab3baccd3cdfc22ecd5e2c3951904f7a70a0.pdf
A Discriminative Gaussian Mixture Model with Sparsity
https://openreview.net/forum?id=-_Zp7r2-cGK
https://openreview.net/forum?id=-_Zp7r2-cGK
Hideaki Hayashi,Seiichi Uchida
ICLR 2021,Poster
In probabilistic classification, a discriminative model based on the softmax function has a potential limitation in that it assumes unimodality for each class in the feature space. The mixture model can address this issue, although it leads to an increase in the number of parameters. We propose a sparse classifier based on a discriminative GMM, referred to as a sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained via sparse Bayesian learning. Using this sparse learning framework, we can simultaneously remove redundant Gaussian components and reduce the number of parameters used in the remaining components during learning; this learning method reduces the model complexity, thereby improving the generalization capability. Furthermore, the SDGM can be embedded into neural networks (NNs), such as convolutional NNs, and can be trained in an end-to-end manner. Experimental results demonstrated that the proposed method outperformed the existing softmax-based discriminative models.
https://openreview.net/pdf/6fecf8af857ca0e108abee0d2dd9710cf7c3ad37.pdf
Trusted Multi-View Classification
https://openreview.net/forum?id=OOsR8BzCnl5
https://openreview.net/forum?id=OOsR8BzCnl5
Zongbo Han,Changqing Zhang,Huazhu Fu,Joey Tianyi Zhou
ICLR 2021,Poster
Multi-view classification (MVC) generally focuses on improving classification accuracy by using information from different views, typically integrating them into a unified comprehensive representation for downstream tasks. However, it is also crucial to dynamically assess the quality of a view for different samples in order to provide reliable uncertainty estimations, which indicate whether predictions can be trusted. To this end, we propose a novel multi-view classification method, termed trusted multi-view classification, which provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level. The algorithm jointly utilizes multiple views to promote both classification reliability (uncertainty estimation during testing) and robustness (out-of-distribution-awareness during training) by integrating evidence from each view. To achieve this, the Dirichlet distribution is used to model the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness for out-of-distribution samples. Extensive experimental results validate the effectiveness of the proposed model in accuracy, reliability and robustness.
https://openreview.net/pdf/4ae336db914c13c1db09afbb3dea3d948ad4aa37.pdf
IEPT: Instance-Level and Episode-Level Pretext Tasks for Few-Shot Learning
https://openreview.net/forum?id=xzqLpqRzxLq
https://openreview.net/forum?id=xzqLpqRzxLq
Manli Zhang,Jianhong Zhang,Zhiwu Lu,Tao Xiang,Mingyu Ding,Songfang Huang
ICLR 2021,Poster
The need of collecting large quantities of labeled training data for each new task has limited the usefulness of deep neural networks. Given data from a set of source tasks, this limitation can be overcome using two transfer learning approaches: few-shot learning (FSL) and self-supervised learning (SSL). The former aims to learn `how to learn' by designing learning episodes using source tasks to simulate the challenge of solving the target new task with few labeled samples. In contrast, the latter exploits an annotation-free pretext task across all source tasks in order to learn generalizable feature representations. In this work, we propose a novel Instance-level and Episode-level Pretext Task (IEPT) framework that seamlessly integrates SSL into FSL. Specifically, given an FSL episode, we first apply geometric transformations to each instance to generate extended episodes. At the instance-level, transformation recognition is performed as per standard SSL. Importantly, at the episode-level, two SSL-FSL hybrid learning objectives are devised: (1) The consistency across the predictions of an FSL classifier from different extended episodes is maximized as an episode-level pretext task. (2) The features extracted from each instance across different episodes are integrated to construct a single FSL classifier for meta-learning. Extensive experiments show that our proposed model (i.e., FSL with IEPT) achieves the new state-of-the-art.
https://openreview.net/pdf/a68102247933495b5b77811b3b5299cf97a108f4.pdf
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation
https://openreview.net/forum?id=KpfasTaLUpq
https://openreview.net/forum?id=KpfasTaLUpq
Jungo Kasai,Nikolaos Pappas,Hao Peng,James Cross,Noah Smith
ICLR 2021,Poster
Much recent effort has been invested in non-autoregressive neural machine translation, which appears to be an efficient alternative to state-of-the-art autoregressive machine translation on modern GPUs. In contrast to the latter, where generation is sequential, the former allows generation to be parallelized across target token positions. Some of the latest non-autoregressive models have achieved impressive translation quality-speed tradeoffs compared to autoregressive baselines. In this work, we reexamine this tradeoff and argue that autoregressive baselines can be substantially sped up without loss in accuracy. Specifically, we study autoregressive models with encoders and decoders of varied depths. Our extensive experiments show that given a sufficiently deep encoder, a single-layer autoregressive decoder can substantially outperform strong non-autoregressive models with comparable inference speed. We show that the speed disadvantage for autoregressive baselines compared to non-autoregressive methods has been overestimated in three aspects: suboptimal layer allocation, insufficient speed measurement, and lack of knowledge distillation. Our results establish a new protocol for future research toward fast, accurate machine translation. Our code is available at https://github.com/jungokasai/deep-shallow.
https://openreview.net/pdf/860e9a0a21e4953baf28e2042e96b83bfedd8bff.pdf
Effective Abstract Reasoning with Dual-Contrast Network
https://openreview.net/forum?id=ldxlzGYWDmW
https://openreview.net/forum?id=ldxlzGYWDmW
Tao Zhuo,Mohan Kankanhalli
ICLR 2021,Poster
As a step towards improving the abstract reasoning capability of machines, we aim to solve Raven’s Progressive Matrices (RPM) with neural networks, since solving RPM puzzles is highly correlated with human intelligence. Unlike previous methods that use auxiliary annotations or assume hidden rules to produce appropriate feature representations, we only use the ground truth answer of each question for model learning, aiming for an intelligent agent to have a strong learning capability with a small amount of supervision. Based on the RPM problem formulation, the correct answer filled into the missing entry of the third row/column has to best satisfy the same rules shared between the first two rows/columns. Thus, we design a simple yet effective Dual-Contrast Network (DCNet) to exploit the inherent structure of RPM puzzles. Specifically, a rule contrast module is designed to compare the latent rules between the filled row/column and the first two rows/columns; a choice contrast module is designed to increase the relative differences between candidate choices. Experimental results on the RAVEN and PGM datasets show that DCNet outperforms the state-of-the-art methods by a large margin of 5.77%. Further experiments on few training samples and model generalization also show the effectiveness of DCNet. Code is available at https://github.com/visiontao/dcnet.
https://openreview.net/pdf/130f155d219a92a6ce0511bf6e936499ff17abdd.pdf
On Position Embeddings in BERT
https://openreview.net/forum?id=onxoVA9FxMw
https://openreview.net/forum?id=onxoVA9FxMw
Benyou Wang,Lifeng Shang,Christina Lioma,Xin Jiang,Hao Yang,Qun Liu,Jakob Grue Simonsen
ICLR 2021,Poster
Various Position Embeddings (PEs) have been proposed in Transformer based architectures~(e.g. BERT) to model word order. These are empirically-driven and perform well, but no formal framework exists to systematically study them. To address this, we present three properties of PEs that capture word distance in vector space: translation invariance, monotonicity, and symmetry. These properties formally capture the behaviour of PEs and allow us to reinterpret sinusoidal PEs in a principled way. Moreover, we propose a new probing test (called `identical word probing') and mathematical indicators to quantitatively detect the general attention patterns with respect to the above properties. An empirical evaluation of seven PEs (and their combinations) for classification (GLUE) and span prediction (SQuAD) shows that: (1) both classification and span prediction benefit from translation invariance and local monotonicity, while symmetry slightly decreases performance; (2) The fully-learnable absolute PE performs better in classification, while relative PEs perform better in span prediction. We contribute the first formal and quantitative analysis of desiderata for PEs, and a principled discussion about their correlation to the performance of typical downstream tasks.
https://openreview.net/pdf/be0283e323f1b118c975dbc46f7f75c59b467fe0.pdf
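For reference, the sinusoidal position embeddings that the paper reinterprets through translation invariance, monotonicity, and symmetry can be generated with the standard Transformer construction sketched below; the function name and shapes are my own, and an even embedding dimension is assumed.

```python
import numpy as np

def sinusoidal_position_embeddings(max_len: int, dim: int) -> np.ndarray:
    """Return a (max_len, dim) matrix of sinusoidal position embeddings.
    Even columns hold sin terms, odd columns hold cos terms, with
    geometrically spaced frequencies. Assumes dim is even."""
    positions = np.arange(max_len)[:, None]                        # (max_len, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)  # (dim // 2,)
    pe = np.zeros((max_len, dim))
    pe[:, 0::2] = np.sin(positions * freqs)
    pe[:, 1::2] = np.cos(positions * freqs)
    return pe
```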
Neural Pruning via Growing Regularization
https://openreview.net/forum?id=o966_Is_nPA
https://openreview.net/forum?id=o966_Is_nPA
Huan Wang,Can Qin,Yulun Zhang,Yun Fu
ICLR 2021,Poster
Regularization has long been utilized to learn sparsity in deep neural network pruning. However, its role is mainly explored in the small penalty strength regime. In this work, we extend its application to a new scenario where the regularization grows large gradually to tackle two central problems of pruning: pruning schedule and weight importance scoring. (1) The former topic is newly brought up in this work, which we find critical to pruning performance yet which has received little research attention. Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains compared with its one-shot counterpart, even when the same weights are removed. (2) The growing penalty scheme also brings us an approach to exploit the Hessian information for more accurate pruning without knowing its specific values, thus not being hampered by the common Hessian approximation problems. Empirically, the proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning. Their effectiveness is demonstrated with modern deep neural networks on the CIFAR and ImageNet datasets, achieving competitive results compared to many state-of-the-art algorithms. Our code and trained models are publicly available at https://github.com/mingsun-tse/regularization-pruning.
https://openreview.net/pdf/fc6d04c3b9fc74c91c68bc4f55b02db36753f98c.pdf
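A minimal sketch of the growing-penalty idea, assuming a boolean mask of weights selected for pruning is already available; the step schedule, coefficient, and interface are placeholders rather than the paper's exact recipe.

```python
import torch

def growing_l2_penalty(model: torch.nn.Module, prune_masks: dict,
                       step: int, delta: float = 1e-4, every: int = 100) -> torch.Tensor:
    """Extra L2 penalty whose coefficient rises during training, applied only
    to the weights flagged for removal. prune_masks maps parameter name ->
    boolean tensor of the same shape (on the same device as the parameter).
    Add the returned value to the task loss."""
    coeff = delta * (step // every)          # penalty factor grows in stages
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for name, param in model.named_parameters():
        if name in prune_masks:
            penalty = penalty + (param[prune_masks[name]] ** 2).sum()
    return coeff * penalty
```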
Mixed-Features Vectors and Subspace Splitting
https://openreview.net/forum?id=l-LGlk4Yl6G
https://openreview.net/forum?id=l-LGlk4Yl6G
Alejandro Pimentel-Alarcón,Daniel L. Pimentel-Alarcón
ICLR 2021,Poster
Motivated by metagenomics, recommender systems, dictionary learning, and related problems, this paper introduces subspace splitting (SS): the task of clustering the entries of what we call a mixed-features vector, that is, a vector whose subsets of coordinates agree with a collection of subspaces. We derive precise identifiability conditions under which SS is well-posed, thus providing the first fundamental theory for this problem. We also propose the first three practical SS algorithms, each with advantages and disadvantages: a random sampling method, a projection-based greedy heuristic, and an alternating Lloyd-type algorithm; all allow noise, outliers, and missing data. Our extensive experiments outline the performance of our algorithms, and in the absence of other SS algorithms, for reference we compare against methods for closely related problems, like robust matched subspace detection and maximum feasible subsystem, which are special, simpler cases of SS.
https://openreview.net/pdf/5d2183584013a66871a20dd3988dc4e8acf94ea4.pdf
Hierarchical Reinforcement Learning by Discovering Intrinsic Options
https://openreview.net/forum?id=r-gPPHEjpmw
https://openreview.net/forum?id=r-gPPHEjpmw
Jesse Zhang,Haonan Yu,Wei Xu
ICLR 2021,Poster
We propose a hierarchical reinforcement learning method, HIDIO, that can learn task-agnostic options in a self-supervised manner while jointly learning to utilize them to solve sparse-reward tasks. Unlike current hierarchical RL approaches that tend to formulate goal-reaching low-level tasks or pre-define ad hoc lower-level policies, HIDIO encourages lower-level option learning that is independent of the task at hand, requiring few assumptions or little knowledge about the task structure. These options are learned through an intrinsic entropy minimization objective conditioned on the option sub-trajectories. The learned options are diverse and task-agnostic. In experiments on sparse-reward robotic manipulation and navigation tasks, HIDIO achieves higher success rates with greater sample efficiency than regular RL baselines and two state-of-the-art hierarchical RL methods. Code at: https://github.com/jesbu1/hidio.
https://openreview.net/pdf/8ab82acd2672b63eb1d694fcb5fc26a32c2f6d74.pdf
Sharper Generalization Bounds for Learning with Gradient-dominated Objective Functions
https://openreview.net/forum?id=r28GdiQF7vM
https://openreview.net/forum?id=r28GdiQF7vM
Yunwen Lei,Yiming Ying
ICLR 2021,Poster
Stochastic optimization has become the workhorse behind many successful machine learning applications, which motivates much theoretical analysis to understand its empirical behavior. In comparison, there is far less work studying its generalization behavior, especially in a non-convex learning setting. In this paper, we study the generalization behavior of stochastic optimization by leveraging the algorithmic stability for learning with $\beta$-gradient-dominated objective functions. We develop generalization bounds of the order $O(1/(n\beta))$ plus the convergence rate of the optimization algorithm, where $n$ is the sample size. Our stability analysis significantly improves the existing non-convex analysis by removing the bounded gradient assumption and implying better generalization bounds. We achieve this improvement by exploiting the smoothness of loss functions instead of the Lipschitz condition in Charles & Papailiopoulos (2018). We apply our general results to various stochastic optimization algorithms, which show clearly how the variance-reduction techniques improve not only training but also generalization. Furthermore, our discussion explains how interpolation helps generalization for highly expressive models.
https://openreview.net/pdf/43ac8e3a506c7ad54162957c4698675f659e050a.pdf
Representation Learning for Sequence Data with Deep Autoencoding Predictive Components
https://openreview.net/forum?id=Naqw7EHIfrv
https://openreview.net/forum?id=Naqw7EHIfrv
Junwen Bai,Weiran Wang,Yingbo Zhou,Caiming Xiong
ICLR 2021,Poster
We propose Deep Autoencoding Predictive Components (DAPC) -- a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. We encourage this latent structure by maximizing an estimate of \emph{predictive information} of latent feature sequences, which is the mutual information between the past and future windows at each time step. In contrast to the mutual information lower bound commonly used by contrastive learning, the estimate of predictive information we adopt is exact under a Gaussian assumption. Additionally, it can be computed without negative sampling. To reduce the degeneracy of the latent space extracted by powerful encoders and keep useful information from the inputs, we regularize predictive information learning with a challenging masked reconstruction loss. We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
https://openreview.net/pdf/1d9efef224111e20fa66c34f1102165be8afc889.pdf
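For jointly Gaussian past and future latent windows, the predictive-information estimate mentioned above reduces to the standard Gaussian mutual information; in my notation, with $\Sigma$ denoting empirical covariances of the past window, the future window, and their concatenation:

```latex
I(X_{\text{past}}; X_{\text{future}})
  = \tfrac{1}{2}\log\det\Sigma_{\text{past}}
  + \tfrac{1}{2}\log\det\Sigma_{\text{future}}
  - \tfrac{1}{2}\log\det\Sigma_{\text{joint}}
```

This quantity can be computed directly from latent feature sequences without negative sampling, which is the property the abstract highlights.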
Average-case Acceleration for Bilinear Games and Normal Matrices
https://openreview.net/forum?id=H0syOoy3Ash
https://openreview.net/forum?id=H0syOoy3Ash
Carles Domingo-Enrich,Fabian Pedregosa,Damien Scieur
ICLR 2021,Poster
Advances in generative modeling and adversarial learning have given rise to renewed interest in smooth games. However, the absence of symmetry in the matrix of second derivatives poses challenges that are not present in the classical minimization framework. While a rich theory of average-case analysis has been developed for minimization problems, little is known in the context of smooth games. In this work we take a first step towards closing this gap by developing average-case optimal first-order methods for a subset of smooth games. We make the following three main contributions. First, we show that for zero-sum bilinear games the average-case optimal method is the optimal method for the minimization of the Hamiltonian. Second, we provide an explicit expression for the optimal method corresponding to normal matrices, potentially non-symmetric. Finally, we specialize it to matrices with eigenvalues located in a disk and show a provable speed-up compared to worst-case optimal algorithms. We illustrate our findings through benchmarks with a varying degree of mismatch with our assumptions.
https://openreview.net/pdf/fe8bbfea3f4bea0de75956043f18ca370ff6f502.pdf
Learning Task-General Representations with Generative Neuro-Symbolic Modeling
https://openreview.net/forum?id=qzBUIzq5XR2
https://openreview.net/forum?id=qzBUIzq5XR2
Reuben Feinman,Brenden M. Lake
ICLR 2021,Poster
People can learn rich, general-purpose conceptual representations from only raw perceptual inputs. Current machine learning approaches fall well short of these human standards, although different modeling traditions often have complementary strengths. Symbolic models can capture the compositional and causal knowledge that enables flexible generalization, but they struggle to learn from raw inputs, relying on strong abstractions and simplifying assumptions. Neural network models can learn directly from raw data, but they struggle to capture compositional and causal structure and typically must retrain to tackle new tasks. We bring together these two traditions to learn generative models of concepts that capture rich compositional and causal structure, while learning from raw data. We develop a generative neuro-symbolic (GNS) model of handwritten character concepts that uses the control flow of a probabilistic program, coupled with symbolic stroke primitives and a symbolic image renderer, to represent the causal and compositional processes by which characters are formed. The distributions of parts (strokes), and correlations between parts, are modeled with neural network subroutines, allowing the model to learn directly from raw data and express nonparametric statistical relationships. We apply our model to the Omniglot challenge of human-level concept learning, using a background set of alphabets to learn an expressive prior distribution over character drawings. In a subsequent evaluation, our GNS model uses probabilistic inference to learn rich conceptual representations from a single training image that generalize to 4 unique tasks, succeeding where previous work has fallen short.
https://openreview.net/pdf/6431925bd616a25a9a413f303a0b0d9302b580eb.pdf
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
https://openreview.net/forum?id=PObuuGVrGaZ
https://openreview.net/forum?id=PObuuGVrGaZ
Zhiqiang Shen,Zechun Liu,Dejia Xu,Zitian Chen,Kwang-Ting Cheng,Marios Savvides
ICLR 2021,Poster
This work aims to empirically clarify a recently discovered perspective that label smoothing is incompatible with knowledge distillation. We begin by introducing the motivation behind this claimed incompatibility, i.e., that label smoothing erases relative information between teacher logits. We provide a novel connection showing how label smoothing affects the distributions of semantically similar and dissimilar classes. Then we propose a metric to quantitatively measure the degree of erased information in a sample's representation. After that, we study the one-sidedness and imperfections of the incompatibility view through extensive analyses, visualizations and comprehensive experiments on Image Classification, Binary Networks, and Neural Machine Translation. Finally, we broadly discuss several circumstances wherein label smoothing will indeed lose its effectiveness.
https://openreview.net/pdf/eef0d3feb201e5b62ae6a912c1b6f67a7c531e39.pdf
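For concreteness, the label-smoothing loss under discussion (in its standard uniform-smoothing form) looks like the sketch below; the paper's analysis concerns how such smoothing reshapes the teacher logits used for distillation, not this particular implementation.

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits: torch.Tensor, targets: torch.Tensor,
                       eps: float = 0.1) -> torch.Tensor:
    """Cross-entropy against a smoothed target: probability 1 - eps on the
    gold class and eps / (K - 1) spread over the remaining K - 1 classes."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_probs, eps / (num_classes - 1))
    smooth.scatter_(-1, targets.unsqueeze(-1), 1.0 - eps)
    return -(smooth * log_probs).sum(dim=-1).mean()
```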
The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods
https://openreview.net/forum?id=aYuZO9DIdnn
https://openreview.net/forum?id=aYuZO9DIdnn
Louis THIRY,Michael Arbel,Eugene Belilovsky,Edouard Oyallon
ICLR 2021,Poster
A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis. In this work, we highlight the importance of a data-dependent feature extraction step that is key to obtaining good performance in convolutional kernel methods. This step typically corresponds to a whitened dictionary of patches, and gives rise to data-driven convolutional kernel methods. We extensively study its effect, demonstrating that it is the key ingredient for the high performance of these methods. Specifically, we show that one of the simplest instances of such kernel methods, based on a single layer of image patches followed by a linear classifier, already obtains classification accuracies on CIFAR-10 in the same range as previous, more sophisticated convolutional kernel methods. We scale this method to the challenging ImageNet dataset, showing that such a simple approach can exceed all existing non-learned representation methods. This is a new baseline for object recognition without representation learning methods that initiates the investigation of convolutional kernel models on ImageNet. We conduct experiments to analyze the dictionary we used; our ablations show that it exhibits low-dimensional properties.
https://openreview.net/pdf/a4eeda4ce2da499c9d47258922e4e8d57aaceb28.pdf
Graph Coarsening with Neural Networks
https://openreview.net/forum?id=uxpzitPEooJ
https://openreview.net/forum?id=uxpzitPEooJ
Chen Cai,Dingkang Wang,Yusu Wang
ICLR 2021,Poster
As large-scale graphs become increasingly prevalent, processing, extracting and analyzing large graph data poses significant computational challenges. Graph coarsening is one popular technique to reduce the size of a graph while maintaining essential properties. Despite the rich graph coarsening literature, there is only limited exploration of data-driven methods in the field. In this work, we leverage the recent progress of deep learning on graphs for graph coarsening. We first propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, we need to carefully choose the Laplace operator on the coarse graph and associated projection/lift operators. Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way. Through extensive experiments on both synthetic and real networks, we demonstrate that our method significantly improves common graph coarsening methods under various metrics, reduction ratios, graph sizes, and graph types. It generalizes to graphs of larger size (more than $25\times$ the size of the training graphs), adapts to different losses (both differentiable and non-differentiable), and scales to much larger graphs than previous work.
https://openreview.net/pdf/5b0ab91f0078b083f6e927f1ca10b45bdc01729c.pdf
On the Universality of the Double Descent Peak in Ridgeless Regression
https://openreview.net/forum?id=0IO5VdnSAaH
https://openreview.net/forum?id=0IO5VdnSAaH
David Holzmüller
ICLR 2021,Poster
We prove a non-asymptotic distribution-independent lower bound for the expected mean squared generalization error caused by label noise in ridgeless linear regression. Our lower bound generalizes a similar known result to the overparameterized (interpolating) regime. In contrast to most previous works, our analysis applies to a broad class of input distributions with almost surely full-rank feature matrices, which allows us to cover various types of deterministic or random feature maps. Our lower bound is asymptotically sharp and implies that in the presence of label noise, ridgeless linear regression does not perform well around the interpolation threshold for any of these feature maps. We analyze the imposed assumptions in detail and provide a theory for analytic (random) feature maps. Using this theory, we can show that our assumptions are satisfied for input distributions with a (Lebesgue) density and feature maps given by random deep neural networks with analytic activation functions like sigmoid, tanh, softplus or GELU. As further examples, we show that feature maps from random Fourier features and polynomial kernels also satisfy our assumptions. We complement our theory with further experimental and analytic results.
https://openreview.net/pdf/aa0b9683f2ce0dc2584f031c8d4e614c36489280.pdf
Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition
https://openreview.net/forum?id=Yz-XtK5RBxB
https://openreview.net/forum?id=Yz-XtK5RBxB
Seon-Ho Lee,Chang-Su Kim
ICLR 2021,Poster
We propose the deep repulsive clustering (DRC) algorithm of ordered data for effective order learning. First, we develop the order-identity decomposition (ORID) network to divide the information of an object instance into an order-related feature and an identity feature. Then, we group object instances into clusters according to their identity features using a repulsive term. Moreover, we estimate the rank of a test instance, by comparing it with references within the same cluster. Experimental results on facial age estimation, aesthetic score regression, and historical color image classification show that the proposed algorithm can cluster ordered data effectively and also yield excellent rank estimation performance.
https://openreview.net/pdf/6ef111b17ac724c39a9b25c8b3761363cc5e0bd3.pdf
Rapid Task-Solving in Novel Environments
https://openreview.net/forum?id=F-mvpFpn_0q
https://openreview.net/forum?id=F-mvpFpn_0q
Samuel Ritter,Ryan Faulkner,Laurent Sartran,Adam Santoro,Matthew Botvinick,David Raposo
ICLR 2021,Poster
We propose the challenge of rapid task-solving in novel environments (RTS), wherein an agent must solve a series of tasks as rapidly as possible in an unfamiliar environment. An effective RTS agent must balance between exploring the unfamiliar environment and solving its current task, all while building a model of the new environment over which it can plan when faced with later tasks. While modern deep RL agents exhibit some of these abilities in isolation, none are suitable for the full RTS challenge. To enable progress toward RTS, we introduce two challenge domains: (1) a minimal RTS challenge called the Memory&Planning Game and (2) One-Shot StreetLearn Navigation, which introduces scale and complexity from real-world data. We demonstrate that state-of-the-art deep RL agents fail at RTS in both domains, and that this failure is due to an inability to plan over gathered knowledge. We develop Episodic Planning Networks (EPNs) and show that deep-RL agents with EPNs excel at RTS, outperforming the nearest baseline by factors of 2-3 and learning to navigate held-out StreetLearn maps within a single episode. We show that EPNs learn to execute a value iteration-like planning algorithm and that they generalize to situations beyond their training experience.
https://openreview.net/pdf/2a2a34541a2b4e34e92e1050f5935a08cca0163b.pdf
DINO: A Conditional Energy-Based GAN for Domain Translation
https://openreview.net/forum?id=WAISmwsqDsb
https://openreview.net/forum?id=WAISmwsqDsb
Konstantinos Vougioukas,Stavros Petridis,Maja Pantic
ICLR 2021,Poster
Domain translation is the process of transforming data from one domain to another while preserving the common semantics. Some of the most popular domain translation systems are based on conditional generative adversarial networks, which use source domain data to drive the generator and as an input to the discriminator. However, this approach does not enforce the preservation of shared semantics since the conditional input can often be ignored by the discriminator. We propose an alternative method for conditioning and present a new framework, where two networks are simultaneously trained, in a supervised manner, to perform domain translation in opposite directions. Our method is not only better at capturing the shared information between two domains but is more generic and can be applied to a broader range of problems. The proposed framework performs well even in challenging cross-modal translations, such as video-driven speech reconstruction, for which other systems struggle to maintain correspondence.
https://openreview.net/pdf/1770fc1a0716d2fde0cefb49d59d540311331789.pdf
Removing Undesirable Feature Contributions Using Out-of-Distribution Data
https://openreview.net/forum?id=eIHYL6fpbkA
https://openreview.net/forum?id=eIHYL6fpbkA
Saehyung Lee,Changhwa Park,Hyungyu Lee,Jihun Yi,Jonghyun Lee,Sungroh Yoon
ICLR 2021,Poster
Several data augmentation methods deploy unlabeled-in-distribution (UID) data to bridge the gap between the training and inference of neural networks. However, these methods have clear limitations in terms of availability of UID data and dependence of algorithms on pseudo-labels. Herein, we propose a data augmentation method to improve generalization in both adversarial and standard learning by using out-of-distribution (OOD) data that are devoid of the abovementioned issues. We show how to improve generalization theoretically using OOD data in each learning scenario and complement our theoretical analysis with experiments on CIFAR-10, CIFAR-100, and a subset of ImageNet. The results indicate that undesirable features are shared even among image data that seem to have little correlation from a human point of view. We also present the advantages of the proposed method through comparison with other data augmentation methods, which can be used in the absence of UID data. Furthermore, we demonstrate that the proposed method can further improve the existing state-of-the-art adversarial training.
https://openreview.net/pdf/ad4c27af4a6e3b2fdbb296b3a6f43b65b85aab49.pdf
Accurate Learning of Graph Representations with Graph Multiset Pooling
https://openreview.net/forum?id=JHcqXGaqiGn
https://openreview.net/forum?id=JHcqXGaqiGn
Jinheon Baek,Minki Kang,Sung Ju Hwang
ICLR 2021,Poster
Graph neural networks have been widely used for modeling graph data, achieving impressive results on node classification and link prediction tasks. Yet, obtaining an accurate representation for a graph further requires a pooling function that maps a set of node representations into a compact form. A simple sum or average over all node representations considers all node features equally, without consideration of their task relevance or any structural dependencies among them. Recently proposed hierarchical graph pooling methods, on the other hand, may yield the same representation for two different graphs that are distinguished by the Weisfeiler-Lehman test, as they suboptimally preserve information from the node features. To tackle these limitations of existing graph pooling methods, we first formulate the graph pooling problem as a multiset encoding problem with auxiliary information about the graph structure, and propose a Graph Multiset Transformer (GMT), which is a multi-head attention based global pooling layer that captures the interaction between nodes according to their structural dependencies. We show that GMT satisfies both injectiveness and permutation invariance, such that it is at most as powerful as the Weisfeiler-Lehman graph isomorphism test. Moreover, our methods can be easily extended to the previous node clustering approaches for hierarchical graph pooling. Our experimental results show that GMT significantly outperforms state-of-the-art graph pooling methods on graph classification benchmarks with high memory and time efficiency, and obtains even larger performance gains on graph reconstruction and generation tasks.
https://openreview.net/pdf/d806543cd6401134d798bb7a6f0a2e33a9823858.pdf
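The generic building block behind this kind of attention-based pooling is a small set of learned seed vectors attending over the node embeddings; the sketch below shows only that generic idea, not GMT's full architecture (which additionally injects graph structure through GNNs before attention). The class name and defaults are mine.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Pool a variable-size set of node embeddings into a fixed-size vector
    by letting learned 'seed' queries attend over the nodes."""
    def __init__(self, dim: int, num_seeds: int = 1, num_heads: int = 4):
        super().__init__()
        # dim must be divisible by num_heads for nn.MultiheadAttention.
        self.seeds = nn.Parameter(torch.randn(num_seeds, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_nodes, dim) -> (batch, num_seeds * dim)
        queries = self.seeds.unsqueeze(0).expand(node_feats.size(0), -1, -1)
        pooled, _ = self.attn(queries, node_feats, node_feats)
        return pooled.flatten(1)
```

Used as `AttentionPool(dim=128)(node_feats)`, it returns one fixed-size vector per graph regardless of the number of nodes.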
Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning
https://openreview.net/forum?id=ce6CFXBh30h
https://openreview.net/forum?id=ce6CFXBh30h
Wonyong Jeong,Jaehong Yoon,Eunho Yang,Sung Ju Hwang
ICLR 2021,Poster
While existing federated learning approaches mostly require that clients have fully-labeled data to train on, in realistic settings, data obtained at the client-side often comes without any accompanying labels. Such deficiency of labels may result from either high labeling cost, or difficulty of annotation due to the requirement of expert knowledge. Thus the private data at each client may be either partly labeled, or completely unlabeled with labeled data being available only at the server, which leads us to a new practical federated learning problem, namely Federated Semi-Supervised Learning (FSSL). In this work, we study two essential scenarios of FSSL based on the location of the labeled data. The first scenario considers a conventional case where clients have both labeled and unlabeled data (labels-at-client), and the second scenario considers a more challenging case, where the labeled data is only available at the server (labels-at-server). We then propose a novel method to tackle the problems, which we refer to as Federated Matching (FedMatch). FedMatch improves upon naive combinations of federated learning and semi-supervised learning approaches with a new inter-client consistency loss and decomposition of the parameters for disjoint learning on labeled and unlabeled data. Through extensive experimental validation of our method in the two different scenarios, we show that our method outperforms both local semi-supervised learning and baselines which naively combine federated learning with semi-supervised learning.
https://openreview.net/pdf/ad9ec9703d076e1e57c6d24f75e75deb35be77b0.pdf
Contrastive Learning with Adversarial Perturbations for Conditional Text Generation
https://openreview.net/forum?id=Wga_hrCa3P3
https://openreview.net/forum?id=Wga_hrCa3P3
Seanie Lee,Dong Bok Lee,Sung Ju Hwang
ICLR 2021,Poster
Recently, sequence-to-sequence (seq2seq) models with the Transformer architecture have achieved remarkable performance on various conditional text generation tasks, such as machine translation. However, most of them are trained with teacher forcing, with the ground truth label given at each time step, without being exposed to incorrectly generated tokens during training, which hurts their generalization to unseen inputs; this is known as the "exposure bias" problem. In this work, we propose to solve the conditional text generation problem by contrasting positive pairs with negative pairs, such that the model is exposed to various valid or incorrect perturbations of the inputs, for improved generalization. However, training the model with a naïve contrastive learning framework using random non-target sequences as negative examples is suboptimal, since they are easily distinguishable from the correct output, especially so for models pretrained on large text corpora. Also, generating positive examples requires domain-specific augmentation heuristics which may not generalize over diverse domains. To tackle this problem, we propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models. Specifically, we generate negative examples by adding small perturbations to the input sequence to minimize its conditional likelihood, and positive examples by adding large perturbations while enforcing it to have a high conditional likelihood. Such ``hard'' positive and negative pairs generated using our method guide the model to better distinguish correct outputs from incorrect ones. We empirically show that our proposed method significantly improves the generalization of seq2seq models on three text generation tasks --- machine translation, text summarization, and question generation.
https://openreview.net/pdf/a9b3656c6f165fb3975db9f4187eae140eca3593.pdf
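Very loosely, the perturbation step could be sketched as a single signed-gradient move in embedding space, as below; `model.embed` and `model.nll` are hypothetical hooks into a seq2seq model, and the paper's actual construction of positives and negatives is more involved than this one-step version.

```python
import torch

def perturb_source(model, src, tgt, epsilon: float = 1.0, negative: bool = True):
    """Build a perturbed input for contrastive learning by moving the source
    embeddings along the NLL gradient (negative example: lowers conditional
    likelihood) or against it (positive example: keeps likelihood high).
    `model.embed` and `model.nll` are placeholder method names."""
    embeds = model.embed(src).detach().requires_grad_(True)
    loss = model.nll(inputs_embeds=embeds, labels=tgt)   # conditional NLL
    grad, = torch.autograd.grad(loss, embeds)
    direction = grad.sign() if negative else -grad.sign()
    return (embeds + epsilon * direction).detach()
```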
Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks
https://openreview.net/forum?id=FZ1oTwcXchK
https://openreview.net/forum?id=FZ1oTwcXchK
Shikuang Deng,Shi Gu
ICLR 2021,Poster
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs) that comprise spiking neurons to process asynchronous discrete signals. While more efficient in power consumption and inference speed on neuromorphic hardware, SNNs are usually difficult to train directly from scratch with spikes due to the discreteness. As an alternative, many efforts have been devoted to converting conventional ANNs into SNNs by copying the weights from ANNs and adjusting the spiking threshold potential of neurons in SNNs. Researchers have designed new SNN architectures and conversion algorithms to diminish the conversion error. However, an effective conversion should address the difference between the SNN and ANN architectures with an efficient approximation of the loss function, which is missing in the field. In this work, we analyze the conversion error by recursive reduction to layer-wise summation and propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balance and soft-reset mechanisms. This pipeline enables almost no accuracy loss between the converted SNNs and conventional ANNs with only $\sim1/10$ of the typical SNN simulation time. Our method is promising for deployment on embedded platforms, offering better support for SNNs under limited energy and memory budgets. Codes are available at https://github.com/Jackn0/snn_optimal_conversion_pipeline.
https://openreview.net/pdf/3fa8b0934cae8be35e892a8347c631357447a5e7.pdf
Efficient Continual Learning with Modular Networks and Task-Driven Priors
https://openreview.net/forum?id=EKV158tSfwv
https://openreview.net/forum?id=EKV158tSfwv
Tom Veniat,Ludovic Denoyer,MarcAurelio Ranzato
ICLR 2021,Poster
Existing literature in Continual Learning (CL) has focused on overcoming catastrophic forgetting, the inability of the learner to recall how to perform tasks observed in the past. There are however other desirable properties of a CL system, such as the ability to transfer knowledge from previous tasks and to scale memory and compute sub-linearly with the number of tasks. Since most current benchmarks focus only on forgetting using short streams of tasks, we first propose a new suite of benchmarks to probe CL algorithms across these new axes. Finally, we introduce a new modular architecture, whose modules represent atomic skills that can be composed to perform a certain task. Learning a task reduces to figuring out which past modules to re-use, and which new modules to instantiate to solve the current task. Our learning algorithm leverages a task-driven prior over the exponential search space of all possible ways to combine modules, enabling efficient learning on long streams of tasks. Our experiments show that this modular architecture and learning algorithm perform competitively on widely used CL benchmarks while yielding superior performance on the more challenging benchmarks we introduce in this work. The Benchmark is publicly available at https://github.com/facebookresearch/CTrLBenchmark.
https://openreview.net/pdf/8c3f194bd890ab75d3046245b587fdf9c6393d9b.pdf
On the Universality of Rotation Equivariant Point Cloud Networks
https://openreview.net/forum?id=6NFBvWlRXaG
https://openreview.net/forum?id=6NFBvWlRXaG
Nadav Dym,Haggai Maron
ICLR 2021,Poster
Learning functions on point clouds has applications in many fields, including computer vision, computer graphics, physics, and chemistry. Recently, there has been a growing interest in neural architectures that are invariant or equivariant to all three shape-preserving transformations of point clouds: translation, rotation, and permutation. In this paper, we present a first study of the approximation power of these architectures. We first derive two sufficient conditions for an equivariant architecture to have the universal approximation property, based on a novel characterization of the space of equivariant polynomials. We then use these conditions to show that two recently suggested models, Tensor Field Networks and SE3-Transformers, are universal, and to devise two other novel universal architectures.
https://openreview.net/pdf/a30d8f9a04d1243085cf06f0bbc402bc235c374a.pdf
Neural Learning of One-of-Many Solutions for Combinatorial Problems in Structured Output Spaces
https://openreview.net/forum?id=ATp1nW2FuZL
https://openreview.net/forum?id=ATp1nW2FuZL
Yatin Nandwani,Deepanshu Jindal,Mausam .,Parag Singla
ICLR 2021,Poster
Recent research has proposed neural architectures for solving combinatorial problems in structured output spaces. In many such problems, there may exist multiple solutions for a given input, e.g. a partially filled Sudoku puzzle may have many completions satisfying all constraints. Further, we are often interested in finding any "one" of the possible solutions, without any preference between them. Existing approaches completely ignore this solution multiplicity. In this paper, we argue that being oblivious to the presence of multiple solutions can severely hamper their training ability. Our contribution is two-fold. First, we formally define the task of learning one-of-many solutions for combinatorial problems in structured output spaces, which is applicable for solving several problems of interest such as N-Queens, and Sudoku. Second, we present a generic learning framework that adapts an existing prediction network for a combinatorial problem to handle solution multiplicity. Our framework uses a selection module, whose goal is to dynamically determine, for every input, the solution that is most effective for training the network parameters in any given learning iteration. We propose an RL based approach to jointly train the selection module with the prediction network. Experiments on three different domains, and using two different prediction networks, demonstrate that our framework significantly improves the accuracy in our setting, obtaining up to 21 pt gain over the baselines.
https://openreview.net/pdf/09d55ad89b9e871fb60d8b0a43c9dc6e57f59bd8.pdf
GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images
https://openreview.net/forum?id=SHvF5xaueVn
https://openreview.net/forum?id=SHvF5xaueVn
Sungmin Cha,Taeeon Park,Byeongjoon Kim,Jongduk Baek,Taesup Moon
ICLR 2021,Poster
We tackle a challenging blind image denoising problem, in which only single distinct noisy images are available for training a denoiser, and no information about noise is known, except for it being zero-mean, additive, and independent of the clean image. In such a setting, which often occurs in practice, it is not possible to train a denoiser with the standard discriminative training or with the recently developed Noise2Noise (N2N) training; the former requires the underlying clean image for the given noisy image, and the latter requires a pair of independently realized noisy images for each clean image. To that end, we propose the GAN2GAN (Generated-Artificial-Noise to Generated-Artificial-Noise) method, which first learns a generative model that can 1) simulate the noise in the given noisy images and 2) generate rough, noisy estimates of the clean images, and then 3) iteratively trains a denoiser with noisy image pairs (as in N2N) subsequently synthesized from the generative model. We show that the denoiser trained with our GAN2GAN method achieves an impressive denoising performance on both synthetic and real-world datasets for the blind denoising setting; it almost approaches the performance of the standard discriminatively-trained or N2N-trained models that have more information than ours, and it significantly outperforms the recent baseline for the same setting, \textit{e.g.}, Noise2Void, and a more conventional yet strong one, BM3D. The official code of our method is available at https://github.com/csm9493/GAN2GAN.
https://openreview.net/pdf/c5f2ff1be65b47a30c22070c778b410ebb8ce7f0.pdf
CPR: Classifier-Projection Regularization for Continual Learning
https://openreview.net/forum?id=F2v4aqEL6ze
https://openreview.net/forum?id=F2v4aqEL6ze
Sungmin Cha,Hsiang Hsu,Taebaek Hwang,Flavio Calmon,Taesup Moon
ICLR 2021,Poster
We propose a general yet simple patch that can be applied to existing regularization-based continual learning methods, called classifier-projection regularization (CPR). Inspired by both recent results on neural networks with wide local minima and information theory, CPR adds an additional regularization term that maximizes the entropy of a classifier's output probability. We demonstrate that this additional term can be interpreted as a projection of the conditional probability given by a classifier's output onto the uniform distribution. By applying the Pythagorean theorem for KL divergence, we then prove that this projection may (in theory) improve the performance of continual learning methods. In our extensive experimental results, we apply CPR to several state-of-the-art regularization-based continual learning methods and benchmark performance on popular image recognition datasets. Our results demonstrate that CPR indeed promotes wide local minima and significantly improves both accuracy and plasticity while simultaneously mitigating the catastrophic forgetting of baseline continual learning methods. The codes and scripts for this work are available at https://github.com/csm9493/CPR_CL.
https://openreview.net/pdf/005d36463de5356fd7dbaf00e06791a95cada59a.pdf
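A minimal sketch of the CPR idea, assuming a generic regularization-based continual learner whose existing penalty is passed in as base_reg; the entropy weight beta is illustrative and not taken from the paper.
import torch
import torch.nn.functional as F

def cpr_loss(logits, targets, base_reg, beta=0.1):
    """Task loss + the method's existing continual-learning regularizer
    (e.g. an EWC-style penalty, precomputed and passed in as `base_reg`),
    minus beta times the entropy of the predictive distribution.
    Maximizing entropy projects the output distribution toward uniform."""
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    return ce + base_reg - beta * entropy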
On the Dynamics of Training Attention Models
https://openreview.net/forum?id=1OCTOShAmqB
https://openreview.net/forum?id=1OCTOShAmqB
Haoye Lu,Yongyi Mao,Amiya Nayak
ICLR 2021,Poster
The attention mechanism has been widely used in deep neural networks as a model component. By now, it has become a critical building block in many state-of-the-art natural language models. Despite its great empirical success, the working mechanism of attention has not been investigated at sufficient theoretical depth to date. In this paper, we set up a simple text classification task and study the dynamics of training a simple attention-based classification model using gradient descent. In this setting, we show that, for each discriminative word that the model should attend to, a persisting identity exists relating its embedding to the inner product of its key and the query. This allows us to prove that training must converge to attending to the discriminative words when the attention output is classified by a linear classifier. Experiments are performed which validate our theoretical analysis and provide further insights.
https://openreview.net/pdf/9c905fe55b11d0ae8d1aa79de080696fb34d1e13.pdf
Model-Based Offline Planning
https://openreview.net/forum?id=OMNB1G5xzd4
https://openreview.net/forum?id=OMNB1G5xzd4
Arthur Argenson,Gabriel Dulac-Arnold
ICLR 2021,Poster
Offline learning is a key part of making reinforcement learning (RL) usable in real systems. Offline RL looks at scenarios where there is data from a system's operation, but no direct access to the system when learning a policy. Recent work on training RL policies from offline data has shown results both with model-free policies learned directly from the data and with planning on top of learnt models of the data. Model-free policies tend to be more performant, but are more opaque, harder to command externally, and less easy to integrate into larger systems. We propose an offline learner that generates a model that can be used to control the system directly through planning. This allows us to have easily controllable policies directly from data, without ever interacting with the system. We show the performance of our algorithm, Model-Based Offline Planning (MBOP), on a series of robotics-inspired tasks, and demonstrate its ability to leverage planning to respect environmental constraints. We are able to find near-optimal policies for certain simulated systems from as little as 50 seconds of real-time system interaction, and create zero-shot goal-conditioned policies on a series of environments.
https://openreview.net/pdf/81c53a9c5e305d1d030b2fa5e47206fbd6535dcf.pdf
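The sketch below is a deliberately simplified random-shooting planner over a learned dynamics model with a behavior-cloned action prior; MBOP itself additionally uses an MPPI-style update and a learned value function for the planning-horizon tail. All function names and hyper-parameters here are assumptions for illustration.
import torch

@torch.no_grad()
def plan_action(dyn_model, bc_policy, reward_model, state, horizon=10, n_traj=256, noise=0.1):
    """Simplified model-based offline planner (MPC with random shooting).
    state: 1-D tensor (state_dim,).  dyn_model(s, a) -> next state,
    reward_model(s, a) -> reward, bc_policy(s) -> action prior, all trained
    purely from the offline dataset."""
    s = state.expand(n_traj, -1)                      # duplicate the state per trajectory
    returns = torch.zeros(n_traj)
    first_actions = None
    for t in range(horizon):
        a_mean = bc_policy(s)                         # behavior-cloned prior
        a = a_mean + noise * torch.randn_like(a_mean) # perturb around the prior
        if t == 0:
            first_actions = a
        returns += reward_model(s, a).squeeze(-1)
        s = dyn_model(s, a)
    return first_actions[returns.argmax()]            # execute the best first action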
Modelling Hierarchical Structure between Dialogue Policy and Natural Language Generator with Option Framework for Task-oriented Dialogue System
https://openreview.net/forum?id=kLbhLJ8OT12
https://openreview.net/forum?id=kLbhLJ8OT12
Jianhong Wang,Yuan Zhang,Tae-Kyun Kim,Yunjie Gu
ICLR 2021,Poster
Designing task-oriented dialogue systems is a challenging research topic, since it requires not only generating utterances that fulfil user requests but also guaranteeing their comprehensibility. Many previous works trained end-to-end (E2E) models with supervised learning (SL); however, the bias in annotated system utterances remains a bottleneck. Reinforcement learning (RL) deals with the problem by using non-differentiable evaluation metrics (e.g., the success rate) as rewards. Nonetheless, existing works with RL showed that the comprehensibility of generated system utterances could be corrupted when improving the performance on fulfilling user requests. In our work, we (1) propose modelling the hierarchical structure between the dialogue policy and the natural language generator (NLG) with the option framework, called HDNO, where a latent dialogue act is applied to avoid designing specific dialogue act representations; (2) train HDNO via hierarchical reinforcement learning (HRL), and suggest asynchronous updates between the dialogue policy and the NLG during training to theoretically guarantee their convergence to a local maximizer; and (3) propose using a discriminator modelled with language models as an additional reward to further improve comprehensibility. We test HDNO on MultiWoz 2.0 and MultiWoz 2.1, datasets of multi-domain dialogues, in comparison with a word-level E2E model trained with RL, LaRL, and HDSA, showing improvements in performance under both automatic evaluation metrics and human evaluation. Finally, we demonstrate the semantic meanings of latent dialogue acts to show the explainability of HDNO.
https://openreview.net/pdf/a99c17ceac45ecae4d3e3677d8e1179cd7d3900b.pdf
Distilling Knowledge from Reader to Retriever for Question Answering
https://openreview.net/forum?id=NTEz-6wysdb
https://openreview.net/forum?id=NTEz-6wysdb
Gautier Izacard,Edouard Grave
ICLR 2021,Poster
The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
https://openreview.net/pdf/de0d6419e3d4ea4c6feacdf953b46fd95a50538a.pdf
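A minimal sketch of the distillation signal, assuming the reader's cross-attention mass has already been aggregated per retrieved passage; the KL form is one of the objectives studied in the paper, and the tensor shapes are illustrative.
import torch
import torch.nn.functional as F

def distill_retriever_loss(retriever_scores, reader_attention):
    """Train the retriever to match the reader's aggregated attention over the
    retrieved passages.
    retriever_scores: (n_passages,) raw query-passage relevance scores.
    reader_attention: (n_passages,) attention mass the reader put on each passage."""
    log_q = F.log_softmax(retriever_scores, dim=0)       # retriever distribution (student)
    p = reader_attention / reader_attention.sum()        # reader distribution (teacher)
    return F.kl_div(log_q, p, reduction="sum")           # KL(p || q)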
Efficient Certified Defenses Against Patch Attacks on Image Classifiers
https://openreview.net/forum?id=hr-3PMvDpil
https://openreview.net/forum?id=hr-3PMvDpil
Jan Hendrik Metzen,Maksym Yatsura
ICLR 2021,Poster
Adversarial patches pose a realistic threat model for physical world attacks on autonomous systems via their perception component. Autonomous systems in safety-critical domains such as automated driving should thus contain a fail-safe fallback component that combines certifiable robustness against patches with efficient inference while maintaining high performance on clean inputs. We propose BagCert, a novel combination of model architecture and certification procedure that allows efficient certification. We derive a loss that enables end-to-end optimization of certified robustness against patches of different sizes and locations. On CIFAR10, BagCert certifies 10,000 examples in 43 seconds on a single GPU and obtains 86% clean and 60% certified accuracy against 5x5 patches.
https://openreview.net/pdf/964f848b9674e016ee4ad4259c6491fd8c66729a.pdf
Taking Notes on the Fly Helps Language Pre-Training
https://openreview.net/forum?id=lU5Rs_wCweN
https://openreview.net/forum?id=lU5Rs_wCweN
Qiyu Wu,Chen Xing,Yatao Li,Guolin Ke,Di He,Tie-Yan Liu
ICLR 2021,Poster
How to make unsupervised language pre-training more efficient and less resource-intensive is an important research direction in NLP. In this paper, we focus on improving the efficiency of language pre-training methods by providing better data utilization. It is well known that in a language corpus, words follow a heavy-tail distribution. A large proportion of words appear only a very few times, and the embeddings of rare words are usually poorly optimized. We argue that such embeddings carry inadequate semantic signals, which could make data utilization inefficient and slow down the pre-training of the entire model. To mitigate this problem, we propose Taking Notes on the Fly (TNF), which takes notes for rare words on the fly during pre-training to help the model understand them when they next occur. Specifically, TNF maintains a note dictionary and saves a rare word's contextual information in it as notes when the rare word occurs in a sentence. When the same rare word occurs again during training, the note information saved beforehand can be employed to enhance the semantics of the current sentence. By doing so, TNF provides better data utilization, since cross-sentence information is employed to cover the inadequate semantics caused by rare words in the sentences. We implement TNF on both BERT and ELECTRA to check its efficiency and effectiveness. Experimental results show that TNF's training time is 60% less than that of its backbone pre-training models when reaching the same performance. When trained for the same number of iterations, TNF outperforms its backbone methods on most downstream tasks and on the average GLUE score. Code is attached in the supplementary material.
https://openreview.net/pdf/954ca2d8ae9cd134cd7cb0003ecd87b3e6f3bf4e.pdf
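A minimal sketch of a TNF-style note dictionary, assuming notes are exponential moving averages of the mean contextual embedding of the sentence in which a rare word appears; the momentum and mixing weight are illustrative, not the paper's values.
import torch

class NoteDictionary:
    """For each rare token, keep a note vector updated as an EMA of the mean
    contextual embedding of sentences the token appears in; the note is added
    to the token's input embedding on its next occurrence."""
    def __init__(self, momentum=0.9):
        self.notes = {}                                   # rare token id -> note vector
        self.momentum = momentum

    def update(self, token_id, context_states):
        ctx = context_states.mean(dim=0).detach()         # (hidden_dim,)
        if token_id not in self.notes:
            self.notes[token_id] = ctx
        else:
            m = self.momentum
            self.notes[token_id] = m * self.notes[token_id] + (1 - m) * ctx

    def enrich(self, token_id, input_embedding, weight=0.5):
        note = self.notes.get(token_id)
        return input_embedding if note is None else input_embedding + weight * note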
Graph Edit Networks
https://openreview.net/forum?id=dlEJsyHGeaL
https://openreview.net/forum?id=dlEJsyHGeaL
Benjamin Paassen,Daniele Grattarola,Daniele Zambon,Cesare Alippi,Barbara Eva Hammer
ICLR 2021,Poster
While graph neural networks have made impressive progress in classification and regression, few approaches to date perform time series prediction on graphs, and those that do are mostly limited to edge changes. We suggest that graph edits are a more natural interface for graph-to-graph learning. In particular, graph edits are general enough to describe any graph-to-graph change, not only edge changes; they are sparse, making them easier to understand for humans and more efficient computationally; and they are local, avoiding the need for pooling layers in graph neural networks. In this paper, we propose a novel output layer - the graph edit network - which takes node embeddings as input and generates a sequence of graph edits that transform the input graph to the output graph. We prove that a mapping between the node sets of two graphs is sufficient to construct training data for a graph edit network and that an optimal mapping yields edit scripts that are almost as short as the graph edit distance between the graphs. We further provide a proof-of-concept empirical evaluation on several graph dynamical systems, which are difficult to learn for baselines from the literature.
https://openreview.net/pdf/febbb3ef6ff460d897054b7c7d79d3a0083df6a2.pdf
FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization
https://openreview.net/forum?id=8cpHIfgY4Dj
https://openreview.net/forum?id=8cpHIfgY4Dj
Lanqing Li,Rui Yang,Dijun Luo
ICLR 2021,Poster
We study the offline meta-reinforcement learning (OMRL) problem, a paradigm which enables reinforcement learning (RL) algorithms to quickly adapt to unseen tasks without any interactions with the environment, making RL truly practical in many real-world applications. This problem is still not fully understood, and two major challenges need to be addressed. First, offline RL usually suffers from bootstrapping errors on out-of-distribution state-actions, which lead to divergence of value functions. Second, meta-RL requires efficient and robust task inference learned jointly with the control policy. In this work, we enforce behavior regularization on the learned policy as a general approach to offline RL, combined with a deterministic context encoder for efficient task inference. We propose a novel negative-power distance metric on the bounded context embedding space, whose gradient propagation is detached from the Bellman backup. We provide analysis and insight showing that some simple design choices can yield substantial improvements over recent approaches involving meta-RL and distance metric learning. To the best of our knowledge, our method is the first model-free and end-to-end OMRL algorithm, which is computationally efficient and demonstrated to outperform prior algorithms on several meta-RL benchmarks.
https://openreview.net/pdf/44984a3c82f19e6fc4db9819ab9140e0cc3ca7e0.pdf
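A plausible instantiation of a negative-power distance-metric loss on context embeddings, given only as a hedged sketch: same-task pairs are pulled together by squared distance, different-task pairs are pushed apart by an inverse-power term. The exact power, constants, and pair construction follow the paper, not this code.
import torch

def distance_metric_loss(z_i, z_j, same_task, n=2, eps=1e-3):
    """z_i, z_j: (batch, dim) context embeddings for sampled transition pairs;
    same_task: boolean mask, True where the pair comes from the same task."""
    sq_dist = ((z_i - z_j) ** 2).sum(dim=-1)
    pull = sq_dist                                    # same task: contract
    push = 1.0 / (sq_dist ** (n / 2) + eps)           # different task: d^{-n} repulsion
    return torch.where(same_task, pull, push).mean()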
Effective and Efficient Vote Attack on Capsule Networks
https://openreview.net/forum?id=33rtZ4Sjwjn
https://openreview.net/forum?id=33rtZ4Sjwjn
Jindong Gu,Baoyuan Wu,Volker Tresp
ICLR 2021,Poster
Standard Convolutional Neural Networks (CNNs) can be easily fooled by images with small, quasi-imperceptible artificial perturbations. As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) have been shown to be more robust to white-box attacks than CNNs under popular attack protocols. Besides, the class-conditional reconstruction part of CapsNets is also used to detect adversarial examples. In this work, we investigate the adversarial robustness of CapsNets, especially how the inner workings of CapsNets change when the output capsules are attacked. The first observation is that adversarial examples mislead CapsNets by manipulating the votes from primary capsules. Another observation is the high computational cost of directly applying multi-step attack methods designed for CNNs to CapsNets, due to the computationally expensive routing mechanism. Motivated by these two observations, we propose a novel vote attack in which we attack the votes of CapsNets directly. Our vote attack is not only effective but also efficient, since it circumvents the routing process. Furthermore, we integrate our vote attack into the detection-aware attack paradigm, which can successfully bypass the class-conditional reconstruction based detection method. Extensive experiments demonstrate the superior attack performance of our vote attack on CapsNets.
https://openreview.net/pdf/93dc8fe0e28a6c86ef5c7b7c74d8c5968238850c.pdf
Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets
https://openreview.net/forum?id=rkQuFUmUOg3
https://openreview.net/forum?id=rkQuFUmUOg3
Hayeon Lee,Eunyoung Hyung,Sung Ju Hwang
ICLR 2021,Poster
Despite the success of recent Neural Architecture Search (NAS) methods on various tasks, which have been shown to output networks that largely outperform human-designed networks, conventional NAS methods have mostly tackled the optimization of searching for a network architecture for a single task (dataset), which does not generalize well across multiple tasks (datasets). Moreover, since such task-specific methods search for a neural architecture from scratch for every given task, they incur a large computational cost, which is problematic when the time and monetary budget are limited. In this paper, we propose an efficient NAS framework that is trained once on a database consisting of datasets and pretrained networks and can rapidly search for a neural architecture for a novel dataset. The proposed MetaD2A (Meta Dataset-to-Architecture) model can stochastically generate graphs (architectures) from a given set (dataset) via a cross-modal latent space learned with amortized meta-learning. Moreover, we also propose a meta-performance predictor to estimate and select the best architecture without direct training on target datasets. The experimental results demonstrate that our model, meta-learned on subsets of ImageNet-1K and architectures from the NAS-Bench-201 search space, successfully generalizes to multiple unseen datasets including CIFAR-10 and CIFAR-100, with an average search time of 33 GPU seconds. Even under the MobileNetV3 search space, MetaD2A is 5.5K times faster than NSGANetV2, a transferable NAS method, with comparable performance. We believe that MetaD2A opens a new research direction for rapid NAS, as well as ways to utilize the knowledge from rich databases of datasets and architectures accumulated over the past years. Code is available at https://github.com/HayeonLee/MetaD2A.
https://openreview.net/pdf/fdbe572e5160119399f6de757ed8a528ebdd78b1.pdf
Impact of Representation Learning in Linear Bandits
https://openreview.net/forum?id=edJ_HipawCa
https://openreview.net/forum?id=edJ_HipawCa
Jiaqi Yang,Wei Hu,Jason D. Lee,Simon Shaolei Du
ICLR 2021,Poster
We study how representation learning can improve the efficiency of bandit problems. We study the setting where we play $T$ linear bandits with dimension $d$ concurrently, and these $T$ bandit tasks share a common $k (\ll d)$ dimensional linear representation. For the finite-action setting, we present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit. When $T$ is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing $T$ bandits independently) that achieves $\widetilde{O}(T\sqrt{d N})$ regret. We also provide an $\Omega(T\sqrt{kN} + \sqrt{dkNT})$ regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound which demonstrates the benefit of representation learning in certain regimes. We also present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms.
https://openreview.net/pdf/91886e61c4bfd75e5a6cfba8ec66df9e37c55471.pdf
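As a quick back-of-the-envelope check (ours, not the paper's), the regime where the shared-representation bound beats independent learning can be read off directly:
\[
\sqrt{dkNT} \le T\sqrt{kN} \iff d \le T,
\qquad\text{so for } T \ge d:\quad
T\sqrt{kN} + \sqrt{dkNT} \le 2\,T\sqrt{kN} \ll T\sqrt{dN}\ \text{ when } k \ll d,
\]
i.e. once enough tasks are played, the bound improves on playing the $T$ bandits independently by a factor of order $\sqrt{d/k}$.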
Creative Sketch Generation
https://openreview.net/forum?id=gwnoVHIES05
https://openreview.net/forum?id=gwnoVHIES05
Songwei Ge,Vedanuj Goswami,Larry Zitnick,Devi Parikh
ICLR 2021,Poster
Sketching or doodling is a popular creative activity that people engage in. However, most existing work in automatic sketch understanding or generation has focused on sketches that are quite mundane. In this work, we introduce two datasets of creative sketches -- Creative Birds and Creative Creatures -- containing 10k sketches each along with part annotations. We propose DoodlerGAN -- a part-based Generative Adversarial Network (GAN) -- to generate unseen compositions of novel part appearances. Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches. In fact, in Creative Birds, subjects prefer sketches generated by DoodlerGAN over those drawn by humans!
https://openreview.net/pdf/373ac60cbcad73b611ba4a2ec2c15fd2747b84ea.pdf
Self-supervised Representation Learning with Relative Predictive Coding
https://openreview.net/forum?id=068E_JSq9O
https://openreview.net/forum?id=068E_JSq9O
Yao-Hung Hubert Tsai,Martin Q. Ma,Muqiao Yang,Han Zhao,Louis-Philippe Morency,Ruslan Salakhutdinov
ICLR 2021,Poster
This paper introduces Relative Predictive Coding (RPC), a new contrastive representation learning objective that maintains a good balance among training stability, minibatch size sensitivity, and downstream task performance. The key to the success of RPC is two-fold. First, RPC introduces the relative parameters to regularize the objective for boundedness and low variance. Second, RPC contains no logarithm and exponential score functions, which are the main cause of training instability in prior contrastive objectives. We empirically verify the effectiveness of RPC on benchmark vision and speech self-supervised learning tasks. Lastly, we relate RPC with mutual information (MI) estimation, showing RPC can be used to estimate MI with low variance.
https://openreview.net/pdf/282575516141216df413ed34796271b6b81c1ac1.pdf
One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks
https://openreview.net/forum?id=uz5uw6gM0m
https://openreview.net/forum?id=uz5uw6gM0m
Atish Agarwala,Abhimanyu Das,Brendan Juba,Rina Panigrahy,Vatsal Sharan,Xin Wang,Qiuyi Zhang
ICLR 2021,Poster
Can deep learning solve multiple, very different tasks simultaneously? We investigate how the representations of the underlying tasks affect the ability of a single neural network to learn them jointly. We present theoretical and empirical findings that a single neural network is capable of simultaneously learning multiple tasks from a combined data set, for a variety of methods for representing tasks---for example, when the distinct tasks are encoded by well-separated clusters or decision trees over some task-code attributes. Indeed, more strongly, we present a novel analysis that shows that families of simple programming-like constructs for the codes encoding the tasks are learnable by two-layer neural networks with standard training. We study more generally how the complexity of learning such combined tasks grows with the complexity of the task codes; we find that learning many tasks can be provably hard, even though the individual tasks are easy to learn. We provide empirical support for the usefulness of the learning bounds by training networks on clusters, decision trees, and SQL-style aggregation.
https://openreview.net/pdf/c3144d0b41029c0a6eb962e25853af28fe75daf2.pdf
Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data
https://openreview.net/forum?id=de11dbHzAMF
https://openreview.net/forum?id=de11dbHzAMF
Jonathan Pilault,Amine El hattami,Christopher Pal
ICLR 2021,Poster
Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as overfitting to low-resource tasks, catastrophic forgetting, and negative task transfer (learning interference). Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter inefficient, i.e., potentially involving one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer-based Hypernetwork Adapter consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction, we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. Using this approach, we are able to surpass single-task fine-tuning methods while being parameter and data efficient (using around 66% of the data). Compared to other BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by 2.8%, and our 24-task model outperforms models that use MTL and single-task fine-tuning by 0.7-1.0%. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets.
https://openreview.net/pdf/c3044c4a7c51d46a59a66bf5a93e9d87747fce37.pdf
A Universal Representation Transformer Layer for Few-Shot Image Classification
https://openreview.net/forum?id=04cII6MumYV
https://openreview.net/forum?id=04cII6MumYV
Lu Liu,William L. Hamilton,Guodong Long,Jing Jiang,Hugo Larochelle
ICLR 2021,Poster
Few-shot classification aims to recognize unseen classes when presented with only a small number of samples. We consider the problem of multi-domain few-shot image classification, where unseen classes and examples come from diverse data sources. This problem has seen growing interest and has inspired the development of benchmarks such as Meta-Dataset. A key challenge in this multi-domain setting is to effectively integrate the feature representations from the diverse set of training domains. Here, we propose a Universal Representation Transformer (URT) layer that meta-learns to leverage universal features for few-shot classification by dynamically re-weighting and composing the most appropriate domain-specific representations. In experiments, we show that URT sets a new state-of-the-art result on Meta-Dataset. Specifically, it achieves top performance on the highest number of data sources compared to competing methods. We analyze variants of URT and present a visualization of the attention score heatmaps that sheds light on how the model performs cross-domain generalization.
https://openreview.net/pdf/82c74f9d1bbe056efab8db3ab6e90c45142d11f3.pdf
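A single-head sketch of the URT idea, assuming M frozen domain-specific backbones have already produced features for the support set; the actual layer uses multiple heads and per-class queries, so names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class URTHead(nn.Module):
    """Attention scores over M domain-specific backbones are computed from the
    support set and used to compose one task-adapted representation."""
    def __init__(self, feat_dim):
        super().__init__()
        self.q_proj = nn.Linear(feat_dim, feat_dim)
        self.k_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, domain_feats):
        # domain_feats: (M, N, feat_dim) = M backbones applied to N support images
        task_query = self.q_proj(domain_feats.mean(dim=(0, 1)))      # (feat_dim,)
        domain_keys = self.k_proj(domain_feats.mean(dim=1))          # (M, feat_dim)
        scores = domain_keys @ task_query / domain_feats.size(-1) ** 0.5
        w = torch.softmax(scores, dim=0)                             # weight per backbone
        return (w[:, None, None] * domain_feats).sum(dim=0)          # (N, feat_dim)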
Isometric Propagation Network for Generalized Zero-shot Learning
https://openreview.net/forum?id=-mWcQVLPSPy
https://openreview.net/forum?id=-mWcQVLPSPy
Lu Liu,Tianyi Zhou,Guodong Long,Jing Jiang,Xuanyi Dong,Chengqi Zhang
ICLR 2021,Poster
Zero-shot learning (ZSL) aims to classify images of an unseen class based only on a few attributes describing that class, without access to any training samples. A popular strategy is to learn a mapping between the semantic space of class attributes and the visual space of images based on the seen classes and their data. Thus, an unseen-class image can ideally be mapped to its corresponding class attributes. The key challenge is how to align the representations in the two spaces. For most ZSL settings, the attributes for each seen/unseen class are represented only by a vector, while the seen-class data provide much more information. Thus, the imbalanced supervision from the semantic and the visual space can make the learned mapping easily overfit to the seen classes. To resolve this problem, we propose the Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency across the two spaces. Specifically, IPN learns to propagate the class representations on an auto-generated graph within each space. In contrast to only aligning the resulting static representations, we regularize the two dynamic propagation procedures to be isometric in terms of the two graphs' edge weights per step by minimizing a consistency loss between them. IPN achieves state-of-the-art performance on three popular ZSL benchmarks. To evaluate the generalization capability of IPN, we further build two larger benchmarks with more diverse unseen classes and demonstrate the advantages of IPN on them.
https://openreview.net/pdf/da2fc73fa15eb07f35399aa7799e950e718cb61e.pdf
Towards Impartial Multi-task Learning
https://openreview.net/forum?id=IMPnRXEWpvr
https://openreview.net/forum?id=IMPnRXEWpvr
Liyang Liu,Yi Li,Zhanghui Kuang,Jing-Hao Xue,Yimin Chen,Wenming Yang,Qingmin Liao,Wayne Zhang
ICLR 2021,Poster
Multi-task learning (MTL) has been widely used in representation learning. However, naively training all tasks simultaneously may lead to the partial training issue, where specific tasks are trained more adequately than others. In this paper, we propose to learn multiple tasks impartially. Specifically, for the task-shared parameters, we optimize the scaling factors via a closed-form solution, such that the aggregated gradient (the sum of raw gradients weighted by the scaling factors) has equal projections onto the individual tasks. For the task-specific parameters, we dynamically weight the task losses so that all of them are kept at a comparable scale. Further, we find the above gradient balance and loss balance to be complementary and thus propose a hybrid balance method to further improve performance. Our impartial multi-task learning (IMTL) can be trained end-to-end without any heuristic hyper-parameter tuning, and is general enough to be applied to all kinds of losses without any distributional assumption. Moreover, IMTL converges to similar results even when the task losses are designed to have different scales, and thus it is scale-invariant. We extensively evaluate IMTL on the standard MTL benchmarks including Cityscapes, NYUv2 and CelebA. It outperforms existing loss weighting methods under the same experimental settings.
https://openreview.net/pdf/1641df474b8e0e2f7dd6c0dda99081b06fed400c.pdf
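The paper gives a closed-form solution for the general multi-task case; the sketch below derives only the two-task special case directly from the stated condition (equal projections of the aggregated gradient onto each task's gradient direction). The clamping is our addition, not the paper's.
import torch

def gradient_balance_two_tasks(g1, g2, eps=1e-8):
    """g1, g2: flattened task gradients w.r.t. the shared parameters.
    Find alpha such that g = alpha*g1 + (1-alpha)*g2 has equal projections onto
    the unit vectors u1 = g1/||g1|| and u2 = g2/||g2||, i.e. g.(u1 - u2) = 0."""
    u1 = g1 / (g1.norm() + eps)
    u2 = g2 / (g2.norm() + eps)
    d = u1 - u2
    alpha = torch.dot(g2, d) / (torch.dot(g2 - g1, d) + eps)
    return alpha.clamp(0.0, 1.0)

# usage: g_shared = alpha * g1 + (1 - alpha) * g2 is applied to the shared parameters.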
On Learning Universal Representations Across Languages
https://openreview.net/forum?id=Uu1Nw-eeTxJ
https://openreview.net/forum?id=Uu1Nw-eeTxJ
Xiangpeng Wei,Rongxiang Weng,Yue Hu,Luxi Xing,Heng Yu,Weihua Luo
ICLR 2021,Poster
Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks. However, existing approaches essentially capture only the co-occurrence among tokens through the masked language model (MLM) objective with token-level cross entropy. In this work, we extend these approaches to learn sentence-level representations and show their effectiveness on cross-lingual understanding and generation. Specifically, we propose a Hierarchical Contrastive Learning (HiCTL) method to (1) learn universal representations for parallel sentences distributed in one or multiple languages and (2) distinguish the semantically-related words from a shared cross-lingual vocabulary for each sentence. We conduct evaluations on two challenging cross-lingual tasks, XTREME and machine translation. Experimental results show that HiCTL outperforms the state-of-the-art XLM-R by an absolute gain of 4.2% accuracy on the XTREME benchmark and achieves substantial improvements on both high-resource and low-resource English$\rightarrow$X translation tasks over strong baselines.
https://openreview.net/pdf/24e87f8b61ab2261652760587470c14f2fef8366.pdf
Isotropy in the Contextual Embedding Space: Clusters and Manifolds
https://openreview.net/forum?id=xYGNO86OWDH
https://openreview.net/forum?id=xYGNO86OWDH
Xingyu Cai,Jiaji Huang,Yuchen Bian,Kenneth Church
ICLR 2021,Poster
The geometric properties of contextual embedding spaces for deep language models such as BERT and ERNIE have attracted considerable attention in recent years. Investigations of contextual embeddings demonstrate a strongly anisotropic space in which most of the vectors fall within a narrow cone, leading to high cosine similarities. It is surprising that these LMs are as successful as they are, given how similar most of their embedding vectors are to one another. In this paper, we argue that isotropy does exist in the space, from a different but more constructive perspective. We identify isolated clusters and low-dimensional manifolds in the contextual embedding space, and introduce tools to both qualitatively and quantitatively analyze them. We hope the study in this paper provides insights towards a better understanding of deep language models.
https://openreview.net/pdf/8b00c8e698e9a810bfcee44a4ae5f6c3adeb7266.pdf
MoPro: Webly Supervised Learning with Momentum Prototypes
https://openreview.net/forum?id=0-EYBhgw80y
https://openreview.net/forum?id=0-EYBhgw80y
Junnan Li,Caiming Xiong,Steven Hoi
ICLR 2021,Poster
We propose a webly-supervised representation learning method that suffers neither from the annotation unscalability of supervised learning nor from the computation unscalability of self-supervised learning. Most existing works on webly-supervised representation learning adopt a vanilla supervised learning method without accounting for the prevalent noise in the training data, whereas most prior methods for learning with label noise are less effective for real-world large-scale noisy data. We propose momentum prototypes (MoPro), a simple contrastive learning method that achieves online label noise correction, out-of-distribution sample removal, and representation learning. MoPro achieves state-of-the-art performance on WebVision, a weakly-labeled noisy dataset. MoPro also shows superior performance when the pretrained model is transferred to downstream image classification and detection tasks. It outperforms the ImageNet supervised pretrained model by +10.5 on 1-shot classification on VOC, and outperforms the best self-supervised pretrained model by +17.3 when finetuned on 1% of ImageNet labeled samples. Furthermore, MoPro is more robust to distribution shifts. Code and pretrained models are available at https://github.com/salesforce/MoPro.
https://openreview.net/pdf/129bfcc800abbeedcdbb1bdfc2e469f06974f5a8.pdf
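A hedged sketch of the momentum-prototype bookkeeping; the similarity threshold, momentum, and the exact correction rule (including out-of-distribution removal, omitted here) are illustrative rather than the paper's settings.
import torch
import torch.nn.functional as F

class MomentumPrototypes:
    """Each class keeps a normalized prototype updated as a moving average of
    embeddings assigned to it; a sample whose noisy label disagrees strongly
    with the prototype similarities gets a corrected pseudo-label."""
    def __init__(self, num_classes, dim, momentum=0.999, threshold=0.8):
        self.protos = F.normalize(torch.randn(num_classes, dim), dim=1)
        self.m = momentum
        self.threshold = threshold

    def correct_label(self, z, noisy_label):
        z = F.normalize(z, dim=0)                         # embedding of one sample, (dim,)
        sims = torch.softmax(self.protos @ z, dim=0)      # soft similarity to each class
        conf, pseudo = sims.max(dim=0)
        label = pseudo.item() if conf > self.threshold else noisy_label
        self.protos[label] = F.normalize(
            self.m * self.protos[label] + (1 - self.m) * z, dim=0)
        return label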
GraphCodeBERT: Pre-training Code Representations with Data Flow
https://openreview.net/forum?id=jLoC4ez43PZ
https://openreview.net/forum?id=jLoC4ez43PZ
Daya Guo,Shuo Ren,Shuai Lu,Zhangyin Feng,Duyu Tang,Shujie LIU,Long Zhou,Nan Duan,Alexey Svyatkovskiy,Shengyu Fu,Michele Tufano,Shao Kun Deng,Colin Clement,Dawn Drain,Neel Sundaresan,Jian Yin,Daxin Jiang,Ming Zhou
ICLR 2021,Poster
Pre-trained models for programming languages have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, and code summarization. However, existing pre-trained models regard a code snippet as a sequence of tokens, ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming languages that considers the inherent structure of code. Instead of taking a syntactic-level structure of code like the abstract syntax tree (AST), we use data flow in the pre-training stage, a semantic-level structure of code that encodes the "where-the-value-comes-from" relation between variables. Such a semantic-level structure is neat and does not bring the unnecessarily deep hierarchy of an AST, a property that makes the model more efficient. We develop GraphCodeBERT based on the Transformer. In addition to the masked language modeling task, we introduce two structure-aware pre-training tasks: one predicts code structure edges, and the other aligns representations between source code and code structure. We implement the model efficiently with a graph-guided masked attention function to incorporate the code structure. We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attentions over token-level attentions in the task of code search.
https://openreview.net/pdf/9e81b47417b883d933baaf98c7e08ce4d7b14fa0.pdf
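A sketch of how a graph-guided attention mask could be assembled for a sequence laid out as [code tokens | variable nodes]; the index conventions and helper inputs are assumptions for illustration, not the released implementation.
import torch

def graph_guided_mask(n_code, n_vars, dataflow_edges, var_to_code):
    """Build a boolean mask (True = attention allowed) of size (n, n) with
    n = n_code + n_vars.  Code tokens attend to each other freely; variable
    nodes attend along data-flow ("where-the-value-comes-from") edges and to
    the code token they were extracted from."""
    n = n_code + n_vars
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:n_code, :n_code] = True                          # code tokens see each other
    for i, j in dataflow_edges:                            # variable-variable via data flow
        mask[n_code + i, n_code + j] = True
        mask[n_code + j, n_code + i] = True
    for v, t in var_to_code:                               # variable <-> its source token
        mask[n_code + v, t] = True
        mask[t, n_code + v] = True
    mask |= torch.eye(n, dtype=torch.bool)                 # always attend to self
    return mask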
Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation
https://openreview.net/forum?id=HOFxeCutxZR
https://openreview.net/forum?id=HOFxeCutxZR
Peiye Zhuang,Oluwasanmi O Koyejo,Alex Schwing
ICLR 2021,Poster
Controllable semantic image editing enables a user to change entire image attributes with a few clicks, e.g., gradually making a summer scene look like it was taken in winter. Classic approaches for this task use a Generative Adversarial Net (GAN) to learn a latent space and suitable latent-space transformations. However, current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism. To address these concerns, we learn multiple attribute transformations simultaneously, integrate attribute regression into the training of transformation functions, and apply a content loss and an adversarial loss that encourages the maintenance of image identity and photo-realism. We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work, which primarily focuses on qualitative evaluation. Our model permits better control for both single- and multiple-attribute editing while preserving image identity and realism during transformation. We provide empirical results for both natural and synthetic images, highlighting that our model achieves state-of-the-art performance for targeted image manipulation.
https://openreview.net/pdf/818a65b722eab947d8c57665b571d535ce9ca68a.pdf
Fourier Neural Operator for Parametric Partial Differential Equations
https://openreview.net/forum?id=c8P9NQVtmnO
https://openreview.net/forum?id=c8P9NQVtmnO
Zongyi Li,Nikola Borislavov Kovachki,Kamyar Azizzadenesheli,Burigede liu,Kaushik Bhattacharya,Andrew Stuart,Anima Anandkumar
ICLR 2021,Poster
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces. Recently, this has been generalized to neural operators that learn mappings between function spaces. For partial differential equations (PDEs), neural operators directly learn the mapping from any functional parametric dependence to the solution. Thus, they learn an entire family of PDEs, in contrast to classical methods which solve one instance of the equation. In this work, we formulate a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture. We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation. The Fourier neural operator is the first ML-based method to successfully model turbulent flows with zero-shot super-resolution. It is up to three orders of magnitude faster compared to traditional PDE solvers. Additionally, it achieves superior accuracy compared to previous learning-based solvers under fixed resolution.
https://openreview.net/pdf/53c47f849d1cd4d21b865caf7d774e07a5c42aa4.pdf
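A 1D sketch of the core spectral convolution in a Fourier layer: transform to Fourier space, multiply a learned complex weight on the lowest modes, transform back. The full model stacks such layers with pointwise linear maps and nonlinearities; channel and mode sizes here are illustrative.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Kernel parameterized directly in Fourier space, truncated to `modes`
    low frequencies, which is what makes the operator resolution-independent."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_ch * out_ch)
        self.weight = nn.Parameter(
            scale * torch.randn(in_ch, out_ch, modes, dtype=torch.cfloat))

    def forward(self, x):                                   # x: (batch, in_ch, n_grid)
        x_ft = torch.fft.rfft(x)                            # (batch, in_ch, n_grid//2 + 1)
        out_ft = torch.zeros(x.size(0), self.weight.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))        # back to physical space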
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs
https://openreview.net/forum?id=vYeQQ29Tbvx
https://openreview.net/forum?id=vYeQQ29Tbvx
Jonathan Frankle,David J. Schwab,Ari S. Morcos
ICLR 2021,Poster
A wide variety of deep learning techniques from style transfer to multitask learning rely on training affine transformations of features. Most prominent among these is the popular feature normalization technique BatchNorm, which normalizes activations and then subsequently applies a learned affine transform. In this paper, we aim to understand the role and expressive power of affine parameters used to transform features in this way. To isolate the contribution of these parameters from that of the learned features they transform, we investigate the performance achieved when training only these parameters in BatchNorm and freezing all weights at their random initializations. Doing so leads to surprisingly high performance considering the significant limitations that this style of training imposes. For example, sufficiently deep ResNets reach 82% (CIFAR-10) and 32% (ImageNet, top-5) accuracy in this configuration, far higher than when training an equivalent number of randomly chosen parameters elsewhere in the network. BatchNorm achieves this performance in part by naturally learning to disable around a third of the random features. Not only do these results highlight the expressive power of affine parameters in deep learning, but - in a broader sense - they characterize the expressive power of neural networks constructed simply by shifting and rescaling random features.
https://openreview.net/pdf/fd54fc17e45c5cb0f95f9e8ce5c8a3a4eb8f759d.pdf
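A minimal PyTorch sketch of the experimental setup (model choice, recent-torchvision API, and optimizer settings are illustrative): freeze every parameter, then unfreeze only the BatchNorm affine parameters.
import torch
import torchvision

# Randomly initialized ResNet; all weights stay frozen at initialization.
model = torchvision.models.resnet50(weights=None)
for p in model.parameters():
    p.requires_grad = False
# Unfreeze only the BatchNorm affine parameters: gamma (scale) and beta (shift).
for m in model.modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        m.weight.requires_grad = True
        m.bias.requires_grad = True
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)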
Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis
https://openreview.net/forum?id=1Fqg133qRaI
https://openreview.net/forum?id=1Fqg133qRaI
Bingchen Liu,Yizhe Zhu,Kunpeng Song,Ahmed Elgammal
ICLR 2021,Poster
Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GANs with minimum computing cost. We propose a lightweight GAN structure that gains superior quality at 1024^2 resolution. Notably, the model converges from scratch with just a few hours of training on a single RTX-2080 GPU, and maintains consistent performance even with fewer than 100 training samples. Two technical designs constitute our work: a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature-encoder. With thirteen datasets covering a wide variety of image domains (the datasets and code are available at https://github.com/odegeasslbc/FastGAN-pytorch), we show our model's superior performance compared to the state-of-the-art StyleGAN2 when data and computing budget are limited.
https://openreview.net/pdf/a4c6b0fcdebd9a96b6a3338b13240a2dadd71f78.pdf
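A hedged sketch of the skip-layer channel-wise excitation (SLE) module described above: a low-resolution feature map is squeezed to a per-channel gate that multiplicatively excites a much higher-resolution feature map. Kernel sizes and the activation follow common implementations rather than the exact released code.
import torch
import torch.nn as nn

class SkipLayerExcitation(nn.Module):
    """Cheap long-range skip connection: the low-resolution map produces one
    gating value per channel of the high-resolution map."""
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),
            nn.Conv2d(low_ch, high_ch, kernel_size=4, bias=False),   # -> 1x1 spatial
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(high_ch, high_ch, kernel_size=1, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x_low, x_high):
        return x_high * self.gate(x_low)      # channel-wise excitation of the high-res map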