HeteroFL: Computation and Communication Efficient Federated Learning for Heterogeneous Clients
https://openreview.net/forum?id=TNkPBBYFkXg
Enmao Diao,Jie Ding,Vahid Tarokh
ICLR 2021,Poster
Federated Learning (FL) is a method of training machine learning models on private data distributed over a large number of possibly heterogeneous clients such as mobile phones and IoT devices. In this work, we propose a new federated learning framework named HeteroFL to address heterogeneous clients equipped with very different computation and communication capabilities. Our solution enables the training of heterogeneous local models with varying computation complexities while still producing a single global inference model. For the first time, our method challenges the underlying assumption of existing work that local models have to share the same architecture as the global model. We demonstrate several strategies to enhance FL training and conduct extensive empirical evaluations, including five computation complexity levels of three model architectures on three datasets. We show that adaptively distributing subnetworks according to clients' capabilities is both computation and communication efficient.
https://openreview.net/pdf/da02aa1b25ebd5799fabfa9e199c793460ef9794.pdf
Semantic Re-tuning with Contrastive Tension
https://openreview.net/forum?id=Ov_sMNau-PF
Fredrik Carlsson,Amaru Cuba Gyllensten,Evangelia Gogoulou,Erik Ylipää Hellqvist,Magnus Sahlgren
ICLR 2021,Poster
Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge. We first demonstrate that pre-training objectives impose a significant task bias onto the final layers of models with a layer-wise survey of the Semantic Textual Similarity (STS) correlations for multiple common Transformer language models. We then propose a new self-supervised method called Contrastive Tension (CT) to counter such biases. CT frames the training objective as a noise-contrastive task between the final layer representations of two independent models, in turn making the final layer representations suitable for feature extraction. Results from multiple common unsupervised and supervised STS tasks indicate that CT outperforms previous State Of The Art (SOTA), and when combining CT with supervised data we improve upon previous SOTA results with large margins.
https://openreview.net/pdf/183f4e3fc886804360e6169ab1b7192bbe476098.pdf
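The Contrastive Tension objective described in the abstract above can be sketched as a binary noise-contrastive loss over dot products of two independent encoders' final-layer embeddings. The following numpy sketch is illustrative only; the function name, sigmoid/cross-entropy formulation, and shapes are assumptions rather than the authors' code.

```python
import numpy as np

def ct_loss(z1, z2, labels):
    """Noise-contrastive loss in the spirit of Contrastive Tension (CT).

    z1, z2 : (batch, dim) final-layer sentence embeddings produced by
             two independently initialized encoder models.
    labels : (batch,) 1.0 for identical-sentence pairs, 0.0 for random
             (negative) pairs.
    """
    logits = np.sum(z1 * z2, axis=1)        # dot-product similarity per pair
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    eps = 1e-12                             # numerical stability
    return -np.mean(labels * np.log(probs + eps)
                    + (1.0 - labels) * np.log(1.0 - probs + eps))
```

Minimizing such a loss pulls the two models' representations of the same sentence together and pushes random pairs apart, which is what makes the final-layer representations more suitable for feature extraction.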
Dataset Meta-Learning from Kernel Ridge-Regression
https://openreview.net/forum?id=l-PrrQrK0QR
Timothy Nguyen,Zhourong Chen,Jaehoon Lee
ICLR 2021,Poster
One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of $\epsilon$-approximation of datasets, obtaining datasets which are much smaller than or are significant corruptions of the original training data while maintaining similar performance. We introduce a meta-learning algorithm Kernel Inducing Points (KIP) for obtaining such remarkable datasets, drawing inspiration from recent developments in the correspondence between infinitely-wide neural networks and kernel ridge-regression (KRR). For KRR tasks, we demonstrate that KIP can compress datasets by one or two orders of magnitude, significantly improving previous dataset distillation and subset selection methods while obtaining state of the art results for MNIST and CIFAR10 classification. Furthermore, our KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime. Consequently, we obtain state of the art results for neural network dataset distillation with potential applications to privacy-preservation.
https://openreview.net/pdf/e5bd67ca9948951b21c82c12b69280270a7bfe71.pdf
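As a minimal illustration of the kernel ridge-regression predictor that KIP builds on, the sketch below fits KRR on a small support set; in KIP, this support set (the distilled dataset) would itself be meta-learned by backpropagating through the predictor. The kernel choice, function names, and hyperparameter defaults are illustrative assumptions, not the paper's setup (which uses kernels arising from infinitely wide networks).

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krr_predict(X_support, y_support, X_test, lam=1e-4, gamma=1.0):
    """Kernel ridge-regression predictions from a (possibly distilled) support set."""
    K = rbf_kernel(X_support, X_support, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_support)), y_support)
    return rbf_kernel(X_test, X_support, gamma) @ alpha
```

Because the predictor is a closed-form, differentiable function of the support set, the support points can be optimized directly against performance on the full training data.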
Auxiliary Task Update Decomposition: The Good, the Bad and the Neutral
https://openreview.net/forum?id=1GTma8HwlYp
Lucio M. Dery,Yann Dauphin,David Grangier
ICLR 2021,Poster
While deep learning has been very beneficial in data-rich settings, tasks with smaller training sets often resort to pre-training or multitask learning to leverage data from other tasks. In this case, careful consideration is needed to select tasks and model parameterizations such that updates from the auxiliary tasks actually help the primary task. We seek to alleviate this burden by formulating a model-agnostic framework that performs fine-grained manipulation of the auxiliary task gradients. We propose to decompose auxiliary updates into directions which help, damage or leave the primary task loss unchanged. This allows weighting the update directions differently depending on their impact on the problem of interest. We present a novel and efficient algorithm for that purpose and show its advantage in practice. Our method leverages efficient automatic differentiation procedures and randomized singular value decomposition for scalability. We show that our framework is generic and encompasses some prior work as particular cases. Our approach consistently outperforms strong and widely used baselines when leveraging out-of-distribution data for text and image classification tasks.
https://openreview.net/pdf/abc70350e11147c46076e6b97c615c42e2ab46d5.pdf
Fast And Slow Learning Of Recurrent Independent Mechanisms
https://openreview.net/forum?id=Lc28QAB4ypz
Kanika Madan,Nan Rosemary Ke,Anirudh Goyal,Bernhard Schölkopf,Yoshua Bengio
ICLR 2021,Poster
Decomposing knowledge into interchangeable pieces promises a generalization advantage when there are changes in distribution. A learning agent interacting with its environment is likely to be faced with situations requiring novel combinations of existing pieces of knowledge. We hypothesize that such a decomposition of knowledge is particularly relevant for being able to generalize in a systematic way to out-of-distribution changes. To study these ideas, we propose a particular training framework in which we assume that the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks. An attention mechanism dynamically selects which modules can be adapted to the current task, and the parameters of the \textit{selected} modules are allowed to change quickly as the learner is confronted with variations in what it experiences, while the parameters of the attention mechanisms act as stable, slowly changing, meta-parameters. We focus on pieces of knowledge captured by an ensemble of modules sparsely communicating with each other via a bottleneck of attention. We find that meta-learning the modular aspects of the proposed system greatly helps in achieving faster adaptation in a reinforcement learning setup involving navigation in a partially observed grid world with image-level input. We also find that reversing the role of parameters and meta-parameters does not work nearly as well, suggesting a particular role for fast adaptation of the dynamically selected modules.
https://openreview.net/pdf/023176cca43806a7d1f2ee58f5d0b4940b4331b2.pdf
Auction Learning as a Two-Player Game
https://openreview.net/forum?id=YHdeAO61l6T
Jad Rahme,Samy Jelassi,S. Matthew Weinberg
ICLR 2021,Poster
Designing an incentive compatible auction that maximizes expected revenue is a central problem in Auction Design. While theoretical approaches to the problem have hit some limits, a recent research direction initiated by Duetting et al. (2019) consists in building neural network architectures to find optimal auctions. We propose two conceptual deviations from their approach which result in enhanced performance. First, we use recent results in theoretical auction design to introduce a time-independent Lagrangian. This not only circumvents the need for an expensive hyper-parameter search (as in prior work), but also provides a single metric to compare the performance of two auctions (absent from prior work). Second, the optimization procedure in previous work uses an inner maximization loop to compute optimal misreports. We amortize this process through the introduction of an additional neural network. We demonstrate the effectiveness of our approach by learning competitive or strictly improved auctions compared to prior work. Both results together further imply a novel formulation of Auction Design as a two-player game with stationary utility functions.
https://openreview.net/pdf/4d275376b7d84287985b9093947c20acab6d0751.pdf
A PAC-Bayesian Approach to Generalization Bounds for Graph Neural Networks
https://openreview.net/forum?id=TR-Nj6nFx42
Renjie Liao,Raquel Urtasun,Richard Zemel
ICLR 2021,Poster
In this paper, we derive generalization bounds for two primary classes of graph neural networks (GNNs), namely graph convolutional networks (GCNs) and message passing GNNs (MPGNNs), via a PAC-Bayesian approach. Our result reveals that the maximum node degree and the spectral norm of the weights govern the generalization bounds of both models. We also show that our bound for GCNs is a natural generalization of the results developed in \citep{neyshabur2017pac} for fully-connected and convolutional neural networks. For MPGNNs, our PAC-Bayes bound improves over the Rademacher complexity based bound \citep{garg2020generalization}, showing a tighter dependency on the maximum node degree and the maximum hidden dimension. The key ingredients of our proofs are a perturbation analysis of GNNs and the generalization of PAC-Bayes analysis to non-homogeneous GNNs. We perform an empirical study on several synthetic and real-world graph datasets and verify that our PAC-Bayes bound is tighter than others.
https://openreview.net/pdf/0759194b8b655e695c8feaf2e2eef3a788da9130.pdf
Contextual Transformation Networks for Online Continual Learning
https://openreview.net/forum?id=zx_uX-BO7CH
Quang Pham,Chenghao Liu,Doyen Sahoo,Steven HOI
ICLR 2021,Poster
Continual learning methods with fixed architectures rely on a single network to learn models that can perform well on all tasks. As a result, they often only accommodate common features of those tasks but neglect each task's specific features. On the other hand, dynamic architecture methods can have a separate network for each task, but they are too expensive to train and not scalable in practice, especially in online settings. To address this problem, we propose a novel online continual learning method named ``Contextual Transformation Networks'' (CTN) to efficiently model the \emph{task-specific features} while enjoying negligible complexity overhead compared to other fixed architecture methods. Moreover, inspired by the Complementary Learning Systems (CLS) theory, we propose a novel dual memory design and an objective to train CTN that can address both catastrophic forgetting and knowledge transfer simultaneously. Our extensive experiments show that CTN is competitive with a large-scale dynamic architecture network and consistently outperforms other fixed architecture methods under the same standard backbone. Our implementation can be found at \url{https://github.com/phquang/Contextual-Transformation-Network}.
https://openreview.net/pdf/677e7eacc15f5a8cfa20b7a38a726599b2f960ca.pdf
Adaptive and Generative Zero-Shot Learning
https://openreview.net/forum?id=ahAUv8TI2Mz
Yu-Ying Chou,Hsuan-Tien Lin,Tyng-Luh Liu
ICLR 2021,Poster
We address the problem of generalized zero-shot learning (GZSL), where the task is to predict the class label of a target image regardless of whether its label belongs to the seen or unseen category. Similar to ZSL, the learning setting assumes that all class-level semantic features are given, while only the images of seen classes are available for training. By exploring the correlation between image features and the corresponding semantic features, the main idea of the proposed approach is to enrich the semantic-to-visual (S2V) embeddings via a seamless fusion of adaptive and generative learning. To this end, we extend the semantic features of each class by supplementing image-adaptive attention so that the learned S2V embedding can account for not only inter-class but also intra-class variations. In addition, to break the limit of training with images only from seen classes, we design a generative scheme to simultaneously generate virtual class labels and their visual features by sampling and interpolating over seen counterparts. At inference, a testing image gives rise to two different S2V embeddings, seen and virtual. The former is used to decide whether the underlying label is of the unseen category or otherwise a specific seen class; the latter is used to predict an unseen class label. To demonstrate the effectiveness of our method, we report state-of-the-art results on four standard GZSL datasets, including an ablation study of the proposed modules.
https://openreview.net/pdf/c95de71bec56a004df30033ab55061c714367261.pdf
Online Adversarial Purification based on Self-supervised Learning
https://openreview.net/forum?id=_i3ASPp12WS
Changhao Shi,Chester Holtz,Gal Mishne
ICLR 2021,Poster
Deep neural networks are known to be vulnerable to adversarial examples, where a perturbation in the input space leads to an amplified shift in the latent network representation. In this paper, we combine canonical supervised learning with self-supervised representation learning, and present Self-supervised Online Adversarial Purification (SOAP), a novel defense strategy that uses a self-supervised loss to purify adversarial examples at test-time. Our approach leverages the label-independent nature of self-supervised signals and counters the adversarial perturbation with respect to the self-supervised tasks. SOAP yields competitive robust accuracy against state-of-the-art adversarial training and purification methods, with considerably less training complexity. In addition, our approach is robust even when adversaries are given the knowledge of the purification defense strategy. To the best of our knowledge, our paper is the first that generalizes the idea of using self-supervised signals to perform online test-time purification.
https://openreview.net/pdf/c72b2431912d9433eb862f7ff1c59d589191b939.pdf
FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
https://openreview.net/forum?id=N6JECD-PI5w
Pengyu Cheng,Weituo Hao,Siyang Yuan,Shijing Si,Lawrence Carin
ICLR 2021,Poster
Pretrained text encoders, such as BERT, have been applied increasingly in various natural language processing (NLP) tasks, and have recently demonstrated significant performance gains. However, recent studies have demonstrated the existence of social bias in these pretrained NLP models. Although prior works have made progress on word-level debiasing, sentence-level fairness of pretrained encoders remains largely unexplored. In this paper, we propose the first neural debiasing method for a pretrained sentence encoder, which transforms the pretrained encoder outputs into debiased representations via a fair filter (FairFil) network. To learn the FairFil, we introduce a contrastive learning framework that not only minimizes the correlation between filtered embeddings and bias words but also preserves rich semantic information of the original sentences. On real-world datasets, our FairFil effectively reduces the bias degree of pretrained text encoders, while consistently showing desirable performance on downstream tasks. Moreover, our post hoc method does not require any retraining of the text encoders, further enlarging FairFil's application space.
https://openreview.net/pdf/c3b19ced57b7827c059693736c4217a27b682d92.pdf
Reset-Free Lifelong Learning with Skill-Space Planning
https://openreview.net/forum?id=HIGSa_3kOx3
Kevin Lu,Aditya Grover,Pieter Abbeel,Igor Mordatch
ICLR 2021,Poster
The objective of \textit{lifelong} reinforcement learning (RL) is to optimize agents which can continuously adapt and interact in changing environments. However, current RL approaches fail drastically when environments are non-stationary and interactions are non-episodic. We propose \textit{Lifelong Skill Planning} (LiSP), an algorithmic framework for lifelong RL based on planning in an abstract space of higher-order skills. We learn the skills in an unsupervised manner using intrinsic rewards and plan over the learned skills using a learned dynamics model. Moreover, our framework permits skill discovery even from offline data, thereby reducing the need for excessive real-world interactions. We demonstrate empirically that LiSP successfully enables long-horizon planning and learns agents that can avoid catastrophic failures even in challenging non-stationary and non-episodic environments derived from gridworld and MuJoCo benchmarks.
https://openreview.net/pdf/c2294e8113d0b33d3849f2a97396d946826c3de3.pdf
Efficient Empowerment Estimation for Unsupervised Stabilization
https://openreview.net/forum?id=u2YNJPcQlwq
Ruihan Zhao,Kevin Lu,Pieter Abbeel,Stas Tiomkin
ICLR 2021,Poster
Intrinsically motivated artificial agents learn advantageous behavior without externally-provided rewards. Previously, it was shown that maximizing mutual information between agent actuators and future states, known as the empowerment principle, enables unsupervised stabilization of dynamical systems at upright positions, which is a prototypical intrinsically motivated behavior for upright standing and walking. This follows from the coincidence between the objective of stabilization and the objective of empowerment. Unfortunately, sample-based estimation of this kind of mutual information is challenging. Recently, various variational lower bounds (VLBs) on empowerment have been proposed as solutions; however, they are often biased, unstable in training, and have high sample complexity. In this work, we propose an alternative solution based on a trainable representation of a dynamical system as a Gaussian channel, which allows us to efficiently calculate an unbiased estimator of empowerment by convex optimization. We demonstrate our solution for sample-based unsupervised stabilization on different dynamical control systems and show the advantages of our method by comparing it to the existing VLB approaches. Specifically, we show that our method has a lower sample complexity, is more stable in training, possesses the essential properties of the empowerment function, and allows estimation of empowerment from images. Consequently, our method opens a path to wider and easier adoption of empowerment for various applications.
https://openreview.net/pdf/59dc834b878ff1144857f1787ea553243043395c.pdf
MixKD: Towards Efficient Distillation of Large-scale Language Models
https://openreview.net/forum?id=UFGEelJkLu5
Kevin J Liang,Weituo Hao,Dinghan Shen,Yufan Zhou,Weizhu Chen,Changyou Chen,Lawrence Carin
ICLR 2021,Poster
Large-scale language models have recently demonstrated impressive empirical performance. Nevertheless, the improved results are attained at the price of bigger models, more power consumption, and slower inference, which hinder their applicability to low-resource (both memory and computation) platforms. Knowledge distillation (KD) has been demonstrated as an effective framework for compressing such big models. However, large-scale neural network systems are prone to memorize training instances, and thus tend to make inconsistent predictions when the data distribution is altered slightly. Moreover, the student model has few opportunities to request useful information from the teacher model when there is limited task-specific data available. To address these issues, we propose MixKD, a data-agnostic distillation framework that leverages mixup, a simple yet efficient data augmentation approach, to endow the resulting model with stronger generalization ability. Concretely, in addition to the original training examples, the student model is encouraged to mimic the teacher's behavior on the linear interpolation of example pairs as well. We prove from a theoretical perspective that under reasonable conditions MixKD gives rise to a smaller gap between the generalization error and the empirical error. To verify its effectiveness, we conduct experiments on the GLUE benchmark, where MixKD consistently leads to significant gains over the standard KD training, and outperforms several competitive baselines. Experiments under a limited-data setting and ablation studies further demonstrate the advantages of the proposed approach.
https://openreview.net/pdf/1973dfb092fcfb9ef04acaf338a759f67dcc68b8.pdf
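The mixup step at the core of MixKD is a convex combination of example pairs; during distillation the student is trained to match the teacher's behavior on these interpolated inputs as well. A minimal sketch of the augmentation (the function name and the Beta-distribution default are assumptions, not the paper's exact configuration):

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.4, rng=None):
    """Mixup: linearly interpolate an example pair and its targets.

    In MixKD-style distillation, y1/y2 would be the teacher's outputs,
    so the student also mimics the teacher on the mixed input.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```

Because the mixed inputs lie on line segments between training examples, they expose the student to regions of input space where the teacher's predictions would otherwise never be queried.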
CaPC Learning: Confidential and Private Collaborative Learning
https://openreview.net/forum?id=h2EbJ4_wMVq
Christopher A. Choquette-Choo,Natalie Dullerud,Adam Dziedzic,Yunxiang Zhang,Somesh Jha,Nicolas Papernot,Xiao Wang
ICLR 2021,Poster
Machine learning benefits from large training datasets, which may not always be possible to collect by any single entity, especially when using privacy-sensitive data. In many contexts, such as healthcare and finance, separate parties may wish to collaborate and learn from each other's data but are prevented from doing so due to privacy regulations. Some regulations prevent explicit sharing of data between parties by joining datasets in a central location (confidentiality). Others also limit implicit sharing of data, e.g., through model predictions (privacy). There is currently no method that enables machine learning in such a setting, where both confidentiality and privacy need to be preserved, to prevent both explicit and implicit sharing of data. Federated learning only provides confidentiality, not privacy, since gradients shared still contain private information. Differentially private learning assumes unreasonably large datasets. Furthermore, both of these learning paradigms produce a central model whose architecture was previously agreed upon by all parties rather than enabling collaborative learning where each party learns and improves their own local model. We introduce Confidential and Private Collaborative (CaPC) learning, the first method provably achieving both confidentiality and privacy in a collaborative setting. We leverage secure multi-party computation (MPC), homomorphic encryption (HE), and other techniques in combination with privately aggregated teacher models. We demonstrate how CaPC allows participants to collaborate without having to explicitly join their training sets or train a central model. Each party is able to improve the accuracy and fairness of their model, even in settings where each party has a model that performs well on their own dataset or when datasets are not IID and model architectures are heterogeneous across parties.
https://openreview.net/pdf/db02ce664fd72e5ee7ca8809ea8714aa7e6cfdb6.pdf
Multiplicative Filter Networks
https://openreview.net/forum?id=OmtmcPkkhT
Rizal Fathony,Anit Kumar Sahu,Devin Willmott,J Zico Kolter
ICLR 2021,Poster
Although deep networks are typically used to approximate functions over high dimensional inputs, recent work has increased interest in neural networks as function approximators for low-dimensional-but-complex functions, such as representing images as a function of pixel coordinates, solving differential equations, or representing signed distance fields or neural radiance fields. Key to these recent successes has been the use of new elements such as sinusoidal nonlinearities, or Fourier features in positional encodings, which vastly outperform simple ReLU networks. In this paper, we propose and empirically demonstrate that an arguably simpler class of function approximators can work just as well for such problems: multiplicative filter networks. In these networks, we avoid traditional compositional depth altogether, and simply multiply together (linear functions of) sinusoidal or Gabor wavelet functions applied to the input. This representation has the notable advantage that the entire function can simply be viewed as a linear function approximator over an exponential number of Fourier or Gabor basis functions, respectively. Despite this simplicity, when compared to recent approaches that use Fourier features with ReLU networks or sinusoidal activation networks, we show that these multiplicative filter networks largely outperform or match the performance of these recent approaches on the domains highlighted in these past works.
https://openreview.net/pdf/e0702b90f0766df82135fc5e14a2df510e9aa9d5.pdf
Planning from Pixels using Inverse Dynamics Models
https://openreview.net/forum?id=V6BjBgku7Ro
Keiran Paster,Sheila A. McIlraith,Jimmy Ba
ICLR 2021,Poster
Learning dynamics models in high-dimensional observation spaces can be challenging for model-based RL agents. We propose a novel way to learn models in a latent space by learning to predict sequences of future actions conditioned on task completion. These models track task-relevant environment dynamics over a distribution of tasks, while simultaneously serving as an effective heuristic for planning with sparse rewards. We evaluate our method on challenging visual goal completion tasks and show a substantial increase in performance compared to prior model-free approaches.
https://openreview.net/pdf/e1667f4513f3892a3eac139e23ee5198363e6741.pdf
Semi-supervised Keypoint Localization
https://openreview.net/forum?id=yFJ67zTeI2
Olga Moskvyak,Frederic Maire,Feras Dayoub,Mahsa Baktashmotlagh
ICLR 2021,Poster
Knowledge about the locations of keypoints of an object in an image can assist in fine-grained classification and identification tasks, particularly for objects that exhibit large variations in poses that greatly influence their visual appearance, such as wild animals. However, supervised training of a keypoint detection network requires annotating a large image dataset for each animal species, which is a labor-intensive task. To reduce the need for labeled data, we propose to simultaneously learn keypoint heatmaps and pose-invariant keypoint representations in a semi-supervised manner, using a small set of labeled images along with a larger set of unlabeled images. Keypoint representations are learnt with a semantic keypoint consistency constraint that forces the keypoint detection network to learn similar features for the same keypoint across the dataset. Pose invariance is achieved by making keypoint representations for the image and its augmented copies closer together in feature space. Our semi-supervised approach significantly outperforms previous methods on several benchmarks for human and animal body landmark localization.
https://openreview.net/pdf/60b9fa896e2494cd3d4cdf45231c251d92e4bb9f.pdf
Emergent Road Rules In Multi-Agent Driving Environments
https://openreview.net/forum?id=d8Q1mt2Ghw
Avik Pal,Jonah Philion,Yuan-Hong Liao,Sanja Fidler
ICLR 2021,Poster
For autonomous vehicles to safely share the road with human drivers, autonomous vehicles must abide by specific "road rules" that human drivers have agreed to follow. "Road rules" include rules that drivers are required to follow by law – such as the requirement that vehicles stop at red lights – as well as more subtle social rules – such as the implicit designation of fast lanes on the highway. In this paper, we provide empirical evidence that suggests that – instead of hard-coding road rules into self-driving algorithms – a scalable alternative may be to design multi-agent environments in which road rules emerge as optimal solutions to the problem of maximizing traffic flow. We analyze what ingredients in driving environments cause the emergence of these road rules and find that two crucial factors are noisy perception and agents’ spatial density. We provide qualitative and quantitative evidence of the emergence of seven social driving behaviors, ranging from obeying traffic signals to following lanes, all of which emerge from training agents to drive quickly to destinations without colliding. Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
https://openreview.net/pdf/858edcf2544391055e14f4c41482bcc25bb9ae3f.pdf
SSD: A Unified Framework for Self-Supervised Outlier Detection
https://openreview.net/forum?id=v5gjXpmR8J
Vikash Sehwag,Mung Chiang,Prateek Mittal
ICLR 2021,Poster
We ask the following question: what training information is required to design an effective outlier/out-of-distribution (OOD) detector, i.e., one that detects samples lying far away from the training distribution? Since unlabeled data is easily accessible for many applications, the most compelling approach is to develop detectors based on only unlabeled in-distribution data. However, we observe that most existing detectors based on unlabeled data perform poorly, often no better than a random prediction. In contrast, existing state-of-the-art OOD detectors achieve impressive performance but require access to fine-grained data labels for supervised training. We propose SSD, an outlier detector based on only unlabeled in-distribution data. We use self-supervised representation learning followed by Mahalanobis distance based detection in the feature space. We demonstrate that SSD outperforms most existing detectors based on unlabeled data by a large margin. Additionally, SSD achieves performance on par with, and sometimes even better than, detectors based on supervised training. Finally, we expand our detection framework with two key extensions. First, we formulate few-shot OOD detection, in which the detector has access to only one to five samples from each class of the targeted OOD dataset. Second, we extend our framework to incorporate training data labels, if available. We find that our novel detection framework based on SSD displays enhanced performance with these extensions, and achieves state-of-the-art performance. Our code is publicly available at https://github.com/inspire-group/SSD.
https://openreview.net/pdf/89219c090f5f217510ca46c6b68a0b62df071e81.pdf
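The detection step described above (Mahalanobis distance in a learned feature space) can be sketched as follows. The self-supervised feature extractor is omitted, and the function name and regularization constant are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def mahalanobis_ood_score(feats_train, feats_test):
    """Outlier score: squared Mahalanobis distance of each test feature
    to the in-distribution (training) feature statistics."""
    mu = feats_train.mean(axis=0)
    cov = np.cov(feats_train, rowvar=False)
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized precision
    d = feats_test - mu
    return np.einsum('ij,jk,ik->i', d, prec, d)  # one score per test row
```

Higher scores indicate samples farther from the training distribution, i.e., more likely to be OOD.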
Economic Hyperparameter Optimization with Blended Search Strategy
https://openreview.net/forum?id=VbLH04pRA3
Chi Wang,Qingyun Wu,Silu Huang,Amin Saied
ICLR 2021,Poster
We study the problem of searching, at low cost, for hyperparameter configurations in a large search space with heterogeneous evaluation cost and model quality. We propose a blended search strategy that combines the strengths of global and local search and prioritizes them on the fly, with the goal of minimizing the total cost spent in finding good configurations. Our approach demonstrates robust performance for tuning both tree-based models and deep neural networks on a large AutoML benchmark, as well as superior performance in model quality, time, and resource consumption for a production transformer-based NLP model fine-tuning task.
https://openreview.net/pdf/77d37e291c10e692ce47faac8bfed0bbbf8f58bd.pdf
Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks
https://openreview.net/forum?id=KYPz4YsCPj
https://openreview.net/forum?id=KYPz4YsCPj
Yanbang Wang,Yen-Yu Chang,Yunyu Liu,Jure Leskovec,Pan Li
ICLR 2021,Poster
Temporal networks serve as abstractions of many real-world dynamic systems. These networks typically evolve according to certain laws, such as the law of triadic closure, which is universal in social networks. Inductive representation learning of temporal networks should be able to capture such laws and further be applied to systems that follow the same laws but have not been seen during the training stage. Previous works in this area depend on either network node identities or rich edge attributes and typically fail to extract these laws. Here, we propose {\em Causal Anonymous Walks (CAWs)} to inductively represent a temporal network. CAWs are extracted by temporal random walks and work as automatic retrieval of temporal network motifs to represent network dynamics while avoiding the time-consuming selection and counting of those motifs. CAWs adopt a novel anonymization strategy that replaces node identities with the hitting counts of the nodes based on a set of sampled walks to keep the method inductive, and simultaneously establish the correlation between motifs. We further propose a neural-network model CAW-N to encode CAWs, and pair it with a CAW sampling strategy with constant memory and time cost to support online training and inference. CAW-N is evaluated to predict links over 6 real temporal networks and uniformly outperforms previous SOTA methods by an average 15\% AUC gain in the inductive setting. CAW-N also outperforms previous methods in 5 out of the 6 networks in the transductive setting.
https://openreview.net/pdf/6cc011fa593c3860f0afcadd1e157f9160471ce6.pdf
Robust Overfitting may be mitigated by properly learned smoothening
https://openreview.net/forum?id=qZzy5urZw9
https://openreview.net/forum?id=qZzy5urZw9
Tianlong Chen,Zhenyu Zhang,Sijia Liu,Shiyu Chang,Zhangyang Wang
ICLR 2021,Poster
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements. This intriguing problem of robust overfitting motivates us to seek more remedies. As a pilot study, this paper investigates two empirical means to inject more learned smoothening during AT: one leveraging knowledge distillation and self-training to smooth the logits, the other performing stochastic weight averaging (Izmailov et al., 2018) to smooth the weights. Despite their embarrassing simplicity, the two approaches are surprisingly effective and hassle-free in mitigating robust overfitting. Experiments demonstrate that by plugging them into AT, we can simultaneously boost the standard accuracy by $3.72\%\sim6.68\%$ and robust accuracy by $0.22\%\sim2.03\%$, across multiple datasets (STL-10, SVHN, CIFAR-10, CIFAR-100, and Tiny ImageNet), perturbation types ($\ell_{\infty}$ and $\ell_2$), and robustified methods (PGD, TRADES, and FGSM), establishing the new state-of-the-art bar in AT. We present systematic visualizations and analyses to dive into their possible working mechanisms. We also carefully exclude the possibility of gradient masking by evaluating our models' robustness against transfer attacks. Codes are available at https://github.com/VITA-Group/Alleviate-Robust-Overfitting.
https://openreview.net/pdf/099a32f12e88483a4451fe099750daeb8a1a0128.pdf
Local Search Algorithms for Rank-Constrained Convex Optimization
https://openreview.net/forum?id=tH6_VWZjoq
https://openreview.net/forum?id=tH6_VWZjoq
Kyriakos Axiotis,Maxim Sviridenko
ICLR 2021,Poster
We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving $\underset{\mathrm{rank}(A)\leq r^*}{\min}\, R(A)$ given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and a parameter $r^*$. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to $A$ and (b) enforcing the rank constraint on $A$. We refine and improve the theoretical analysis of Shalev-Shwartz et al. (2011), and show that if the rank-restricted condition number of $R$ is $\kappa$, a solution $A$ with rank $O(r^*\cdot \min\{\kappa \log \frac{R(\mathbf{0})-R(A^*)}{\epsilon}, \kappa^2\})$ and $R(A) \leq R(A^*) + \epsilon$ can be recovered, where $A^*$ is the optimal solution. This significantly generalizes associated results on sparse convex optimization, as well as on rank-constrained convex optimization for smooth functions. We then introduce new practical variants of these algorithms that have superior runtime and recover better solutions in practice. We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis.
https://openreview.net/pdf/458331518335ba1ae4617033b3d271418ec81093.pdf
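The two alternating steps named in the abstract above, (a) adding a rank-1 matrix and (b) enforcing the rank constraint, can be sketched on the toy objective R(A) = 0.5 * ||A - B||_F^2. This is an illustration, not the paper's exact algorithm; the step size and iteration count are hypothetical choices.

```python
import numpy as np

def rank_constrained_local_search(B, r, steps=60, eta=0.5):
    """Minimize 0.5*||A - B||_F^2 subject to rank(A) <= r by alternating
    (a) a rank-1 update along the top singular pair of the negative gradient
    and (b) SVD truncation back to rank r."""
    A = np.zeros_like(B)
    for _ in range(steps):
        G = A - B                                         # gradient of R at A
        U, s, Vt = np.linalg.svd(-G, full_matrices=False)
        A = A + eta * s[0] * np.outer(U[:, 0], Vt[0])     # (a) add rank-1 matrix
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        A = (U[:, :r] * s[:r]) @ Vt[:r]                   # (b) enforce rank <= r
    return A

rng = np.random.default_rng(0)
B = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))   # exactly rank 2
A = rank_constrained_local_search(B, r=2)                 # recovers B closely
```

Because B is exactly rank 2 here, the iterate converges to B itself; for general convex R the gradient in step (a) would be replaced accordingly.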
Learning Task Decomposition with Ordered Memory Policy Network
https://openreview.net/forum?id=vcopnwZ7bC
https://openreview.net/forum?id=vcopnwZ7bC
Yuchen Lu,Yikang Shen,Siyuan Zhou,Aaron Courville,Joshua B. Tenenbaum,Chuang Gan
ICLR 2021,Poster
Many complex real-world tasks are composed of several levels of subtasks. Humans leverage these hierarchical structures to accelerate the learning process and achieve better generalization. In this work, we study the inductive bias and propose Ordered Memory Policy Network (OMPN) to discover subtask hierarchy by learning from demonstration. The discovered subtask hierarchy can be used to perform task decomposition, recovering the subtask boundaries in an unstructured demonstration. Experiments on Craft and Dial demonstrate that our model can achieve higher task decomposition performance under both unsupervised and weakly supervised settings, compared with strong baselines. OMPN can also be directly applied to partially observable environments and still achieve higher task decomposition performance. Our visualization further confirms that the subtask hierarchy can emerge in our model.
https://openreview.net/pdf/7f228b5f98840f6f20f111e8ed6608d54277730d.pdf
Property Controllable Variational Autoencoder via Invertible Mutual Dependence
https://openreview.net/forum?id=tYxG_OMs9WE
https://openreview.net/forum?id=tYxG_OMs9WE
Xiaojie Guo,Yuanqi Du,Liang Zhao
ICLR 2021,Poster
Deep generative models have made important progress towards modeling complex, high dimensional data via learning latent representations. Their usefulness is nevertheless often limited by a lack of control over the generative process or a poor understanding of the latent representation. To overcome these issues, attention is now focused on discovering latent variables correlated to the data properties and ways to manipulate these properties. This paper presents the new Property controllable VAE (PCVAE), where a new Bayesian model is proposed to inductively bias the latent representation using explicit data properties via novel group-wise and property-wise disentanglement. Each data property corresponds seamlessly to a latent variable, by innovatively enforcing invertible mutual dependence between them. This allows us to move along the learned latent dimensions to control specific properties of the generated data with great precision. Quantitative and qualitative evaluations confirm that the PCVAE outperforms the existing models by up to 28% in capturing and 65% in manipulating the desired properties.
https://openreview.net/pdf/7a243ac1776d6ff97e3e45e24a68cc1d897b9b36.pdf
Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning
https://openreview.net/forum?id=bhCDO_cEGCz
https://openreview.net/forum?id=bhCDO_cEGCz
Zhenfang Chen,Jiayuan Mao,Jiajun Wu,Kwan-Yee Kenneth Wong,Joshua B. Tenenbaum,Chuang Gan
ICLR 2021,Poster
We study the problem of dynamic visual reasoning on raw videos. This is a challenging problem; currently, state-of-the-art models often require dense supervision on physical object properties and events from simulation, which are impractical to obtain in real life. In this paper, we present the Dynamic Concept Learner (DCL), a unified framework that grounds physical objects and events from video and language. DCL first adopts a trajectory extractor to track each object over time and to represent it as a latent, object-centric feature vector. Building upon this object-centric representation, DCL learns to approximate the dynamic interaction among objects using graph networks. DCL further incorporates a semantic parser to parse questions into semantic programs and, finally, a program executor to run the program to answer the question, leveraging the learned dynamics model. After training, DCL can detect and associate objects across the frames, ground visual properties and physical events, understand the causal relationship between events, make future and counterfactual predictions, and leverage these extracted representations for answering queries. DCL achieves state-of-the-art performance on CLEVRER, a challenging causal video reasoning dataset, even without using ground-truth attributes and collision labels from simulations for training. We further test DCL on a newly proposed video-retrieval and event localization dataset derived from CLEVRER, showing its strong generalization capacity.
https://openreview.net/pdf/b0012d0f037d3416af76be33e23dacc31d14746f.pdf
gradSim: Differentiable simulation for system identification and visuomotor control
https://openreview.net/forum?id=c_E8kFWfhp0
https://openreview.net/forum?id=c_E8kFWfhp0
J. Krishna Murthy,Miles Macklin,Florian Golemo,Vikram Voleti,Linda Petrini,Martin Weiss,Breandan Considine,Jérôme Parent-Lévesque,Kevin Xie,Kenny Erleben,Liam Paull,Florian Shkurti,Derek Nowrouzezahrai,Sanja Fidler
ICLR 2021,Poster
In this paper, we tackle the problem of estimating object physical properties such as mass, friction, and elasticity directly from video sequences. Such a system identification problem is fundamentally ill-posed due to the loss of information during image formation. Current best solutions to the problem require precise 3D labels which are labor intensive to gather, and infeasible to create for many systems such as deformable solids or cloth. In this work we present gradSim, a framework that overcomes the dependence on 3D supervision by combining differentiable multiphysics simulation and differentiable rendering to jointly model the evolution of scene dynamics and image formation. This unique combination enables backpropagation from pixels in a video sequence through to the underlying physical attributes that generated them. Furthermore, our unified computation graph across dynamics and rendering engines enables the learning of challenging visuomotor control tasks, without relying on state-based (3D) supervision, while obtaining performance competitive to/better than techniques that require precise 3D labels.
https://openreview.net/pdf/4a6d5a30558be4f1d305beba6c91e7617ddb5c96.pdf
Generative Scene Graph Networks
https://openreview.net/forum?id=RmcPm9m3tnk
https://openreview.net/forum?id=RmcPm9m3tnk
Fei Deng,Zhuo Zhi,Donghun Lee,Sungjin Ahn
ICLR 2021,Poster
Human perception excels at building compositional hierarchies of parts and objects from unlabeled scenes that help systematic generalization. Yet most work on generative scene modeling either ignores the part-whole relationship or assumes access to predefined part labels. In this paper, we propose Generative Scene Graph Networks (GSGNs), the first deep generative model that learns to discover the primitive parts and infer the part-whole relationship jointly from multi-object scenes without supervision and in an end-to-end trainable way. We formulate GSGN as a variational autoencoder in which the latent representation is a tree-structured probabilistic scene graph. The leaf nodes in the latent tree correspond to primitive parts, and the edges represent the symbolic pose variables required for recursively composing the parts into whole objects and then the full scene. This allows novel objects and scenes to be generated both by sampling from the prior and by manual configuration of the pose variables, as we do with graphics engines. We evaluate GSGN on datasets of scenes containing multiple compositional objects, including a challenging Compositional CLEVR dataset that we have developed. We show that GSGN is able to infer the latent scene graph, generalize out of the training regime, and improve data efficiency in downstream tasks.
https://openreview.net/pdf/4972f3189bc1990cd88f0c12abbe7111acfe3c15.pdf
Decentralized Attribution of Generative Models
https://openreview.net/forum?id=_kxlwvhOodK
https://openreview.net/forum?id=_kxlwvhOodK
Changhoon Kim,Yi Ren,Yezhou Yang
ICLR 2021,Poster
Growing applications of generative models have led to new threats such as malicious personation and digital copyright infringement. One solution to these threats is model attribution, i.e., the identification of user-end models where the contents under question are generated. Existing studies showed empirical feasibility of attribution through a centralized classifier trained on all existing user-end models. However, this approach is not scalable in practice, where the number of models keeps growing. Neither does it provide an attributability guarantee. To this end, this paper studies decentralized attribution, which relies on binary classifiers associated with each user-end model. Each binary classifier is parameterized by a user-specific key and distinguishes its associated model distribution from the authentic data distribution. We develop sufficient conditions of the keys that guarantee an attributability lower bound. Our method is validated on MNIST, CelebA, and FFHQ datasets. We also examine the trade-off between generation quality and robustness of attribution against adversarial post-processes.
https://openreview.net/pdf/6895a29f55a7f92f11157dd3802660ded9122484.pdf
Individually Fair Rankings
https://openreview.net/forum?id=71zCSP_HuBN
https://openreview.net/forum?id=71zCSP_HuBN
Amanda Bower,Hamid Eftekhari,Mikhail Yurochkin,Yuekai Sun
ICLR 2021,Poster
We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than prior fair LTR approaches that simply ensure the ranking model provides underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness and an efficient algorithm for optimizing the regularizer. We show that our approach leads to certifiably individually fair LTR models and demonstrate the efficacy of our method on ranking tasks subject to demographic biases.
https://openreview.net/pdf/79474c3ea4a5449a9adbae8a72783142b915282b.pdf
Adaptive Federated Optimization
https://openreview.net/forum?id=LkFG3lB13U5
https://openreview.net/forum?id=LkFG3lB13U5
Sashank J. Reddi,Zachary Charles,Manzil Zaheer,Zachary Garrett,Keith Rush,Jakub Konečný,Sanjiv Kumar,Hugh Brendan McMahan
ICLR 2021,Poster
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general non-convex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
https://openreview.net/pdf/d3f38daf93af27b20819fe19a4c3ca3f2635d9b1.pdf
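The adaptive server update described in the abstract above can be sketched as follows: the server treats the average client delta as a pseudo-gradient and applies an Adam-style step. This is a toy illustration under assumed details, not the paper's code; the client behavior (one local step toward each client's own optimum), learning rate, and decay constants are hypothetical stand-ins.

```python
import numpy as np

def fedadam_round(w, client_ws, state, lr=0.05, b1=0.9, b2=0.99, tau=1e-3):
    """One server round: average client deltas into a pseudo-gradient,
    then apply an Adam-style update on the server."""
    delta = np.mean([cw - w for cw in client_ws], axis=0)  # pseudo-gradient
    state['m'] = b1 * state['m'] + (1 - b1) * delta
    state['v'] = b2 * state['v'] + (1 - b2) * delta ** 2
    return w + lr * state['m'] / (np.sqrt(state['v']) + tau)

# Two heterogeneous clients with different local optima (toy setup).
targets = [np.array([1.0, -2.0, 3.0]), np.array([2.0, -1.0, 2.0])]
w = np.zeros(3)
state = {'m': np.zeros(3), 'v': np.zeros(3)}
for _ in range(300):
    client_ws = [w + 0.5 * (t - w) for t in targets]  # one local step each
    w = fedadam_round(w, client_ws, state)
# w drifts toward the average of the client optima
```

Swapping the server update for plain averaging of `client_ws` would recover a FedAvg-like round; the Adam state is what makes the round "adaptive".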
GANs Can Play Lottery Tickets Too
https://openreview.net/forum?id=1AoMhc_9jER
https://openreview.net/forum?id=1AoMhc_9jER
Xuxi Chen,Zhenyu Zhang,Yongduo Sui,Tianlong Chen
ICLR 2021,Poster
Deep generative adversarial networks (GANs) have gained growing popularity in numerous scenarios, but usually suffer from high parameter complexity, which hinders deployment in resource-constrained real-world applications. However, the compression of GANs has been less explored. A few works show that heuristically applying compression techniques normally leads to unsatisfactory results, due to the notorious training instability of GANs. In parallel, the lottery ticket hypothesis shows prevailing success on discriminative models, in locating sparse matching subnetworks capable of training in isolation to full model performance. In this work, we for the first time study the existence of such trainable matching subnetworks in deep GANs. For a range of GANs, we consistently find matching subnetworks at $67\%$-$74\%$ sparsity. We observe that whether the discriminator is pruned has only a minor effect on the existence and quality of matching subnetworks, while the initialization weights used in the discriminator play a significant role. We then show the powerful transferability of these subnetworks to unseen tasks. Furthermore, extensive experimental results demonstrate that our found subnetworks substantially outperform previous state-of-the-art GAN compression approaches in both image generation (e.g. SNGAN) and image-to-image translation GANs (e.g. CycleGAN). Codes available at https://github.com/VITA-Group/GAN-LTH.
https://openreview.net/pdf/f9f13cd41ac8fcc30b5177eac267e3a61229f0e4.pdf
Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein
https://openreview.net/forum?id=DiQD7FWL233
https://openreview.net/forum?id=DiQD7FWL233
Khai Nguyen,Son Nguyen,Nhat Ho,Tung Pham,Hung Bui
ICLR 2021,Poster
Relational regularized autoencoder (RAE) is a framework to learn the distribution of data by minimizing a reconstruction loss together with a relational regularization on the prior of latent space. A recent attempt to reduce the inner discrepancy between the prior and aggregated posterior distributions is to incorporate sliced fused Gromov-Wasserstein (SFG) between these distributions. That approach has a weakness: it treats every slicing direction similarly, even though several directions are not useful for the discriminative task. To improve the discrepancy and consequently the relational regularization, we propose a new relational discrepancy, named spherical sliced fused Gromov Wasserstein (SSFG), that can find an important area of projections characterized by a von Mises-Fisher distribution. Then, we introduce two variants of SSFG to improve its performance. The first variant, named mixture spherical sliced fused Gromov Wasserstein (MSSFG), replaces the vMF distribution by a mixture of von Mises-Fisher distributions to capture multiple important areas of directions that are far from each other. The second variant, named power spherical sliced fused Gromov Wasserstein (PSSFG), replaces the vMF distribution by a power spherical distribution to improve the sampling time of the vMF distribution in high-dimensional settings. We then apply the new discrepancies to the RAE framework to obtain its new variants. Finally, we conduct extensive experiments to show that the new autoencoders have favorable performance in learning latent manifold structure, image generation, and reconstruction.
https://openreview.net/pdf/d9cff7f7264e7a99f687eeb02f8e8bcf415be338.pdf
Learning Reasoning Paths over Semantic Graphs for Video-grounded Dialogues
https://openreview.net/forum?id=hPWj1qduVw8
https://openreview.net/forum?id=hPWj1qduVw8
Hung Le,Nancy F. Chen,Steven Hoi
ICLR 2021,Poster
Compared to traditional visual question answering, video-grounded dialogues require additional reasoning over the dialogue context to answer questions in a multi-turn setting. Previous approaches to video-grounded dialogues mostly use dialogue context as a simple text input without modelling the inherent information flows at the turn level. In this paper, we propose a novel framework of Reasoning Paths in Dialogue Context (PDC). The PDC model discovers information flows among dialogue turns through a semantic graph constructed based on lexical components in each question and answer. It then learns to predict reasoning paths over this semantic graph. Our path prediction model predicts a path from the current turn through past dialogue turns that contain additional visual cues to answer the current question. Our reasoning model sequentially processes both visual and textual information through this reasoning path, and the propagated features are used to generate the answer. Our experimental results demonstrate the effectiveness of our method and provide additional insights on how models use semantic dependencies in a dialogue context to retrieve visual cues.
https://openreview.net/pdf/c7f4e978ac75833ccb55c20a8c0c0e1e3f25c2f0.pdf
Extreme Memorization via Scale of Initialization
https://openreview.net/forum?id=Z4R1vxLbRLO
https://openreview.net/forum?id=Z4R1vxLbRLO
Harsh Mehta,Ashok Cutkosky,Behnam Neyshabur
ICLR 2021,Poster
We construct an experimental setup in which changing the scale of initialization strongly impacts the implicit regularization induced by SGD, interpolating from good generalization performance to completely memorizing the training set while making little progress on the test set. Moreover, we find that the extent and manner in which generalization ability is affected depends on the activation and loss function used, with sin activation being the most extreme. In the case of the homogeneous ReLU activation, we show that this behavior can be attributed to the loss function. Our empirical investigation reveals that increasing the scale of initialization correlates with misalignment of representations and gradients across examples in the same class. This insight allows us to devise an alignment measure over gradients and representations which can capture this phenomenon. We demonstrate that our alignment measure correlates with generalization of deep models trained on image classification tasks.
https://openreview.net/pdf/80a1ad20645ef877ad4166e2c824ab56fda83b7c.pdf
Teaching with Commentaries
https://openreview.net/forum?id=4RbdgBh9gE
https://openreview.net/forum?id=4RbdgBh9gE
Aniruddh Raghu,Maithra Raghu,Simon Kornblith,David Duvenaud,Geoffrey Hinton
ICLR 2021,Poster
Effective training of deep neural networks can be challenging, and there remain many open questions on how to best learn these models. Recently developed methods to improve neural network training examine teaching: providing learned information during the training process to improve downstream model performance. In this paper, we take steps towards extending the scope of teaching. We propose a flexible teaching framework using commentaries, learned meta-information helpful for training on a particular task. We present gradient-based methods to learn commentaries, leveraging recent work on implicit differentiation for scalability. We explore diverse applications of commentaries, from weighting training examples, to parameterising label-dependent data augmentation policies, to representing attention masks that highlight salient image regions. We find that commentaries can improve training speed and/or performance, and provide insights about the dataset and training process. We also observe that commentaries generalise: they can be reused when training new models to obtain performance benefits, suggesting a use-case where commentaries are stored with a dataset and leveraged in future for improved model training.
https://openreview.net/pdf/f5e220ca55cfe80991bc55b5fde70e5a2e3b7d71.pdf
In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning
https://openreview.net/forum?id=-ODN6SbiUU
https://openreview.net/forum?id=-ODN6SbiUU
Mamshad Nayeem Rizve,Kevin Duarte,Yogesh S Rawat,Mubarak Shah
ICLR 2021,Poster
The recent research in semi-supervised learning (SSL) is mostly dominated by consistency regularization based methods which achieve strong performance. However, they heavily rely on domain-specific data augmentations, which are not easy to generate for all data modalities. Pseudo-labeling (PL) is a general SSL approach that does not have this constraint but performs relatively poorly in its original formulation. We argue that PL underperforms due to the erroneous high confidence predictions from poorly calibrated models; these predictions generate many incorrect pseudo-labels, leading to noisy training. We propose an uncertainty-aware pseudo-label selection (UPS) framework which improves pseudo labeling accuracy by drastically reducing the amount of noise encountered in the training process. Furthermore, UPS generalizes the pseudo-labeling process, allowing for the creation of negative pseudo-labels; these negative pseudo-labels can be used for multi-label classification as well as negative learning to improve the single-label classification. We achieve strong performance when compared to recent SSL methods on the CIFAR-10 and CIFAR-100 datasets. Also, we demonstrate the versatility of our method on the video dataset UCF-101 and the multi-label dataset Pascal VOC.
https://openreview.net/pdf/c979bcaed90f2b14dbf27b5e90fdbb74407f161b.pdf
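The selection step described in the UPS abstract above can be sketched as follows. The thresholds and the per-sample uncertainty estimate (e.g. from multiple stochastic forward passes) are hypothetical stand-ins, not the paper's exact criteria.

```python
import numpy as np

def select_pseudo_labels(probs, uncertainty, conf_thresh=0.9,
                         unc_thresh=0.05, neg_thresh=0.05):
    """Keep positive pseudo-labels only for confident, low-uncertainty
    predictions; emit negative pseudo-labels for classes the model
    confidently rules out."""
    preds = probs.argmax(axis=1)
    positive = (probs.max(axis=1) >= conf_thresh) & (uncertainty <= unc_thresh)
    negative = probs <= neg_thresh   # per-class "not this class" labels
    return preds, positive, negative

probs = np.array([[0.95, 0.03, 0.02],    # confident, low uncertainty -> keep
                  [0.50, 0.30, 0.20],    # not confident -> drop
                  [0.92, 0.04, 0.04]])   # confident but uncertain -> drop
uncertainty = np.array([0.01, 0.01, 0.20])
preds, positive, negative = select_pseudo_labels(probs, uncertainty)
```

The `negative` mask is what enables the negative-learning use case the abstract mentions: classes with near-zero probability become "not this class" targets.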
Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds
https://openreview.net/forum?id=MDsQkFP1Aw
https://openreview.net/forum?id=MDsQkFP1Aw
Efthymios Tzinis,Scott Wisdom,Aren Jansen,Shawn Hershey,Tal Remez,Dan Ellis,John R. Hershey
ICLR 2021,Poster
Recent progress in deep learning has enabled many advances in sound separation and visual scene understanding. However, extracting sound sources which are apparent in natural videos remains an open problem. In this work, we present AudioScope, a novel audio-visual sound separation framework that can be trained without supervision to isolate on-screen sound sources from real in-the-wild videos. Prior audio-visual separation work assumed artificial limitations on the domain of sound classes (e.g., to speech or music), constrained the number of sources, and required strong sound separation or visual segmentation labels. AudioScope overcomes these limitations, operating on an open domain of sounds, with variable numbers of sources, and without labels or prior visual segmentation. The training procedure for AudioScope uses mixture invariant training (MixIT) to separate synthetic mixtures of mixtures (MoMs) into individual sources, where noisy labels for mixtures are provided by an unsupervised audio-visual coincidence model. Using the noisy labels, along with attention between video and audio features, AudioScope learns to identify audio-visual similarity and to suppress off-screen sounds. We demonstrate the effectiveness of our approach using a dataset of video clips extracted from open-domain YFCC100m video data. This dataset contains a wide diversity of sound classes recorded in unconstrained conditions, making the application of previous methods unsuitable. For evaluation and semi-supervised experiments, we collected human labels for presence of on-screen and off-screen sounds on a small subset of clips.
https://openreview.net/pdf/30b613d34d9d0b25c3ed4bf3ba159cd74ba805b3.pdf
Cut out the annotator, keep the cutout: better segmentation with weak supervision
https://openreview.net/forum?id=bjkX6Kzb5H
https://openreview.net/forum?id=bjkX6Kzb5H
Sarah Hooper,Michael Wornow,Ying Hang Seah,Peter Kellman,Hui Xue,Frederic Sala,Curtis Langlotz,Christopher Re
ICLR 2021,Poster
Constructing large, labeled training datasets for segmentation models is an expensive and labor-intensive process. This is a common challenge in machine learning, addressed by methods that require few or no labeled data points such as few-shot learning (FSL) and weakly-supervised learning (WS). Such techniques, however, have limitations when applied to image segmentation---FSL methods often produce noisy results and are strongly dependent on which few datapoints are labeled, while WS models struggle to fully exploit rich image information. We propose a framework that fuses FSL and WS for segmentation tasks, enabling users to train high-performing segmentation networks with very few hand-labeled training points. We use FSL models as weak sources in a WS framework, requiring a very small set of reference labeled images, and introduce a new WS model that focuses on key areas---areas with contention among noisy labels---of the image to fuse these weak sources. Empirically, we evaluate our proposed approach over seven well-motivated segmentation tasks. We show that our methods come within 1.4 Dice points of fully supervised networks while requiring only five hand-labeled training points. Compared to existing FSL methods, our approach improves performance by a mean 3.6 Dice points over the next-best method.
https://openreview.net/pdf/483be50ec4cee1c25de217a88795d4d99938cb4a.pdf
CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding
https://openreview.net/forum?id=Ozk9MrX1hvA
https://openreview.net/forum?id=Ozk9MrX1hvA
Yanru Qu,Dinghan Shen,Yelong Shen,Sandra Sajeev,Weizhu Chen,Jiawei Han
ICLR 2021,Poster
Data augmentation has been demonstrated as an effective strategy for improving model generalization and data efficiency. However, due to the discrete nature of natural language, designing label-preserving transformations for text data tends to be more challenging. In this paper, we propose a novel data augmentation framework dubbed CoDA, which synthesizes diverse and informative augmented examples by integrating multiple transformations organically. Moreover, a contrastive regularization is introduced to capture the global relationship among all the data samples. A momentum encoder along with a memory bank is further leveraged to better estimate the contrastive loss. To verify the effectiveness of the proposed framework, we apply CoDA to Transformer-based models on a wide range of natural language understanding tasks. On the GLUE benchmark, CoDA gives rise to an average improvement of 2.2% when applied to the RoBERTa-large model. More importantly, it consistently exhibits stronger results relative to several competitive data augmentation and adversarial training baselines (including the low-resource settings). Extensive experiments show that the proposed contrastive objective can be flexibly combined with various data augmentation approaches to further boost their performance, highlighting the wide applicability of the CoDA framework.
https://openreview.net/pdf/f00c2ea329ae5573307a659b808b791fca635c77.pdf
Deep Learning meets Projective Clustering
https://openreview.net/forum?id=EQfpYwF3-b
https://openreview.net/forum?id=EQfpYwF3-b
Alaa Maalouf,Harry Lang,Daniela Rus,Dan Feldman
ICLR 2021,Poster
A common approach for compressing Natural Language Processing (NLP) networks is to encode the embedding layer as a matrix $A\in\mathbb{R}^{n\times d}$, compute its rank-$j$ approximation $A_j$ via SVD (Singular Value Decomposition), and then factor $A_j$ into a pair of matrices that correspond to smaller fully-connected layers to replace the original embedding layer. Geometrically, the rows of $A$ represent points in $\mathbb{R}^d$, and the rows of $A_j$ represent their projections onto the $j$-dimensional subspace that minimizes the sum of squared distances (``errors'') to the points. In practice, these rows of $A$ may be spread around $k>1$ subspaces, so factoring $A$ based on a single subspace may lead to large errors that turn into large drops in accuracy. Inspired by \emph{projective clustering} from computational geometry, we suggest replacing this subspace by a set of $k$ subspaces, each of dimension $j$, that minimizes the sum of squared distances over every point (row in $A$) to its \emph{closest} subspace. Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of $k$ small layers that operate in parallel and are then recombined with a single fully-connected layer. Extensive experimental results on the GLUE benchmark yield networks that are both more accurate and smaller compared to the standard matrix factorization (SVD). For example, we further compress DistilBERT by reducing the size of the embedding layer by $40\%$ while incurring only a $0.5\%$ average drop in accuracy over all nine GLUE tasks, compared to a $2.8\%$ drop using the existing SVD approach. On RoBERTa we achieve $43\%$ compression of the embedding layer with less than a $0.8\%$ average drop in accuracy as compared to a $3\%$ drop previously.
https://openreview.net/pdf/b30e3cfa2920dfd21d347c92e0226bcb13aab969.pdf
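The rank-$j$ SVD baseline this abstract compares against can be sketched in a few lines of NumPy; the matrix sizes and names below are illustrative, not taken from the paper:

```python
import numpy as np

def factor_embedding(A, j):
    """Rank-j SVD baseline: factor an n x d embedding matrix A into two
    smaller layers U (n x j) and V (j x d), so that U @ V approximates A."""
    u, s, vt = np.linalg.svd(A, full_matrices=False)  # thin SVD
    U = u[:, :j] * s[:j]   # n x j lookup table
    V = vt[:j, :]          # j x d fully-connected layer
    return U, V

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 16))   # toy "embedding layer"
U, V = factor_embedding(A, j=4)
A4 = U @ V   # best rank-4 approximation of A in Frobenius norm
```

The paper's proposed method replaces this single subspace with $k$ subspaces found by projective clustering; the per-cluster factorization step is analogous to the above.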
Learning to Deceive Knowledge Graph Augmented Models via Targeted Perturbation
https://openreview.net/forum?id=b7g3_ZMHnT0
https://openreview.net/forum?id=b7g3_ZMHnT0
Mrigank Raman,Aaron Chan,Siddhant Agarwal,PeiFeng Wang,Hansen Wang,Sungchul Kim,Ryan Rossi,Handong Zhao,Nedim Lipka,Xiang Ren
ICLR 2021,Poster
Knowledge graphs (KGs) have helped neural models improve performance on various knowledge-intensive tasks, like question answering and item recommendation. By using attention over the KG, such KG-augmented models can also "explain" which KG information was most relevant for making a given prediction. In this paper, we question whether these models are really behaving as we expect. We show that, through a reinforcement learning policy (or even simple heuristics), one can produce deceptively perturbed KGs, which maintain the downstream performance of the original KG while significantly deviating from the original KG's semantics and structure. Our findings raise doubts about KG-augmented models' ability to reason about KG information and give sensible explanations.
https://openreview.net/pdf/f507111c61d895cf0cf9f23f8fdd018a9ca5717d.pdf
Knowledge Distillation as Semiparametric Inference
https://openreview.net/forum?id=m4UCf24r0Y
https://openreview.net/forum?id=m4UCf24r0Y
Tri Dao,Govinda M Kamath,Vasilis Syrgkanis,Lester Mackey
ICLR 2021,Poster
A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model. Surprisingly, this two-step knowledge distillation process often leads to higher accuracy than training the student directly on labeled data. To explain and enhance this phenomenon, we cast knowledge distillation as a semiparametric inference problem with the optimal student model as the target, the unknown Bayes class probabilities as nuisance, and the teacher probabilities as a plug-in nuisance estimate. By adapting modern semiparametric tools, we derive new guarantees for the prediction error of standard distillation and develop two enhancements—cross-fitting and loss correction—to mitigate the impact of teacher overfitting and underfitting on student performance. We validate our findings empirically on both tabular and image data and observe consistent improvements from our knowledge distillation enhancements.
https://openreview.net/pdf/1ff09cb99e2aa00a9e0a0dfe445b3bc32eee2418.pdf
Meta-Learning with Neural Tangent Kernels
https://openreview.net/forum?id=Ti87Pv5Oc8
https://openreview.net/forum?id=Ti87Pv5Oc8
Yufan Zhou,Zhenyi Wang,Jiayi Xian,Changyou Chen,Jinhui Xu
ICLR 2021,Poster
Model Agnostic Meta-Learning (MAML) has emerged as a standard framework for meta-learning, where a meta-model is learned with the ability to adapt quickly to new tasks. However, as a double-looped optimization problem, MAML needs to differentiate through the whole inner-loop optimization path for every outer-loop training step, which may lead to both computational inefficiency and sub-optimal solutions. In this paper, we generalize MAML to allow meta-learning to be defined in function spaces, and propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK). Within this paradigm, we introduce two meta-learning algorithms in the RKHS, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework. We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory. Extensive experimental studies demonstrate the advantages of our paradigm in both efficiency and quality of solutions compared to related meta-learning algorithms. Our experiments also demonstrate that the proposed methods are more robust to adversarial attacks and out-of-distribution adaptation than popular baselines.
https://openreview.net/pdf/07382947621a75697286cffb9d20483d2fd8337e.pdf
Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
https://openreview.net/forum?id=9r30XCjf5Dt
https://openreview.net/forum?id=9r30XCjf5Dt
Yanchao Sun,Da Huo,Furong Huang
ICLR 2021,Poster
Poisoning attacks on Reinforcement Learning (RL) systems could exploit vulnerabilities of RL algorithms and cause the learning to fail. However, prior works on poisoning RL usually either unrealistically assume the attacker knows the underlying Markov Decision Process (MDP), or directly apply poisoning methods from supervised learning to RL. In this work, we build a generic poisoning framework for online RL via a comprehensive investigation of heterogeneous poisoning models in RL. Without any prior knowledge of the MDP, we propose a strategic poisoning algorithm called Vulnerability-Aware Adversarial Critic Poison (VA2C-P), which works for on-policy deep RL agents, closing the gap that no poisoning method previously existed for policy-based RL agents. VA2C-P uses a novel metric, stability radius in RL, that measures the vulnerability of RL algorithms. Experiments on multiple deep RL agents and multiple environments show that our poisoning algorithm successfully prevents agents from learning a good policy or teaches the agents to converge to a target policy, with a limited attacking budget.
https://openreview.net/pdf/fb9e902c18157059497d56cdc36770d12b05acf4.pdf
Understanding and Improving Lexical Choice in Non-Autoregressive Translation
https://openreview.net/forum?id=ZTFeSBIX9C
https://openreview.net/forum?id=ZTFeSBIX9C
Liang Ding,Longyue Wang,Xuebo Liu,Derek F. Wong,Dacheng Tao,Zhaopeng Tu
ICLR 2021,Poster
Knowledge distillation (KD) is essential for training non-autoregressive translation (NAT) models by reducing the complexity of the raw data with an autoregressive teacher model. In this study, we empirically show that as a side effect of this training, the lexical choice errors on low-frequency words are propagated to the NAT model from the teacher model. To alleviate this problem, we propose to expose the raw data to NAT models to restore the useful information of low-frequency words, which are missed in the distilled data. To this end, we introduce an extra Kullback-Leibler divergence term derived by comparing the lexical choice of NAT model and that embedded in the raw data. Experimental results across language pairs and model architectures demonstrate the effectiveness and universality of the proposed approach. Extensive analyses confirm our claim that our approach improves performance by reducing the lexical choice errors on low-frequency words. Encouragingly, our approach pushes the SOTA NAT performance on the WMT14 English-German and WMT16 Romanian-English datasets up to 27.8 and 33.8 BLEU points, respectively.
https://openreview.net/pdf/ba4c60d18c1a69639e2d9988925bcd11396ff936.pdf
Layer-adaptive Sparsity for the Magnitude-based Pruning
https://openreview.net/forum?id=H6ATjJ0TKdf
https://openreview.net/forum?id=H6ATjJ0TKdf
Jaeho Lee,Sejun Park,Sangwoo Mo,Sungsoo Ahn,Jinwoo Shin
ICLR 2021,Poster
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves a state-of-the-art tradeoff between sparsity and performance. However, without a clear consensus on ``how to choose,'' the layerwise sparsities are mostly selected algorithm-by-algorithm, often resorting to handcrafted heuristics or an extensive hyperparameter search. To fill this gap, we propose a novel importance score for global pruning, coined the layer-adaptive magnitude-based pruning (LAMP) score; the score is a rescaled version of the weight magnitude that incorporates the model-level $\ell_2$ distortion incurred by pruning, and does not require any hyperparameter tuning or heavy computation. Under various image classification setups, LAMP consistently outperforms popular existing schemes for layerwise sparsity selection. Furthermore, we observe that LAMP continues to outperform baselines even in weight-rewinding setups, while the connectivity-oriented layerwise sparsity (the strongest baseline overall) performs worse than a simple global magnitude-based pruning in this case. Code: https://github.com/jaeho-lee/layer-adaptive-sparsity
https://openreview.net/pdf/6c6e88f6354b6fb0bc2955ecb9e518ca2f65432f.pdf
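A minimal sketch of a LAMP-style score, based only on the description above (squared magnitude rescaled by the surviving squared mass within the layer); the exact formulation in the paper may differ:

```python
import numpy as np

def lamp_scores(weights):
    """Sketch of a LAMP-style score for one layer: each squared weight is
    rescaled by the total squared mass of all weights in the layer whose
    magnitude is at least as large."""
    flat = weights.flatten() ** 2
    order = np.argsort(flat)                     # ascending magnitude
    sorted_sq = flat[order]
    suffix = np.cumsum(sorted_sq[::-1])[::-1]    # mass of this weight and larger
    scores = np.empty_like(flat)
    scores[order] = sorted_sq / suffix
    return scores.reshape(weights.shape)

# Global pruning would then remove the weights with the smallest scores
# across all layers, which induces a layerwise sparsity automatically.
W = np.array([[0.1, -2.0], [0.5, 1.0]])
s = lamp_scores(W)
```

Note that the largest-magnitude weight in a layer always receives score 1, so globally pruning by smallest score never empties a layer entirely.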
Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning
https://openreview.net/forum?id=n1HD8M6WGn
https://openreview.net/forum?id=n1HD8M6WGn
Xuebo Liu,Longyue Wang,Derek F. Wong,Liang Ding,Lidia S. Chao,Zhaopeng Tu
ICLR 2021,Poster
Encoder layer fusion (EncoderFusion) is a technique to fuse all the encoder layers (instead of the uppermost layer) for sequence-to-sequence (Seq2Seq) models, which has proven effective on various NLP tasks. However, it is still not entirely clear why and when EncoderFusion should work. In this paper, our main contribution is to take a step further in understanding EncoderFusion. Many previous studies believe that the success of EncoderFusion comes from exploiting surface and syntactic information embedded in lower encoder layers. Unlike them, we find that the encoder embedding layer is more important than the other intermediate encoder layers. In addition, the uppermost decoder layer consistently pays more attention to the encoder embedding layer across NLP tasks. Based on this observation, we propose a simple fusion method, SurfaceFusion, which fuses only the encoder embedding layer for the softmax layer. Experimental results show that SurfaceFusion outperforms EncoderFusion on several NLP benchmarks, including machine translation, text summarization, and grammatical error correction. It obtains state-of-the-art performance on the WMT16 Romanian-English and WMT14 English-French translation tasks. Extensive analyses reveal that SurfaceFusion learns more expressive bilingual word embeddings by building a closer relationship between relevant source and target embeddings. Source code is freely available at https://github.com/SunbowLiu/SurfaceFusion.
https://openreview.net/pdf/aabc62bd94feebbc116e4d479e55dd7b0d856959.pdf
SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization
https://openreview.net/forum?id=-M0QkvBGTTq
https://openreview.net/forum?id=-M0QkvBGTTq
A F M Shahab Uddin,Mst. Sirazam Monira,Wheemyung Shin,TaeChoong Chung,Sung-Ho Bae
ICLR 2021,Poster
Advanced data augmentation strategies have been widely studied to improve the generalization ability of deep learning models. Regional dropout is one of the popular solutions that guides the model to focus on less discriminative parts by randomly removing image regions, resulting in improved regularization. However, such information removal is undesirable. On the other hand, recent strategies suggest randomly cutting and mixing patches and their labels among training images, to enjoy the advantages of regional dropout without leaving any pointless pixels in the augmented images. We argue that such randomly selected patches may not carry sufficient information about the corresponding object, and mixing labels according to an uninformative patch leads the model to learn unexpected feature representations. Therefore, we propose SaliencyMix, which carefully selects a representative image patch with the help of a saliency map and mixes this indicative patch with the target image, thus leading the model to learn more appropriate feature representations. SaliencyMix achieves the best known top-1 error of $21.26\%$ and $20.09\%$ for ResNet-50 and ResNet-101 architectures on ImageNet classification, respectively, and also improves model robustness against adversarial perturbations. Furthermore, models trained with SaliencyMix help to improve object detection performance. Source code is available at \url{https://github.com/SaliencyMix/SaliencyMix}.
https://openreview.net/pdf/05e902b237602356704a807abbdec8f2a5ab6414.pdf
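The augmentation described above can be sketched roughly as follows; the patch-selection and label-mixing details here are simplified assumptions (CutMix-style area weighting around the saliency peak), not the paper's exact procedure:

```python
import numpy as np

def saliency_mix(src, tgt, saliency, size):
    """Paste the size x size patch around the most salient pixel of `src`
    into `tgt`; return the mixed image and the label weight for `tgt`."""
    h, w = saliency.shape
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    y0 = int(np.clip(cy - size // 2, 0, h - size))
    x0 = int(np.clip(cx - size // 2, 0, w - size))
    out = tgt.copy()
    out[y0:y0 + size, x0:x0 + size] = src[y0:y0 + size, x0:x0 + size]
    lam = 1.0 - (size * size) / (h * w)  # area-based label mixing weight
    return out, lam
```

The mixed label would then be `lam * label_tgt + (1 - lam) * label_src`, so the pasted patch's contribution is proportional to its area.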
Are wider nets better given the same number of parameters?
https://openreview.net/forum?id=_zx8Oka09eF
https://openreview.net/forum?id=_zx8Oka09eF
Anna Golubeva,Guy Gur-Ari,Behnam Neyshabur
ICLR 2021,Poster
Empirical studies demonstrate that the performance of neural networks improves with an increasing number of parameters. In most of these studies, the number of parameters is increased by increasing the network width. This raises the question: Is the observed improvement due to the larger number of parameters, or is it due to the larger width itself? We compare different ways of increasing model width while keeping the number of parameters constant. We show that for models initialized with a random, static sparsity pattern in the weight tensors, network width is the determining factor for good performance, while the number of weights is secondary, as long as the model achieves high training accuracy. As a step towards understanding this effect, we analyze these models in the framework of Gaussian Process kernels. We find that the distance between the sparse finite-width model kernel and the infinite-width kernel at initialization is indicative of model performance.
https://openreview.net/pdf/dc2756fb031ed3eea85d3a93b530a6c1f39d81d5.pdf
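The experimental setup described above (varying width at a fixed parameter budget via a random, static sparsity pattern) can be sketched as follows; the names and sizes are illustrative:

```python
import numpy as np

def sparse_layer_mask(width_in, width_out, n_params, rng):
    """Random static sparsity: keep exactly n_params of the width_in * width_out
    possible connections, chosen once at initialization and then fixed."""
    total = width_in * width_out
    keep = rng.choice(total, size=n_params, replace=False)
    mask = np.zeros(total, dtype=bool)
    mask[keep] = True
    return mask.reshape(width_out, width_in)

rng = np.random.default_rng(0)
narrow = sparse_layer_mask(32, 32, 1024, rng)  # dense 32x32 layer
wide = sparse_layer_mask(64, 64, 1024, rng)    # 25%-dense 64x64 layer, same budget
```

Both masks carry exactly 1024 trainable weights, so any performance gap between the two layers is attributable to width rather than parameter count.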
Discovering Non-monotonic Autoregressive Orderings with Variational Inference
https://openreview.net/forum?id=jP1vTH3inC
https://openreview.net/forum?id=jP1vTH3inC
Xuanlin Li,Brandon Trabucco,Dong Huk Park,Michael Luo,Sheng Shen,Trevor Darrell,Yang Gao
ICLR 2021,Poster
The predominant approach for language modeling is to encode a sequence of tokens from left to right, but this eliminates a source of information: the order by which the sequence was naturally generated. One strategy to recover this information is to decode both the content and ordering of tokens. Some prior work supervises content and ordering with hand-designed loss functions to encourage specific orders or bootstraps from a predefined ordering. These approaches require domain-specific insight. Other prior work searches over valid insertion operations that lead to ground truth sequences during training, which has high time complexity and cannot be efficiently parallelized. We address these limitations with an unsupervised learner that can be trained in a fully-parallelizable manner to discover high-quality autoregressive orders in a data-driven way without a domain-specific prior. The learner is a neural network that performs variational inference with the autoregressive ordering as a latent variable. Since the corresponding variational lower bound is not differentiable, we develop a practical algorithm for end-to-end optimization using policy gradients. Strong empirical results with our solution on sequence modeling tasks suggest that our algorithm is capable of discovering various autoregressive orders for different sequences that are competitive with or even better than fixed orders.
https://openreview.net/pdf/a9bcdb59d5a61ce55316a3ef34787b838b592a4a.pdf
CompOFA – Compound Once-For-All Networks for Faster Multi-Platform Deployment
https://openreview.net/forum?id=IgIk8RRT-Z
https://openreview.net/forum?id=IgIk8RRT-Z
Manas Sahni,Shreya Varshini,Alind Khare,Alexey Tumanov
ICLR 2021,Poster
The emergence of CNNs in mainstream deployment has necessitated methods to design and train efficient architectures tailored to maximize accuracy under diverse hardware and latency constraints. To scale these resource-intensive tasks with an increasing number of deployment targets, Once-For-All (OFA) proposed an approach to jointly train several models at once with a constant training cost. However, this cost remains as high as 40-50 GPU days, and the approach also suffers from a combinatorial explosion of sub-optimal model configurations. We seek to reduce this search space -- and hence the training budget -- by constraining the search to models close to the accuracy-latency Pareto frontier. We incorporate insights of compound relationships between model dimensions to build CompOFA, a design space smaller by several orders of magnitude. Through experiments on ImageNet, we demonstrate that even with simple heuristics we can achieve a 2x reduction in training time and a 216x speedup in model search/extraction time compared to the state of the art, without loss of Pareto optimality! We also show that this smaller design space is dense enough to support equally accurate models for a similar diversity of hardware and latency targets, while also reducing the complexity of the training and subsequent extraction algorithms. Our source code is available at https://github.com/gatech-sysml/CompOFA
https://openreview.net/pdf/cd9ed036121abc86a3630081eb6c6264788c8194.pdf
Representing Partial Programs with Blended Abstract Semantics
https://openreview.net/forum?id=mCtadqIxOJ
https://openreview.net/forum?id=mCtadqIxOJ
Maxwell Nye,Yewen Pu,Matthew Bowers,Jacob Andreas,Joshua B. Tenenbaum,Armando Solar-Lezama
ICLR 2021,Poster
Synthesizing programs from examples requires searching over a vast, combinatorial space of possible programs. In this search process, a key challenge is representing the behavior of a partially written program before it can be executed, to judge if it is on the right track and predict where to search next. We introduce a general technique for representing partially written programs in a program synthesis engine. We take inspiration from the technique of abstract interpretation, in which an approximate execution model is used to determine if an unfinished program will eventually satisfy a goal specification. Here we learn an approximate execution model implemented as a modular neural network. By constructing compositional program representations that implicitly encode the interpretation semantics of the underlying programming language, we can represent partial programs using a flexible combination of concrete execution state and learned neural representations, using the learned approximate semantics when concrete semantics are not known (in unfinished parts of the program). We show that these hybrid neuro-symbolic representations enable execution-guided synthesizers to use more powerful language constructs, such as loops and higher-order functions, and can be used to synthesize programs more accurately for a given search budget than pure neural approaches in several domains.
https://openreview.net/pdf/8f274ee0e7de9e855efc59efc1bf500d94b68773.pdf
PolarNet: Learning to Optimize Polar Keypoints for Keypoint Based Object Detection
https://openreview.net/forum?id=TYXs_y84xRj
https://openreview.net/forum?id=TYXs_y84xRj
Wu Xiongwei,Doyen Sahoo,Steven HOI
ICLR 2021,Poster
A variety of anchor-free object detectors have been actively proposed as possible alternatives to the mainstream anchor-based detectors that often rely on complicated design of anchor boxes. Despite achieving promising performance on par with anchor-based detectors, the existing anchor-free detectors such as FCOS or CenterNet predict objects based on standard Cartesian coordinates, which often yield poor quality keypoints. Further, the feature representation is also scale-sensitive. In this paper, we propose a new anchor-free keypoint-based detector ``PolarNet", where keypoints are represented as a set of polar coordinates instead of Cartesian coordinates. The ``PolarNet" detector learns offsets pointing to the corners of objects in order to learn high quality keypoints. Additionally, PolarNet uses features of corner points to localize objects, making the localization scale-insensitive. Finally, in our experiments, we show that PolarNet, an anchor-free detector, outperforms the existing anchor-free detectors and achieves highly competitive results on the COCO test-dev benchmark ($47.8\%$ and $50.3\%$ AP under single-model single-scale and multi-scale testing), on par with state-of-the-art two-stage anchor-based object detectors. The code and the models are available at https://github.com/XiongweiWu/PolarNetV1
https://openreview.net/pdf/d08ca7f6d8b412afb77ae32d7522a517e41f4741.pdf
Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective
https://openreview.net/forum?id=Cnon5ezMHtu
https://openreview.net/forum?id=Cnon5ezMHtu
Wuyang Chen,Xinyu Gong,Zhangyang Wang
ICLR 2021,Poster
Neural Architecture Search (NAS) has been explosively studied to automate the discovery of top-performer neural networks. Current works require heavy training of supernets or intensive architecture evaluations, thus suffering from heavy resource consumption and often incurring search bias due to truncated training or approximations. Can we select the best neural architectures without involving any training and eliminate a drastic portion of the search cost? We provide an affirmative answer, by proposing a novel framework called \textit{training-free neural architecture search} ($\textbf{TE-NAS}$). TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space. Both are motivated by recent theory advances in deep networks, and can be computed without any training. We show that: (1) these two measurements imply the $\textit{trainability}$ and $\textit{expressivity}$ of a neural network; and (2) they strongly correlate with the network's actual test accuracy. Further on, we design a pruning-based NAS mechanism to achieve a more flexible and superior trade-off between trainability and expressivity during the search. In the NAS-Bench-201 and DARTS search spaces, TE-NAS completes high-quality search at a cost of only $\textbf{0.5}$ and $\textbf{4}$ GPU hours with one 1080Ti on CIFAR-10 and ImageNet, respectively. We hope our work inspires more attempts to bridge theoretical findings about deep networks and practical impact in real NAS applications.
https://openreview.net/pdf/097fe3785855414961469f27465c798144ea4b9e.pdf
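One of the two training-free measurements above, the number of linear regions, can be crudely approximated by counting distinct ReLU activation patterns over sample inputs. This one-hidden-layer sketch only illustrates the idea and is not the paper's estimator:

```python
import numpy as np

def activation_patterns(W1, b1, X):
    """Count distinct ReLU activation patterns of a single hidden layer over a
    batch of inputs -- a crude proxy for the number of linear regions."""
    pre = X @ W1.T + b1              # pre-activations, shape (n, hidden)
    signs = (pre > 0).astype(int)    # which units are "on" for each input
    return len({tuple(row) for row in signs})

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 2))   # 8 hidden units, 2-d inputs
b1 = rng.standard_normal(8)
X = rng.standard_normal((200, 2))  # random probe inputs
n_regions = activation_patterns(W1, b1, X)
```

Each distinct on/off pattern corresponds to a region of input space where the network computes a single affine function, so more patterns indicate higher expressivity.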
Unsupervised Representation Learning for Time Series with Temporal Neighborhood Coding
https://openreview.net/forum?id=8qDwejCuCN
https://openreview.net/forum?id=8qDwejCuCN
Sana Tonekaboni,Danny Eytan,Anna Goldenberg
ICLR 2021,Poster
Time series are often complex and rich in information but sparsely labeled and therefore challenging to model. In this paper, we propose a self-supervised framework for learning robust and generalizable representations for time series. Our approach, called Temporal Neighborhood Coding (TNC), takes advantage of the local smoothness of a signal's generative process to define neighborhoods in time with stationary properties. Using a debiased contrastive objective, our framework learns time series representations by ensuring that in the encoding space, the distribution of signals from within a neighborhood is distinguishable from the distribution of non-neighboring signals. Our motivation stems from the medical field, where the ability to model the dynamic nature of time series data is especially valuable for identifying, tracking, and predicting the underlying patients' latent states in settings where labeling data is practically impossible. We compare our method to recently developed unsupervised representation learning approaches and demonstrate superior performance on clustering and classification tasks for multiple datasets.
https://openreview.net/pdf/0e06b6edae016465a8d856db9d43ae54b938746a.pdf
Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary with Width and Depth
https://openreview.net/forum?id=KJNcAkY8tY4
https://openreview.net/forum?id=KJNcAkY8tY4
Thao Nguyen,Maithra Raghu,Simon Kornblith
ICLR 2021,Poster
A key factor in the success of deep neural networks is the ability to scale models to improve performance by varying the architecture depth and width. This simple property of neural network design has resulted in highly effective architectures for a variety of tasks. Nevertheless, there is limited understanding of the effects of depth and width on the learned representations. In this paper, we study this fundamental question. We begin by investigating how varying depth and width affects model hidden representations, finding a characteristic block structure in the hidden representations of larger capacity (wider or deeper) models. We demonstrate that this block structure arises when model capacity is large relative to the size of the training set, and is indicative of the underlying layers preserving and propagating the dominant principal component of their representations. This discovery has important ramifications for features learned by different models, namely, representations outside the block structure are often similar across architectures with varying widths and depths, but the block structure is unique to each model. We analyze the output predictions of different model architectures, finding that even when the overall accuracy is similar, wide and deep models exhibit distinctive error patterns and variations across classes.
https://openreview.net/pdf/cb12ae8308060f86d8970f514c2a0e8a33d13c22.pdf
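Layer-wise representation comparisons of this kind are commonly computed with linear centered kernel alignment (CKA). A minimal sketch, assuming each representation is given as an example-by-feature matrix:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representations, each an (examples x features)
    matrix; 1 means identical up to rotation and scaling, 0 means unrelated."""
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Computing `linear_cka` between every pair of layers of one network yields a layer-by-layer similarity heatmap, the kind of plot in which a block structure can appear.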
Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning
https://openreview.net/forum?id=cu7IUiOhujH
https://openreview.net/forum?id=cu7IUiOhujH
Beliz Gunel,Jingfei Du,Alexis Conneau,Veselin Stoyanov
ICLR 2021,Poster
State-of-the-art natural language understanding classification models follow two stages: pre-training a large language model on an auxiliary task, and then fine-tuning the model on a task-specific labeled dataset using cross-entropy loss. However, the cross-entropy loss has several shortcomings that can lead to sub-optimal generalization and instability. Driven by the intuition that good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes, we propose a supervised contrastive learning (SCL) objective for the fine-tuning stage. Combined with cross-entropy, our proposed SCL loss obtains significant improvements over a strong RoBERTa-Large baseline on multiple datasets of the GLUE benchmark in few-shot learning settings, without requiring specialized architecture, data augmentations, memory banks, or additional unsupervised data. Our proposed fine-tuning objective leads to models that are more robust to different levels of noise in the fine-tuning training data, and can generalize better to related tasks with limited labeled data.
https://openreview.net/pdf/02dcbc0bf1ebd53ed5b69a2ca9aa27b3d3c53893.pdf
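A minimal sketch of a supervised contrastive objective of the kind described above, where same-class embeddings are pulled together and all others pushed apart; the paper's exact SCL formulation may differ in details such as temperature, normalization, and batching:

```python
import numpy as np

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss sketch: for each anchor, maximize the
    temperature-scaled similarity to same-label embeddings relative to
    all other embeddings in the batch."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau                    # scaled cosine similarity
    n = len(labels)
    total = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue                       # anchors with no positives are skipped
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        total += -np.mean([sim[i, j] - log_denom for j in pos])
    return total / n
```

In practice this term would be added to the cross-entropy loss with a mixing weight during fine-tuning.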
Early Stopping in Deep Networks: Double Descent and How to Eliminate it
https://openreview.net/forum?id=tlV90jvZbw
https://openreview.net/forum?id=tlV90jvZbw
Reinhard Heckel,Fatih Furkan Yilmaz
ICLR 2021,Poster
Over-parameterized models, such as large deep networks, often exhibit a double descent phenomenon, where, as a function of model size, the error first decreases, then increases, and finally decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because training epochs control the model complexity. In this paper, we show that such epoch-wise double descent occurs for a different reason: it is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different epochs, and mitigating this through proper scaling of stepsizes can significantly improve early stopping performance. We show this analytically for i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and for ii) a wide two-layer neural network, where the first and second layers govern bias-variance tradeoffs. Inspired by this theory, we study two standard convolutional networks empirically and show that eliminating epoch-wise double descent through adjusting the stepsizes of different layers improves early stopping performance.
https://openreview.net/pdf/eaf02d8eb8ad9232e0b10b405cf104b4547de602.pdf
Contrastive Syn-to-Real Generalization
https://openreview.net/forum?id=F8whUO8HNbP
https://openreview.net/forum?id=F8whUO8HNbP
Wuyang Chen,Zhiding Yu,Shalini De Mello,Sifei Liu,Jose M. Alvarez,Zhangyang Wang,Anima Anandkumar
ICLR 2021,Poster
Training on synthetic data can be beneficial for label- or data-scarce scenarios. However, synthetically trained models often suffer from poor generalization in real domains due to domain gaps. In this work, we make a key observation that the diversity of the learned feature embeddings plays an important role in generalization performance. To this end, we propose contrastive synthetic-to-real generalization (CSG), a novel framework that leverages pre-trained ImageNet knowledge to prevent overfitting to the synthetic domain, while promoting the diversity of feature embeddings as an inductive bias to improve generalization. In addition, we enhance the proposed CSG framework with attentional pooling (A-pool) to let the model focus on semantically important regions and further improve its generalization. We demonstrate the effectiveness of CSG on various synthetic training tasks, exhibiting state-of-the-art performance on zero-shot domain generalization.
https://openreview.net/pdf/a7ade6e78d9e1ddd5b9584676f313379bbfbce16.pdf
Benchmarks for Deep Off-Policy Evaluation
https://openreview.net/forum?id=kWSeGEeHvF8
https://openreview.net/forum?id=kWSeGEeHvF8
Justin Fu,Mohammad Norouzi,Ofir Nachum,George Tucker,ziyu wang,Alexander Novikov,Mengjiao Yang,Michael R Zhang,Yutian Chen,Aviral Kumar,Cosmin Paduraru,Sergey Levine,Thomas Paine
ICLR 2021,Poster
Off-policy evaluation (OPE) holds the promise of being able to leverage large, offline datasets for both evaluating and selecting complex policies for decision making. The ability to learn offline is particularly important in many real-world domains, such as in healthcare, recommender systems, or robotics, where online data collection is an expensive and potentially dangerous process. Being able to accurately evaluate and select high-performing policies without requiring online interaction could yield significant benefits in safety, time, and cost for these applications. While many OPE methods have been proposed in recent years, comparing results between papers is difficult because currently there is a lack of a comprehensive and unified benchmark, and measuring algorithmic progress has been challenging due to the lack of difficult evaluation tasks. In order to address this gap, we present a collection of policies that in conjunction with existing offline datasets can be used for benchmarking off-policy evaluation. Our tasks include a range of challenging high-dimensional continuous control problems, with wide selections of datasets and policies for performing policy selection. The goal of our benchmark is to provide a standardized measure of progress that is motivated from a set of principles designed to challenge and test the limits of existing OPE methods. We perform an evaluation of state-of-the-art algorithms and provide open-source access to our data and code to foster future research in this area.
https://openreview.net/pdf/3a90850ebecc25b81a9534180c75842a2b672812.pdf
Pre-training Text-to-Text Transformers for Concept-centric Common Sense
https://openreview.net/forum?id=3k20LAiHYL2
https://openreview.net/forum?id=3k20LAiHYL2
Wangchunshu Zhou,Dong-Ho Lee,Ravi Kiran Selvam,Seyeon Lee,Xiang Ren
ICLR 2021,Poster
Pretrained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks that require a syntactic and semantic understanding of the text. However, current pre-training objectives such as masked token prediction (for BERT-style PTLMs) and masked span infilling (for T5-style PTLMs) do not explicitly model the relational and compositional commonsense knowledge about everyday concepts, which is crucial to many downstream tasks requiring commonsense reasoning. To augment PTLMs with common sense, we propose generative and contrastive objectives as intermediate self-supervised pre-training tasks between general pre-training and downstream task-specific fine-tuning. We also propose a joint training framework to unify generative and contrastive objectives so that these objectives can be more effective. Our proposed objectives can pack more commonsense knowledge into the parameters of a pre-trained text-to-text transformer without relying on external knowledge bases, yielding better performance on both NLU and NLG tasks. We apply our method on a pre-trained T5 model in an intermediate task transfer learning fashion to train a concept-aware language model (CALM) and experiment with five commonsense benchmarks (four NLU tasks and one NLG task). Experimental results show that CALM outperforms baseline methods by a consistent margin.
https://openreview.net/pdf/30f24a224a3d4133f7da640c76644f91a3d41f0a.pdf
Combining Label Propagation and Simple Models out-performs Graph Neural Networks
https://openreview.net/forum?id=8E1-f3VhX1o
https://openreview.net/forum?id=8E1-f3VhX1o
Qian Huang,Horace He,Abhay Singh,Ser-Nam Lim,Austin Benson
ICLR 2021,Poster
Graph Neural Networks (GNNs) are a predominant technique for learning over graphs. However, there is relatively little understanding of why GNNs are successful in practice and whether they are necessary for good performance. Here, we show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure: (i) an “error correlation” that spreads residual errors in training data to correct errors in test data and (ii) a “prediction correlation” that smooths the predictions on the test data. We call this overall procedure Correct and Smooth (C&S), and the post-processing steps are implemented via simple modifications to standard label propagation techniques that have long been used in graph-based semi-supervised learning. Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks, with just a small fraction of the parameters and orders of magnitude faster runtime. For instance, we exceed the best-known GNN performance on the OGB-Products dataset with 137 times fewer parameters and greater than 100 times less training time. The performance of our methods highlights how directly incorporating label information into the learning algorithm (as is common in traditional methods) yields easy and substantial performance gains. We can also incorporate our techniques into big GNN models, providing modest gains in some cases.
https://openreview.net/pdf/7c1b32ea12a84f37e53a2145fc40a23c3642c2e8.pdf
Learning Long-term Visual Dynamics with Region Proposal Interaction Networks
https://openreview.net/forum?id=_X_4Akcd8Re
https://openreview.net/forum?id=_X_4Akcd8Re
Haozhi Qi,Xiaolong Wang,Deepak Pathak,Yi Ma,Jitendra Malik
ICLR 2021,Poster
Learning long-term dynamics models is the key to understanding physical common sense. Most existing approaches to learning dynamics from visual input sidestep long-term predictions by resorting to rapid re-planning with short-term models. This not only requires such models to be super accurate but also limits them only to tasks where an agent can continuously obtain feedback and take action at each step until completion. In this paper, we aim to leverage the ideas from success stories in visual recognition tasks to build object representations that can capture inter-object and object-environment interactions over a long range. To this end, we propose Region Proposal Interaction Networks (RPIN), which reason about each object's trajectory in a latent region-proposal feature space. Thanks to the simple yet effective object representation, our approach outperforms prior methods by a significant margin both in terms of prediction quality and its ability to plan for downstream tasks, and also generalizes well to novel environments. Code, pre-trained models, and more visualization results are available at https://haozhi.io/RPIN.
https://openreview.net/pdf/5da931176d4a8bcb421d7a6a087fce6475f7c406.pdf
Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions
https://openreview.net/forum?id=a3wKPZpGtCF
https://openreview.net/forum?id=a3wKPZpGtCF
Yun Kuen Cheung,Yixin Tao
ICLR 2021,Poster
It is of primary interest for ML to understand how agents learn and interact dynamically in competitive environments and games (e.g. GANs). But this has been a difficult task, as irregular behaviors are commonly observed in such systems. This can be explained theoretically, for instance, by the works of Cheung and Piliouras (COLT 2019; NeurIPS 2020), which showed that in two-person zero-sum games, if agents employ one of the most well-known learning algorithms, Multiplicative Weights Update (MWU), then Lyapunov chaos occurs everywhere in the payoff space. In this paper, we study how persistent chaos can occur in the more general normal-form game settings, where the agents might have the motivation to coordinate (which is not true for zero-sum games) and the number of agents can be arbitrary. We characterize bimatrix games where MWU, its optimistic variant (OMWU) or Follow-the-Regularized-Leader (FTRL) algorithms are Lyapunov chaotic almost everywhere in the payoff space. Technically, our characterization is derived by extending the volume-expansion argument of Cheung and Piliouras via the canonical game decomposition into zero-sum and coordination components. Interestingly, the two components induce opposite volume-changing behaviors, so the overall behavior can be analyzed by comparing the strengths of the components against each other. The comparison is done via our new notion of "matrix domination" or via a linear program. For multi-player games, we present a local equivalence of volume change between general games and graphical games, which is used to perform volume and chaos analyses of MWU and OMWU in potential games.
https://openreview.net/pdf/eb21a8cbc05cb76a2135d38e48ebb0d0192bb4d5.pdf
Control-Aware Representations for Model-based Reinforcement Learning
https://openreview.net/forum?id=dgd4EJqsbW5
https://openreview.net/forum?id=dgd4EJqsbW5
Brandon Cui,Yinlam Chow,Mohammad Ghavamzadeh
ICLR 2021,Poster
A major challenge in modern reinforcement learning (RL) is efficient control of dynamical systems from high-dimensional sensory observations. Learning controllable embedding (LCE) is a promising approach that addresses this challenge by embedding the observations into a lower-dimensional latent space, estimating the latent dynamics, and utilizing it to perform control in the latent space. Two important questions in this area are how to learn a representation that is amenable to the control problem at hand, and how to achieve an end-to-end framework for representation learning and control. In this paper, we take a few steps towards addressing these questions. We first formulate an LCE model to learn representations that are suitable to be used by a policy iteration style algorithm in the latent space. We call this model control-aware representation learning (CARL). We derive a loss function and three implementations for CARL. In the offline implementation, we replace the locally-linear control algorithm (e.g., iLQR) used by the existing LCE methods with an RL algorithm, namely model-based soft actor-critic, and show that it results in significant improvement. In online CARL, we interleave representation learning and control, and demonstrate further gain in performance. Finally, we propose value-guided CARL, a variation in which we optimize a weighted version of the CARL loss function, where the weights depend on the TD-error of the current policy. We evaluate the proposed algorithms by extensive experiments on benchmark tasks and compare them with several LCE baselines.
https://openreview.net/pdf/f0d80d862dab33f2ed69b44a0f14fda119006af8.pdf
Provably robust classification of adversarial examples with detection
https://openreview.net/forum?id=sRA5rLNpmQc
https://openreview.net/forum?id=sRA5rLNpmQc
Fatemeh Sheikholeslami,Ali Lotfi,J Zico Kolter
ICLR 2021,Poster
Adversarial attacks against deep networks can be defended against either by building robust classifiers or by creating classifiers that can \emph{detect} the presence of adversarial perturbations. Although it may intuitively seem easier to simply detect attacks rather than build a robust classifier, this has not been borne out in practice, as most detection methods have subsequently been broken by adaptive attacks, thus necessitating \emph{verifiable} performance for detection mechanisms. In this paper, we propose a new method for jointly training a provably robust classifier and detector. Specifically, we show that by introducing an additional "abstain/detection" class into a classifier, we can modify existing certified defense mechanisms to allow the classifier to either robustly classify \emph{or} detect adversarial attacks. We extend the common interval bound propagation (IBP) method for certified robustness under $\ell_\infty$ perturbations to account for our new robust objective, and show that the method outperforms traditional IBP used in isolation, especially for large perturbation sizes. Specifically, tests on MNIST and CIFAR-10 datasets exhibit promising results, for example with provable robust error less than $63.63\%$ and $67.92\%$, for $55.6\%$ and $66.37\%$ natural error, for $\epsilon=8/255$ and $16/255$ on the CIFAR-10 dataset, respectively.
https://openreview.net/pdf/f8635fcc4d33b492dbd371448f02d31878d69223.pdf
Return-Based Contrastive Representation Learning for Reinforcement Learning
https://openreview.net/forum?id=_TM6rT7tXke
https://openreview.net/forum?id=_TM6rT7tXke
Guoqing Liu,Chuheng Zhang,Li Zhao,Tao Qin,Jinhua Zhu,Li Jian,Nenghai Yu,Tie-Yan Liu
ICLR 2021,Poster
Recently, various auxiliary tasks have been proposed to accelerate representation learning and improve sample efficiency in deep reinforcement learning (RL). However, existing auxiliary tasks do not take the characteristics of RL problems into consideration and are unsupervised. By leveraging returns, the most important feedback signals in RL, we propose a novel auxiliary task that forces the learnt representations to discriminate state-action pairs with different returns. Our auxiliary loss is theoretically justified to learn representations that capture the structure of a new form of state-action abstraction, under which state-action pairs with similar return distributions are aggregated together. Empirically, our algorithm outperforms strong baselines on complex tasks in Atari games and DeepMind Control suite, and achieves even better performance when combined with existing auxiliary tasks.
https://openreview.net/pdf/da82358af2f47721465fefc3dffd1bd3f3f2c16e.pdf
Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification
https://openreview.net/forum?id=ijJZbomCJIm
https://openreview.net/forum?id=ijJZbomCJIm
Francisco Utrera,Evan Kravitz,N. Benjamin Erichson,Rajiv Khanna,Michael W. Mahoney
ICLR 2021,Poster
Transfer learning has emerged as a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains. This process consists of taking a neural network pre-trained on a large feature-rich source dataset, freezing the early layers that encode essential generic image properties, and then fine-tuning the last few layers in order to capture specific information related to the target situation. This approach is particularly useful when only limited or weakly labeled data are available for the new task. In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models, especially if only limited data are available for the new domain task. Further, we observe that adversarial training biases the learnt representations toward retaining shapes, as opposed to textures, which impacts the transferability of the source models. Finally, through the lens of influence functions, we discover that transferred adversarially-trained models contain more human-identifiable semantic information, which explains -- at least partly -- why adversarially-trained models transfer better.
https://openreview.net/pdf/566e8902b7a749d4525ff5f0933ffdae3a9bec39.pdf
Learning Structural Edits via Incremental Tree Transformations
https://openreview.net/forum?id=v9hAX77--cZ
https://openreview.net/forum?id=v9hAX77--cZ
Ziyu Yao,Frank F. Xu,Pengcheng Yin,Huan Sun,Graham Neubig
ICLR 2021,Poster
While most neural generative models generate outputs in a single pass, the human creative process is usually one of iterative building and refinement. Recent work has proposed models of editing processes, but these mostly focus on editing sequential data and/or only model a single editing pass. In this paper, we present a generic model for incremental editing of structured data (i.e. ''structural edits''). Particularly, we focus on tree-structured data, taking abstract syntax trees of computer programs as our canonical example. Our editor learns to iteratively generate tree edits (e.g. deleting or adding a subtree) and applies them to the partially edited data, so that the entire editing process can be formulated as consecutive, incremental tree transformations. To show the unique benefits of modeling tree edits directly, we further propose a novel edit encoder for learning to represent edits, as well as an imitation learning method that allows the editor to be more robust. We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches that generate the edited program directly in one pass. Finally, we demonstrate that training our editor to imitate experts and correct its mistakes dynamically can further improve its performance.
https://openreview.net/pdf/1fa89cfde10c367bd3c970a467f69d1e81ef7f40.pdf
Cross-Attentional Audio-Visual Fusion for Weakly-Supervised Action Localization
https://openreview.net/forum?id=hWr3e3r-oH5
https://openreview.net/forum?id=hWr3e3r-oH5
Jun-Tae Lee,Mihir Jain,Hyoungwoo Park,Sungrack Yun
ICLR 2021,Poster
Temporally localizing actions in videos is one of the key components for video understanding. Learning from weakly-labeled data is seen as a potential solution towards avoiding expensive frame-level annotations. Different from other works, which depend only on the visual modality, we propose to learn richer audiovisual representation for weakly-supervised action localization. First, we propose a multi-stage cross-attention mechanism to collaboratively fuse audio and visual features, which preserves the intra-modal characteristics. Second, to model both foreground and background frames, we construct an open-max classifier that treats the background class as an open-set. Third, for precise action localization, we design consistency losses to enforce temporal continuity for the action class prediction, and also help with foreground-prediction reliability. Extensive experiments on two publicly available video datasets (AVE and ActivityNet1.2) show that the proposed method effectively fuses audio and visual modalities, and achieves the state-of-the-art results for weakly-supervised action localization.
https://openreview.net/pdf/2d9210844c74d2a119c3878f1e6c2475a0d3af86.pdf
Improved Estimation of Concentration Under $\ell_p$-Norm Distance Metrics Using Half Spaces
https://openreview.net/forum?id=BUlyHkzjgmA
https://openreview.net/forum?id=BUlyHkzjgmA
Jack Prescott,Xiao Zhang,David Evans
ICLR 2021,Poster
Concentration of measure has been argued to be the fundamental cause of adversarial vulnerability. Mahloujifar et al. (2019) presented an empirical way to measure the concentration of a data distribution using samples, and employed it to find lower bounds on intrinsic robustness for several benchmark datasets. However, it remains unclear whether these lower bounds are tight enough to provide a useful approximation for the intrinsic robustness of a dataset. To gain a deeper understanding of the concentration of measure phenomenon, we first extend the Gaussian Isoperimetric Inequality to non-spherical Gaussian measures and arbitrary $\ell_p$-norms ($p \geq 2$). We leverage these theoretical insights to design a method that uses half-spaces to estimate the concentration of any empirical dataset under $\ell_p$-norm distance metrics. Our proposed algorithm is more efficient than Mahloujifar et al. (2019)'s, and experiments on synthetic datasets and image benchmarks demonstrate that it is able to find much tighter intrinsic robustness bounds. These tighter estimates provide further evidence that rules out intrinsic dataset concentration as a possible explanation for the adversarial vulnerability of state-of-the-art classifiers.
https://openreview.net/pdf/5d9950ac35e5e85a527dacf6286c7b9c148005bd.pdf
Beyond Categorical Label Representations for Image Classification
https://openreview.net/forum?id=MyHwDabUHZm
https://openreview.net/forum?id=MyHwDabUHZm
Boyuan Chen,Yu Li,Sunand Raghupathi,Hod Lipson
ICLR 2021,Poster
We find that the way we choose to represent data labels can have a profound effect on the quality of trained models. For example, training an image classifier to regress audio labels rather than traditional categorical probabilities produces a more reliable classification. This result is surprising, considering that audio labels are more complex than simpler numerical probabilities or text. We hypothesize that high dimensional, high entropy label representations are generally more useful because they provide a stronger error signal. We support this hypothesis with evidence from various label representations including constant matrices, spectrograms, shuffled spectrograms, Gaussian mixtures, and uniform random matrices of various dimensionalities. Our experiments reveal that high dimensional, high entropy labels achieve comparable accuracy to text (categorical) labels on standard image classification tasks, but features learned through our label representations exhibit more robustness under various adversarial attacks and better effectiveness with a limited amount of training data. These results suggest that label representation may play a more important role than previously thought.
https://openreview.net/pdf/14e605cccc7af2ba01dc51b23e624ff89dbeff7c.pdf
Fantastic Four: Differentiable and Efficient Bounds on Singular Values of Convolution Layers
https://openreview.net/forum?id=JCRblSgs34Z
https://openreview.net/forum?id=JCRblSgs34Z
Sahil Singla,Soheil Feizi
ICLR 2021,Poster
In deep neural networks, the spectral norm of the Jacobian of a layer bounds the factor by which the norm of a signal changes during forward/backward propagation. Spectral norm regularizations have been shown to improve generalization, robustness and optimization of deep learning methods. Existing methods to compute the spectral norm of convolution layers either rely on heuristics that are efficient in computation but lack guarantees or are theoretically-sound but computationally expensive. In this work, we obtain the best of both worlds by deriving {\it four} provable upper bounds on the spectral norm of a standard 2D multi-channel convolution layer. These bounds are differentiable and can be computed efficiently during training with negligible overhead. One of these bounds is in fact the popular heuristic method of Miyato et al. (multiplied by a constant factor depending on filter sizes). Each of these four bounds can achieve the tightest gap depending on convolution filters. Thus, we propose to use the minimum of these four bounds as a tight, differentiable and efficient upper bound on the spectral norm of convolution layers. Moreover, our spectral bound is an effective regularizer and can be used to bound either the Lipschitz constant or curvature values (eigenvalues of the Hessian) of neural networks. Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and robustness of deep networks.
https://openreview.net/pdf/6c7018c5dcc64de7e42204d28cf786cb4a596c69.pdf
Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction
https://openreview.net/forum?id=iOnhIy-a-0n
https://openreview.net/forum?id=iOnhIy-a-0n
Wei Deng,Qi Feng,Georgios P. Karagiannis,Guang Lin,Faming Liang
ICLR 2021,Poster
Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating the convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration. To address this issue, we study the variance reduction for noisy energy estimators, which promotes much more effective swaps. Theoretically, we provide a non-asymptotic analysis on the exponential convergence for the underlying continuous-time Markov jump process; moreover, we consider a generalized Girsanov theorem which includes the change of Poisson measure to overcome the crude discretization based on Gr\"{o}nwall's inequality and yields a much tighter error in the 2-Wasserstein ($\mathcal{W}_2$) distance. Numerically, we conduct extensive experiments and obtain state-of-the-art results in optimization and uncertainty estimates for synthetic experiments and image data.
https://openreview.net/pdf/c077f043fe1bbbb4d720b3fb0fbe7afe580d8374.pdf
IsarStep: a Benchmark for High-level Mathematical Reasoning
https://openreview.net/forum?id=Pzj6fzU6wkj
https://openreview.net/forum?id=Pzj6fzU6wkj
Wenda Li,Lei Yu,Yuhuai Wu,Lawrence C. Paulson
ICLR 2021,Poster
A well-defined benchmark is essential for measuring and accelerating research progress of machine learning models. In this paper, we present a benchmark for high-level mathematical reasoning and study the reasoning capabilities of neural sequence-to-sequence models. We build a non-synthetic dataset from the largest repository of proofs written by human experts in a theorem prover. The dataset has a broad coverage of undergraduate and research-level mathematical and computer science theorems. In our defined task, a model is required to fill in a missing intermediate proposition given surrounding proofs. This task provides a starting point for the long-term goal of having machines generate human-readable proofs automatically. Our experiments and analysis reveal that while the task is challenging, neural models can capture non-trivial mathematical reasoning. We further design a hierarchical transformer that outperforms the transformer baseline.
https://openreview.net/pdf/c9fb7dd359102a00d8676684bd704c54961a5285.pdf
Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments
https://openreview.net/forum?id=VVdmjgu7pKM
https://openreview.net/forum?id=VVdmjgu7pKM
Anirudh Goyal,Alex Lamb,Phanideep Gampa,Philippe Beaudoin,Charles Blundell,Sergey Levine,Yoshua Bengio,Michael Curtis Mozer
ICLR 2021,Poster
Modeling a structured, dynamic environment like a video game requires keeping track of the objects and their states (declarative knowledge) as well as predicting how objects behave (procedural knowledge). Black-box models with a monolithic hidden state often fail to apply procedural knowledge consistently and uniformly, i.e., they lack systematicity. For example, in a video game, correct prediction of one enemy's trajectory does not ensure correct prediction of another's. We address this issue via an architecture that factorizes declarative and procedural knowledge and that imposes modularity within each form of knowledge. The architecture consists of active modules called object files that maintain the state of a single object and invoke passive external knowledge sources called schemata that prescribe state updates. To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position). We propose to use attention to determine which object files to update, the selection of schemata, and the propagation of information between object files. The resulting architecture is a drop-in replacement conforming to the same input-output interface as normal recurrent networks (e.g., LSTM, GRU) yet achieves substantially better generalization on environments that have multiple object tokens of the same type, including a challenging intuitive physics benchmark.
https://openreview.net/pdf/927b511da0f53c9d48b5dbe33f31772d15ec97ca.pdf
Provable Rich Observation Reinforcement Learning with Combinatorial Latent States
https://openreview.net/forum?id=hx1IXFHAw7R
https://openreview.net/forum?id=hx1IXFHAw7R
Dipendra Misra,Qinghua Liu,Chi Jin,John Langford
ICLR 2021,Poster
We propose a novel setting for reinforcement learning that combines two common real-world difficulties: presence of observations (such as camera images) and factored states (such as location of objects). In our setting, the agent receives observations generated stochastically from a "latent" factored state. These observations are "rich enough" to enable decoding of the latent state and remove partial observability concerns. Since the latent state is combinatorial, the size of the state space is exponential in the number of latent factors. We create a learning algorithm FactoRL (Fact-o-Rel) for this setting, which uses noise-contrastive learning to identify latent structures in emission processes and discover a factorized state space. We derive polynomial sample complexity guarantees for FactoRL which depend polynomially on the number of factors, and only very weakly on the size of the observation space. We also provide a guarantee of polynomial time complexity when given access to an efficient planning algorithm.
https://openreview.net/pdf/6a01a542edf09482d75550c673ddcb462727111a.pdf
LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
https://openreview.net/forum?id=hJmtwocEqzc
https://openreview.net/forum?id=hJmtwocEqzc
Valeriia Cherepanova,Micah Goldblum,Harrison Foley,Shiyuan Duan,John P Dickerson,Gavin Taylor,Tom Goldstein
ICLR 2021,Poster
Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike. These systems are typically built by scraping social media profiles for user images. Adversarial perturbations have been proposed for bypassing facial recognition systems. However, existing methods fail on full-scale systems and commercial APIs. We develop our own adversarial filter that accounts for the entire image processing pipeline and is demonstrably effective against industrial-grade pipelines that include face detection and large scale databases. Additionally, we release an easy-to-use webtool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API, reducing the accuracy of each to below 1%.
https://openreview.net/pdf/33f4bbc102bd62362928fed6df483a1a2d5ef1ba.pdf
Neural Networks for Learning Counterfactual G-Invariances from Single Environments
https://openreview.net/forum?id=7t1FcJUWhi3
https://openreview.net/forum?id=7t1FcJUWhi3
S Chandra Mouli,Bruno Ribeiro
ICLR 2021,Poster
Despite, or maybe because of, their astonishing capacity to fit data, neural networks are believed to have difficulties extrapolating beyond training data distribution. This work shows that, for extrapolations based on finite transformation groups, a model’s inability to extrapolate is unrelated to its capacity. Rather, the shortcoming is inherited from a learning hypothesis: Examples not explicitly observed with infinitely many training examples have underspecified outcomes in the learner’s model. In order to endow neural networks with the ability to extrapolate over group transformations, we introduce a learning framework counterfactually-guided by the learning hypothesis that any group invariance to (known) transformation groups is mandatory even without evidence, unless the learner deems it inconsistent with the training data. Unlike existing invariance-driven methods for (counterfactual) extrapolations, this framework allows extrapolations from a single environment. Finally, we introduce sequence and image extrapolation tasks that validate our framework and showcase the shortcomings of traditional approaches.
https://openreview.net/pdf/f68c8dcf4d107a320df0e519d96021379ed46828.pdf
Simple Spectral Graph Convolution
https://openreview.net/forum?id=CYO5T-YjWZV
https://openreview.net/forum?id=CYO5T-YjWZV
Hao Zhu,Piotr Koniusz
ICLR 2021,Poster
Graph Convolutional Networks (GCNs) are leading methods for learning graph representations. However, without specially designed architectures, the performance of GCNs degrades quickly with increased depth. As the aggregated neighborhood size and neural network depth are two completely orthogonal aspects of graph representation, several methods focus on summarizing the neighborhood by aggregating K-hop neighborhoods of nodes while using shallow neural networks. However, these methods still encounter oversmoothing, and suffer from high computation and storage costs. In this paper, we use a modified Markov Diffusion Kernel to derive a variant of GCN called Simple Spectral Graph Convolution (SSGC). Our spectral analysis shows that our simple spectral graph convolution used in SSGC is a trade-off of low- and high-pass filter bands which capture the global and local contexts of each node. We provide two theoretical claims which demonstrate that we can aggregate over a sequence of increasingly larger neighborhoods compared to competitors while limiting severe oversmoothing. Our experimental evaluations show that SSGC with a linear learner is competitive in text and node classification tasks. Moreover, SSGC is comparable to other state-of-the-art methods for node clustering and community prediction tasks.
https://openreview.net/pdf/9015cbfb15f31fdf7835279414de3b27ef3b0c01.pdf
Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics
https://openreview.net/forum?id=LhY8QdUGSuw
https://openreview.net/forum?id=LhY8QdUGSuw
Vinay Venkatesh Ramasesh,Ethan Dyer,Maithra Raghu
ICLR 2021,Poster
Catastrophic forgetting is a recurring challenge to developing versatile deep learning models. Despite its ubiquity, there is limited understanding of its connections to neural network (hidden) representations and task semantics. In this paper, we address this important knowledge gap. Through quantitative analysis of neural representations, we find that deeper layers are disproportionately responsible for forgetting, with sequential training resulting in an erasure of earlier task representational subspaces. Methods to mitigate forgetting stabilize these deeper layers, but show diversity on precise effects, with some increasing feature reuse while others store task representations orthogonally, preventing interference. These insights also enable the development of an analytic argument and empirical picture relating forgetting to task semantic similarity, where we find that maximal forgetting occurs for task sequences with intermediate similarity.
https://openreview.net/pdf/d11b4b8cdf4b9f940c435a7b3c50cf2790aa071d.pdf
On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning
https://openreview.net/forum?id=o81ZyBCojoA
https://openreview.net/forum?id=o81ZyBCojoA
Ren Wang,Kaidi Xu,Sijia Liu,Pin-Yu Chen,Tsui-Wei Weng,Chuang Gan,Meng Wang
ICLR 2021,Poster
Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning. It enables us to learn a $\textit{meta-initialization}$ of model parameters (that we call $\textit{meta-model}$) to rapidly adapt to new tasks using a small amount of labeled training data. Despite the generalization power of the meta-model, it remains elusive how $\textit{adversarial robustness}$ can be maintained by MAML in few-shot learning. In addition to generalization, robustness is also desired for a meta-model to defend against adversarial examples (attacks). Toward promoting adversarial robustness in MAML, we first study $\textit{when}$ a robustness-promoting regularization should be incorporated, given the fact that MAML adopts a bi-level (fine-tuning vs. meta-update) learning procedure. We show that robustifying the meta-update stage is sufficient to make robustness adapted to the task-specific fine-tuning stage even if the latter uses a standard training protocol. We also provide additional justification for the acquired robustness adaptation by peering into the interpretability of neurons' activation maps. Furthermore, we investigate $\textit{how}$ robust regularization can $\textit{efficiently}$ be designed in MAML. We propose a general but easily-optimized robustness-regularized meta-learning framework, which allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally-light fine-tuning. In particular, we for the first time show that the auxiliary contrastive learning task can enhance the adversarial robustness of MAML. Finally, extensive experiments are conducted to demonstrate the effectiveness of our proposed methods in robust few-shot learning.
https://openreview.net/pdf/c29f970186b2b2658cd52eea7aac2b5266c649f4.pdf
The geometry of integration in text classification RNNs
https://openreview.net/forum?id=42kiJ7n_8xO
https://openreview.net/forum?id=42kiJ7n_8xO
Kyle Aitken,Vinay Venkatesh Ramasesh,Ankush Garg,Yuan Cao,David Sussillo,Niru Maheswaranathan
ICLR 2021,Poster
Despite the widespread application of recurrent neural networks (RNNs), a unified understanding of how RNNs solve particular tasks remains elusive. In particular, it is unclear what dynamical patterns arise in trained RNNs, and how those patterns depend on the training dataset or task. This work addresses these questions in the context of text classification, building on earlier work studying the dynamics of binary sentiment-classification networks (Maheswaranathan et al., 2019). We study text-classification tasks beyond the binary case, exploring the dynamics of RNNs trained on both natural and synthetic datasets. These dynamics, which we find to be both interpretable and low-dimensional, share a common mechanism across architectures and datasets: specifically, these text-classification networks use low-dimensional attractor manifolds to accumulate evidence for each class as they process the text. The dimensionality and geometry of the attractor manifold are determined by the structure of the training dataset, with the dimensionality reflecting the number of scalar quantities the network remembers in order to classify. In categorical classification, for example, we show that this dimensionality is one less than the number of classes. Correlations in the dataset, such as those induced by ordering, can further reduce the dimensionality of the attractor manifold; we show how to predict this reduction using simple word-count statistics computed on the training dataset. To the degree that integration of evidence towards a decision is a common computational primitive, this work continues to lay the foundation for using dynamical systems techniques to study the inner workings of RNNs.
https://openreview.net/pdf/bc724aa9a5ce537c4e5005d963641086e1e41bb3.pdf
Towards Robust Neural Networks via Close-loop Control
https://openreview.net/forum?id=2AL06y9cDE-
https://openreview.net/forum?id=2AL06y9cDE-
Zhuotong Chen,Qianxiao Li,Zheng Zhang
ICLR 2021,Poster
Despite their success in massive engineering applications, deep neural networks are vulnerable to various perturbations due to their black-box nature. Recent studies have shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount. In this paper, we address the robustness issue of neural networks by a novel close-loop control method from the perspective of dynamic systems. Instead of modifying the parameters in a fixed neural network architecture, a close-loop control process is added to generate control signals adaptively for the perturbed or corrupted data. We connect the robustness of neural networks with optimal control, using the geometrical information of the underlying data to design the control objective. Our detailed analysis shows how the embedding manifolds of the state trajectory affect the error estimation of the proposed method. Our approach can simultaneously maintain the performance on clean data and improve the robustness against many types of data perturbations. It can also further improve the performance of robustly trained neural networks against different perturbations. To the best of our knowledge, this is the first work that improves the robustness of neural networks with close-loop control.
https://openreview.net/pdf/596019eba6149f7c83bd7dc648809e2100b337d8.pdf
Projected Latent Markov Chain Monte Carlo: Conditional Sampling of Normalizing Flows
https://openreview.net/forum?id=MBpHUFrcG2x
https://openreview.net/forum?id=MBpHUFrcG2x
Chris Cannella,Mohammadreza Soltani,Vahid Tarokh
ICLR 2021,Poster
We introduce Projected Latent Markov Chain Monte Carlo (PL-MCMC), a technique for sampling from the exact conditional distributions learned by normalizing flows. As a conditional sampling method, PL-MCMC enables Monte Carlo Expectation Maximization (MC-EM) training of normalizing flows from incomplete data. Through experimental tests applying normalizing flows to missing data tasks for a variety of data sets, we demonstrate the efficacy of PL-MCMC for conditional sampling from normalizing flows.
https://openreview.net/pdf/32946e80b74b4bb7d6f25d74cb773ac68b9b4a36.pdf
Understanding the failure modes of out-of-distribution generalization
https://openreview.net/forum?id=fSTD6NFIW_b
https://openreview.net/forum?id=fSTD6NFIW_b
Vaishnavh Nagarajan,Anders Andreassen,Behnam Neyshabur
ICLR 2021,Poster
Empirical studies suggest that machine learning models often rely on features, such as the background, that may be spuriously correlated with the label only during training time, resulting in poor accuracy during test-time. In this work, we identify the fundamental factors that give rise to this behavior, by explaining why models fail this way even in easy-to-learn tasks where one would expect these models to succeed. In particular, through a theoretical study of gradient-descent-trained linear classifiers on some easy-to-learn tasks, we uncover two complementary failure modes. These modes arise from how spurious correlations induce two kinds of skews in the data: one geometric in nature and another, statistical. Finally, we construct natural modifications of image classification datasets to understand when these failure modes can arise in practice. We also design experiments to isolate the two failure modes when training modern neural networks on these datasets.
https://openreview.net/pdf/2790b3f2ccfda08399e0549ba75e2da20bd2d1b1.pdf
Usable Information and Evolution of Optimal Representations During Training
https://openreview.net/forum?id=p8agn6bmTbr
https://openreview.net/forum?id=p8agn6bmTbr
Michael Kleinman,Alessandro Achille,Daksh Idnani,Jonathan Kao
ICLR 2021,Poster
We introduce a notion of usable information contained in the representation learned by a deep network, and use it to study how optimal representations for the task emerge during training. We show that the implicit regularization coming from training with Stochastic Gradient Descent with a high learning-rate and small batch size plays an important role in learning minimal sufficient representations for the task. In the process of arriving at a minimal sufficient representation, we find that the content of the representation changes dynamically during training. In particular, we find that semantically meaningful but ultimately irrelevant information is encoded in the early transient dynamics of training, before being later discarded. In addition, we evaluate how perturbing the initial part of training impacts the learning dynamics and the resulting representations. We show these effects on both perceptual decision-making tasks inspired by neuroscience literature, as well as on standard image classification tasks.
https://openreview.net/pdf/ecfb28e9a1edfd9c52876b78d81632b816d662b2.pdf
Adaptive Extra-Gradient Methods for Min-Max Optimization and Games
https://openreview.net/forum?id=R0a0kFI3dJx
https://openreview.net/forum?id=R0a0kFI3dJx
Kimon Antonakopoulos,Veronica Belmega,Panayotis Mertikopoulos
ICLR 2021,Poster
We present a new family of min-max optimization algorithms that automatically exploit the geometry of the gradient data observed at earlier iterations to perform more informative extra-gradient steps in later ones. Thanks to this adaptation mechanism, the proposed method automatically detects whether the problem is smooth or not, without requiring any prior tuning by the optimizer. As a result, the algorithm simultaneously achieves order-optimal convergence rates, i.e., it converges to an $\varepsilon$-optimal solution within $\mathcal{O}(1/\varepsilon)$ iterations in smooth problems, and within $\mathcal{O}(1/\varepsilon^2)$ iterations in non-smooth ones. Importantly, these guarantees do not require any of the standard boundedness or Lipschitz continuity conditions that are typically assumed in the literature; in particular, they apply even to problems with singularities (such as resource allocation problems and the like). This adaptation is achieved through the use of a geometric apparatus based on Finsler metrics and a suitably chosen mirror-prox template that allows us to derive sharp convergence rates for the methods at hand.
https://openreview.net/pdf/b85ffd0f421c8180b9a511a825ac3f10fc824b9b.pdf
Shapley explainability on the data manifold
https://openreview.net/forum?id=OPyWRrcjVQw
https://openreview.net/forum?id=OPyWRrcjVQw
Christopher Frye,Damien de Mijolla,Tom Begley,Laurence Cowton,Megan Stanley,Ilya Feige
ICLR 2021,Poster
Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model’s predictions to its input features in a mathematically principled and model-agnostic way. However, general implementations of Shapley explainability make an untenable assumption: that the model’s features are uncorrelated. In this work, we demonstrate unambiguous drawbacks of this assumption and develop two solutions to Shapley explainability that respect the data manifold. One solution, based on generative modelling, provides flexible access to data imputations; the other directly learns the Shapley value-function, providing performance and stability at the cost of flexibility. While “off-manifold” Shapley values can (i) give rise to incorrect explanations, (ii) hide implicit model dependence on sensitive attributes, and (iii) lead to unintelligible explanations in higher-dimensional data, on-manifold explainability overcomes these problems.
https://openreview.net/pdf/ed871c78bdc2768918e12775dd57dff6b36e4c24.pdf
Reinforcement Learning with Random Delays
https://openreview.net/forum?id=QFYnKlBJYR
https://openreview.net/forum?id=QFYnKlBJYR
Yann Bouteiller,Simon Ramstedt,Giovanni Beltrame,Christopher Pal,Jonathan Binas
ICLR 2021,Poster
Action and observation delays commonly occur in many Reinforcement Learning applications, such as remote control scenarios. We study the anatomy of randomly delayed environments, and show that partially resampling trajectory fragments in hindsight allows for off-policy multi-step value estimation. We apply this principle to derive Delay-Correcting Actor-Critic (DCAC), an algorithm based on Soft Actor-Critic with significantly better performance in environments with delays. This is shown theoretically and also demonstrated practically on a delay-augmented version of the MuJoCo continuous control benchmark.
https://openreview.net/pdf/744fcf663d9a7335f90ed1ec81d97b3661166e56.pdf
Shape or Texture: Understanding Discriminative Features in CNNs
https://openreview.net/forum?id=NcFEZOi-rLa
https://openreview.net/forum?id=NcFEZOi-rLa
Md Amirul Islam,Matthew Kowal,Patrick Esser,Sen Jia,Björn Ommer,Konstantinos G. Derpanis,Neil Bruce
ICLR 2021,Poster
Contrasting the previous evidence that neurons in the later layers of a Convolutional Neural Network (CNN) respond to complex object shapes, recent studies have shown that CNNs actually exhibit a 'texture bias': given an image with both texture and shape cues (e.g., a stylized image), a CNN is biased towards predicting the category corresponding to the texture. However, these previous studies conduct experiments on the final classification output of the network, and fail to robustly evaluate the bias contained (i) in the latent representations, and (ii) on a per-pixel level. In this paper, we design a series of experiments that overcome these issues. We do this with the goal of better understanding what type of shape information contained in the network is discriminative, where shape information is encoded, as well as when the network learns about object shape during training. We show that a network learns the majority of overall shape information at the first few epochs of training and that this information is largely encoded in the last few layers of a CNN. Finally, we show that the encoding of shape does not imply the encoding of localized per-pixel semantic information. The experimental results and findings provide a more accurate understanding of the behaviour of current CNNs, thus helping to inform future design choices.
https://openreview.net/pdf/bec98c0c8f3a77adc5822b10b5fd4273ff383136.pdf
NOVAS: Non-convex Optimization via Adaptive Stochastic Search for End-to-end Learning and Control
https://openreview.net/forum?id=Iw4ZGwenbXf
https://openreview.net/forum?id=Iw4ZGwenbXf
Ioannis Exarchos,Marcus Aloysius Pereira,Ziyi Wang,Evangelos Theodorou
ICLR 2021,Poster
In this work we propose the use of adaptive stochastic search as a building block for general, non-convex optimization operations within deep neural network architectures. Specifically, for an objective function located at some layer in the network and parameterized by some network parameters, we employ adaptive stochastic search to perform optimization over its output. This operation is differentiable and does not obstruct the passing of gradients during backpropagation, thus enabling us to incorporate it as a component in end-to-end learning. We study the proposed optimization module's properties and benchmark it against two existing alternatives on a synthetic energy-based structured prediction task, and further showcase its use in stochastic optimal control applications.
https://openreview.net/pdf/19f093001a03a82a092d19740971a45fff9f47a8.pdf
Negative Data Augmentation
https://openreview.net/forum?id=Ovp8dvB8IBH
https://openreview.net/forum?id=Ovp8dvB8IBH
Abhishek Sinha,Kumar Ayush,Jiaming Song,Burak Uzkent,Hongxia Jin,Stefano Ermon
ICLR 2021,Poster
Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA) that intentionally create out-of-distribution samples. We show that such negative out-of-distribution samples provide information on the support of the data distribution, and can be leveraged for generative modeling and representation learning. We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator. We prove that under suitable conditions, optimizing the resulting objective still recovers the true data distribution but can directly bias the generator towards avoiding samples that lack the desired structure. Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities. Further, we incorporate the same negative data augmentation strategy in a contrastive learning framework for self-supervised representation learning on images and videos, achieving improved performance on downstream image classification, object detection, and action recognition tasks. These results suggest that prior knowledge on what does not constitute valid data is an effective form of weak supervision across a range of unsupervised learning tasks.
https://openreview.net/pdf/3f45494e997f0d54f6dc5dac083f571047ee0c92.pdf
Molecule Optimization by Explainable Evolution
https://openreview.net/forum?id=jHefDGsorp5
https://openreview.net/forum?id=jHefDGsorp5
Binghong Chen,Tianzhe Wang,Chengtao Li,Hanjun Dai,Le Song
ICLR 2021,Poster
Optimizing molecules for desired properties is a fundamental yet challenging task in chemistry, material science, and drug discovery. This paper develops a novel algorithm for optimizing molecular properties via an Expectation-Maximization (EM) like explainable evolutionary process. The algorithm is designed to mimic human experts in the process of searching for desirable molecules and alternate between two stages: the first stage on explainable local search which identifies rationales, i.e., critical subgraph patterns accounting for desired molecular properties, and the second stage on molecule completion which explores the larger space of molecules containing good rationales. We test our approach against various baselines on a real-world multi-property optimization task where each method is given the same number of queries to the property oracle. We show that our evolution-by-explanation algorithm is 79% better than the best baseline in terms of a generic metric combining aspects such as success rate, novelty, and diversity. Human expert evaluation on optimized molecules shows that 60% of top molecules obtained from our methods are deemed successful.
https://openreview.net/pdf/885e03a6e7ca9e559b96bce0daf001f769f98de4.pdf
Estimating Lipschitz constants of monotone deep equilibrium models
https://openreview.net/forum?id=VcB4QkSfyO
https://openreview.net/forum?id=VcB4QkSfyO
Chirag Pabbaraju,Ezra Winston,J Zico Kolter
ICLR 2021,Poster
Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries. However, existing bounds get substantially weaker with increasing depth of the network, which makes it unclear how to apply such bounds to recently proposed models such as the deep equilibrium (DEQ) model, which can be viewed as representing an infinitely-deep network. In this paper, we show that monotone DEQs, a recently-proposed subclass of DEQs, have Lipschitz constants that can be bounded as a simple function of the strong monotonicity parameter of the network. We derive simple-yet-tight bounds on both the input-output mapping and the weight-output mapping defined by these networks, and demonstrate that they are small relative to those for comparable standard DNNs. We show that one can use these bounds to design monotone DEQ models, even with e.g. multi-scale convolutional structure, that still have constraints on the Lipschitz constant. We also highlight how to use these bounds to develop PAC-Bayes generalization bounds that do not depend on any depth of the network, and which avoid the exponential depth-dependence of comparable DNN bounds.
https://openreview.net/pdf/62c8f87a22f20b30e037ebb6a618d34b540f0e93.pdf
Implicit Gradient Regularization
https://openreview.net/forum?id=3q5IqUrkcF
https://openreview.net/forum?id=3q5IqUrkcF
David Barrett,Benoit Dherin
ICLR 2021,Poster
Gradient descent can be surprisingly good at optimizing deep neural networks without overfitting and without explicit regularization. We find that the discrete steps of gradient descent implicitly regularize models by penalizing gradient descent trajectories that have large loss gradients. We call this Implicit Gradient Regularization (IGR) and we use backward error analysis to calculate the size of this regularization. We confirm empirically that implicit gradient regularization biases gradient descent toward flat minima, where test errors are small and solutions are robust to noisy parameter perturbations. Furthermore, we demonstrate that the implicit gradient regularization term can be used as an explicit regularizer, allowing us to control this gradient regularization directly. More broadly, our work indicates that backward error analysis is a useful theoretical approach to the perennial question of how learning rate, model size, and parameter regularization interact to determine the properties of overparameterized models optimized with gradient descent.
https://openreview.net/pdf/5fac8e016a2873ec230214a072ff1cc0307e64f7.pdf
Faster Binary Embeddings for Preserving Euclidean Distances
https://openreview.net/forum?id=YCXrx6rRCXO
https://openreview.net/forum?id=YCXrx6rRCXO
Jinjie Zhang,Rayan Saab
ICLR 2021,Poster
We propose a fast, distance-preserving, binary embedding algorithm to transform a high-dimensional dataset $\mathcal{T}\subseteq\mathbb{R}^n$ into binary sequences in the cube $\{\pm 1\}^m$. When $\mathcal{T}$ consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to $A x$ where $A\in\mathbb{R}^{m\times n}$ is a sparse Gaussian random matrix. This contrasts with most binary embedding methods, which usually use $x\mapsto \mathrm{sign}(Ax)$ for the embedding. Moreover, we show that Euclidean distances among the elements of $\mathcal{T}$ are approximated by the $\ell_1$ norm on the images of $\{\pm 1\}^m$ under a fast linear transformation. This again contrasts with standard methods, where the Hamming distance is used instead. Our method is both fast and memory efficient, with time complexity $O(m)$ and space complexity $O(m)$ on well-spread data. When the data is not well-spread, we show that the approach still works provided that data is transformed via a Walsh-Hadamard matrix, but now the cost is $O(n\log n)$ per data point. Further, we prove that the method is accurate and its associated error is comparable to that of a continuous valued Johnson-Lindenstrauss embedding plus a quantization error that admits a polynomial decay as the embedding dimension $m$ increases. Thus the length of the binary codes required to achieve a desired accuracy is quite small, and we show it can even be compressed further without compromising the accuracy. To illustrate our results, we test the proposed method on natural images and show that it achieves strong performance.
https://openreview.net/pdf/1eba3bf99a991505d994341a4156be4959947011.pdf
Scalable Transfer Learning with Expert Models
https://openreview.net/forum?id=23ZjUGpjcc
https://openreview.net/forum?id=23ZjUGpjcc
Joan Puigcerver,Carlos Riquelme Ruiz,Basil Mustafa,Cedric Renggli,André Susano Pinto,Sylvain Gelly,Daniel Keysers,Neil Houlsby
ICLR 2021,Poster
Transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. We explore the use of expert representations for transfer with a simple, yet effective, strategy. We train a diverse set of experts by exploiting existing label structures, and use cheap-to-compute performance proxies to select the relevant expert for each target task. This strategy scales the process of transferring to new tasks, since it does not revisit the pre-training data during transfer. Accordingly, it requires little extra compute per target task, and results in a speed-up of 2-3 orders of magnitude compared to competing approaches. Further, we provide an adapter-based architecture able to compress many experts into a single model. We evaluate our approach on two different data sources and demonstrate that it outperforms baselines on over 20 diverse vision tasks in both cases.
https://openreview.net/pdf/659e2338755eb562f4d6d679d55eb83e71fa5007.pdf