Dataset schema:
- title: string (12–151 characters)
- url: string (41–43 characters)
- detail_url: string (41–43 characters)
- authors: string (6–562 characters)
- tags: string (3 distinct values)
- abstract: string (519–2,340 characters)
- pdf: string (71 characters)
Energy-Inspired Molecular Conformation Optimization
https://openreview.net/forum?id=7QfLW-XZTl
https://openreview.net/forum?id=7QfLW-XZTl
Jiaqi Guan,Wesley Wei Qian,qiang liu,Wei-Ying Ma,Jianzhu Ma,Jian Peng
ICLR 2022,Poster
This paper studies an important problem in computational chemistry: predicting a molecule's spatial arrangement of atoms, i.e., its molecular conformation. We propose a neural energy minimization formulation that casts the prediction problem into an unrolled optimization process, where a neural network is parametrized to learn the gradient fields of an implicit conformational energy landscape. Assuming different forms of the underlying potential energy function, we can not only reinterpret and unify many of the existing models but also derive new variants of SE(3)-equivariant neural networks in a principled manner. In our experiments, these new variants show superior performance in molecular conformation optimization compared to existing SE(3)-equivariant neural networks. Moreover, our energy-inspired formulation is also suitable for molecular conformation generation, where we can generate more diverse and accurate conformers compared to existing baselines.
https://openreview.net/pdf/4a1d13d340c8727a4bc1ddee07f22d749cf0db8d.pdf
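The unrolled-optimization idea in the abstract can be conveyed with a minimal sketch: a network parametrizes the gradient field of an implicit energy, and conformations are refined by a fixed number of gradient steps. The toy MLP below is a hypothetical, non-equivariant stand-in for the paper's SE(3)-equivariant networks; all names, shapes, and hyperparameters are illustrative assumptions.

```python
# Sketch of unrolled energy minimization: a network predicts a descent direction
# (a learned gradient field) and atom coordinates are updated for T steps.
import torch
import torch.nn as nn

class GradientFieldNet(nn.Module):
    def __init__(self, n_atoms, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_atoms * 3, hidden), nn.SiLU(),
            nn.Linear(hidden, n_atoms * 3),
        )

    def forward(self, coords):                     # coords: (batch, n_atoms, 3)
        flat = coords.flatten(start_dim=1)
        return self.net(flat).view_as(coords)      # predicted descent direction (~ -dE/dx)

def unrolled_minimization(model, coords, n_steps=20, step_size=0.1):
    """Treat conformation prediction as T gradient steps on an implicit energy."""
    for _ in range(n_steps):
        coords = coords + step_size * model(coords)
    return coords

n_atoms = 8
model = GradientFieldNet(n_atoms)
init = torch.randn(4, n_atoms, 3)                  # random initial conformations
refined = unrolled_minimization(model, init)
print(refined.shape)                               # torch.Size([4, 8, 3])
```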
Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective
https://openreview.net/forum?id=tT9t_ZctZRL
https://openreview.net/forum?id=tT9t_ZctZRL
Wei Huang,Yayong Li,weitao Du,Richard Xu,Jie Yin,Ling Chen,Miao Zhang
ICLR 2022,Poster
Graph convolutional networks (GCNs) and their variants have achieved great success in dealing with graph-structured data. Nevertheless, it is well known that deep GCNs suffer from the over-smoothing problem, where node representations tend to become indistinguishable as more layers are stacked. The theoretical research to date on deep GCNs has focused primarily on expressive power rather than trainability, an optimization perspective. Compared to expressivity, trainability addresses a more fundamental question: given a sufficiently expressive space of models, can we successfully find a good solution via gradient descent-based optimizers? This work fills this gap by exploiting the Graph Neural Tangent Kernel (GNTK), which governs the optimization trajectory under gradient descent for wide GCNs. We characterize the asymptotic behavior of the GNTK in the large-depth limit, which reveals that the trainability of wide, deep GCNs drops at an exponential rate during optimization. Additionally, we extend our theoretical framework to analyze residual-connection-based techniques, which we find can only mildly mitigate the exponential decay of trainability. Inspired by our theoretical insights on trainability, we propose Critical DropEdge, a connectivity-aware and graph-adaptive sampling method, to alleviate the exponential decay problem more fundamentally. Experimental evaluation consistently confirms that our proposed method achieves better results than relevant counterparts in both the infinite-width and finite-width settings.
https://openreview.net/pdf/c18075d5d39ff0265c97344cea20791494daa455.pdf
Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
https://openreview.net/forum?id=CJzi3dRlJE-
https://openreview.net/forum?id=CJzi3dRlJE-
Lu Mi,Richard Xu,Sridhama Prakhya,Albert Lin,Nir Shavit,Aravinthan Samuel,Srinivas C Turaga
ICLR 2022,Poster
The availability of both anatomical connectivity and brain-wide neural activity measurements in C. elegans makes the worm a promising system for learning detailed, mechanistic models of an entire nervous system in a data-driven way. However, one faces several challenges when constructing such a model. We often do not have direct experimental access to important modeling details such as single-neuron dynamics and the signs and strengths of the synaptic connectivity. Further, neural activity can only be measured in a subset of neurons, often indirectly via calcium imaging, and significant trial-to-trial variability has been observed. To address these challenges, we introduce a connectome-constrained latent variable model (CC-LVM) of the unobserved voltage dynamics of the entire C. elegans nervous system and the observed calcium signals. We used the framework of variational autoencoders to fit parameters of the mechanistic simulation constituting the generative model of the LVM to calcium imaging observations. A variational approximate posterior distribution over latent voltage traces for all neurons is efficiently inferred using an inference network, and constrained by a prior distribution given by the biophysical simulation of neural dynamics. We applied this model to an experimental whole-brain dataset, and found that connectomic constraints enable our LVM to predict the activity of neurons whose activity was withheld significantly better than models unconstrained by a connectome. We explored models with different degrees of biophysical detail, and found that models with realistic conductance-based synapses provide markedly better predictions than current-based synapses for this system.
https://openreview.net/pdf/302ccb9e518f0da30bda16c4db8038467af5f38e.pdf
T-WaveNet: A Tree-Structured Wavelet Neural Network for Time Series Signal Analysis
https://openreview.net/forum?id=U4uFaLyg7PV
https://openreview.net/forum?id=U4uFaLyg7PV
Minhao LIU,Ailing Zeng,Qiuxia LAI,Ruiyuan Gao,Min Li,Jing Qin,Qiang Xu
ICLR 2022,Poster
Time series signal analysis plays an essential role in many applications, e.g., activity recognition and healthcare monitoring. Recently, features extracted with deep neural networks (DNNs) have been shown to be more effective than conventional hand-crafted ones. However, most existing solutions rely solely on the network to extract information carried in the raw signal, regardless of its inherent physical and statistical properties, leading to sub-optimal performance particularly when training data are limited. In this work, we propose a novel tree-structured wavelet neural network for time series signal analysis, namely \emph{T-WaveNet}, taking advantage of an inherent property of various types of signals, known as the \emph{dominant frequency range}. Specifically, with \emph{T-WaveNet}, we first conduct frequency spectrum energy analysis of the signals to get a set of dominant frequency subbands. Then, we construct a tree-structured network that iteratively decomposes the input signal into various frequency subbands with similar energies. Each node on the tree is built with an invertible neural network (INN) based wavelet transform unit. Such a disentangled representation learning method facilitates a more effective extraction of the discriminative features, as demonstrated with the comprehensive experiments on various real-life time series classification datasets.
https://openreview.net/pdf/c527f1cebdb3737b281ff65a78f914a97d634bad.pdf
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations
https://openreview.net/forum?id=AmUhwTOHgm
https://openreview.net/forum?id=AmUhwTOHgm
Fangyu Liu,Yunlong Jiao,Jordan Massiah,Emine Yilmaz,Serhii Havrylov
ICLR 2022,Poster
In NLP, a large number of tasks involve pairwise comparison between two sequences (e.g. sentence similarity and paraphrase identification). Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders. Bi-encoders produce fixed-dimensional sentence representations and are computationally efficient; however, they usually underperform cross-encoders. Cross-encoders can leverage their attention heads to exploit inter-sentence interactions for better performance but they require task fine-tuning and are computationally more expensive. In this paper, we present a completely unsupervised sentence representation model termed Trans-Encoder that combines the two learning paradigms into an iterative joint framework to simultaneously learn enhanced bi- and cross-encoders. Specifically, starting from a pre-trained language model (PLM), we first convert it into an unsupervised bi-encoder and then alternate between the bi- and cross-encoder task formulations. In each alternation, one task formulation produces pseudo-labels which are used as learning signals for the other task formulation. We then propose an extension that conducts this self-distillation approach on multiple PLMs in parallel and uses the average of their pseudo-labels for mutual distillation. Trans-Encoder creates, to the best of our knowledge, the first completely unsupervised cross-encoder and also a state-of-the-art unsupervised bi-encoder for sentence similarity. Both the bi-encoder and cross-encoder formulations of Trans-Encoder outperform recently proposed state-of-the-art unsupervised sentence encoders such as Mirror-BERT and SimCSE by up to 5% on the sentence similarity benchmarks.
https://openreview.net/pdf/3a6f31c7903c67d5e431aafcd98d91be36443b10.pdf
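The alternating self-distillation loop can be illustrated with a minimal sketch: a bi-encoder scores pairs by cosine similarity, a cross-encoder scores the concatenated pair, and each is trained on pseudo-labels produced by the other. The toy linear encoders and random "sentence" vectors below are placeholders for the pre-trained language models used in the paper; everything here is an illustrative assumption.

```python
# Alternating pseudo-label exchange between a toy bi-encoder and cross-encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 32
bi_encoder = nn.Linear(dim, dim)                                   # stand-in sentence encoder
cross_encoder = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

a, b = torch.randn(256, dim), torch.randn(256, dim)                # toy "sentence" pairs

def bi_scores(x, y):
    return F.cosine_similarity(bi_encoder(x), bi_encoder(y), dim=-1)

def cross_scores(x, y):
    return cross_encoder(torch.cat([x, y], dim=-1)).squeeze(-1)

for it in range(3):
    # Bi-encoder produces pseudo-labels -> distil them into the cross-encoder.
    with torch.no_grad():
        targets = bi_scores(a, b)
    opt = torch.optim.Adam(cross_encoder.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        F.mse_loss(cross_scores(a, b), targets).backward()
        opt.step()
    # Cross-encoder produces pseudo-labels -> distil them back into the bi-encoder.
    with torch.no_grad():
        targets = cross_scores(a, b)
    opt = torch.optim.Adam(bi_encoder.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        F.mse_loss(bi_scores(a, b), targets).backward()
        opt.step()
```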
Path Integral Sampler: A Stochastic Control Approach For Sampling
https://openreview.net/forum?id=_uCb2ynRu7Y
https://openreview.net/forum?id=_uCb2ynRu7Y
Qinsheng Zhang,Yongxin Chen
ICLR 2022,Poster
We present Path Integral Sampler~(PIS), a novel algorithm to draw samples from unnormalized probability density functions. The PIS is built on the Schr\"odinger bridge problem which aims to recover the most likely evolution of a diffusion process given its initial distribution and terminal distribution. The PIS draws samples from the initial distribution and then propagates the samples through the Schr\"odinger bridge to reach the terminal distribution. Applying the Girsanov theorem, with a simple prior diffusion, we formulate the PIS as a stochastic optimal control problem whose running cost is the control energy and terminal cost is chosen according to the target distribution. By modeling the control as a neural network, we establish a sampling algorithm that can be trained end-to-end. We provide theoretical justification of the sampling quality of PIS in terms of Wasserstein distance when sub-optimal control is used. Moreover, the path integrals theory is used to compute importance weights of the samples to compensate for the bias induced by the sub-optimality of the controller and the time-discretization. We experimentally demonstrate the advantages of PIS compared with other start-of-the-art sampling methods on a variety of tasks.
https://openreview.net/pdf/dfed03e002c8966cbf8b7224471dc61c00f141af.pdf
Model Zoo: A Growing Brain That Learns Continually
https://openreview.net/forum?id=WfvgGBcgbE7
https://openreview.net/forum?id=WfvgGBcgbE7
Rahul Ramesh,Pratik Chaudhari
ICLR 2022,Poster
This paper argues that continual learning methods can benefit by splitting the capacity of the learner across multiple models. We use statistical learning theory and experimental analysis to show how multiple tasks can interact with each other in a non-trivial fashion when a single model is trained on them. The generalization error on a particular task can improve when it is trained with synergistic tasks, but can also deteriorate when trained with competing tasks. This theory motivates our method named Model Zoo which, inspired by the boosting literature, grows an ensemble of small models, each of which is trained during one episode of continual learning. We demonstrate that Model Zoo obtains large gains in accuracy on a wide variety of continual learning benchmark problems.
https://openreview.net/pdf/19902f663252dc5dfc680fe78ec3d27c0a55e73e.pdf
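A simplified sketch of the grow-an-ensemble idea: each continual-learning episode trains a small model on the incoming task plus a replay buffer of earlier tasks, and inference for a task averages the models that saw it. This omits the boosting-style task selection of the actual method; data, sizes, and helper names below are illustrative assumptions.

```python
# Growing an ensemble ("zoo") of small models, one per continual-learning episode.
import torch
import torch.nn as nn

def small_model(d_in, n_classes):
    return nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, n_classes))

def train(model, X, y, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    return model

zoo, task_members = [], {}            # model list and task -> indices of models that saw it
replay = []                           # small buffer of samples from earlier tasks

d_in, n_classes = 20, 5
for task_id in range(3):              # one episode per incoming (synthetic) task
    X = torch.randn(200, d_in); y = torch.randint(0, n_classes, (200,))
    X_tr = torch.cat([X] + [xb for xb, _ in replay]) if replay else X
    y_tr = torch.cat([y] + [yb for _, yb in replay]) if replay else y
    model = train(small_model(d_in, n_classes), X_tr, y_tr)
    zoo.append(model)
    for t in list(task_members) + [task_id]:
        task_members.setdefault(t, []).append(len(zoo) - 1)
    replay.append((X[:20], y[:20]))   # keep a few samples for future episodes

def predict(task_id, X):
    logits = torch.stack([zoo[i](X) for i in task_members[task_id]])
    return logits.mean(dim=0).argmax(dim=-1)   # average the models that saw this task

print(predict(0, torch.randn(5, d_in)))
```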
Predicting Physics in Mesh-reduced Space with Temporal Attention
https://openreview.net/forum?id=XctLdNfCmP
https://openreview.net/forum?id=XctLdNfCmP
XU HAN,Han Gao,Tobias Pfaff,Jian-Xun Wang,Liping Liu
ICLR 2022,Poster
Auto-regressive sequence models for physics prediction are often restricted to low-dimensional systems, as memory cost increases with both spatial extents and sequence length. On the other hand, graph-based next-step prediction models have recently been very successful in modeling complex high-dimensional physical systems on irregular meshes, but suffer from error accumulation and drift, due to their short temporal attention span. In this paper, we present a method that marries the strengths of both approaches. We use a GNN to locally summarize features and create a coarsened, compact mesh representation of the system state, onto which we apply a transformer-style temporal attention module. We use a second GNN to decode these predictions back to a full-sized graph and perform fine-scale updates. Our method outperforms a competitive GNN baseline on three complex fluid dynamics prediction tasks, from sonic shocks to vascular flow. We demonstrate stable rollouts without the need for training noise and show perfectly phase-stable predictions even for very long sequences. More broadly, we believe our approach paves the way to bringing the benefits of attention-based sequence models to solving high-dimensional complex physics tasks.
https://openreview.net/pdf/00dd7d600fb519f519f11d2cc02d0c629f487ece.pdf
How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
https://openreview.net/forum?id=qiMXBIf4NfB
https://openreview.net/forum?id=qiMXBIf4NfB
Shuai Zhang,Meng Wang,Sijia Liu,Pin-Yu Chen,Jinjun Xiong
ICLR 2022,Poster
Self-training, a semi-supervised learning algorithm, leverages a large amount of unlabeled data to improve learning when the labeled data are limited. Despite empirical successes, its theoretical characterization remains elusive. To the best of our knowledge, this work establishes the first theoretical analysis for the known iterative self-training paradigm and formally proves the benefits of unlabeled data in both training convergence and generalization ability. To make our theoretical analysis feasible, we focus on the case of one-hidden-layer neural networks. However, theoretical understanding of iterative self-training is non-trivial even for a shallow neural network. One of the key challenges is that existing neural network landscape analysis built upon supervised learning no longer holds in the (semi-supervised) self-training paradigm. We address this challenge and prove that iterative self-training converges linearly with both convergence rate and generalization accuracy improved in the order of $1/\sqrt{M}$, where $M$ is the number of unlabeled samples. Extensive experiments from shallow neural networks to deep neural networks are also provided to justify the correctness of our established theoretical insights on self-training.
https://openreview.net/pdf/86dd54f483759d42c389b5c48000ced72778d937.pdf
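The iterative self-training paradigm analyzed in the paper can be summarized by a short loop: train on labeled data, pseudo-label the unlabeled pool, retrain on both, and repeat. The one-hidden-layer network and synthetic data below are purely illustrative, a minimal sketch rather than the paper's exact setup.

```python
# Minimal iterative self-training loop with a one-hidden-layer network.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_lab, n_unlab = 10, 50, 1000
w_star = torch.randn(d)                                  # synthetic ground-truth direction
X_lab = torch.randn(n_lab, d);   y_lab = (X_lab @ w_star > 0).float()
X_unlab = torch.randn(n_unlab, d)                        # labels withheld

def make_net():
    return nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))

def train(net, X, y, epochs=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(X).squeeze(-1), y).backward()
        opt.step()
    return net

net = train(make_net(), X_lab, y_lab)                    # teacher from labeled data only
for _ in range(3):                                       # iterative self-training rounds
    with torch.no_grad():
        pseudo = (net(X_unlab).squeeze(-1) > 0).float()  # pseudo-label the unlabeled pool
    X_all = torch.cat([X_lab, X_unlab]); y_all = torch.cat([y_lab, pseudo])
    net = train(make_net(), X_all, y_all)                # retrain student on both sets
```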
Learning to Dequantise with Truncated Flows
https://openreview.net/forum?id=fExcSKdDo_
https://openreview.net/forum?id=fExcSKdDo_
Shawn Tan,Chin-Wei Huang,Alessandro Sordoni,Aaron Courville
ICLR 2022,Poster
Dequantisation is a general technique used for transforming data described by a discrete random variable $x$ into a continuous (latent) random variable $z$, for the purpose of it being modeled by likelihood-based density models. Dequantisation was first introduced in the context of ordinal data, such as image pixel values. However, when the data is categorical, the dequantisation scheme is not obvious. We learn such a dequantisation scheme $q(z | x)$, using variational inference with TRUncated FLows (TRUFL) --- a novel flow-based model that allows the dequantiser to have a learnable truncated support. Unlike previous work, the TRUFL dequantiser is (i) capable of embedding the data losslessly in certain cases, since the truncation allows the conditional distributions $q(z | x)$ to have non-overlapping bounded supports, while being (ii) trainable with back-propagation. Additionally, since the support of the marginal $q(z)$ is bounded and the support of prior $p(z)$ is not, we propose renormalising the prior distribution over the support of $q(z)$. We derive a lower bound for training, and propose a rejection sampling scheme to account for the invalid samples during generation. Experimentally, we benchmark TRUFL on constrained generation tasks, and find that it outperforms prior approaches. In addition, we find that rejection sampling results in higher validity for the constrained problems.
https://openreview.net/pdf/7b8b40bfc88c9c57b2e374766d2ad6dbf203c158.pdf
Curriculum learning as a tool to uncover learning principles in the brain
https://openreview.net/forum?id=TpJMvo0_pu-
https://openreview.net/forum?id=TpJMvo0_pu-
Daniel R. Kepple,Rainer Engelken,Kanaka Rajan
ICLR 2022,Poster
We present a novel approach to use curricula to identify principles by which a system learns. Previous work in curriculum learning has focused on how curricula can be designed to improve learning of a model on particular tasks. We consider the inverse problem: what can a curriculum tell us about how a learning system acquired a task? Using recurrent neural networks (RNNs) and models of common experimental neuroscience tasks, we demonstrate that curricula can be used to differentiate learning principles, using target-based and representation-based loss functions as use cases. In particular, we compare the performance of RNNs using target-based learning rules versus those using representational learning rules on three different curricula in the context of two tasks. We show that the learned state-space trajectories of RNNs trained by these two learning rules under all curricula tested are indistinguishable. However, by comparing learning times under different curricula, we can disambiguate the learning rules and challenge traditional approaches of interrogating learning systems. Although all animals in neuroscience lab settings are trained by curriculum-based procedures called shaping, almost no behavioral or neural data are collected or published on the relative successes or training times under different curricula. Our results motivate the systematic collection and curation of data during shaping by demonstrating curriculum learning in RNNs as a tool to probe and differentiate learning principles used by biological systems, over conventional statistical analyses of learned state spaces.
https://openreview.net/pdf/a4a7a68a3eb6363b1e9a2ccdc13a20acf84334c2.pdf
Optimizer Amalgamation
https://openreview.net/forum?id=VqzXzA9hjaX
https://openreview.net/forum?id=VqzXzA9hjaX
Tianshu Huang,Tianlong Chen,Sijia Liu,Shiyu Chang,Lisa Amini,Zhangyang Wang
ICLR 2022,Poster
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none can offer a universal advantage over other competitive optimizers. We are thus motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance? In this paper, we draw inspiration from the field of "learning to optimize" to use a learnable amalgamation target. First, we define three differentiable amalgamation mechanisms to amalgamate a pool of analytical optimizers by gradient descent. Then, to reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the amalgamation target. Finally, we present experiments showing the superiority of our amalgamated optimizer compared to its amalgamated components and learning-to-optimize baselines, and the efficacy of our variance-reducing perturbations.
https://openreview.net/pdf/5f3e6351e8a4303197b51d671a2519678ac9fd21.pdf
An Agnostic Approach to Federated Learning with Class Imbalance
https://openreview.net/forum?id=Xo0lbDt975
https://openreview.net/forum?id=Xo0lbDt975
Zebang Shen,Juan Cervino,Hamed Hassani,Alejandro Ribeiro
ICLR 2022,Poster
Federated Learning (FL) has emerged as the tool of choice for training deep models over heterogeneous and decentralized datasets. As a reflection of the experiences from different clients, severe class imbalance issues are observed in real-world FL problems. Moreover, there exists a drastic mismatch between the imbalances from the local and global perspectives, i.e. a local majority class can be the minority of the population. Additionally, the privacy requirement of FL poses an extra challenge, as one should handle class imbalance without identifying the minority class. In this paper we propose a novel agnostic constrained learning formulation to tackle the class imbalance problem in FL, without requiring further information beyond the standard FL objective. A meta algorithm, CLIMB, is designed to solve the target optimization problem, with its convergence property analyzed under certain oracle assumptions. Through an extensive empirical study over various data heterogeneity and class imbalance configurations, we showcase that CLIMB considerably improves the performance in the minority class without compromising the overall accuracy of the classifier, significantly outperforming prior art. In fact, we observe the greatest performance boost in the most difficult scenario where every client only holds data from one class. The code can be found here https://github.com/shenzebang/Federated-Learning-Pytorch.
https://openreview.net/pdf/5fb84fb47293dfccb811ff5d9f4c2a1861530f17.pdf
A Fine-Tuning Approach to Belief State Modeling
https://openreview.net/forum?id=ckZY7DGa7FQ
https://openreview.net/forum?id=ckZY7DGa7FQ
Samuel Sokota,Hengyuan Hu,David J Wu,J Zico Kolter,Jakob Nicolaus Foerster,Noam Brown
ICLR 2022,Poster
We investigate the challenge of modeling the belief state of a partially observable Markov system, given sample-access to its dynamics model. This problem setting is often approached using parametric sequential generative modeling methods. However, these methods do not leverage any additional computation at inference time to increase their accuracy. Moreover, applying these methods to belief state modeling in certain multi-agent settings would require passing policies into the belief model---at the time of writing, there have been no successful demonstrations of this. Toward addressing these shortcomings, we propose an inference-time improvement framework for parametric sequential generative modeling methods called belief fine-tuning (BFT). BFT leverages approximate dynamic programming in the form of fine-tuning to determine the model parameters at each time step. It can improve the accuracy of the belief model at test time because it specializes the model to the space of local observations. Furthermore, because this specialization occurs after the action or policy has already been decided, BFT does not require the belief model to process it as input. As a result of the latter point, BFT enables, for the first time, approximate public belief state search in imperfect-information games where the number of possible information states is too large to track tabularly. We exhibit these findings on large-scale variants of the benchmark game Hanabi.
https://openreview.net/pdf/944fbcd08f3183592995e10fc1c179afad1bfc67.pdf
Differentially Private Fine-tuning of Language Models
https://openreview.net/forum?id=Q42f0dfjECO
https://openreview.net/forum?id=Q42f0dfjECO
Da Yu,Saurabh Naik,Arturs Backurs,Sivakanth Gopi,Huseyin A Inan,Gautam Kamath,Janardhan Kulkarni,Yin Tat Lee,Andre Manoel,Lukas Wutschitz,Sergey Yekhanin,Huishuai Zhang
ICLR 2022,Poster
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve the state-of-the-art privacy versus utility tradeoffs on many standard NLP tasks. We propose a meta-framework for this problem, inspired by the recent success of highly parameter-efficient methods for fine-tuning. Our experiments show that differentially private adaptations of these approaches outperform previous private algorithms in three important dimensions: utility, privacy, and the computational and memory cost of private training. On many commonly studied datasets, the utility of private models approaches that of non-private models. For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$. In comparison, absent privacy constraints, RoBERTa-Large achieves an accuracy of $90.2\%$. Our findings are similar for natural language generation when privately fine-tuning GPT-2. Our experiments also show that larger models are better suited for private fine-tuning: while they are well known to achieve superior accuracy non-privately, we find that they also better maintain their accuracy when privacy is introduced.
https://openreview.net/pdf/a7f73a09ee1b6071d23daf0142ff02e4926db6e9.pdf
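For context, the core private-training primitive behind such methods is DP-SGD: per-example gradients are clipped to bound sensitivity and Gaussian noise is added before each update. The sketch below shows that primitive on a tiny linear "adapter"; it is not the paper's parameter-efficient framework, and all constants, shapes, and names are illustrative assumptions.

```python
# Minimal DP-SGD step: per-example gradient clipping + Gaussian noise.
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                        # stand-in for a small set of trainable weights
loss_fn = nn.CrossEntropyLoss()
clip_norm, noise_mult, lr = 1.0, 1.0, 0.1

X = torch.randn(8, 16); y = torch.randint(0, 2, (8,))

grads = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(X, y):                        # per-example gradients
    model.zero_grad()
    loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    g = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((gi ** 2).sum() for gi in g))
    scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)   # clip to bound sensitivity
    for acc, gi in zip(grads, g):
        acc += gi * scale

with torch.no_grad():
    for p, acc in zip(model.parameters(), grads):
        noisy = acc + noise_mult * clip_norm * torch.randn_like(acc)
        p -= lr * noisy / len(X)                # noisy averaged-gradient step
```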
P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts
https://openreview.net/forum?id=DhzIU48OcZh
https://openreview.net/forum?id=DhzIU48OcZh
Benjamin Newman,Prafulla Kumar Choubey,Nazneen Rajani
ICLR 2022,Poster
Recent work (e.g. LAMA (Petroni et al., 2019)) has found that the quality of the factual information extracted from Large Language Models (LLMs) depends on the prompts used to query them. This inconsistency is problematic because different users will query LLMs for the same information using different wording, but should receive the same, accurate responses regardless. In this work we aim to address this shortcoming by introducing P-Adapters: lightweight models that sit between the embedding layer and first attention layer of LLMs. They take LLM embeddings as input and output continuous prompts that are used to query the LLM. Additionally, we investigate Mixture of Experts (MoE) models that learn a set of continuous prompts (the "experts") and select one to query the LLM. These require a separate classifier trained on human-annotated data to map natural language prompts to the continuous ones. P-Adapters perform comparably to the more complex MoE models in extracting factual information from BERT and RoBERTa while eliminating the need for additional annotations. P-Adapters show between 12-26% absolute improvement in precision and 36-50% absolute improvement in consistency over a baseline of just using natural language queries alone. Finally, we investigate what makes P-Adapters successful and conclude that a significant factor is access to the LLM's embeddings of the original natural language prompt, particularly the subject of the entity pair being queried.
https://openreview.net/pdf/8e2c7114bf23dadb13338c9b0dcf063a536ff1b3.pdf
Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming
https://openreview.net/forum?id=giBFoa-uS12
https://openreview.net/forum?id=giBFoa-uS12
Sachin G Konan,Esmaeil Seraj,Matthew Gombolay
ICLR 2022,Poster
Information sharing is key in building team cognition and enables coordination and cooperation. High-performing human teams also benefit from acting strategically with hierarchical levels of iterated communication and rationalizability, meaning a human agent can reason about the actions of their teammates in their decision-making. Yet, the majority of prior work in Multi-Agent Reinforcement Learning (MARL) does not support iterated rationalizability and only encourages inter-agent communication, resulting in a suboptimal equilibrium cooperation strategy. In this work, we show that reformulating an agent's policy to be conditional on the policies of its neighboring teammates inherently maximizes a Mutual Information (MI) lower-bound when optimizing under Policy Gradient (PG). Building on the idea of decision-making under bounded rationality and cognitive hierarchy theory, we show that our modified PG approach not only maximizes local agent rewards but also implicitly reasons about MI between agents without the need for any explicit ad-hoc regularization terms. Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks. Our experiments validate the utility of InfoPG by achieving higher sample efficiency and significantly larger cumulative reward in several complex cooperative multi-agent domains.
https://openreview.net/pdf/6aa48ac42e21539265419c747bb0aedda2b1992d.pdf
Step-unrolled Denoising Autoencoders for Text Generation
https://openreview.net/forum?id=T0GpzBQ1Fg6
https://openreview.net/forum?id=T0GpzBQ1Fg6
Nikolay Savinov,Junyoung Chung,Mikolaj Binkowski,Erich Elsen,Aaron van den Oord
ICLR 2022,Poster
In this paper we propose a new generative model of text, Step-unrolled Denoising Autoencoder (SUNDAE), that does not rely on autoregressive models. Similarly to denoising diffusion techniques, SUNDAE is repeatedly applied on a sequence of tokens, starting from random inputs and improving them each time until convergence. We present a simple new improvement operator that converges in fewer iterations than diffusion methods, while qualitatively producing better samples on natural language datasets. SUNDAE achieves state-of-the-art results (among non-autoregressive methods) on the WMT'14 English-to-German translation task and good qualitative results on unconditional language modeling on the Colossal Cleaned Common Crawl dataset and a dataset of Python code from GitHub. The non-autoregressive nature of SUNDAE opens up possibilities beyond left-to-right prompted generation, by filling in arbitrary blank patterns in a template.
https://openreview.net/pdf/17dfe8a5d0e1934ade282b40fbfe8f2fe421ef39.pdf
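The non-autoregressive generation loop can be sketched as repeated application of a denoiser to a token sequence, starting from random tokens and stopping at a fixed point. The untrained embedding-plus-linear "denoiser" below is a placeholder for the paper's model; it only illustrates the unrolled improvement loop, not the training procedure.

```python
# Iterative refinement of a token sequence from random initial tokens.
import torch
import torch.nn as nn

vocab, seq_len = 100, 12
denoiser = nn.Sequential(nn.Embedding(vocab, 64), nn.Linear(64, vocab))  # placeholder model

tokens = torch.randint(0, vocab, (1, seq_len))       # random initial sequence
for step in range(10):
    logits = denoiser(tokens)                        # (1, seq_len, vocab)
    new_tokens = logits.argmax(dim=-1)               # greedy "improvement" step
    if torch.equal(new_tokens, tokens):              # stop once the sequence is a fixed point
        break
    tokens = new_tokens
print(tokens)
```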
Hindsight Foresight Relabeling for Meta-Reinforcement Learning
https://openreview.net/forum?id=P7OVkHEoHOZ
https://openreview.net/forum?id=P7OVkHEoHOZ
Michael Wan,Jian Peng,Tanmay Gangwani
ICLR 2022,Poster
Meta-reinforcement learning (meta-RL) algorithms allow for agents to learn new behaviors from small amounts of experience, mitigating the sample inefficiency problem in RL. However, while meta-RL agents can adapt quickly to new tasks at test time after experiencing only a few trajectories, the meta-training process is still sample-inefficient. Prior works have found that in the multi-task RL setting, relabeling past transitions and thus sharing experience among tasks can improve sample efficiency and asymptotic performance. We apply this idea to the meta-RL setting and devise a new relabeling method called Hindsight Foresight Relabeling (HFR). We construct a relabeling distribution using the combination of "hindsight", which is used to relabel trajectories using reward functions from the training task distribution, and "foresight", which takes the relabeled trajectories and computes the utility of each trajectory for each task. HFR is easy to implement and readily compatible with existing meta-RL algorithms. We find that HFR improves performance when compared to other relabeling methods on a variety of meta-RL tasks.
https://openreview.net/pdf/73f685c6746fdb65b8e4aa08ea1ed0d6b49a34a8.pdf
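A hedged sketch of the hindsight-plus-foresight idea: recompute each stored trajectory's return under every training task's reward function, then assign trajectories to tasks with probability given by a softmax over those returns, used here as a simple stand-in for the utility term. The goals, reward function, and shapes below are illustrative assumptions, not the paper's exact construction.

```python
# Relabel stored trajectories with tasks sampled according to per-task returns.
import numpy as np

rng = np.random.default_rng(0)
n_traj, traj_len, n_tasks = 5, 20, 3

trajectories = rng.normal(size=(n_traj, traj_len, 4))       # toy state sequences
goals = rng.normal(size=(n_tasks, 4))                        # one goal per task

def reward(states, goal):
    return -np.linalg.norm(states - goal, axis=-1)           # closer to the goal = higher reward

# Hindsight: evaluate every trajectory under every task's reward function.
returns = np.stack([reward(trajectories, g).sum(axis=1) for g in goals], axis=1)  # (n_traj, n_tasks)

# Foresight-style weighting: softmax over per-task returns for each trajectory.
logits = returns - returns.max(axis=1, keepdims=True)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

relabeled_task = np.array([rng.choice(n_tasks, p=p) for p in probs])
print(relabeled_task)    # task index assigned to each stored trajectory
```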
LoRA: Low-Rank Adaptation of Large Language Models
https://openreview.net/forum?id=nZeVKeeFYf9
https://openreview.net/forum?id=nZeVKeeFYf9
Edward J Hu,yelong shen,Phillip Wallis,Zeyuan Allen-Zhu,Yuanzhi Li,Shean Wang,Lu Wang,Weizhu Chen
ICLR 2022,Poster
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by a factor of 10,000 and the GPU memory requirement by a factor of 3. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
https://openreview.net/pdf/5a54aed5265cb0399c62848f44e84c4a617a354b.pdf
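The core mechanism is compact enough to sketch: freeze the pre-trained weight and add a trainable low-rank update B·A, scaled by alpha/r, to the layer's output. The module below is a minimal illustration under common conventions, not the released microsoft/LoRA package; hyperparameter names and initializations are assumptions.

```python
# Minimal LoRA-style linear layer: frozen base weight plus trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=4, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)               # freeze pre-trained weights
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)   # only the low-rank factors A and B are trainable
```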
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective
https://openreview.net/forum?id=qRDQi3ocgR3
https://openreview.net/forum?id=qRDQi3ocgR3
Luca Scimeca,Seong Joon Oh,Sanghyuk Chun,Michael Poli,Sangdoo Yun
ICLR 2022,Poster
Deep neural networks (DNNs) often rely on easy-to-learn discriminatory features, or cues, that are not necessarily essential to the problem at hand. For example, ducks in an image may be recognized based on their typical background scenery, such as lakes or streams. This phenomenon, also known as shortcut learning, is emerging as a key limitation of the current generation of machine learning models. In this work, we introduce a set of experiments to deepen our understanding of shortcut learning and its implications. We design a training setup with several shortcut cues, named WCST-ML, where each cue is equally conducive to the visual recognition problem at hand. Even under equal opportunities, we observe that (1) certain cues are preferred to others, (2) solutions biased to the easy-to-learn cues tend to converge to relatively flat minima on the loss surface, and (3) the solutions focusing on those preferred cues are far more abundant in the parameter space. We explain the abundance of certain cues via their Kolmogorov (descriptional) complexity: solutions corresponding to Kolmogorov-simple cues are abundant in the parameter space and are thus preferred by DNNs. Our studies are based on the synthetic dataset DSprites and the face dataset UTKFace. In our WCST-ML, we observe that the inborn bias of models leans toward simple cues, such as color and ethnicity. Our findings emphasize the importance of active human intervention to remove the inborn model biases that may cause negative societal impacts.
https://openreview.net/pdf/3068411e384b48770e3b3fe52bc409e41a40f3f8.pdf
Efficient Computation of Deep Nonlinear Infinite-Width Neural Networks that Learn Features
https://openreview.net/forum?id=tUMr0Iox8XW
https://openreview.net/forum?id=tUMr0Iox8XW
Greg Yang,Michael Santacroce,Edward J Hu
ICLR 2022,Poster
While a popular limit of infinite-width neural networks, the Neural Tangent Kernel (NTK) often exhibits performance gaps from finite-width neural networks on standard datasets, due to lack of feature learning. Although the feature learning *maximal update limit*, or *μ-limit* (Yang and Hu, 2020) of wide networks has closed the gap for 1-hidden-layer linear models, no one has been able to demonstrate this for deep nonlinear multi-layer perceptrons (MLPs) because of the μ-limit's computational difficulty in this setting. Here, we solve this problem by proposing a novel feature learning limit, the *π-limit*, that bypasses the computational issues. The π-limit, in short, is the limit of a form of projected gradient descent, and the π-limit of an MLP is roughly another MLP where gradients are appended to weights during training. We prove its almost sure convergence with width using the Tensor Programs technique. We evaluate it on CIFAR10 and Omniglot against NTK as well as finite networks, finding that the π-limit outperforms finite-width models trained normally (without projection) in both settings, closing the performance gap between finite- and infinite-width neural networks previously left by NTK. Code for this work is available at github.com/santacml/pilim.
https://openreview.net/pdf/da2902df767a2a3e4eec1ef65497a37e7f8dbd2b.pdf
TRAIL: Near-Optimal Imitation Learning with Suboptimal Data
https://openreview.net/forum?id=6q_2b6u0BnJ
https://openreview.net/forum?id=6q_2b6u0BnJ
Mengjiao Yang,Sergey Levine,Ofir Nachum
ICLR 2022,Poster
In imitation learning, one aims to learn task-solving policies using access to near-optimal expert trajectories collected from the task environment. However, high-quality trajectories -- e.g., from human experts -- can be expensive to obtain in practical settings. On the contrary, it is often much easier to obtain large amounts of suboptimal trajectories which can nevertheless provide insight into the structure of the environment, showing what \emph{could} be done in the environment even if not what \emph{should} be done. Is it possible to formalize these conceptual benefits and devise algorithms to use offline datasets to yield \emph{provable} improvements to the sample-efficiency of imitation learning? In this work, we answer this question affirmatively and present training objectives which use an offline dataset to learn an approximate \emph{factored} dynamics model whose structure enables the extraction of a \emph{latent action space}. Our theoretical analysis shows that the learned latent action space can boost the sample-efficiency of downstream imitation learning, effectively reducing the need for large near-optimal expert datasets through the use of auxiliary non-expert data. We evaluate the practicality of our objective through experiments on a set of navigation and locomotion tasks. Our results verify the benefits suggested by our theory and show that our algorithm is able to recover near-optimal policies with fewer expert trajectories.
https://openreview.net/pdf/958e9e602eeb7bacbb7b7ef495ca1126f80bed1f.pdf
On the benefits of maximum likelihood estimation for Regression and Forecasting
https://openreview.net/forum?id=zrW-LVXj2k1
https://openreview.net/forum?id=zrW-LVXj2k1
Pranjal Awasthi,Abhimanyu Das,Rajat Sen,Ananda Theertha Suresh
ICLR 2022,Poster
We advocate for a practical Maximum Likelihood Estimation (MLE) approach towards designing loss functions for regression and forecasting, as an alternative to the typical approach of direct empirical risk minimization on a specific target metric. The MLE approach is better suited to capture inductive biases such as prior domain knowledge in datasets, and can output post-hoc estimators at inference time that can optimize different types of target metrics. We present theoretical results to demonstrate that our approach is competitive with any estimator for the target metric under some general conditions. In two example practical settings, Poisson and Pareto regression, we show that our competitive results can be used to prove that the MLE approach has better excess risk bounds than directly minimizing the target metric. We also demonstrate empirically that our method instantiated with a well-designed general purpose mixture likelihood family can obtain superior performance for a variety of tasks across time-series forecasting and regression datasets with different data distributions.
https://openreview.net/pdf/676edee8f0bcb63695ea6a66f75477179eb324d7.pdf
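A worked example of the MLE-then-post-hoc idea for Poisson data: fit the rate once by maximum likelihood, then read off different point predictions for different target metrics without refitting (the mean for squared error, the distribution's median for absolute error, a quantile for pinball loss). This is a generic illustration of the principle, not the paper's mixture likelihood family.

```python
# Fit a Poisson rate by MLE, then derive metric-specific predictions post hoc.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
y = rng.poisson(lam=3.7, size=500)      # observed counts

lam_hat = y.mean()                      # Poisson MLE of the rate parameter
pred_mse = lam_hat                      # optimal point prediction for squared error
pred_mae = poisson.median(lam_hat)      # optimal point prediction for absolute error
pred_q90 = poisson.ppf(0.9, lam_hat)    # optimal for the 0.9 quantile (pinball) loss

print(lam_hat, pred_mse, pred_mae, pred_q90)
```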
Effect of scale on catastrophic forgetting in neural networks
https://openreview.net/forum?id=GhVS8_yPeEa
https://openreview.net/forum?id=GhVS8_yPeEa
Vinay Venkatesh Ramasesh,Aitor Lewkowycz,Ethan Dyer
ICLR 2022,Poster
Catastrophic forgetting presents a challenge in developing deep learning models capable of continual learning, i.e. learning tasks sequentially. Recently, both computer vision and natural-language processing have witnessed great progress through the use of large-scale pretrained models. In this work, we present an empirical study of catastrophic forgetting in this pretraining paradigm. Our experiments indicate that large, pretrained ResNets and Transformers are significantly more resistant to forgetting than randomly-initialized, trained-from-scratch models; this robustness systematically improves with scale of both model and pretraining dataset size. We take initial steps towards characterizing what aspect of model representations allows them to perform continual learning so well, finding that in the pretrained models, distinct class representations grow more orthogonal with scale. Our results suggest that, when possible, scale and a diverse pretraining dataset can be useful ingredients in mitigating catastrophic forgetting.
https://openreview.net/pdf/54d452543db390a26979d51399ae9c6ed01f4de1.pdf
Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks
https://openreview.net/forum?id=FndDxSz3LxQ
https://openreview.net/forum?id=FndDxSz3LxQ
Morteza Ramezani,Weilin Cong,Mehrdad Mahdavi,Mahmut Kandemir,Anand Sivasubramaniam
ICLR 2022,Poster
Despite the recent success of Graph Neural Networks (GNNs), training GNNs on large graphs remains challenging. The limited resource capacities of existing servers, the dependency between nodes in a graph, and the privacy concerns due to centralized storage and model learning have spurred the need to design an effective distributed algorithm for GNN training. However, existing distributed GNN training methods impose either excessive communication costs or large memory overheads that hinder their scalability. To overcome these issues, we propose a communication-efficient distributed GNN training technique named $\text{\textit{Learn Locally, Correct Globally}}$ (LLCG). To reduce the communication and memory overhead, each local machine in LLCG first trains a GNN on its local data by ignoring the dependency between nodes among different machines, then sends the locally trained model to the server for periodic model averaging. However, ignoring node dependency could result in significant performance degradation. To solve the performance degradation, we propose to apply $\text{\textit{Global Server Corrections}}$ on the server to refine the locally learned models. We rigorously analyze the convergence of distributed methods with periodic model averaging for training GNNs and show that naively applying periodic model averaging while ignoring the dependency between nodes suffers from an irreducible residual error. However, this residual error can be eliminated by utilizing the proposed global corrections to attain a fast convergence rate. Extensive experiments on real-world datasets show that LLCG can significantly improve efficiency without hurting performance.
https://openreview.net/pdf/1cb92429664f8a20c602659b4a0ca7386709ab46.pdf
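A loose schematic of the learn-locally, correct-globally pattern, with plain linear models standing in for GNNs and random shards standing in for graph partitions: machines train locally, the server periodically averages the local weights, and then applies a few correction steps on a small globally sampled batch. This ignores graph structure entirely and is only meant to convey the overall training loop; all data and helper names are illustrative.

```python
# Local training + periodic model averaging + a server-side correction pass.
import copy
import torch
import torch.nn as nn

d, n_machines = 16, 4
shards = [(torch.randn(100, d), torch.randint(0, 2, (100,))) for _ in range(n_machines)]
global_sample = (torch.randn(64, d), torch.randint(0, 2, (64,)))   # server-side correction data

server = nn.Linear(d, 2)

def local_epochs(model, X, y, epochs=5):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    return model

for rnd in range(10):
    locals_ = [local_epochs(copy.deepcopy(server), X, y) for X, y in shards]
    with torch.no_grad():                                   # periodic model averaging
        for name, p in server.named_parameters():
            p.copy_(torch.stack([dict(m.named_parameters())[name] for m in locals_]).mean(0))
    local_epochs(server, *global_sample, epochs=2)          # global correction on the server
```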
Conditional Image Generation by Conditioning Variational Auto-Encoders
https://openreview.net/forum?id=7MV6uLzOChW
https://openreview.net/forum?id=7MV6uLzOChW
William Harvey,Saeid Naderiparizi,Frank Wood
ICLR 2022,Poster
We present a conditional variational auto-encoder (VAE) which, to avoid the substantial cost of training from scratch, uses an architecture and training objective capable of leveraging a foundation model in the form of a pretrained unconditional VAE. To train the conditional VAE, we only need to train an artifact to perform amortized inference over the unconditional VAE's latent variables given a conditioning input. We demonstrate our approach on tasks including image inpainting, for which it outperforms state-of-the-art GAN-based approaches at faithfully representing the inherent uncertainty. We conclude by describing a possible application of our inpainting model, in which it is used to perform Bayesian experimental design for the purpose of guiding a sensor.
https://openreview.net/pdf/6a31861c3fd2a8f68a3c37fe8a7e422b0ac57cc4.pdf
Learning 3D Representations of Molecular Chirality with Invariance to Bond Rotations
https://openreview.net/forum?id=hm2tNDdgaFK
https://openreview.net/forum?id=hm2tNDdgaFK
Keir Adams,Lagnajit Pattanaik,Connor W. Coley
ICLR 2022,Poster
Molecular chirality, a form of stereochemistry most often describing relative spatial arrangements of bonded neighbors around tetrahedral carbon centers, influences the set of 3D conformers accessible to the molecule without changing its 2D graph connectivity. Chirality can strongly alter (bio)chemical interactions, particularly protein-drug binding. Most 2D graph neural networks (GNNs) designed for molecular property prediction at best use atomic labels to naïvely treat chirality, while E(3)-invariant 3D GNNs are invariant to chirality altogether. To enable representation learning on molecules with defined stereochemistry, we design an SE(3)-invariant model that processes torsion angles of a 3D molecular conformer. We explicitly model conformational flexibility by integrating a novel type of invariance to rotations about internal molecular bonds into the architecture, mitigating the need for multi-conformer data augmentation. We test our model on four benchmarks: contrastive learning to distinguish conformers of different stereoisomers in a learned latent space, classification of chiral centers as R/S, prediction of how enantiomers rotate circularly polarized light, and ranking enantiomers by their docking scores in an enantiosensitive protein pocket. We compare our model, Chiral InterRoto-Invariant Neural Network (ChIRo), with 2D and 3D GNNs to demonstrate that our model achieves state of the art performance when learning chiral-sensitive functions from molecular structures.
https://openreview.net/pdf/0bd5d5f64a3a1afccdb03997dd1aad21fe9c2aaa.pdf
Neural Methods for Logical Reasoning over Knowledge Graphs
https://openreview.net/forum?id=tgcAoUVHRIB
https://openreview.net/forum?id=tgcAoUVHRIB
Alfonso Amayuelas,Shuai Zhang,Xi Susie Rao,Ce Zhang
ICLR 2022,Poster
Reasoning is a fundamental problem for computers and deeply studied in Artificial Intelligence. In this paper, we specifically focus on answering multi-hop logical queries on Knowledge Graphs (KGs). This is a complicated task because, in real world scenarios, the graphs tend to be large and incomplete. Most previous works have been unable to create models that accept full First-Order Logical (FOL) queries, which include negation, and have only been able to process a limited set of query structures. Additionally, most methods present logic operators that can only perform the logical operation they are made for. We introduce a set of models that use Neural Networks to create one-point vector embeddings to answer the queries. The versatility of neural networks allows the framework to handle FOL queries with Conjunction, Disjunction and Negation operators. We demonstrate experimentally the performance of our models through extensive experimentation on well-known benchmarking datasets. Besides having more versatile operators, the models achieve a 10% relative increase over the best-performing state of the art and more than 30% over the original method based on single-point vector embeddings.
https://openreview.net/pdf/801a4e5791b6166101b4d236bbd874bd1a6916ff.pdf
Consistent Counterfactuals for Deep Models
https://openreview.net/forum?id=St6eyiTEHnG
https://openreview.net/forum?id=St6eyiTEHnG
Emily Black,Zifan Wang,Matt Fredrikson
ICLR 2022,Poster
Counterfactual examples are one of the most commonly-cited methods for explaining the predictions of machine learning models in key areas such as finance and medical diagnosis. Counterfactuals are often discussed under the assumption that the model on which they will be used is static, but in deployment models may be periodically retrained or fine-tuned. This paper studies the consistency of model prediction on counterfactual examples in deep networks under small changes to initial training conditions, such as weight initialization and leave-one-out variations in data, as often occurs during model deployment. We demonstrate experimentally that counterfactual examples for deep models are often inconsistent across such small changes, and that increasing the cost of the counterfactual, a stability-enhancing mitigation suggested by prior work in the context of simpler models, is not a reliable heuristic in deep networks. Rather, our analysis shows that a model's Lipschitz continuity around the counterfactual, along with confidence of its prediction, is key to its consistency across related models. To this end, we propose Stable Neighbor Search as a way to generate more consistent counterfactual explanations, and illustrate the effectiveness of this approach on several benchmark datasets.
https://openreview.net/pdf/9aa3de89ad2eb688c7c39e39d0f8431eb052f3e5.pdf
Unified Visual Transformer Compression
https://openreview.net/forum?id=9jsZiUgkCZP
https://openreview.net/forum?id=9jsZiUgkCZP
Shixing Yu,Tianlong Chen,Jiayi Shen,Huan Yuan,Jianchao Tan,Sen Yang,Ji Liu,Zhangyang Wang
ICLR 2022,Poster
Vision transformers (ViTs) have gained popularity recently. Even without customized image operators such as convolutions, ViTs can yield competitive performance when properly trained on massive data. However, the computational overhead of ViTs remains prohibitive, due to the stacking of multi-head self-attention modules and other components. Compared to the vast literature and prevailing success in compressing convolutional neural networks, the study of Vision Transformer compression has only just emerged, and existing works focus on one or two aspects of compression. This paper proposes a unified ViT compression framework that seamlessly assembles three effective techniques: pruning, layer skipping, and knowledge distillation. We formulate a budget-constrained, end-to-end optimization framework, targeting jointly learning model weights, layer-wise pruning ratios/masks, and skip configurations, under a distillation loss. The optimization problem is then solved using the primal-dual algorithm. Experiments are conducted with several ViT variants, e.g. DeiT and T2T-ViT backbones on the ImageNet dataset, and our approach consistently outperforms recent competitors. For example, DeiT-Tiny can be trimmed down to 50\% of the original FLOPs almost without losing accuracy. Codes are available online:~\url{https://github.com/VITA-Group/UVC}.
https://openreview.net/pdf/8570214b832776a06b451827ee429b3eba50359a.pdf
Transformer-based Transform Coding
https://openreview.net/forum?id=IDwN6xjHnK8
https://openreview.net/forum?id=IDwN6xjHnK8
Yinhao Zhu,Yang Yang,Taco Cohen
ICLR 2022,Poster
Neural data compression based on nonlinear transform coding has made great progress over the last few years, mainly due to improvements in prior models, quantization methods and nonlinear transforms. A general trend in many recent works pushing the limit of rate-distortion performance is to use ever more expensive prior models that can lead to prohibitively slow decoding. Instead, we focus on more expressive transforms that result in a better rate-distortion-computation trade-off. Specifically, we show that nonlinear transforms built on Swin-transformers can achieve better compression efficiency than transforms built on convolutional neural networks (ConvNets), while requiring fewer parameters and shorter decoding time. Paired with a compute-efficient Channel-wise Auto-Regressive Model prior, our SwinT-ChARM model outperforms VTM-12.1 by $3.68\%$ in BD-rate on Kodak with comparable decoding speed. In the P-frame video compression setting, we are able to outperform the popular ConvNet-based scale-space-flow model by $12.35\%$ in BD-rate on UVG. We provide model scaling studies to verify the computational efficiency of the proposed solutions and conduct several analyses to reveal the source of coding gain of transformers over ConvNets, including better spatial decorrelation, flexible effective receptive field, and more localized response of latent pixels during progressive decoding.
https://openreview.net/pdf/b5e3776fbe0ee70da5740c3cf525ed60629bca04.pdf
Object Pursuit: Building a Space of Objects via Discriminative Weight Generation
https://openreview.net/forum?id=lbauk6wK2-y
https://openreview.net/forum?id=lbauk6wK2-y
Chuanyu Pan,Yanchao Yang,Kaichun Mo,Yueqi Duan,Leonidas Guibas
ICLR 2022,Poster
We propose a framework to continuously learn object-centric representations for visual learning and understanding. Existing object-centric representations either rely on supervisions that individualize objects in the scene, or perform unsupervised disentanglement that can hardly deal with complex scenes in the real world. To mitigate the annotation burden and relax the constraints on the statistical complexity of the data, our method leverages interactions to effectively sample diverse variations of an object and the corresponding training signals while learning the object-centric representations. Throughout learning, objects are streamed one by one in random order with unknown identities, and are associated with latent codes that can synthesize discriminative weights for each object through a convolutional hypernetwork. Moreover, re-identification of learned objects and forgetting prevention are employed to make the learning process efficient and robust. We perform an extensive study of the key features of the proposed framework and analyze the characteristics of the learned representations. Furthermore, we demonstrate the capability of the proposed framework in learning representations that can improve label efficiency in downstream tasks. Our code and trained models are made publicly available at: https://github.com/pptrick/Object-Pursuit.
https://openreview.net/pdf/32d3902cb1f5cf70159185cfd64ffb5c9089be73.pdf
PAC Prediction Sets Under Covariate Shift
https://openreview.net/forum?id=DhP9L8vIyLc
https://openreview.net/forum?id=DhP9L8vIyLc
Sangdon Park,Edgar Dobriban,Insup Lee,Osbert Bastani
ICLR 2022,Poster
An important challenge facing modern machine learning is how to rigorously quantify the uncertainty of model predictions. Conveying uncertainty is especially important when there are changes to the underlying data distribution that might invalidate the predictive model. Yet, most existing uncertainty quantification algorithms break down in the presence of such shifts. We propose a novel approach that addresses this challenge by constructing \emph{probably approximately correct (PAC)} prediction sets in the presence of covariate shift. Our approach focuses on the setting where there is a covariate shift from the source distribution (where we have labeled training examples) to the target distribution (for which we want to quantify uncertainty). Our algorithm assumes given importance weights that encode how the probabilities of the training examples change under the covariate shift. In practice, importance weights typically need to be estimated; thus, we extend our algorithm to the setting where we are given confidence intervals for the importance weights. We demonstrate the effectiveness of our approach on covariate shifts based on DomainNet and ImageNet. Our algorithm satisfies the PAC constraint, and gives prediction sets with the smallest average normalized size among approaches that always satisfy the PAC constraint.
https://openreview.net/pdf/c0c06e027d7b1f6b50bfe3c8ed8be38595d25d04.pdf
Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness
https://openreview.net/forum?id=vJZ7dPIjip3
https://openreview.net/forum?id=vJZ7dPIjip3
Simon Geisler,Johanna Sommer,Jan Schuchardt,Aleksandar Bojchevski,Stephan Günnemann
ICLR 2022,Poster
End-to-end (geometric) deep learning has seen first successes in approximating the solution of combinatorial optimization problems. However, generating data in the realm of NP-hard/-complete tasks brings practical and theoretical challenges, resulting in evaluation protocols that are too optimistic. Specifically, most datasets only capture a simpler subproblem and likely suffer from spurious features. We investigate these effects by studying adversarial robustness, a local generalization property, to reveal hard, model-specific instances and spurious features. For this purpose, we derive perturbation models for SAT and TSP. Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound, allowing us to determine the true label of perturbed samples without a solver. Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning. Although such robust solvers exist, we show empirically that the assessed neural solvers do not generalize well w.r.t. small perturbations of the problem instance.
https://openreview.net/pdf/cde50097d313f95849a0784c71b151985455962b.pdf
One After Another: Learning Incremental Skills for a Changing World
https://openreview.net/forum?id=dg79moSRqIo
https://openreview.net/forum?id=dg79moSRqIo
Nur Muhammad Mahi Shafiullah,Lerrel Pinto
ICLR 2022,Poster
Reward-free, unsupervised discovery of skills is an attractive alternative to the bottleneck of hand-designing rewards in environments where task supervision is scarce or expensive. However, current skill pre-training methods, like many RL techniques, make a fundamental assumption -- stationary environments during training. Traditional methods learn all their skills simultaneously, which makes it difficult for them to both quickly adapt to changes in the environment, and to not forget earlier skills after such adaptation. On the other hand, in an evolving or expanding environment, skill learning must be able to adapt fast to new environment situations while not forgetting previously learned skills. These two conditions make it difficult for classic skill discovery to do well in an evolving environment. In this work, we propose a new framework for skill discovery, where skills are learned one after another in an incremental fashion. This framework allows newly learned skills to adapt to new environment or agent dynamics, while the fixed old skills ensure the agent doesn't forget a learned skill. We demonstrate experimentally that in both evolving and static environments, incremental skills significantly outperform current state-of-the-art skill discovery methods on both skill quality and the ability to solve downstream tasks. Videos for learned skills and code are made public on https://notmahi.github.io/disk
https://openreview.net/pdf/1ee5373bdf14fb52571018799bfe53c696d99868.pdf
Graph-Guided Network for Irregularly Sampled Multivariate Time Series
https://openreview.net/forum?id=Kwm8I7dU-l5
https://openreview.net/forum?id=Kwm8I7dU-l5
Xiang Zhang,Marko Zeman,Theodoros Tsiligkaridis,Marinka Zitnik
ICLR 2022,Poster
In many domains, including healthcare, biology, and climate science, time series are irregularly sampled with varying time intervals between successive readouts and different subsets of variables (sensors) observed at different time points. Here, we introduce RAINDROP, a graph neural network that embeds irregularly sampled and multivariate time series while also learning the dynamics of sensors purely from observational data. RAINDROP represents every sample as a separate sensor graph and models time-varying dependencies between sensors with a novel message passing operator. It estimates the latent sensor graph structure and leverages the structure together with nearby observations to predict misaligned readouts. This model can be interpreted as a graph neural network that sends messages over graphs that are optimized for capturing time-varying dependencies among sensors. We use RAINDROP to classify time series and interpret temporal dynamics on three healthcare and human activity datasets. RAINDROP outperforms state-of-the-art methods by up to 11.4% (absolute F1-score points), including techniques that deal with irregular sampling using fixed discretization and set functions. RAINDROP shows superiority in diverse setups, including challenging leave-sensor-out settings.
https://openreview.net/pdf/6486d4fdf5e5db05e22c8ecd34aaab8401569f24.pdf
FILM: Following Instructions in Language with Modular Methods
https://openreview.net/forum?id=qI4542Y2s1D
https://openreview.net/forum?id=qI4542Y2s1D
So Yeon Min,Devendra Singh Chaplot,Pradeep Kumar Ravikumar,Yonatan Bisk,Ruslan Salakhutdinov
ICLR 2022,Poster
Recent methods for embodied instruction following are typically trained end-to-end using imitation learning. This often requires the use of expert trajectories and low-level language instructions. Such approaches assume that neural states will integrate multimodal semantics to perform state tracking, building spatial memory, exploration, and long-term planning. In contrast, we propose a modular method with structured representations that (1) builds a semantic map of the scene and (2) performs exploration with a semantic search policy, to achieve the natural language goal. Our modular method achieves SOTA performance (24.46 %) with a substantial (8.17 % absolute) gap from previous work while using less data by eschewing both expert trajectories and low-level instructions. Leveraging low-level language, however, can further increase our performance (26.49 %). Our findings suggest that an explicit spatial memory and a semantic search policy can provide a stronger and more general representation for state-tracking and guidance, even in the absence of expert trajectories or low-level instructions.
https://openreview.net/pdf/097027dcab1f74c4d68412bd97c6299ec48d6c67.pdf
The Evolution of Uncertainty of Learning in Games
https://openreview.net/forum?id=Fza94Y8VS4a
https://openreview.net/forum?id=Fza94Y8VS4a
Yun Kuen Cheung,Georgios Piliouras,Yixin Tao
ICLR 2022,Poster
Learning in games has become an object of intense interest for ML due to its connections to numerous AI architectures. We study standard online learning in games but from a non-standard perspective. Instead of studying the behavior of a single initial condition and whether it converges to equilibrium or not, we study the behavior of a probability distribution/measure over a set of initial conditions. This initial uncertainty is well-motivated both from a standard game-theoretic perspective (e.g. a modeler's uncertainty about the agents' initial beliefs) as well as from a ML one (e.g. noisy measurements, system initialization from a dataset distribution). Despite this, little is formally known about whether and under what conditions uncertainty is amplified or reduced in these systems. We use the popular measure of differential entropy to quantify the evolution of uncertainty. We find that such analysis shares an intimate relationship with volume analysis, a technique which was recently used to demonstrate the occurrence of Lyapunov chaos when using Multiplicative Weights Update (MWU) or Follow-the-Regularized-Leader (FTRL) algorithms in zero-sum games. This allows us to show that the differential entropy of these learning-in-game systems increases linearly with time, formalizing their increased unpredictability over time. We showcase the power of the framework by applying it in the study of multiple related systems, including different standard online optimization algorithms in numerous games and dynamics of evolutionary game theory.
https://openreview.net/pdf/b60f84c940dd1715e1df1ce0effeb9adde6d73b0.pdf
Explainable GNN-Based Models over Knowledge Graphs
https://openreview.net/forum?id=CrCvGNHAIrz
https://openreview.net/forum?id=CrCvGNHAIrz
David Jaime Tena Cucala,Bernardo Cuenca Grau,Egor V. Kostylev,Boris Motik
ICLR 2022,Poster
Graph Neural Networks (GNNs) are often used to learn transformations of graph data. While effective in practice, such approaches make predictions via numeric manipulations so their output cannot be easily explained symbolically. We propose a new family of GNN-based transformations of graph data that can be trained effectively, but where all predictions can be explained symbolically as logical inferences in Datalog—a well-known rule-based formalism. In particular, we show how to encode an input knowledge graph into a graph with numeric feature vectors, process this graph using a GNN, and decode the result into an output knowledge graph. We use a new class of monotonic GNNs (MGNNs) to ensure that this process is equivalent to a round of application of a set of Datalog rules. We also show that, given an arbitrary MGNN, we can automatically extract rules that completely characterise the transformation. We evaluate our approach by applying it to classification tasks in knowledge graph completion.
https://openreview.net/pdf/1865cb0305fb96472280c0ba5bf054163099c62a.pdf
Mention Memory: incorporating textual knowledge into Transformers through entity mention attention
https://openreview.net/forum?id=OY1A8ejQgEX
https://openreview.net/forum?id=OY1A8ejQgEX
Michiel de Jong,Yury Zemlyanskiy,Nicholas FitzGerald,Fei Sha,William W. Cohen
ICLR 2022,Poster
Natural language understanding tasks such as open-domain question answering often require retrieving and assimilating factual information from multiple sources. We propose to address this problem by integrating a semi-parametric representation of a large text corpus into a Transformer model as a source of factual knowledge. Specifically, our method represents knowledge with ``mention memory'', a table of dense vector representations of every entity mention in a corpus. The proposed model - TOME - is a Transformer that accesses the information through internal memory layers in which each entity mention in the input passage attends to the mention memory. This approach enables synthesis of and reasoning over many disparate sources of information within a single Transformer model. In experiments using a memory of 150 million Wikipedia mentions, TOME achieves strong performance on several open-domain knowledge-intensive tasks, including the claim verification benchmarks HoVer and FEVER and several entity-based QA benchmarks. We also show that the model learns to attend to informative mentions without any direct supervision. Finally we demonstrate that the model can generalize to new unseen entities by updating the memory without retraining.
https://openreview.net/pdf/e112757f72e357db126aa955b1195bd923e59f19.pdf
Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization
https://openreview.net/forum?id=dDo8druYppX
https://openreview.net/forum?id=dDo8druYppX
Biao Zhang,Peter Wonka
ICLR 2022,Poster
We propose a novel 3D shape representation for 3D shape reconstruction from a single image. Rather than predicting a shape directly, we train a network to generate a training set which is then fed into another learning algorithm to define the shape. This nested optimization problem can be modeled by bi-level optimization. Notably, bi-level optimization algorithms are also used in meta-learning approaches for few-shot learning. Our framework thus establishes a link between 3D shape analysis and few-shot learning. We combine training data generating networks with bi-level optimization algorithms to obtain a complete framework in which all components can be jointly trained. We improve upon recent work on standard benchmarks for 3D shape reconstruction.
https://openreview.net/pdf/4b2efd8acee00b364d8c45ac54321d5a59b31f8c.pdf
Monotonic Differentiable Sorting Networks
https://openreview.net/forum?id=IcUWShptD7d
https://openreview.net/forum?id=IcUWShptD7d
Felix Petersen,Christian Borgelt,Hilde Kuehne,Oliver Deussen
ICLR 2022,Poster
Differentiable sorting algorithms allow training with sorting and ranking supervision, where only the ordering or ranking of samples is known. Various methods have been proposed to address this challenge, ranging from optimal transport-based differentiable Sinkhorn sorting algorithms to making classic sorting networks differentiable. One problem of current differentiable sorting methods is that they are non-monotonic. To address this issue, we propose a novel relaxation of conditional swap operations that guarantees monotonicity in differentiable sorting networks. We introduce a family of sigmoid functions and prove that they produce differentiable sorting networks that are monotonic. Monotonicity ensures that the gradients always have the correct sign, which is an advantage in gradient-based optimization. We demonstrate that monotonic differentiable sorting networks improve upon previous differentiable sorting methods.
https://openreview.net/pdf/8a2a45749e55ea6d302b7f66c647d340fb2bce2a.pdf
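As a companion to the abstract above, here is a small, hypothetical PyTorch sketch of a differentiable sorting network built from softly relaxed conditional swaps. It uses a plain logistic sigmoid purely for illustration; the paper's contribution concerns the specific family of sigmoid functions whose relaxation is provably monotonic, which this sketch does not reproduce.

```python
import torch

def soft_cswap(a, b, steepness=10.0):
    """Softly order the pair (a, b): returns (soft min, soft max).
    p is the relaxed probability that a <= b."""
    p = torch.sigmoid(steepness * (b - a))
    lo = p * a + (1 - p) * b
    hi = p * b + (1 - p) * a
    return lo, hi

def soft_sort_odd_even(x, steepness=10.0):
    """Differentiable odd-even transposition sorting network over the last dimension."""
    cols = list(x.unbind(dim=-1))          # work on a list of column tensors
    n = len(cols)
    for layer in range(n):
        for i in range(layer % 2, n - 1, 2):
            cols[i], cols[i + 1] = soft_cswap(cols[i], cols[i + 1], steepness)
    return torch.stack(cols, dim=-1)

# toy usage: gradients flow through the relaxed comparisons
vals = torch.tensor([3.0, 1.0, 2.0], requires_grad=True)
out = soft_sort_odd_even(vals)
out.sum().backward()
print(out, vals.grad)
```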
CrowdPlay: Crowdsourcing Human Demonstrations for Offline Learning
https://openreview.net/forum?id=qyTBxTztIpQ
https://openreview.net/forum?id=qyTBxTztIpQ
Matthias Gerstgrasser,Rakshit Trivedi,David C. Parkes
ICLR 2022,Poster
Crowdsourcing has been instrumental for driving AI advances that rely on large-scale data. At the same time, reinforcement learning has seen rapid progress through benchmark environments that strike a balance between tractability and real-world complexity, such as ALE and OpenAI Gym. In this paper, we aim to fill a gap at the intersection of these two: the use of crowdsourcing to generate large-scale human demonstration data in support of advancing research into imitation learning and offline learning. To this end, we present CrowdPlay, a complete crowdsourcing pipeline for any standard RL environment including OpenAI Gym (made available under an open-source license); a large-scale publicly available crowdsourced dataset of human gameplay demonstrations in Atari 2600 games, including multimodal behavior and human-human and human-AI multiagent data; offline learning benchmarks with extensive human data evaluation; and a detailed study of incentives, including real-time feedback to drive high-quality data. We hope that this will drive improvements in the design of algorithms that account for the complexity of human behavioral data and thereby enable a step forward toward effective learning for real-world settings. Our code and dataset are available at https://mgerstgrasser.github.io/crowdplay/.
https://openreview.net/pdf/b52b98ab7e0985b4d419899df029ac52613153b4.pdf
Model Agnostic Interpretability for Multiple Instance Learning
https://openreview.net/forum?id=KSSfF5lMIAg
https://openreview.net/forum?id=KSSfF5lMIAg
Joseph Early,Christine Evers,Sarvapali Ramchurn
ICLR 2022,Poster
In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often only determined by a handful of key instances within a bag, making it difficult to interpret what information a classifier is using to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then go on to develop several model-agnostic approaches that meet these requirements. Our methods are compared against existing inherently interpretable MIL models on several datasets, and achieve an increase in interpretability accuracy of up to 30%. We also examine the ability of the methods to identify interactions between instances and scale to larger datasets, improving their applicability to real-world problems.
https://openreview.net/pdf/661f27fbc9eceb4bfe945b34904428e58f815b5c.pdf
FastSHAP: Real-Time Shapley Value Estimation
https://openreview.net/forum?id=Zq2G_VTV53T
https://openreview.net/forum?id=Zq2G_VTV53T
Neil Jethani,Mukund Sudarshan,Ian Connick Covert,Su-In Lee,Rajesh Ranganath
ICLR 2022,Poster
Although Shapley values are theoretically appealing for explaining black-box models, they are costly to calculate and thus impractical in settings that involve large, high-dimensional models. To remedy this issue, we introduce FastSHAP, a new method for estimating Shapley values in a single forward pass using a learned explainer model. To enable efficient training without requiring ground truth Shapley values, we develop an approach to train FastSHAP via stochastic gradient descent using a weighted least-squares objective function. In our experiments with tabular and image datasets, we compare FastSHAP to existing estimation approaches and find that it generates accurate explanations with an orders-of-magnitude speedup.
https://openreview.net/pdf/5c857a16a6f7a915997089db728adf6e56ad56c2.pdf
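The abstract above describes training an explainer network with a weighted least-squares objective so that Shapley-style attributions come out of a single forward pass. Below is a compact, hypothetical PyTorch sketch of that idea using KernelSHAP-style coalition sampling and a zero-baseline value function; it omits details such as the efficiency (additivity) normalization and the paper's exact sampling scheme, and every name here is a placeholder.

```python
import torch
import torch.nn as nn

def sample_coalitions(batch, d, device):
    """Sample feature subsets S with size probabilities proportional to the
    Shapley kernel weights (d - 1) / (k * (d - k)), for k = 1..d-1."""
    ks = torch.arange(1, d)
    size_probs = (d - 1) / (ks * (d - ks)).float()
    size_probs = size_probs / size_probs.sum()
    sizes = ks[torch.multinomial(size_probs, batch, replacement=True)]
    masks = torch.zeros(batch, d, device=device)
    for row, k in enumerate(sizes):
        idx = torch.randperm(d, device=device)[:int(k)]
        masks[row, idx] = 1.0
    return masks

def fastshap_style_loss(explainer, model, x, target_class, n_coalitions=8):
    """Weighted least-squares surrogate: the summed attributions over a coalition S
    should match v(x, S) - v(x, empty), where v masks features outside S with zeros."""
    b, d = x.shape
    phi = explainer(x)                                   # (b, d) attributions, one pass
    v_empty = model(torch.zeros_like(x))[:, target_class]
    loss = 0.0
    for _ in range(n_coalitions):
        s = sample_coalitions(b, d, x.device)
        v_s = model(x * s)[:, target_class]
        loss = loss + ((phi * s).sum(-1) - (v_s - v_empty)).pow(2).mean()
    return loss / n_coalitions
```

In use, `explainer` would be any small network mapping an input to a vector of per-feature attributions, trained by minimizing this loss over a dataset with stochastic gradient descent.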
When, Why, and Which Pretrained GANs Are Useful?
https://openreview.net/forum?id=4Ycr8oeCoIh
https://openreview.net/forum?id=4Ycr8oeCoIh
Timofey Grigoryev,Andrey Voynov,Artem Babenko
ICLR 2022,Poster
The literature has proposed several methods to finetune pretrained GANs on new datasets, which typically results in higher performance compared to training from scratch, especially in the limited-data regime. However, despite the apparent empirical benefits of GAN pretraining, its inner mechanisms have not been analyzed in depth, and the understanding of its role is not entirely clear. Moreover, the essential practical details, e.g., selecting a proper pretrained GAN checkpoint, currently do not have rigorous grounding and are typically determined by trial and error. This work aims to dissect the process of GAN finetuning. First, we show that initializing the GAN training process by a pretrained checkpoint primarily affects the model's coverage rather than the fidelity of individual samples. Second, we explicitly describe how pretrained generators and discriminators contribute to the finetuning process and explain the previous evidence on the importance of pretraining both of them. Finally, as an immediate practical benefit of our analysis, we describe a simple recipe to choose an appropriate GAN checkpoint that is the most suitable for finetuning to a particular target task. Importantly, for most of the target tasks, an ImageNet-pretrained GAN, despite having poor visual quality, appears to be an excellent starting point for finetuning, resembling the typical pretraining scenario of discriminative computer vision models.
https://openreview.net/pdf/9895b4007914d217311fed6ca5a856553aa4384f.pdf
A global convergence theory for deep ReLU implicit networks via over-parameterization
https://openreview.net/forum?id=R332S76RjxS
https://openreview.net/forum?id=R332S76RjxS
Tianxiang Gao,Hailiang Liu,Jia Liu,Hridesh Rajan,Hongyang Gao
ICLR 2022,Poster
Implicit deep learning has received increasing attention recently because it generalizes the recursive prediction rule of many commonly used neural network architectures. Its prediction rule is provided implicitly, based on the solution of an equilibrium equation. Although a line of recent empirical studies has demonstrated its superior performance, the theoretical understanding of implicit neural networks is limited. In general, the equilibrium equation may not be well-posed during training. As a result, there is no guarantee that vanilla (stochastic) gradient descent ((S)GD) training of nonlinear implicit neural networks converges. This paper fills this gap by analyzing the gradient flow of Rectified Linear Unit (ReLU) activated implicit neural networks. For an implicit neural network of width $m$ with ReLU activation and $n$ training samples, we show that randomly initialized gradient descent converges to a global minimum at a linear rate for the square loss function if the implicit neural network is over-parameterized. It is worth noting that, unlike existing works on the convergence of (S)GD for finite-layer over-parameterized neural networks, our convergence results hold for implicit neural networks, where the number of layers is infinite.
https://openreview.net/pdf/3a9349bc7a2ee29f29061dc2aa3664f9d3bed2ed.pdf
Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
https://openreview.net/forum?id=6VpeS27viTq
https://openreview.net/forum?id=6VpeS27viTq
Weiqi Peng,Jinghui Chen
ICLR 2022,Poster
Owing much to the revolution in information technology, recent progress in deep learning benefits greatly from the vastly enhanced access to data available in various digital formats. Yet this publicly accessible information also raises a fundamental issue concerning Intellectual Property, that is, how to precisely control legal or illegal exploitation of a dataset for training commercial models. To tackle this issue, this paper introduces and investigates a new concept called "learnability lock" for securing the process of data authorization. In particular, we propose an adversarial invertible transformation, which can be viewed as a mapping from image to image, to encrypt data samples so that they become "unlearnable" by machine learning models with negligible loss of visual features. Meanwhile, authorized clients can use a specific key to unlock the learnability of the protected dataset and train models normally. The proposed learnability lock leverages class-wise perturbation that applies a universal transformation function to data samples of the same label. This ensures that the learnability can be easily restored with a simple inverse transformation while remaining difficult to detect or reverse-engineer. We empirically demonstrate the success and practicability of our method on visual classification tasks.
https://openreview.net/pdf/2cff7d73a77a8a9d3e795578321db17a9d903f78.pdf
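To make the mechanism in the abstract above concrete, here is a toy, hypothetical NumPy sketch of a class-wise invertible linear "lock": each class gets a near-identity transform generated from a secret seed, and an authorized user can undo it exactly. The actual method learns the adversarial transformation so that locked data become unlearnable; this sketch only shows the keyed, invertible, class-wise structure.

```python
import numpy as np

def make_key(num_classes, dim, eps=0.05, seed=1234):
    """One near-identity invertible matrix per class, derived from a secret seed."""
    rng = np.random.default_rng(seed)
    return np.stack([np.eye(dim) + eps * rng.standard_normal((dim, dim))
                     for _ in range(num_classes)])

def lock(x, y, key):
    """Apply the class-specific transform to flattened samples x (n, dim) with labels y."""
    return np.einsum('nij,nj->ni', key[y], x)

def unlock(x_locked, y, key):
    """Authorized clients invert the transform with the same key."""
    inv = np.linalg.inv(key)            # (num_classes, dim, dim), inverted per class
    return np.einsum('nij,nj->ni', inv[y], x_locked)

# round-trip check on random data standing in for flattened images
x = np.random.default_rng(0).standard_normal((8, 32))
y = np.random.default_rng(1).integers(0, 10, size=8)
key = make_key(num_classes=10, dim=32)
assert np.allclose(unlock(lock(x, y, key), y, key), x, atol=1e-8)
```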
Federated Learning from Only Unlabeled Data with Class-conditional-sharing Clients
https://openreview.net/forum?id=WHA8009laxu
https://openreview.net/forum?id=WHA8009laxu
Nan Lu,Zhao Wang,Xiaoxiao Li,Gang Niu,Qi Dou,Masashi Sugiyama
ICLR 2022,Poster
Supervised federated learning (FL) enables multiple clients to share the trained model without sharing their labeled data. However, potential clients might even be reluctant to label their own data, which could limit the applicability of FL in practice. In this paper, we show the possibility of unsupervised FL whose model is still a classifier for predicting class labels, if the class-prior probabilities are shifted while the class-conditional distributions are shared among the unlabeled data owned by the clients. We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model. FedUL is a very general solution to unsupervised FL: it is compatible with many supervised FL methods, and the recovery of the wanted model can be theoretically guaranteed as if the data have been labeled. Experiments on benchmark and real-world datasets demonstrate the effectiveness of FedUL. Code is available at https://github.com/lunanbit/FedUL.
https://openreview.net/pdf/9dbc6100a2a81a4d65580c913a1a7974a6fe470d.pdf
Transformer Embeddings of Irregularly Spaced Events and Their Participants
https://openreview.net/forum?id=Rty5g9imm7H
https://openreview.net/forum?id=Rty5g9imm7H
Hongyuan Mei,Chenghao Yang,Jason Eisner
ICLR 2022,Poster
The neural Hawkes process (Mei & Eisner, 2017) is a generative model of irregularly spaced sequences of discrete events. To handle complex domains with many event types, Mei et al. (2020a) further consider a setting in which each event in the sequence updates a deductive database of facts (via domain-specific pattern-matching rules); future events are then conditioned on the database contents. They show how to convert such a symbolic system into a neuro-symbolic continuous-time generative model, in which each database fact and possible event has a time-varying embedding that is derived from its symbolic provenance. In this paper, we modify both models, replacing their recurrent LSTM-based architectures with flatter attention-based architectures (Vaswani et al., 2017), which are simpler and more parallelizable. This does not appear to hurt our accuracy, which is comparable to or better than that of the original models as well as (where applicable) previous attention-based methods (Zuo et al., 2020; Zhang et al., 2020a).
https://openreview.net/pdf/7d691bc93acd9cb9bd82501ce85784e489d480b0.pdf
Fast Model Editing at Scale
https://openreview.net/forum?id=0DcZxeWfOPt
https://openreview.net/forum?id=0DcZxeWfOPt
Eric Mitchell,Charles Lin,Antoine Bosselut,Chelsea Finn,Christopher D Manning
ICLR 2022,Poster
While large pre-trained models have enabled impressive results on a variety of downstream tasks, the largest existing models still make errors, and even accurate predictions may become outdated over time. Because detecting all such failures at training time is impossible, enabling both developers and end users of such models to correct inaccurate outputs while leaving the model otherwise intact is desirable. However, the distributed, black-box nature of the representations learned by large neural networks makes producing such targeted edits difficult. If presented with only a single problematic input and new desired output, fine-tuning approaches tend to overfit; other editing algorithms are either computationally infeasible or simply ineffective when applied to very large models. To enable easy post-hoc editing at scale, we propose Model Editor Networks using Gradient Decomposition (MEND), a collection of small auxiliary editing networks that use a single desired input-output pair to make fast, local edits to a pre-trained model's behavior. MEND learns to transform the gradient obtained by standard fine-tuning, using a low-rank decomposition of the gradient to make the parameterization of this transformation tractable. MEND can be trained on a single GPU in less than a day even for 10 billion+ parameter models; once trained MEND enables rapid application of new edits to the pre-trained model. Our experiments with T5, GPT, BERT, and BART models show that MEND is the only approach to model editing that effectively edits the behavior of models with more than 10 billion parameters. Code available at https://sites.google.com/view/mend-editing.
https://openreview.net/pdf/f647f6c18613fc91a31c3ef0b98dbe3b782d01f8.pdf
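The abstract above hinges on transforming the standard fine-tuning gradient with small editor networks that exploit its low-rank structure. The sketch below is a much-simplified, hypothetical illustration in PyTorch: for a linear layer, the per-example gradient is the outer product of the output error and the input, so an editor only needs to transform the two factors. It is not the paper's architecture or training procedure.

```python
import torch
import torch.nn as nn

class FactorEditor(nn.Module):
    """Transforms the rank-1 factors (delta, x) of a linear layer's gradient
    into an edit direction, instead of operating on the full weight matrix."""
    def __init__(self, d_out, d_in, hidden=64):
        super().__init__()
        self.f_delta = nn.Sequential(nn.Linear(d_out, hidden), nn.ReLU(), nn.Linear(hidden, d_out))
        self.f_x = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_in))

    def forward(self, delta, x):
        # the edited gradient stays rank-1: f(delta) f(x)^T
        return torch.outer(self.f_delta(delta), self.f_x(x))

@torch.no_grad()
def apply_edit(layer, editor, delta, x, edit_lr=1e-2):
    """One local edit to a single nn.Linear layer from one (input, output-error) pair."""
    layer.weight -= edit_lr * editor(delta, x)

# toy usage: pretend delta is the backpropagated error at the layer output
layer = nn.Linear(16, 8)
editor = FactorEditor(d_out=8, d_in=16)
apply_edit(layer, editor, delta=torch.randn(8), x=torch.randn(16))
```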
Eigencurve: Optimal Learning Rate Schedule for SGD on Quadratic Objectives with Skewed Hessian Spectrums
https://openreview.net/forum?id=rTAclwH46Tb
https://openreview.net/forum?id=rTAclwH46Tb
Rui Pan,Haishan Ye,Tong Zhang
ICLR 2022,Poster
Learning rate schedulers have been widely adopted in training deep neural networks. Despite their practical importance, there is a discrepancy between their practice and their theoretical analysis. For instance, it is not known which schedules of SGD achieve the best convergence, even for simple problems such as optimizing quadratic objectives. In this paper, we propose Eigencurve, the first family of learning rate schedules that can achieve minimax optimal convergence rates (up to a constant) for SGD on quadratic objectives when the eigenvalue distribution of the underlying Hessian matrix is skewed. This condition is quite common in practice. Experimental results show that Eigencurve can significantly outperform step decay in image classification tasks on CIFAR-10, especially when the number of epochs is small. Moreover, the theory inspires two simple learning rate schedulers for practical applications that can approximate Eigencurve. For some problems, the optimal shape of the proposed schedulers resembles that of cosine decay, which sheds light on the success of cosine decay for such situations. For other situations, the proposed schedulers are superior to cosine decay.
https://openreview.net/pdf/49443e75c10da1ddfa39b8fcaf44859bf09c842d.pdf
An Autoregressive Flow Model for 3D Molecular Geometry Generation from Scratch
https://openreview.net/forum?id=C03Ajc-NS5W
https://openreview.net/forum?id=C03Ajc-NS5W
Youzhi Luo,Shuiwang Ji
ICLR 2022,Poster
We consider the problem of generating 3D molecular geometries from scratch. While multiple methods have been developed for generating molecular graphs, generating 3D molecular geometries from scratch is largely under-explored. In this work, we propose G-SphereNet, a novel autoregressive flow model for generating 3D molecular geometries. G-SphereNet employs a flexible sequential generation scheme by placing atoms in 3D space step-by-step. Instead of generating 3D coordinates directly, we propose to determine 3D positions of atoms by generating distances, angles and torsion angles, thereby ensuring both invariance and equivariance properties. In addition, we propose to use spherical message passing and attention mechanism for conditional information extraction. Experimental results show that G-SphereNet outperforms previous methods on random molecular geometry generation and targeted molecule discovery tasks. Our code is publicly available as part of the DIG package (https://github.com/divelab/DIG).
https://openreview.net/pdf/d95291386d651afeb3ab8482526cdc9564159d8e.pdf
On Incorporating Inductive Biases into VAEs
https://openreview.net/forum?id=nzvbBD_3J-g
https://openreview.net/forum?id=nzvbBD_3J-g
Ning Miao,Emile Mathieu,Siddharth N,Yee Whye Teh,Tom Rainforth
ICLR 2022,Poster
We explain why directly changing the prior can be a surprisingly ineffective mechanism for incorporating inductive biases into variational auto-encoders (VAEs), and introduce a simple and effective alternative approach: Intermediary Latent Space VAEs (InteL-VAEs). InteL-VAEs use an intermediary set of latent variables to control the stochasticity of the encoding process, before mapping these in turn to the latent representation using a parametric function that encapsulates our desired inductive bias(es). This allows us to impose properties like sparsity or clustering on learned representations, and incorporate human knowledge into the generative model. Whereas changing the prior only indirectly encourages behavior through regularizing the encoder, InteL-VAEs are able to directly enforce desired characteristics. Moreover, they bypass the computation and encoder design issues caused by non-Gaussian priors, while allowing for additional flexibility through training of the parametric mapping function. We show that these advantages, in turn, lead to both better generative models and better representations being learned.
https://openreview.net/pdf/49391965f088fdd9d5dfe8de35cd9f4836edb6db.pdf
DiffSkill: Skill Abstraction from Differentiable Physics for Deformable Object Manipulations with Tools
https://openreview.net/forum?id=Kef8cKdHWpP
https://openreview.net/forum?id=Kef8cKdHWpP
Xingyu Lin,Zhiao Huang,Yunzhu Li,Joshua B. Tenenbaum,David Held,Chuang Gan
ICLR 2022,Poster
We consider the problem of sequential robotic manipulation of deformable objects using tools. Previous works have shown that differentiable physics simulators provide gradients to the environment state and help trajectory optimization to converge orders of magnitude faster than model-free reinforcement learning algorithms for deformable object manipulation. However, such gradient-based trajectory optimization typically requires access to the full simulator states and can only solve short-horizon, single-skill tasks due to local optima. In this work, we propose a novel framework, named DiffSkill, that uses a differentiable physics simulator for skill abstraction to solve long-horizon deformable object manipulation tasks from sensory observations. In particular, we first obtain short-horizon skills using individual tools from a gradient-based optimizer, using the full state information in a differentiable simulator; we then learn a neural skill abstractor from the demonstration trajectories which takes RGBD images as input. Finally, we plan over the skills by finding the intermediate goals and then solve long-horizon tasks. We show the advantages of our method in a new set of sequential deformable object manipulation tasks compared to previous reinforcement learning algorithms and compared to the trajectory optimizer.
https://openreview.net/pdf/6beab35ef096a8de0c95debfaf6cca20cec5b33c.pdf
On the Existence of Universal Lottery Tickets
https://openreview.net/forum?id=SYB4WrJql1n
https://openreview.net/forum?id=SYB4WrJql1n
Rebekka Burkholz,Nilanjana Laha,Rajarshi Mukherjee,Alkis Gotovos
ICLR 2022,Poster
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly initialized deep neural networks that can be successfully trained in isolation. Recent work has experimentally observed that some of these tickets can be practically reused across a variety of tasks, hinting at some form of universality. We formalize this concept and theoretically prove that not only do such universal tickets exist but they also do not require further training. Our proofs introduce a couple of technical innovations related to pruning for strong lottery tickets, including extensions of subset sum results and a strategy to leverage higher amounts of depth. Our explicit sparse constructions of universal function families might be of independent interest, as they highlight representational benefits induced by univariate convolutional architectures.
https://openreview.net/pdf/1a20f64ab36535ca4158e348dfd5510e876e1d69.pdf
Pre-training Molecular Graph Representation with 3D Geometry
https://openreview.net/forum?id=xQUe1pOKPam
https://openreview.net/forum?id=xQUe1pOKPam
Shengchao Liu,Hanchen Wang,Weiyang Liu,Joan Lasenby,Hongyu Guo,Jian Tang
ICLR 2022,Poster
Molecular graph representation learning is a fundamental problem in modern drug and material discovery. Molecular graphs are typically modeled by their 2D topological structures, but it has been recently discovered that 3D geometric information plays a more vital role in predicting molecular functionalities. However, the lack of 3D information in real-world scenarios has significantly impeded the learning of geometric graph representation. To cope with this challenge, we propose the Graph Multi-View Pre-training (GraphMVP) framework where self-supervised learning (SSL) is performed by leveraging the correspondence and consistency between 2D topological structures and 3D geometric views. GraphMVP effectively learns a 2D molecular graph encoder that is enhanced by richer and more discriminative 3D geometry. We further provide theoretical insights to justify the effectiveness of GraphMVP. Finally, comprehensive experiments show that GraphMVP can consistently outperform existing graph SSL methods. Code is available on GitHub: https://github.com/chao1224/GraphMVP.
https://openreview.net/pdf/a22cd0fe68ecdc04b0ffcd2cdeda6ba42dc8dffd.pdf
PER-ETD: A Polynomially Efficient Emphatic Temporal Difference Learning Method
https://openreview.net/forum?id=-HSOjDPfhBJ
https://openreview.net/forum?id=-HSOjDPfhBJ
Ziwei Guan,Tengyu Xu,Yingbin Liang
ICLR 2022,Poster
Emphatic temporal difference (ETD) learning (Sutton et al., 2016) is a successful method for off-policy value function evaluation with function approximation. Although ETD has been shown to converge asymptotically to a desirable value function, it is well known that ETD often encounters large variance, so that its sample complexity can increase exponentially fast with the number of iterations. In this work, we propose a new ETD method, called PER-ETD (i.e., PEriodically Restarted-ETD), which restarts and updates the follow-on trace only for a finite period for each iteration of the evaluation parameter. Further, PER-ETD features a restart period that increases logarithmically with the number of iterations, which guarantees the best trade-off between variance and bias and keeps both vanishing sublinearly. We show that PER-ETD converges to the same desirable fixed point as ETD, but improves the exponential sample complexity of ETD to polynomial. Our experiments validate the superior performance of PER-ETD and its advantage over ETD.
https://openreview.net/pdf/d54fba2211a4e058d2052bf8de3fb0bdb631060b.pdf
Taming Sparsely Activated Transformer with Stochastic Experts
https://openreview.net/forum?id=B72HXs80q4
https://openreview.net/forum?id=B72HXs80q4
Simiao Zuo,Xiaodong Liu,Jian Jiao,Young Jin Kim,Hany Hassan,Ruofei Zhang,Jianfeng Gao,Tuo Zhao
ICLR 2022,Poster
Sparsely activated models (SAMs), such as Mixture-of-Experts (MoE), can easily scale to have outrageously large numbers of parameters without a significant increase in computational cost. However, SAMs are reported to be parameter inefficient, in that larger models do not always lead to better performance. While most ongoing research focuses on improving SAMs by exploring methods of routing inputs to experts, our analysis reveals that such research might not lead to the solution we expect, i.e., the commonly-used routing methods based on gating mechanisms do not work better than randomly routing inputs to experts. In this paper, we propose a new expert-based model, THOR ($\underline{\textbf{T}}$ransformer wit$\underline{\textbf{H}}$ St$\underline{\textbf{O}}$chastic Expe$\underline{\textbf{R}}$ts). Unlike classic expert-based models, such as the Switch Transformer, experts in THOR are randomly activated for each input during training and inference. THOR models are trained using a consistency-regularized loss, where experts learn not only from training data but also from other experts as teachers, such that all the experts make consistent predictions. We validate the effectiveness of THOR on machine translation tasks. Results show that THOR models are more parameter efficient in that they significantly outperform the Transformer and MoE models across various settings. For example, in multilingual translation, THOR outperforms the Switch Transformer by 2 BLEU points, and obtains the same BLEU score as that of a state-of-the-art MoE model that is 18 times larger. Our code is publicly available at: https://github.com/microsoft/Stochastic-Mixture-of-Experts.
https://openreview.net/pdf/55a2b1a443f2621d2199769150f3845eefe41ba6.pdf
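As a complement to the abstract above, here is a minimal, hypothetical PyTorch sketch of a stochastic-expert feed-forward block together with a symmetric-KL consistency term between two stochastic forward passes. It is a schematic of the idea only; names, sizes, and the exact regularizer are placeholders rather than the released implementation.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticExpertFFN(nn.Module):
    """Feed-forward block that routes every input to one uniformly random expert."""
    def __init__(self, d_model=64, d_ff=256, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        return self.experts[random.randrange(len(self.experts))](x)

def consistency_loss(logits_a, logits_b):
    """Symmetric KL between the predictions of two stochastic passes on the same batch."""
    log_p, log_q = F.log_softmax(logits_a, dim=-1), F.log_softmax(logits_b, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

# toy usage: two passes may pick different experts, so their outputs can differ
block, head = StochasticExpertFFN(), nn.Linear(64, 10)
x = torch.randn(32, 64)
loss = consistency_loss(head(block(x)), head(block(x)))
```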
Hierarchical Variational Memory for Few-shot Learning Across Domains
https://openreview.net/forum?id=i3RI65sR7N
https://openreview.net/forum?id=i3RI65sR7N
Yingjun Du,Xiantong Zhen,Ling Shao,Cees G. M. Snoek
ICLR 2022,Poster
Neural memory enables fast adaptation to new tasks with just a few training samples. Existing memory models store features only from the single last layer, which does not generalize well in the presence of a domain shift between training and test distributions. Rather than relying on a flat memory, we propose a hierarchical alternative that stores features at different semantic levels. We introduce a hierarchical prototype model, where each level of the prototype fetches corresponding information from the hierarchical memory. The model is endowed with the ability to flexibly rely on features at different semantic levels if the domain shift circumstances so demand. We meta-learn the model by a newly derived hierarchical variational inference framework, where hierarchical memory and prototypes are jointly optimized. To explore and exploit the importance of different semantic levels, we further propose to learn the weights associated with the prototype at each level in a data-driven way, which enables the model to adaptively choose the most generalizable features. We conduct thorough ablation studies to demonstrate the effectiveness of each component in our model. The new state-of-the-art performance on cross-domain few-shot classification and competitive performance on traditional few-shot classification further substantiate the benefit of hierarchical variational memory.
https://openreview.net/pdf/97127f112b9804e1962c3670a2de7a6c351ac591.pdf
Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction
https://openreview.net/forum?id=Z1Qlm11uOM
https://openreview.net/forum?id=Z1Qlm11uOM
Bowen Shi,Wei-Ning Hsu,Kushal Lakhotia,Abdelrahman Mohamed
ICLR 2022,Poster
Video recordings of speech contain correlated audio and visual information, providing a strong signal for speech representation learning from the speaker’s lip movements and the produced sound. We introduce Audio-Visual Hidden Unit BERT (AV-HuBERT), a self-supervised representation learning framework for audio-visual speech, which masks multi-stream video input and predicts automatically discovered and iteratively refined multimodal hidden units. AV-HuBERT learns powerful audio-visual speech representation benefiting both lip-reading and automatic speech recognition. On the largest public lip-reading benchmark LRS3 (433 hours), AV-HuBERT achieves 32.5% WER with only 30 hours of labeled data, outperforming the former state-of-the-art approach (33.6%) trained with a thousand times more transcribed video data (31K hours) (Makino et al., 2019). The lip-reading WER is further reduced to 26.9% when using all 433 hours of labeled data from LRS3 and combined with self-training. Using our audio-visual representation on the same benchmark for audio-only speech recognition leads to a 40% relative WER reduction over the state-of-the-art performance (1.3% vs 2.3%). Our code and models are available at https://github.com/facebookresearch/av_hubert.
https://openreview.net/pdf/db3061a63dfde7babf9a5fa76d250390f56b8771.pdf
An Explanation of In-context Learning as Implicit Bayesian Inference
https://openreview.net/forum?id=RdJVFCHjUMI
https://openreview.net/forum?id=RdJVFCHjUMI
Sang Michael Xie,Aditi Raghunathan,Percy Liang,Tengyu Ma
ICLR 2022,Poster
Large language models (LMs) such as GPT-3 have the surprising ability to do in-context learning, where the model learns to do a downstream task simply by conditioning on a prompt consisting of input-output examples. The LM learns from these examples without being explicitly pretrained to learn. Thus, it is unclear what enables in-context learning. In this paper, we study how in-context learning can emerge when pretraining documents have long-range coherence. Here, the LM must infer a latent document-level concept to generate coherent next tokens during pretraining. At test time, in-context learning occurs when the LM also infers a shared latent concept between examples in a prompt. We prove when this occurs despite a distribution mismatch between prompts and pretraining data in a setting where the pretraining distribution is a mixture of HMMs. In contrast to messy large-scale datasets used to train LMs capable of in-context learning, we generate a small-scale synthetic dataset (GINC) where Transformers and LSTMs both exhibit in-context learning. Beyond the theory, experiments on GINC exhibit large-scale real-world phenomena including improved in-context performance with model scaling (despite the same pretraining loss), sensitivity to example order, and instances where zero-shot is better than few-shot in-context learning.
https://openreview.net/pdf/34aa1a37f7afecc48d15dbab476f32a40db2fe1a.pdf
Differentiable Scaffolding Tree for Molecule Optimization
https://openreview.net/forum?id=w_drCosT76
https://openreview.net/forum?id=w_drCosT76
Tianfan Fu,Wenhao Gao,Cao Xiao,Jacob Yasonik,Connor W. Coley,Jimeng Sun
ICLR 2022,Poster
The structural design of functional molecules, also called molecular optimization, is an essential chemical science and engineering task with important applications, such as drug discovery. Deep generative models and combinatorial optimization methods achieve initial success but still struggle with directly modeling discrete chemical structures and often heavily rely on brute-force enumeration. The challenge comes from the discrete and non-differentiable nature of molecule structures. To address this, we propose the differentiable scaffolding tree (DST), which utilizes a learned knowledge network to convert discrete chemical structures to locally differentiable ones. DST enables a gradient-based optimization on a chemical graph structure by back-propagating the derivatives from the target properties through a graph neural network (GNN). Our empirical studies show that gradient-based molecular optimization is both effective and sample-efficient (in terms of the number of oracle calls). Furthermore, the learned graph parameters can also provide an explanation that helps domain experts understand the model output. The code repository (including processed data, trained models, demonstrations, and molecules with the highest property scores) is available at https://github.com/futianfan/DST.
https://openreview.net/pdf/af1f7e5614adea0e8575cc0ff017f48ce21789e3.pdf
Eliminating Sharp Minima from SGD with Truncated Heavy-tailed Noise
https://openreview.net/forum?id=B3Nde6lvab
https://openreview.net/forum?id=B3Nde6lvab
Xingyu Wang,Sewoong Oh,Chang-Han Rhee
ICLR 2022,Poster
The empirical success of deep learning is often attributed to SGD’s mysterious ability to avoid sharp local minima in the loss landscape, as sharp minima are known to lead to poor generalization. Recently, empirical evidence of heavy-tailed gradient noise was reported in many deep learning tasks; and it was shown in (Simsekli et al., 2019a;b) that SGD can escape sharp local minima in the presence of such heavy-tailed gradient noise, providing a partial solution to the mystery. In this work, we analyze a popular variant of SGD where gradients are truncated above a fixed threshold. We show that it achieves a stronger notion of avoiding sharp minima: it can effectively eliminate sharp local minima entirely from its training trajectory. We characterize the dynamics of truncated SGD driven by heavy-tailed noise. First, we show that the truncation threshold and width of the attraction field dictate the order of the first exit time from the associated local minimum. Moreover, when the objective function satisfies appropriate structural conditions, we prove that as the learning rate decreases, the dynamics of the heavy-tailed truncated SGD closely resemble those of a continuous-time Markov chain that never visits any sharp minima. Real data experiments on deep learning confirm our theoretical prediction that heavy-tailed SGD with gradient clipping finds flatter local minima and achieves better generalization.
https://openreview.net/pdf/cdd56f4a4d029af79c9c45e1127747b70dee7cfb.pdf
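The algorithm analyzed in the abstract above is SGD whose stochastic gradients are truncated at a fixed threshold. A minimal, hypothetical PyTorch sketch of one such update (norm-based truncation, the usual gradient-clipping variant) is given below; the threshold value and the clipping granularity are placeholders.

```python
import torch

def truncated_sgd_step(params, lr=0.1, threshold=1.0):
    """One SGD step in which the stochastic gradient is truncated (clipped)
    whenever its global norm exceeds a fixed threshold."""
    with torch.no_grad():
        total_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params if p.grad is not None))
        scale = min(1.0, threshold / (float(total_norm) + 1e-12))
        for p in params:
            if p.grad is not None:
                p.add_(p.grad, alpha=-lr * scale)

# toy usage on a single parameter vector
w = torch.randn(5, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()
truncated_sgd_step([w], lr=0.1, threshold=1.0)
```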
Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
https://openreview.net/forum?id=uxxFrDwrE7Y
https://openreview.net/forum?id=uxxFrDwrE7Y
Elahe Arani,Fahad Sarfraz,Bahram Zonooz
ICLR 2022,Poster
Humans excel at continually learning from an ever-changing environment whereas it remains a challenge for deep neural networks which exhibit catastrophic forgetting. The complementary learning system (CLS) theory suggests that the interplay between rapid instance-based learning and slow structured learning in the brain is crucial for accumulating and retaining knowledge. Here, we propose CLS-ER, a novel dual memory experience replay (ER) method which maintains short-term and long-term semantic memories that interact with the episodic memory. Our method employs an effective replay mechanism whereby new knowledge is acquired while aligning the decision boundaries with the semantic memories. CLS-ER does not utilize the task boundaries or make any assumption about the distribution of the data which makes it versatile and suited for ``general continual learning''. Our approach achieves state-of-the-art performance on standard benchmarks as well as more realistic general continual learning settings.
https://openreview.net/pdf/03691b7344e6968cc95744a627d25e3b5e60ca3e.pdf
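To ground the dual-memory idea in the abstract above, here is a condensed, hypothetical PyTorch sketch: a working model, two exponential-moving-average copies acting as short-term and long-term semantic memories, and a reservoir-sampled episodic buffer. The full method additionally updates the memories stochastically and uses them to align decision boundaries during replay; that logic is not reproduced here, and all names and decay values are placeholders.

```python
import copy
import random
import torch

class CLSDualMemory:
    """Working model + fast/slow EMA semantic memories + episodic reservoir buffer."""
    def __init__(self, model, fast_decay=0.95, slow_decay=0.999, buffer_size=500):
        self.model = model
        self.fast = copy.deepcopy(model)    # short-term semantic memory
        self.slow = copy.deepcopy(model)    # long-term semantic memory
        self.fast_decay, self.slow_decay = fast_decay, slow_decay
        self.buffer, self.buffer_size, self.num_seen = [], buffer_size, 0

    @torch.no_grad()
    def update_memories(self):
        """EMA update of both semantic memories toward the current working model."""
        for memory, decay in ((self.fast, self.fast_decay), (self.slow, self.slow_decay)):
            for p_mem, p in zip(memory.parameters(), self.model.parameters()):
                p_mem.mul_(decay).add_(p, alpha=1.0 - decay)

    def store(self, example):
        """Reservoir sampling keeps a uniform sample of the stream in the episodic buffer."""
        self.num_seen += 1
        if len(self.buffer) < self.buffer_size:
            self.buffer.append(example)
        else:
            j = random.randrange(self.num_seen)
            if j < self.buffer_size:
                self.buffer[j] = example

    def sample_replay(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```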
FedChain: Chained Algorithms for Near-optimal Communication Cost in Federated Learning
https://openreview.net/forum?id=ZaVVVlcdaN
https://openreview.net/forum?id=ZaVVVlcdaN
Charlie Hou,Kiran Koshy Thekumparampil,Giulia Fanti,Sewoong Oh
ICLR 2022,Poster
Federated learning (FL) aims to minimize the communication complexity of training a model over heterogeneous data distributed across many clients. A common approach is local methods, where clients take multiple optimization steps over local data before communicating with the server (e.g., FedAvg). Local methods can exploit similarity between clients' data. However, in existing analyses, this comes at the cost of slow convergence in terms of the dependence on the number of communication rounds R. On the other hand, global methods, where clients simply return a gradient vector in each round (e.g., SGD), converge faster in terms of R but fail to exploit the similarity between clients even when clients are homogeneous. We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of R while leveraging the similarity between clients. Using FedChain, we instantiate algorithms that improve upon previously known rates in the general convex and PL settings, and are near-optimal (via an algorithm-independent lower bound that we show) for problems that satisfy strong convexity. Empirical results support this theoretical gain over existing methods.
https://openreview.net/pdf/4b6bb8ca9713c12151f31fef5fda64b39e34e734.pdf
What Do We Mean by Generalization in Federated Learning?
https://openreview.net/forum?id=VimqQq-i_Q
https://openreview.net/forum?id=VimqQq-i_Q
Honglin Yuan,Warren Richard Morningstar,Lin Ning,Karan Singhal
ICLR 2022,Poster
Federated learning data is drawn from a distribution of distributions: clients are drawn from a meta-distribution, and their data are drawn from local data distributions. Generalization studies in federated learning should separate performance gaps due to unseen client data (the out-of-sample gap) from performance gaps due to unseen client distributions (the participation gap). In this work, we propose a framework for disentangling these performance gaps. Using this framework, we observe and explain differences in behavior across natural and synthetic federated datasets, indicating that dataset synthesis strategy can be important for realistic simulations of generalization in federated learning. We propose a semantic synthesis strategy that enables realistic simulation without naturally partitioned data. Informed by our findings, we offer suggestions to the community for future federated learning work.
https://openreview.net/pdf/52ed8e81d3311fe4d43cf85b2faadfa3fcdaa83a.pdf
Frequency-aware SGD for Efficient Embedding Learning with Provable Benefits
https://openreview.net/forum?id=ibqTBNfJmi
https://openreview.net/forum?id=ibqTBNfJmi
Yan Li,Dhruv Choudhary,Xiaohan Wei,Baichuan Yuan,Bhargav Bhushanam,Tuo Zhao,Guanghui Lan
ICLR 2022,Poster
Embedding learning has found widespread applications in recommendation systems and natural language modeling, among other domains. To learn quality embeddings efficiently, adaptive learning rate algorithms have demonstrated superior empirical performance over SGD, largely credited to their token-dependent learning rates. However, the underlying mechanism for the efficiency of token-dependent learning rates remains underexplored. We show that incorporating frequency information of tokens in embedding learning problems leads to provably efficient algorithms, and demonstrate that common adaptive algorithms implicitly exploit the frequency information to a large extent. Specifically, we propose (Counter-based) Frequency-aware Stochastic Gradient Descent, which applies a frequency-dependent learning rate for each token, and exhibits provable speed-up compared to SGD when the token distribution is imbalanced. Empirically, we show the proposed algorithms are able to improve or match the performance of adaptive algorithms on benchmark recommendation tasks and a large-scale industrial recommendation system, closing the performance gap between SGD and adaptive algorithms. Our results are the first to show that a token-dependent learning rate provably improves convergence for non-convex embedding learning problems.
https://openreview.net/pdf/7ff001fda6de89be1d9e937afb3d64fe33c689b7.pdf
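The abstract above proposes a counter-based, token-dependent learning rate for embedding learning. The NumPy sketch below illustrates one natural instantiation in which the step size for a token shrinks with the square root of its observed count (so rare tokens take relatively larger steps); the exact scaling used in the paper may differ, and all names here are placeholders.

```python
import numpy as np

class CounterFrequencyAwareSGD:
    """SGD on an embedding table with a per-token learning rate driven by token counts."""
    def __init__(self, num_tokens, dim, base_lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.table = 0.01 * rng.standard_normal((num_tokens, dim))
        self.counts = np.zeros(num_tokens, dtype=np.int64)
        self.base_lr = base_lr

    def step(self, token_ids, grads):
        """token_ids: 1-D array of token indices in the minibatch;
        grads: matching array of gradients w.r.t. those tokens' embedding rows."""
        for t, g in zip(token_ids, grads):
            self.counts[t] += 1
            lr_t = self.base_lr / np.sqrt(self.counts[t])   # frequent tokens -> smaller steps
            self.table[t] -= lr_t * g

# toy usage: an imbalanced token stream
opt = CounterFrequencyAwareSGD(num_tokens=100, dim=8)
ids = np.array([0, 0, 0, 7])                 # token 0 is much more frequent than token 7
opt.step(ids, np.ones((4, 8)))
```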
Learning Curves for Gaussian Process Regression with Power-Law Priors and Targets
https://openreview.net/forum?id=KeI9E-gsoB
https://openreview.net/forum?id=KeI9E-gsoB
Hui Jin,Pradeep Kr. Banerjee,Guido Montufar
ICLR 2022,Poster
We characterize the power-law asymptotics of learning curves for Gaussian process regression (GPR) under the assumption that the eigenspectrum of the prior and the eigenexpansion coefficients of the target function follow a power law. Under similar assumptions, we leverage the equivalence between GPR and kernel ridge regression (KRR) to show the generalization error of KRR. Infinitely wide neural networks can be related to GPR with respect to the neural network GP kernel and the neural tangent kernel, which in several cases is known to have a power-law spectrum. Hence our methods can be applied to study the generalization error of infinitely wide neural networks. We present toy experiments demonstrating the theory.
https://openreview.net/pdf/3adc0d00292f63c360d4960db93cefcdf6c824b7.pdf
Fast topological clustering with Wasserstein distance
https://openreview.net/forum?id=0kPL3xO4R5
https://openreview.net/forum?id=0kPL3xO4R5
Tananun Songdechakraiwut,Bryan M Krause,Matthew I Banks,Kirill V Nourski,Barry D Van Veen
ICLR 2022,Poster
The topological patterns exhibited by many real-world networks motivate the development of topology-based methods for assessing the similarity of networks. However, extracting topological structure is difficult, especially for large and dense networks whose node degrees range over multiple orders of magnitude. In this paper, we propose a novel and computationally practical topological clustering method that clusters complex networks with intricate topology using principled theory from persistent homology and optimal transport. Such networks are aggregated into clusters through a centroid-based clustering strategy based on both their topological and geometric structure, preserving correspondence between nodes in different networks. The notions of topological proximity and centroid are characterized using a novel and efficient approach to computation of the Wasserstein distance and barycenter for persistence barcodes associated with connected components and cycles. The proposed method is demonstrated to be effective using both simulated networks and measured functional brain networks.
https://openreview.net/pdf/1ca6035f2e30fb149d5edca41b2f47c7dbefcbbf.pdf
Autonomous Reinforcement Learning: Formalism and Benchmarking
https://openreview.net/forum?id=nkaba3ND7B5
https://openreview.net/forum?id=nkaba3ND7B5
Archit Sharma,Kelvin Xu,Nikhil Sardana,Abhishek Gupta,Karol Hausman,Sergey Levine,Chelsea Finn
ICLR 2022,Poster
Reinforcement learning (RL) provides a naturalistic framing for learning through trial and error, which is appealing both because of its simplicity and effectiveness and because of its resemblance to how humans and animals acquire skills through experience. However, real-world embodied learning, such as that performed by humans and animals, is situated in a continual, non-episodic world, whereas common benchmark tasks in RL are episodic, with the environment resetting between trials to provide the agent with multiple attempts. This discrepancy presents a major challenge when we attempt to take RL algorithms developed for episodic simulated environments and run them on real-world platforms, such as robots. In this paper, we aim to address this discrepancy by laying out a framework for Autonomous Reinforcement Learning (ARL): reinforcement learning where the agent not only learns through its own experience, but also contends with lack of human supervision to reset between trials. We introduce a simulated benchmark EARL based on this framework, containing a set of diverse and challenging simulated tasks reflective of the hurdles introduced to learning when only a minimal reliance on extrinsic intervention can be assumed. We show that standard approaches to episodic RL and existing approaches struggle as interventions are minimized, underscoring the need for developing new algorithms for reinforcement learning with a greater focus on autonomy.
https://openreview.net/pdf/f572a28516f88b10b825a32cd24ba9922c1d015e.pdf
GRAND++: Graph Neural Diffusion with A Source Term
https://openreview.net/forum?id=EMxu-dzvJk
https://openreview.net/forum?id=EMxu-dzvJk
Matthew Thorpe,Tan Minh Nguyen,Hedi Xia,Thomas Strohmer,Andrea Bertozzi,Stanley Osher,Bao Wang
ICLR 2022,Poster
We propose GRAph Neural Diffusion with a source term (GRAND++) for graph deep learning with a limited number of labeled nodes, i.e., a low labeling rate. GRAND++ is a class of continuous-depth graph deep learning architectures whose theoretical underpinning is the diffusion process on graphs with a source term. The source term guarantees two interesting theoretical properties of GRAND++: (i) the representation of graph nodes, under the dynamics of GRAND++, will not converge to a constant vector over all nodes even as time goes to infinity, which mitigates the over-smoothing issue of graph neural networks and enables graph learning in very deep architectures. (ii) GRAND++ can provide accurate classification even when the model is trained with a very limited amount of labeled training data. We experimentally verify the above two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks.
https://openreview.net/pdf/d77569d8b7f28f14fd7a71bc8f199ed42934a3d4.pdf
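The dynamics described in the abstract above are a graph diffusion ODE augmented with a source term. The snippet below is a schematic, hypothetical explicit-Euler discretization in PyTorch; the adjacency normalization, the attention weighting, and the exact construction of the source term from labeled nodes in the actual model may differ.

```python
import torch

def grand_source_step(x, adj_norm, source, step_size=0.1, beta=1.0):
    """One explicit Euler step of  dX/dt = (A_hat - I) X + beta * S,
    where A_hat is a row-normalized adjacency matrix and S is a source term built
    from the labeled nodes (zero rows for unlabeled nodes)."""
    diffusion = adj_norm @ x - x
    return x + step_size * (diffusion + beta * source)

# toy graph: 4 nodes on a cycle, with node 0 labeled
adj = torch.tensor([[0, 1, 0, 1],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [1, 0, 1, 0]], dtype=torch.float32)
adj_norm = adj / adj.sum(dim=1, keepdim=True)
x = torch.randn(4, 3)                       # node features
source = torch.zeros(4, 3)
source[0] = x[0]                            # inject the labeled node's signal
for _ in range(50):
    x = grand_source_step(x, adj_norm, source)
```

Without the source term the iteration drives all rows of `x` toward a common vector (over-smoothing); the injected signal at the labeled node keeps the node representations from collapsing.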
Case-based reasoning for better generalization in textual reinforcement learning
https://openreview.net/forum?id=ZDaSIkWT-AP
https://openreview.net/forum?id=ZDaSIkWT-AP
Mattia Atzeni,Shehzaad Zuzar Dhuliawala,Keerthiram Murugesan,Mrinmaya Sachan
ICLR 2022,Poster
Text-based games (TBG) have emerged as promising environments for driving research in grounded language understanding and studying problems like generalization and sample efficiency. Several deep reinforcement learning (RL) methods with varying architectures and learning schemes have been proposed for TBGs. However, these methods fail to generalize efficiently, especially under distributional shifts. In a departure from deep RL approaches, in this paper, we propose a general method inspired by case-based reasoning to train agents and generalize out of the training distribution. The case-based reasoner collects instances of positive experiences from the agent's interaction with the world and later reuses the collected experiences to act efficiently. The method can be used in conjunction with any existing on-policy neural agent introduced in the literature for TBGs. Our experiments show that the proposed approach consistently improves existing methods, obtains good out-of-distribution generalization and achieves new state-of-the-art results on widely used environments.
https://openreview.net/pdf/426ad0e0a618d416dbdc8a2fbaa7f29661e1f920.pdf
Neural Deep Equilibrium Solvers
https://openreview.net/forum?id=B0oHOwT5ENL
https://openreview.net/forum?id=B0oHOwT5ENL
Shaojie Bai,Vladlen Koltun,J Zico Kolter
ICLR 2022,Poster
A deep equilibrium (DEQ) model abandons traditional depth by solving for the fixed point of a single nonlinear layer $f_\theta$. This structure enables decoupling the internal structure of the layer (which controls representational capacity) from how the fixed point is actually computed (which impacts inference-time efficiency), the latter usually done via classic techniques such as Broyden's method or Anderson acceleration. In this paper, we show that one can exploit such decoupling and substantially enhance this fixed point computation using a custom neural solver. Specifically, our solver uses a parameterized network to both guess an initial value of the optimization and perform iterative updates, in a method that generalizes a learnable form of Anderson acceleration and can be trained end-to-end in an unsupervised manner. Such a solution is particularly well suited to the implicit model setting, because inference in these models requires repeatedly solving for a fixed point of the same nonlinear layer for different inputs, a task at which our network excels. Our experiments show that these neural equilibrium solvers are fast to train (taking only an extra 0.9-1.1% over the original DEQ's training time), require few additional parameters (1-3% of the original model size), yet lead to a $2\times$ speedup in DEQ network inference without any degradation in accuracy across numerous domains and tasks.
https://openreview.net/pdf/88bf23fbc8248929fab4670e786459e844cce30c.pdf
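To illustrate the decoupling highlighted in the DEQ abstract above, the sketch below pairs a weight-tied layer with an interchangeable fixed-point routine. Plain damped iteration is used purely as a placeholder for Broyden's method, Anderson acceleration, or the learned neural solver proposed in the paper; the layer, damping value, and tolerance are illustrative assumptions.

```python
# Minimal sketch: the layer f_theta defines the equilibrium z* = f_theta(z*, x),
# while the routine that finds z* is swappable.
import torch
import torch.nn as nn

class TinyDEQLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin_z = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)

    def forward(self, z, x):
        return torch.tanh(self.lin_z(z) + self.lin_x(x))

def solve_fixed_point(f, x, z0, damping=0.5, max_iter=100, tol=1e-4):
    # Damped Picard iteration; a stand-in for any fixed-point solver.
    z = z0
    for _ in range(max_iter):
        z_next = (1 - damping) * z + damping * f(z, x)
        if (z_next - z).norm() / (z.norm() + 1e-8) < tol:
            return z_next
        z = z_next
    return z

layer = TinyDEQLayer(dim=16)
x = torch.randn(8, 16)
z_star = solve_fixed_point(layer, x, torch.zeros(8, 16))
print((z_star - layer(z_star, x)).norm())   # residual should be near zero at equilibrium
```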
A Theoretical Analysis on Feature Learning in Neural Networks: Emergence from Inputs and Advantage over Fixed Features
https://openreview.net/forum?id=wMpS-Z_AI_E
https://openreview.net/forum?id=wMpS-Z_AI_E
Zhenmei Shi,Junyi Wei,Yingyu Liang
ICLR 2022,Poster
An important characteristic of neural networks is their ability to learn representations of the input data with effective features for prediction, which is believed to be a key factor in their superior empirical performance. To better understand the source and benefit of feature learning in neural networks, we consider learning problems motivated by practical data, where the labels are determined by a set of class-relevant patterns and the inputs are generated from these along with some background patterns. We prove that neural networks trained by gradient descent can succeed on these problems. The success relies on the emergence and improvement of effective features, which are learned among exponentially many candidates efficiently by exploiting the data (in particular, the structure of the input distribution). In contrast, no linear model over data-independent features of polynomial size can achieve comparably small error. Furthermore, if the specific input structure is removed, then no polynomial algorithm in the Statistical Query model can learn even weakly. These results provide theoretical evidence that feature learning in neural networks depends strongly on the input structure and leads to superior performance. Our preliminary experimental results on synthetic and real data also provide positive support.
https://openreview.net/pdf/9c506df8cb3b84afcbc08295792409a5a61c3f40.pdf
CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals
https://openreview.net/forum?id=6IYp-35L-xJ
https://openreview.net/forum?id=6IYp-35L-xJ
Cédric Rommel,Thomas Moreau,Joseph Paillard,Alexandre Gramfort
ICLR 2022,Poster
Data augmentation is a key element of deep learning pipelines, as it informs the network during training about transformations of the input data that keep the label unchanged. Manually finding adequate augmentation methods and parameters for a given pipeline, however, quickly becomes cumbersome. In particular, while intuition can guide this decision for images, the design and choice of augmentation policies remain unclear for more complex types of data, such as neuroscience signals. Besides, class-dependent augmentation strategies have been surprisingly unexplored in the literature, although the idea is quite intuitive: changing the color of a car image does not change the object class to be predicted, but doing the same to the picture of an orange does. This paper investigates gradient-based automatic data augmentation algorithms amenable to class-wise policies with exponentially larger search spaces. Motivated by supervised learning applications using EEG signals for which good augmentation policies are mostly unknown, we propose a new differentiable relaxation of the problem. In the class-agnostic setting, results show that our new relaxation leads to optimal performance with faster training than competing gradient-based methods, while also outperforming gradient-free methods in the class-wise setting. This work also proposes novel differentiable augmentation operations relevant for sleep stage classification.
https://openreview.net/pdf/b32b128de32627fc518e6113f8cfb91bed89bc23.pdf
Label Leakage and Protection in Two-party Split Learning
https://openreview.net/forum?id=cOtBRgsf2fO
https://openreview.net/forum?id=cOtBRgsf2fO
Oscar Li,Jiankai Sun,Xin Yang,Weihao Gao,Hongyi Zhang,Junyuan Xie,Virginia Smith,Chong Wang
ICLR 2022,Poster
Two-party split learning is a popular technique for learning a model across feature-partitioned data. In this work, we explore whether it is possible for one party to steal the private label information from the other party during split training, and whether there are methods that can protect against such attacks. Specifically, we first formulate a realistic threat model and propose a privacy loss metric to quantify label leakage in split learning. We then show that there exist two simple yet effective methods within the threat model that can allow one party to accurately recover private ground-truth labels owned by the other party. To combat these attacks, we propose several random perturbation techniques, including $\texttt{Marvell}$, an approach that strategically finds the structure of the noise perturbation by minimizing the amount of label leakage (measured through our quantification metric) of a worst-case adversary. We empirically demonstrate the effectiveness of our protection techniques against the identified attacks, and show that $\texttt{Marvell}$ in particular has improved privacy-utility tradeoffs relative to baseline approaches.
https://openreview.net/pdf/d1b9d2d34049b477cd71dab64cd68252285e2090.pdf
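The toy example below illustrates the kind of signal such attacks can exploit: in two-party split learning with a binary, imbalanced task, the norm of the cut-layer gradient returned by the label party correlates with the label. The logistic head, the fixed negative bias (standing in for a partially trained model on imbalanced data), and the quantile-threshold "attack" are assumptions of this sketch, not the paper's exact attacks.

```python
# Toy illustration of norm-based label leakage from cut-layer gradients.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8
h = rng.normal(size=(n, d))                    # cut-layer activations held by the non-label party
w = 0.3 * rng.normal(size=d)                   # label party's linear head
y = (rng.random(n) < 0.1).astype(float)        # 10% positives (imbalanced labels)

logits = h @ w - 3.0                           # negative bias: model mostly predicts the majority class
p = 1.0 / (1.0 + np.exp(-logits))
grad_h = (p - y)[:, None] * w[None, :]         # gradient dL/dh sent back during split training
norms = np.linalg.norm(grad_h, axis=1)         # positives tend to have much larger |p - y|

guess = (norms > np.quantile(norms, 0.9)).astype(float)   # call the top-10% norms "positive"
print("label recovery accuracy:", (guess == y).mean())
```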
Semi-relaxed Gromov-Wasserstein divergence and applications on graphs
https://openreview.net/forum?id=RShaMexjc-x
https://openreview.net/forum?id=RShaMexjc-x
Cédric Vincent-Cuaz,Rémi Flamary,Marco Corneli,Titouan Vayer,Nicolas Courty
ICLR 2022,Poster
Comparing structured objects such as graphs is a fundamental operation involved in many learning tasks. To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects. More specifically, through the nodes' connectivity relations, GW operates on graphs, seen as probability measures over specific spaces. At the core of OT is the idea of conservation of mass, which imposes a coupling between all the nodes from the two considered graphs. We argue in this paper that this property can be detrimental for tasks such as graph dictionary or partition learning, and we relax it by proposing a new semi-relaxed Gromov-Wasserstein divergence. Aside from immediate computational benefits, we discuss its properties, and show that it can lead to an efficient graph dictionary learning algorithm. We empirically demonstrate its relevance for complex tasks on graphs such as partitioning, clustering and completion.
https://openreview.net/pdf/fe28f3e616fabe8ac2ea06baba193a597fc572a7.pdf
CodeTrek: Flexible Modeling of Code using an Extensible Relational Representation
https://openreview.net/forum?id=WQc075jmBmf
https://openreview.net/forum?id=WQc075jmBmf
Pardis Pashakhanloo,Aaditya Naik,Yuepeng Wang,Hanjun Dai,Petros Maniatis,Mayur Naik
ICLR 2022,Poster
Designing a suitable representation for code-reasoning tasks is challenging in aspects such as the kinds of program information to model, how to combine them, and how much context to consider. We propose CodeTrek, a deep learning approach that addresses these challenges by representing codebases as databases that conform to rich relational schemas. The relational representation allows CodeTrek not only to uniformly represent diverse kinds of program information, but also to leverage program-analysis queries to derive new semantic relations, which can be readily incorporated without further architectural engineering. CodeTrek embeds this relational representation using a set of walks that can traverse different relations in an unconstrained fashion, and incorporates all relevant attributes along the way. We evaluate CodeTrek on four diverse and challenging Python tasks: variable misuse, exception prediction, unused definition, and variable shadowing. CodeTrek achieves an accuracy of 91%, 63%, 98%, and 94% on these tasks respectively, and outperforms state-of-the-art neural models by 2-19 percentage points.
https://openreview.net/pdf/a38e66149abc580fe950eea8b0dfd0e24a7ae1bf.pdf
Bridging Recommendation and Marketing via Recurrent Intensity Modeling
https://openreview.net/forum?id=TZeArecH2Nf
https://openreview.net/forum?id=TZeArecH2Nf
Yifei Ma,Ge Liu,Anoop Deoras
ICLR 2022,Poster
This paper studies some under-explored connections between personalized recommendation and marketing systems. These two systems differ in two main ways. Firstly, personalized item-recommendation (ItemRec) is user-centric, whereas marketing recommends the best user-state segments (UserRec) on behalf of its item providers. (We treat different temporal states of the same user as separate marketing opportunities.) To overcome this difference, we realize a novel connection to Marked-Temporal Point Processes (MTPPs), where we view both problems as different projections from a unified temporal intensity model for all user-item pairs. Correspondingly, we derive Recurrent Intensity Models (RIMs) to extend from recurrent ItemRec models with minimal changes. The second difference between recommendation and marketing is in the temporal domains where they operate. While recommendation demands immediate responses in real-time, marketing campaigns are often long-term, setting goals to cover a given percentage of all opportunities for a given item in a given period of time. We formulate both considerations into a constrained optimization problem we call online match (OnlnMtch) and derive a solution we call the Dual algorithm. Simply put, Dual modifies the real-time ItemRec scores such that the marketing constraints can be met with the least compromise in user-centric utility. Finally, our connections between recommendation and marketing may lead to novel applications. We run experiments where we use marketing as an alternative to cold-start item exploration, by setting a minimal-exposure constraint for every item in the audience base. Our experiments are available at \url{https://github.com/awslabs/recurrent-intensity-model-experiments}
https://openreview.net/pdf/cbd0578c075213db4bf240d2f69af59b6c47a4af.pdf
Sparse Attention with Learning to Hash
https://openreview.net/forum?id=VGnOJhd5Q1q
https://openreview.net/forum?id=VGnOJhd5Q1q
Zhiqing Sun,Yiming Yang,Shinjae Yoo
ICLR 2022,Poster
Transformer has become ubiquitous in sequence modeling tasks. As a key component of Transformer, self-attention does not scale to long sequences due to its quadratic time and space complexity with respect to the sequence length. To tackle this problem, recent work developed dynamic attention sparsification techniques based on Approximate Nearest Neighbor (ANN) methods, where similar queries and keys are allocated to the same hash bucket with high probability. However, the effectiveness of those ANN methods relies on the assumption that queries and keys should lie in the same space, which is not well justified. Besides, some of the ANN methods such as Locality-Sensitive Hashing (LSH) are randomized and cannot fully utilize the available real data distributions. To overcome these issues, this paper proposes a new strategy for sparse attention, namely LHA (Learning-to-Hash Attention), which directly learns separate parameterized hash functions for queries and keys, respectively. Another advantage of LHA is that it does not impose extra constraints on queries and keys, which makes it applicable to a wide range of pre-trained Transformer models. Our experiments on the WikiText-103 dataset for language modeling, the GLUE benchmark for natural language understanding, and the Long-Range Arena benchmark for multiple tasks (text/image classification, retrieval, etc.) show the superior performance of LHA over other strong Transformer variants.
https://openreview.net/pdf/a9205753097eb2790a365828e3e20af6261bf8b1.pdf
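The sketch below shows the basic shape of hash-bucketed sparse attention with separate projections for queries and keys, as described in the LHA abstract above. Assigning buckets by an argmax over random projections is a stand-in for the learned, end-to-end-trained hash functions of the paper; the bucket count and dimensions are illustrative.

```python
# Minimal sketch: restrict attention to query/key pairs that fall in the same bucket.
import torch
import torch.nn.functional as F

def bucketed_attention(q, k, v, hash_q, hash_k, n_buckets):
    # q, k, v: (seq, dim); hash_q, hash_k: (dim, n_buckets) separate projections
    bq = (q @ hash_q).argmax(dim=-1)             # bucket id per query
    bk = (k @ hash_k).argmax(dim=-1)             # bucket id per key
    out = torch.zeros_like(v)                    # queries with an empty bucket stay zero
    for b in range(n_buckets):
        qi = (bq == b).nonzero(as_tuple=True)[0]
        ki = (bk == b).nonzero(as_tuple=True)[0]
        if len(qi) == 0 or len(ki) == 0:
            continue
        scores = q[qi] @ k[ki].T / q.size(-1) ** 0.5
        out[qi] = F.softmax(scores, dim=-1) @ v[ki]
    return out

q, k, v = (torch.randn(128, 64) for _ in range(3))
hq, hk = torch.randn(64, 8), torch.randn(64, 8)  # stand-ins for learned hash projections
print(bucketed_attention(q, k, v, hq, hk, n_buckets=8).shape)
```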
Controlling the Complexity and Lipschitz Constant improves Polynomial Nets
https://openreview.net/forum?id=dQ7Cy_ndl1s
https://openreview.net/forum?id=dQ7Cy_ndl1s
Zhenyu Zhu,Fabian Latorre,Grigorios Chrysos,Volkan Cevher
ICLR 2022,Poster
While the class of Polynomial Nets demonstrates comparable performance to neural networks (NN), it currently has neither theoretical generalization characterization nor robustness guarantees. To this end, we derive new complexity bounds for the set of Coupled CP-Decomposition (CCP) and Nested Coupled CP-decomposition (NCP) models of Polynomial Nets in terms of the $\ell_\infty$-operator-norm and the $\ell_2$-operator norm. In addition, we derive bounds on the Lipschitz constant for both models to establish a theoretical certificate for their robustness. The theoretical results enable us to propose a principled regularization scheme that we also evaluate experimentally and show that it improves the accuracy as well as the robustness of the models to adversarial perturbations. We showcase how this regularization can be combined with adversarial training, resulting in further improvements.
https://openreview.net/pdf/47cd3800add7f5f04f67fe02056b0ba28b0f2af0.pdf
Finding an Unsupervised Image Segmenter in each of your Deep Generative Models
https://openreview.net/forum?id=Ug-bgjgSlKV
https://openreview.net/forum?id=Ug-bgjgSlKV
Luke Melas-Kyriazi,Christian Rupprecht,Iro Laina,Andrea Vedaldi
ICLR 2022,Poster
Recent research has shown that numerous human-interpretable directions exist in the latent space of GANs. In this paper, we develop an automatic procedure for finding directions that lead to foreground-background image separation, and we use these directions to train an image segmentation model without human supervision. Our method is generator-agnostic, producing strong segmentation results with a wide range of different GAN architectures. Furthermore, by leveraging GANs pretrained on large datasets such as ImageNet, we are able to segment images from a range of domains without further training or finetuning. Evaluating our method on image segmentation benchmarks, we compare favorably to prior work while using neither human supervision nor access to the training data. Broadly, our results demonstrate that automatically extracting foreground-background structure from pretrained deep generative models can serve as a remarkably effective substitute for human supervision.
https://openreview.net/pdf/f19aff475bf4c294325c50324fea58c92aaa4cb3.pdf
Solving Inverse Problems in Medical Imaging with Score-Based Generative Models
https://openreview.net/forum?id=vaRCHVj0uGI
https://openreview.net/forum?id=vaRCHVj0uGI
Yang Song,Liyue Shen,Lei Xing,Stefano Ermon
ICLR 2022,Poster
Reconstructing medical images from partial measurements is an important inverse problem in Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Existing solutions based on machine learning typically train a model to directly map measurements to medical images, leveraging a training dataset of paired images and measurements. These measurements are typically synthesized from images using a fixed physical model of the measurement process, which hinders the generalization capability of models to unknown measurement processes. To address this issue, we propose a fully unsupervised technique for inverse problem solving, leveraging the recently introduced score-based generative models. Specifically, we first train a score-based generative model on medical images to capture their prior distribution. Given measurements and a physical model of the measurement process at test time, we introduce a sampling method to reconstruct an image consistent with both the prior and the observed measurements. Our method does not assume a fixed measurement process during training, and can thus be flexibly adapted to different measurement processes at test time. Empirically, we observe comparable or better performance to supervised learning techniques in several medical imaging tasks in CT and MRI, while demonstrating significantly better generalization to unknown measurement processes.
https://openreview.net/pdf/1f9104dcb6aabcde00c31c083eb291b3d07e5ed8.pdf
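The following is a generic sketch, not the paper's exact sampler, of the high-level recipe the abstract describes: alternate updates driven by an unconditional score model with a correction step that pulls the iterate toward consistency with the observed measurements y = A x. The toy Gaussian "prior" (whose score is simply -x), the annealing schedule, and the step sizes are all assumptions.

```python
# Generic sketch: annealed score-based updates interleaved with data-consistency steps.
import numpy as np

def posterior_sample(score_fn, A, y, x0, sigmas, step=1e-3, lam=1.0):
    """score_fn(x, sigma) -> estimate of grad_x log p_sigma(x); A: (m, n) measurement operator."""
    lam_eff = lam / (np.linalg.norm(A, 2) ** 2)   # keep the consistency step stable
    x = x0.copy()
    for sigma in sigmas:                          # annealed noise levels, large to small
        eps = step * sigma ** 2
        # prior step: Langevin-style update using the learned score
        x = x + eps * score_fn(x, sigma) + np.sqrt(2 * eps) * np.random.randn(*x.shape)
        # data-consistency step: gradient step on ||A x - y||^2
        x = x - lam_eff * A.T @ (A @ x - y)
    return x

# toy usage with a standard-normal "prior" (score(x) = -x) and a random linear operator
n, m = 32, 16
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n)) / np.sqrt(n)
y = A @ rng.normal(size=n)
x_hat = posterior_sample(lambda x, s: -x, A, y, np.zeros(n), sigmas=np.linspace(1.0, 0.01, 200))
print("measurement residual:", np.linalg.norm(A @ x_hat - y))
```

Because the score model is trained without reference to A, the same prior can be reused with a different measurement operator at test time, which is the flexibility the abstract emphasizes.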
BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis
https://openreview.net/forum?id=L7wzpQttNO
https://openreview.net/forum?id=L7wzpQttNO
Max W. Y. Lam,Jun Wang,Dan Su,Dong Yu
ICLR 2022,Poster
Diffusion probabilistic models (DPMs) and their extensions have emerged as competitive generative models yet confront the challenge of efficient sampling. We propose a new bilateral denoising diffusion model (BDDM) that parameterizes both the forward and reverse processes with a schedule network and a score network, trained with a novel bilateral modeling objective. We show that the new surrogate objective achieves a tighter lower bound on the log marginal likelihood than a conventional surrogate. We also find that BDDM allows inheriting pre-trained score network parameters from any DPM and consequently enables speedy and stable learning of the schedule network and optimization of a noise schedule for sampling. Our experiments demonstrate that BDDMs can generate high-fidelity audio samples with as few as three sampling steps. Moreover, compared to other state-of-the-art diffusion-based neural vocoders, BDDMs produce samples of comparable or higher quality, indistinguishable from human speech, notably with only seven sampling steps (143x faster than WaveGrad and 28.6x faster than DiffWave). We release our code at https://github.com/tencent-ailab/bddm.
https://openreview.net/pdf/8ac09c0c92b7fc2905b7bbd470f4a2a4607be605.pdf
Sample Efficient Stochastic Policy Extragradient Algorithm for Zero-Sum Markov Game
https://openreview.net/forum?id=IvepFxYRDG
https://openreview.net/forum?id=IvepFxYRDG
Ziyi Chen,Shaocong Ma,Yi Zhou
ICLR 2022,Poster
The two-player zero-sum Markov game is a fundamental problem in reinforcement learning and game theory. Although many algorithms have been proposed for solving zero-sum Markov games in the existing literature, many of them either require full knowledge of the environment or are not sample-efficient. In this paper, we develop a fully decentralized and sample-efficient stochastic policy extragradient algorithm for solving tabular zero-sum Markov games. In particular, our algorithm utilizes multiple stochastic estimators to accurately estimate the value functions involved in the stochastic updates, and leverages entropy regularization to accelerate the convergence. Specifically, with a proper entropy-regularization parameter, we prove that the stochastic policy extragradient algorithm has a sample complexity of the order $\widetilde{\mathcal{O}}(\frac{A_{\max}}{\mu_{\min}\epsilon^{5.5}(1-\gamma)^{13.5}})$ for finding a solution that achieves an $\epsilon$-Nash equilibrium duality gap, where $A_{\max}$ is the maximum number of actions between the players, $\mu_{\min}$ is a lower bound on the state stationary distribution, and $\gamma$ is the discount factor. Such a sample complexity result substantially improves the state-of-the-art complexity result.
https://openreview.net/pdf/a741fb9d3540a4bbeb6e7a7d823e6777a7b60fa5.pdf
The Uncanny Similarity of Recurrence and Depth
https://openreview.net/forum?id=3wNcr5nq56
https://openreview.net/forum?id=3wNcr5nq56
Avi Schwarzschild,Arjun Gupta,Amin Ghiasi,Micah Goldblum,Tom Goldstein
ICLR 2022,Poster
It is widely believed that deep neural networks contain layer specialization, wherein networks extract hierarchical features representing edges and patterns in shallow layers and complete objects in deeper layers. Unlike common feed-forward models that have distinct filters at each layer, recurrent networks reuse the same parameters at various depths. In this work, we observe that recurrent models exhibit the same hierarchical behaviors and the same performance benefits as depth despite reusing the same filters at every recurrence. By training models of various feed-forward and recurrent architectures on several datasets for image classification as well as maze solving, we show that recurrent networks have the ability to closely emulate the behavior of non-recurrent deep models, often doing so with far fewer parameters.
https://openreview.net/pdf/c5dbd4bef9cb2e2f2c7e32f017e666939851c682.pdf
Implicit Bias of Adversarial Training for Deep Neural Networks
https://openreview.net/forum?id=l8It-0lE5e7
https://openreview.net/forum?id=l8It-0lE5e7
Bochen Lv,Zhanxing Zhu
ICLR 2022,Poster
We provide theoretical understandings of the implicit bias imposed by adversarial training for homogeneous deep neural networks without any explicit regularization. In particular, for deep linear networks adversarially trained by gradient descent on a linearly separable dataset, we prove that the direction of the product of weight matrices converges to the direction of the max-margin solution of the original dataset. Furthermore, we generalize this result to the case of adversarial training for non-linear homogeneous deep neural networks without the linear separability of the dataset. We show that, when the neural network is adversarially trained with $\ell_2$ or $\ell_{\infty}$ FGSM, FGM and PGD perturbations, the direction of the limit point of normalized parameters of the network along the trajectory of the gradient flow converges to a KKT point of a constrained optimization problem that aims to maximize the margin for adversarial examples. Our results theoretically justify the longstanding conjecture that adversarial training modifies the decision boundary by utilizing adversarial examples to improve robustness, and potentially provides insights for designing new robust training strategies.
https://openreview.net/pdf/334522a6b3b9a814fb10fe5d7edc0607f92383ff.pdf
Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning
https://openreview.net/forum?id=_SJ-_yyes8
https://openreview.net/forum?id=_SJ-_yyes8
Denis Yarats,Rob Fergus,Alessandro Lazaric,Lerrel Pinto
ICLR 2022,Poster
We present DrQ-v2, a model-free reinforcement learning (RL) algorithm for visual continuous control. DrQ-v2 builds on DrQ, an off-policy actor-critic approach that uses data augmentation to learn directly from pixels. We introduce several improvements that yield state-of-the-art results on the DeepMind Control Suite. Notably, DrQ-v2 is able to solve complex humanoid locomotion tasks directly from pixel observations, previously unattained by model-free RL. DrQ-v2 is conceptually simple, easy to implement, and has a significantly smaller computational footprint than prior work, with the majority of tasks taking just 8 hours to train on a single GPU. Finally, we publicly release DrQ-v2's implementation to provide RL practitioners with a strong and computationally efficient baseline.
https://openreview.net/pdf/41fe3ddc3b234237036d092d185d57e0ad50b43c.pdf
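DrQ-style agents rely on a simple image augmentation: pad each frame and take a random crop of the original size (a random shift). The sketch below is an illustrative per-sample implementation; the pad width and replicate padding mode are assumptions, and real implementations typically vectorize the crop, e.g. with grid sampling.

```python
# Minimal sketch of random-shift augmentation for pixel observations.
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """imgs: (batch, channels, H, W) float tensor."""
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(b):                            # per-sample random crop offsets
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

frames = torch.rand(8, 9, 84, 84)                 # e.g. 3 stacked RGB frames per observation
print(random_shift(frames).shape)                 # torch.Size([8, 9, 84, 84])
```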
$\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization
https://openreview.net/forum?id=MMAeCXIa89
https://openreview.net/forum?id=MMAeCXIa89
Carl Hvarfner,Danny Stoll,Artur Souza,Marius Lindauer,Frank Hutter,Luigi Nardi
ICLR 2022,Poster
Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample efficiency, vanilla BO cannot utilize readily available prior beliefs that the practitioner has about the potential location of the optimum. Thus, BO disregards a valuable source of information, reducing its appeal to ML practitioners. To address this issue, we propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. In contrast to previous approaches, $\pi$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. We provide regret bounds when $\pi$BO is applied to the common Expected Improvement acquisition function and prove convergence at regular rates independently of the prior. Further, our experiments show that $\pi$BO outperforms competing approaches across a wide suite of benchmarks and prior characteristics. We also demonstrate that $\pi$BO improves on the state-of-the-art performance for a popular deep learning task, with a $12.5\times$ time-to-accuracy speedup over prominent BO approaches.
https://openreview.net/pdf/e27b0600c99cc8f40e03a89e247d9b309bd479df.pdf
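A compact sketch of the core idea in the abstract above: reweight a standard acquisition function by the user's prior over the optimum's location, with an exponent that decays as more data is collected so that the surrogate eventually dominates. The expected-improvement form for minimization, the beta/n decay schedule, and the toy GP posterior are assumptions of this sketch, not the paper's exact recipe.

```python
# Minimal sketch: prior-weighted expected improvement on a 1-D grid of candidates.
import numpy as np
from scipy.stats import norm

def prior_weighted_ei(mu, sigma, best_f, prior_pdf, n_iter, beta=10.0):
    """mu, sigma: surrogate posterior mean/std at candidate points (arrays);
    prior_pdf: user prior density evaluated at the same candidates."""
    z = (best_f - mu) / np.maximum(sigma, 1e-12)             # EI for minimization
    ei = (best_f - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei * prior_pdf ** (beta / max(n_iter, 1))         # prior influence decays with n_iter

# toy usage: fake GP posterior and a Gaussian prior centered at x = 0.3
x = np.linspace(0, 1, 201)
mu, sigma = np.sin(6 * x), 0.2 + 0.1 * x
prior = norm.pdf(x, loc=0.3, scale=0.1)
acq = prior_weighted_ei(mu, sigma, best_f=mu.min(), prior_pdf=prior, n_iter=5)
print("next query:", x[np.argmax(acq)])
```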
A Generalized Weighted Optimization Method for Computational Learning and Inversion
https://openreview.net/forum?id=14F3fI6MGxX
https://openreview.net/forum?id=14F3fI6MGxX
Kui Ren,Yunan Yang,Björn Engquist
ICLR 2022,Poster
The generalization capacity of various machine learning models exhibits different phenomena in the under- and over-parameterized regimes. In this paper, we focus on regression models such as feature regression and kernel regression and analyze a generalized weighted least-squares optimization method for computational learning and inversion with noisy data. The highlight of the proposed framework is that we allow weighting in both the parameter space and the data space. The weighting scheme encodes both a priori knowledge on the object to be learned and a strategy to weight the contribution of different data points in the loss function. Here, we characterize the impact of the weighting scheme on the generalization error of the learning method, where we derive explicit generalization errors for the random Fourier feature model in both the under- and over-parameterized regimes. For more general feature maps, error bounds are provided based on the singular values of the feature matrix. We demonstrate that appropriate weighting from prior knowledge can improve the generalization capability of the learned model.
https://openreview.net/pdf/50fdf2cdf1d618eebd98fd1294f4d5e3c80bd9ea.pdf
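To make "weighting in both the parameter space and the data space" concrete, here is a small worked example for linear least squares: with a data-space weight W_d and a parameter-space weight W_p, setting the gradient of the weighted objective to zero gives the closed form below. The specific weight choices in the toy usage are illustrative, not the schemes analyzed in the paper.

```python
# Sketch: minimize ||W_d^(1/2) (A x - y)||^2 + ||W_p^(1/2) x||^2,
# whose minimizer is x = (A^T W_d A + W_p)^(-1) A^T W_d y.
import numpy as np

def weighted_ls(A, y, W_d, W_p):
    return np.linalg.solve(A.T @ W_d @ A + W_p, A.T @ W_d @ y)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
x_true = rng.normal(size=10)
y = A @ x_true + 0.1 * rng.normal(size=50)
W_d = np.diag(rng.uniform(0.5, 2.0, size=50))     # trust some data points more than others
W_p = 0.1 * np.eye(10)                            # mild prior weighting on the parameters
print("recovery error:", np.linalg.norm(weighted_ls(A, y, W_d, W_p) - x_true))
```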
DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG Signals
https://openreview.net/forum?id=d_2lcDh0Y9c
https://openreview.net/forum?id=d_2lcDh0Y9c
Cédric Allain,Alexandre Gramfort,Thomas Moreau
ICLR 2022,Poster
The quantitative analysis of non-invasive electrophysiology signals from electroencephalography (EEG) and magnetoencephalography (MEG) boils down to the identification of temporal patterns such as evoked responses and transient bursts of neural oscillations, as well as blinks or heartbeats for data cleaning. Several works have shown that these patterns can be extracted efficiently in an unsupervised way, e.g., using Convolutional Dictionary Learning. This leads to an event-based description of the data. Given these events, a natural question is to estimate how their occurrences are modulated by certain cognitive tasks and experimental manipulations. To address this question, we propose a point process approach. While point processes have been used in neuroscience in the past, in particular for single cell recordings (spike trains), techniques such as Convolutional Dictionary Learning make them amenable to human studies based on EEG/MEG signals. We develop a novel statistical point process model – called driven temporal point processes (DriPP) – where the intensity function of the point process model is linked to a set of point processes corresponding to stimulation events. We derive a fast and principled expectation-maximization algorithm to estimate the parameters of this model. Simulations reveal that model parameters can be identified from long enough signals. Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses – both evoked and induced – and isolates non-task-specific temporal patterns.
https://openreview.net/pdf/cddccdd359e2f4b1a5def2498a394cb4cef57397.pdf
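The sketch below shows the general shape of a "driven" intensity function of the kind the abstract describes: a baseline rate plus a kernel placed after each stimulation event, truncated to be causal. The truncated-Gaussian kernel and the parameter values are illustrative assumptions; only the overall structure mirrors the model.

```python
# Sketch: intensity(t) = baseline + alpha * sum of causal kernels after each stimulus.
import numpy as np

def driven_intensity(t, driver_times, mu=0.5, alpha=2.0, m=0.3, sigma=0.05):
    """Intensity at times t (array), driven by a list of stimulation event times."""
    lam = np.full_like(t, mu, dtype=float)
    for t_i in driver_times:
        delay = t - t_i
        kernel = np.exp(-0.5 * ((delay - m) / sigma) ** 2)
        kernel[delay < 0] = 0.0                   # causality: no effect before the stimulus
        lam += alpha * kernel / (sigma * np.sqrt(2 * np.pi))
    return lam

t = np.linspace(0, 5, 1000)
stimuli = [0.5, 2.0, 3.5]
print("peak intensity:", driven_intensity(t, stimuli).max())
```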
Stiffness-aware neural network for learning Hamiltonian systems
https://openreview.net/forum?id=uVXEKeqJbNa
https://openreview.net/forum?id=uVXEKeqJbNa
SENWEI Liang,Zhongzhan Huang,Hong Zhang
ICLR 2022,Poster
We propose stiffness-aware neural network (SANN), a new method for learning Hamiltonian dynamical systems from data. SANN identifies and splits the training data into stiff and nonstiff portions based on a stiffness-aware index, a simple, yet effective metric we introduce to quantify the stiffness of the dynamical system. This classification along with a resampling technique allows us to apply different time integration strategies such as step size adaptation to better capture the dynamical characteristics of the Hamiltonian vector fields. We evaluate SANN on complex physical systems including a three-body problem and billiard model. We show that SANN is more stable and can better preserve energy when compared with the state-of-the-art methods, leading to significant improvement in accuracy.
https://openreview.net/pdf/862118c84d67ec76a2491e8bff7f7d288934bc92.pdf
CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting
https://openreview.net/forum?id=PilZY3omXV2
https://openreview.net/forum?id=PilZY3omXV2
Gerald Woo,Chenghao Liu,Doyen Sahoo,Akshat Kumar,Steven Hoi
ICLR 2022,Poster
Deep learning has been actively studied for time series forecasting, and the mainstream paradigm is based on the end-to-end training of neural network architectures, ranging from classical LSTM/RNNs to more recent TCNs and Transformers. Motivated by the recent success of representation learning in computer vision and natural language processing, we argue that a more promising paradigm for time series forecasting is to first learn disentangled feature representations, followed by a simple regression fine-tuning step -- we justify such a paradigm from a causal perspective. Following this principle, we propose a new time series representation learning framework for long-sequence time series forecasting, named CoST, which applies contrastive learning methods to learn disentangled seasonal-trend representations. CoST comprises both time domain and frequency domain contrastive losses to learn discriminative trend and seasonal representations, respectively. Extensive experiments on real-world datasets show that CoST consistently outperforms the state-of-the-art methods by a considerable margin, achieving a 21.3% improvement in MSE on multivariate benchmarks. It is also robust to various choices of backbone encoders, as well as downstream regressors. Code is available at https://github.com/salesforce/CoST.
https://openreview.net/pdf/004dc9aa27163347637dedb217441afb5184d2c0.pdf
CoordX: Accelerating Implicit Neural Representation with a Split MLP Architecture
https://openreview.net/forum?id=oAy7yPmdNz
https://openreview.net/forum?id=oAy7yPmdNz
Ruofan Liang,Hongyi Sun,Nandita Vijaykumar
ICLR 2022,Poster
Implicit neural representations with multi-layer perceptrons (MLPs) have recently gained prominence for a wide variety of tasks such as novel view synthesis and 3D object representation and rendering. However, a significant challenge with these representations is that both training and inference with an MLP over a large number of input coordinates to learn and represent an image, video, or 3D object, require large amounts of computation and incur long processing times. In this work, we aim to accelerate inference and training of coordinate-based MLPs for implicit neural representations by proposing a new split MLP architecture, CoordX. With CoordX, the initial layers are split to learn each dimension of the input coordinates separately. The intermediate features are then fused by the last layers to generate the learned signal at the corresponding coordinate point. This significantly reduces the amount of computation required and leads to large speedups in training and inference, while achieving similar accuracy as the baseline MLP. This approach thus aims at first learning functions that are a decomposition of the original signal and then fusing them to generate the learned signal. Our proposed architecture can be generally used for many implicit neural representation tasks with no additional memory overheads. We demonstrate a speedup of up to 2.92x compared to the baseline model for image, video, and 3D shape representation and rendering tasks.
https://openreview.net/pdf/883503a39b83b819141ae8569f3496924381ba8b.pdf
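A minimal sketch of the split-and-fuse idea for a 2-D signal: each branch sees only one coordinate axis, so the expensive early layers run over H + W points instead of H x W, and the per-axis features are fused afterwards. The elementwise-product fusion and layer sizes are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch: per-axis coordinate branches fused by broadcasting, then a small output head.
import torch
import torch.nn as nn

class SplitCoordMLP(nn.Module):
    def __init__(self, feat=64, out_dim=3):
        super().__init__()
        self.branch_x = nn.Sequential(nn.Linear(1, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.branch_y = nn.Sequential(nn.Linear(1, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.fuse = nn.Sequential(nn.ReLU(), nn.Linear(feat, out_dim))

    def forward(self, xs, ys):
        # xs: (W, 1), ys: (H, 1) -- 1-D coordinate grids processed separately
        fx = self.branch_x(xs)                    # (W, feat)
        fy = self.branch_y(ys)                    # (H, feat)
        fused = fy[:, None, :] * fx[None, :, :]   # (H, W, feat) via broadcasting
        return self.fuse(fused)                   # (H, W, out_dim), e.g. an RGB image

H, W = 64, 64
ys = torch.linspace(-1, 1, H).unsqueeze(1)
xs = torch.linspace(-1, 1, W).unsqueeze(1)
print(SplitCoordMLP()(xs, ys).shape)              # torch.Size([64, 64, 3])
```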
Plant 'n' Seek: Can You Find the Winning Ticket?
https://openreview.net/forum?id=9n9c8sf0xm
https://openreview.net/forum?id=9n9c8sf0xm
Jonas Fischer,Rebekka Burkholz
ICLR 2022,Poster
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to reduce the computational costs associated with deep learning during training and model deployment. Currently, such algorithms are primarily evaluated on imaging data, for which we lack ground truth information and thus the understanding of how sparse lottery tickets could be. To fill this gap, we develop a framework that allows us to plant and hide winning tickets with desirable properties in randomly initialized neural networks. To analyze the ability of state-of-the-art pruning to identify tickets of extreme sparsity, we design and hide such tickets solving four challenging tasks. In extensive experiments, we observe similar trends as in imaging studies, indicating that our framework can provide transferable insights into realistic problems. Additionally, we can now see beyond such relative trends and highlight limitations of current pruning methods. Based on our results, we conclude that the current limitations in ticket sparsity are likely of algorithmic rather than fundamental nature. We anticipate that comparisons to planted tickets will facilitate future developments of efficient pruning algorithms.
https://openreview.net/pdf/233a820014b8dad2af42950b7ca1b07460b76ebe.pdf
Coherence-based Label Propagation over Time Series for Accelerated Active Learning
https://openreview.net/forum?id=gjNcH0hj0LM
https://openreview.net/forum?id=gjNcH0hj0LM
Yooju Shin,Susik Yoon,Sundong Kim,Hwanjun Song,Jae-Gil Lee,Byung Suk Lee
ICLR 2022,Poster
Time-series data are ubiquitous these days, but the lack of labels is regarded as a hurdle to their broad applicability. Meanwhile, active learning has been successfully adopted to reduce the labeling efforts in various tasks. Thus, this paper addresses an important issue: time-series active learning. Inspired by the temporal coherence in time-series data, where consecutive data points tend to have the same label, our label propagation framework, called TCLP, automatically assigns a queried label to the data points within an accurately estimated time-series segment, thereby significantly boosting the impact of an individual query. Compared with traditional time-series active learning, TCLP is shown to improve the classification accuracy by up to 7.1 times when only 0.8% of data points in the entire time series are queried for their labels.
https://openreview.net/pdf/adfec039282ce54c1aed434a034e6b291eb9364f.pdf
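An illustrative sketch of the propagation step described above: a queried label is copied to neighboring time steps for as long as consecutive points stay within a similarity threshold. The fixed threshold rule is a simplification of TCLP's segment estimation, not the paper's method.

```python
# Sketch: propagate one queried label across a temporally coherent segment.
import numpy as np

def propagate_label(series, query_idx, label, labels, tau=0.5):
    """series: (T, d) array; labels: (T,) int array updated in place (-1 = unknown)."""
    labels[query_idx] = label
    i = query_idx
    while i > 0 and np.linalg.norm(series[i] - series[i - 1]) < tau:
        i -= 1
        labels[i] = label                         # extend the segment to the left
    j = query_idx
    while j < len(series) - 1 and np.linalg.norm(series[j + 1] - series[j]) < tau:
        j += 1
        labels[j] = label                         # extend the segment to the right
    return labels

T = 200
series = np.concatenate([np.zeros((100, 2)), np.ones((100, 2))]) + 0.05 * np.random.randn(T, 2)
labels = np.full(T, -1)
print(np.unique(propagate_label(series, 50, 1, labels), return_counts=True))
```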
A Class of Short-term Recurrence Anderson Mixing Methods and Their Applications
https://openreview.net/forum?id=_X90SIKbHa
https://openreview.net/forum?id=_X90SIKbHa
Fuchao Wei,Chenglong Bao,Yang Liu
ICLR 2022,Poster
Anderson mixing (AM) is a powerful acceleration method for fixed-point iterations, but its computation requires storing many historical iterations. The extra memory footprint can be prohibitive when solving high-dimensional problems on a resource-limited machine. To reduce the memory overhead, we propose a novel class of short-term recurrence AM methods (ST-AM). The ST-AM methods only store two previous iterations with cheap corrections. We prove that the basic version of ST-AM is equivalent to the full-memory AM in strongly convex quadratic optimization, and with minor changes it has local linear convergence for solving general nonlinear fixed-point problems. We further analyze the convergence properties of the regularized ST-AM for nonconvex (stochastic) optimization. Finally, we apply ST-AM to several applications including solving root-finding problems and training neural networks. Experimental results show that ST-AM is competitive with the long-memory AM and outperforms many existing optimizers.
https://openreview.net/pdf/408f39a77474837a59c452c7fbe11596507a9c6d.pdf
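For reference, the sketch below implements the classic full-memory Anderson mixing that the abstract contrasts with: it must keep the last m iterate/residual pairs, which is exactly the storage that a short-term recurrence variant avoids. The mixing parameter, window size, and linear toy problem are illustrative; this is not the authors' ST-AM method.

```python
# Sketch: full-memory Anderson mixing for a fixed-point map g, storing m history pairs.
import numpy as np

def anderson_mixing(g, x0, m=5, beta=1.0, max_iter=50, tol=1e-8):
    xs, rs = [x0], [g(x0) - x0]                   # histories of iterates and residuals
    for k in range(max_iter):
        if np.linalg.norm(rs[-1]) < tol:
            break
        mk = min(m, k)
        if mk == 0:
            x_next = xs[-1] + beta * rs[-1]       # plain damped step on the first iteration
        else:
            dX = np.stack([xs[-i] - xs[-i - 1] for i in range(1, mk + 1)], axis=1)
            dR = np.stack([rs[-i] - rs[-i - 1] for i in range(1, mk + 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dR, rs[-1], rcond=None)   # least-squares mixing weights
            x_next = xs[-1] + beta * rs[-1] - (dX + beta * dR) @ gamma
        xs.append(x_next); rs.append(g(x_next) - x_next)
        xs, rs = xs[-(m + 1):], rs[-(m + 1):]     # the memory footprint: m+1 stored pairs
    return xs[-1]

# toy usage: a contractive linear fixed-point problem x = M x + b
rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20)) / (2.5 * np.sqrt(20))
b = rng.normal(size=20)
g = lambda x: M @ x + b
x_star = anderson_mixing(g, np.zeros(20))
print("fixed-point residual:", np.linalg.norm(g(x_star) - x_star))
```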
The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs
https://openreview.net/forum?id=A05I5IvrdL-
https://openreview.net/forum?id=A05I5IvrdL-
Johannes Müller,Guido Montufar
ICLR 2022,Poster
We consider the problem of finding the best memoryless stochastic policy for an infinite-horizon partially observable Markov decision process (POMDP) with finite state and action spaces with respect to either the discounted or mean reward criterion. We show that the (discounted) state-action frequencies and the expected cumulative reward are rational functions of the policy, whereby the degree is determined by the degree of partial observability. We then describe the optimization problem as a linear optimization problem in the space of feasible state-action frequencies subject to polynomial constraints that we characterize explicitly. This allows us to address the combinatorial and geometric complexity of the optimization problem using recent tools from polynomial optimization. In particular, we demonstrate how the partial observability constraints can lead to multiple smooth and non-smooth local optimizers and we estimate the number of critical points.
https://openreview.net/pdf/d9402e7eae55cb2e0a33dfaedb88a525a88beb52.pdf