title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks | https://openreview.net/forum?id=n0OeTdNRG0Q | https://openreview.net/forum?id=n0OeTdNRG0Q | Jiawei Du,Hanshu Yan,Jiashi Feng,Joey Tianyi Zhou,Liangli Zhen,Rick Siow Mong Goh,Vincent Tan | ICLR 2022,Poster | Overparametrized Deep Neural Networks (DNNs) often achieve astounding performances, but may potentially result in severe generalization error. Recently, the relation between the sharpness of the loss landscape and the generalization error has been established by Foret et al. (2020), in which the Sharpness Aware Minimizer (SAM) was proposed to mitigate the degradation of the generalization. Unfortunately, SAM’s computational cost is roughly double that of base optimizers, such as Stochastic Gradient Descent (SGD). This paper thus proposes Efficient Sharpness Aware Minimizer (ESAM), which boosts SAM’s efficiency at no cost to its generalization performance. ESAM includes two novel and efficient training strategies—Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection. In the former, the sharpness measure is approximated by perturbing a stochastically chosen set of weights in each iteration; in the latter, the SAM loss is optimized using only a judiciously selected subset of data that is sensitive to the sharpness. We provide theoretical explanations as to why these strategies perform well. We also show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM enhances the efficiency over SAM from requiring 100% extra computations to 40% vis-à-vis base optimizers, while test accuracies are preserved or even improved. | https://openreview.net/pdf/ded078d40e07efa5958c1cbeb447de7e55420ae0.pdf |
Lipschitz-constrained Unsupervised Skill Discovery | https://openreview.net/forum?id=BGvt0ghNgA | https://openreview.net/forum?id=BGvt0ghNgA | Seohong Park,Jongwook Choi,Jaekyeom Kim,Honglak Lee,Gunhee Kim | ICLR 2022,Poster | We study the problem of unsupervised skill discovery, whose goal is to learn a set of diverse and useful skills with no external reward. There have been a number of skill discovery methods based on maximizing the mutual information (MI) between skills and states. However, we point out that their MI objectives usually prefer static skills to dynamic ones, which may hinder the application for downstream tasks. To address this issue, we propose Lipschitz-constrained Skill Discovery (LSD), which encourages the agent to discover more diverse, dynamic, and far-reaching skills. Another benefit of LSD is that its learned representation function can be utilized for solving goal-following downstream tasks even in a zero-shot manner — i.e., without further training or complex planning. Through experiments on various MuJoCo robotic locomotion and manipulation environments, we demonstrate that LSD outperforms previous approaches in terms of skill diversity, state space coverage, and performance on seven downstream tasks including the challenging task of following multiple goals on Humanoid. Our code and videos are available at https://shpark.me/projects/lsd/. | https://openreview.net/pdf/8651c40702367a7edf4361ffff2a8a4cd82b9cba.pdf |
Learning Generalizable Representations for Reinforcement Learning via Adaptive Meta-learner of Behavioral Similarities | https://openreview.net/forum?id=zBOI9LFpESK | https://openreview.net/forum?id=zBOI9LFpESK | Jianda Chen,Sinno Pan | ICLR 2022,Poster | How to learn an effective reinforcement learning-based model for control tasks from high-level visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. In order to boost the learning of state encoding, recent works are focused on capturing behavioral similarities between state representations or applying data augmentation on visual observations. In this paper, we propose a novel meta-learner-based framework for representation learning regarding behavioral similarities for reinforcement learning. Specifically, our framework encodes the high-dimensional observations into two decomposed embeddings regarding reward and dynamics in a Markov Decision Process (MDP). A pair of meta-learners are developed, one of which quantifies the reward similarity and the other quantifies dynamics similarity over the correspondingly decomposed embeddings. The meta-learners are self-learned to update the state embeddings by approximating two disjoint terms in on-policy bisimulation metric. To incorporate the reward and dynamics terms, we further develop a strategy to adaptively balance their impacts based on different tasks or environments. We empirically demonstrate that our proposed framework outperforms state-of-the-art baselines on several benchmarks, including conventional DM Control Suite, Distracting DM Control Suite and a self-driving task CARLA. | https://openreview.net/pdf/6abafd4c0174bd9684e716b9b972429d4b2ae350.pdf |
Effective Model Sparsification by Scheduled Grow-and-Prune Methods | https://openreview.net/forum?id=xa6otUDdP2W | https://openreview.net/forum?id=xa6otUDdP2W | Xiaolong Ma,Minghai Qin,Fei Sun,Zejiang Hou,Kun Yuan,Yi Xu,Yanzhi Wang,Yen-Kuang Chen,Rong Jin,Yuan Xie | ICLR 2022,Poster | Deep neural networks (DNNs) are effective in solving many real-world problems. Larger DNN models usually exhibit better quality (e.g., accuracy) but their excessive computation results in long inference time. Model sparsification can reduce the computation and memory cost while maintaining model quality. Most existing sparsification algorithms unidirectionally remove weights, while others randomly or greedily explore a small subset of weights in each layer for pruning. The limitations of these algorithms reduce the level of achievable sparsity. In addition, many algorithms still require pre-trained dense models and thus suffer from large memory footprint. In this paper, we propose a novel scheduled grow-and-prune (GaP) methodology without having to pre-train a dense model. It addresses the shortcomings of the previous works by repeatedly growing a subset of layers to dense and then pruning them back to sparse after some training. Experiments show that the models pruned using the proposed methods match or beat the quality of the highly optimized dense models at 80% sparsity on a variety of tasks, such as image classification, object detection, 3D object part segmentation, and translation. They also outperform other state-of-the-art (SOTA) methods for model sparsification. As an example, a 90% non-uniform sparse ResNet-50 model obtained via GaP achieves 77.9% top-1 accuracy on ImageNet, improving the previous SOTA results by 1.5%. Code available at: https://github.com/boone891214/GaP. | https://openreview.net/pdf/039dfe30e42cddf0da3ff88c3fbb66087d12f9a7.pdf |
FILIP: Fine-grained Interactive Language-Image Pre-Training | https://openreview.net/forum?id=cpDhcsEDC2 | https://openreview.net/forum?id=cpDhcsEDC2 | Lewei Yao,Runhui Huang,Lu Hou,Guansong Lu,Minzhe Niu,Hang Xu,Xiaodan Liang,Zhenguo Li,Xin Jiang,Chunjing Xu | ICLR 2022,Poster | Unsupervised large-scale vision-language pre-training has shown promising advances on various downstream tasks. Existing methods often model the cross-modal interaction either via the similarity of the global feature of each modality which misses sufficient information, or finer-grained interactions using cross/self-attention upon visual and textual tokens. However, cross/self-attention suffers from inferior efficiency in both training and inference. In this paper, we introduce a large-scale Fine-grained Interactive Language-Image Pre-training (FILIP) to achieve finer-level alignment through a cross-modal late interaction mechanism, which uses a token-wise maximum similarity between visual and textual tokens to guide the contrastive objective. FILIP successfully leverages the finer-grained expressiveness between image patches and textual words by modifying only contrastive loss, while simultaneously gaining the ability to pre-compute image and text representations offline at inference, keeping both large-scale training and inference efficient. Furthermore, we construct a new large-scale image-text pair dataset called FILIP300M for pre-training. Experiments show that FILIP achieves state-of-the-art performance on multiple downstream vision-language tasks including zero-shot image classification and image-text retrieval. The visualization on word-patch alignment further shows that FILIP can learn meaningful fine-grained features with promising localization ability. | https://openreview.net/pdf/e8f6807c88ea1d0d0090f2c381f21739b217efb9.pdf |
Information Prioritization through Empowerment in Visual Model-based RL | https://openreview.net/forum?id=DfUjyyRW90 | https://openreview.net/forum?id=DfUjyyRW90 | Homanga Bharadhwaj,Mohammad Babaeizadeh,Dumitru Erhan,Sergey Levine | ICLR 2022,Poster | Model-based reinforcement learning (RL) algorithms designed for handling complex visual observations typically learn some sort of latent state representation, either explicitly or implicitly. Standard methods of this sort do not distinguish between functionally relevant aspects of the state and irrelevant distractors, instead aiming to represent all available information equally. We propose a modified objective for model-based RL that, in combination with mutual information maximization, allows us to learn representations and dynamics for visual model-based RL without reconstruction in a way that explicitly prioritizes functionally relevant factors. The key principle behind our design is to integrate a term inspired by variational empowerment into a state-space learning model based on mutual information. This term prioritizes information that is correlated with action, thus ensuring that functionally relevant factors are captured first. Furthermore, the same empowerment term also promotes faster exploration during the RL process, especially for sparse-reward tasks where the reward signal is insufficient to drive exploration in the early stages of learning. We evaluate the approach on a suite of vision-based robot control tasks with natural video backgrounds, and show that the proposed prioritized information objective outperforms state-of-the-art model based RL approaches by an average of 20\% in terms of episodic returns at 1M environment interactions with 30\% higher sample efficiency at 100k interactions. | https://openreview.net/pdf/13e706cf08e60f2727651883527c31c53558ed33.pdf |
Efficient Active Search for Combinatorial Optimization Problems | https://openreview.net/forum?id=nO5caZwFwYu | https://openreview.net/forum?id=nO5caZwFwYu | André Hottung,Yeong-Dae Kwon,Kevin Tierney | ICLR 2022,Poster | Recently numerous machine learning based methods for combinatorial optimization problems have been proposed that learn to construct solutions in a sequential decision process via reinforcement learning. While these methods can be easily combined with search strategies like sampling and beam search, it is not straightforward to integrate them into a high-level search procedure offering strong search guidance. Bello et al. (2016) propose active search, which adjusts the weights of a (trained) model with respect to a single instance at test time using reinforcement learning. While active search is simple to implement, it is not competitive with state-of-the-art methods because adjusting all model weights for each test instance is very time and memory intensive. Instead of updating all model weights, we propose and evaluate three efficient active search strategies that only update a subset of parameters during the search. The proposed methods offer a simple way to significantly improve the search performance of a given model and outperform state-of-the-art machine learning based methods on combinatorial problems, even surpassing the well-known heuristic solver LKH3 on the capacitated vehicle routing problem. Finally, we show that (efficient) active search enables learned models to effectively solve instances that are much larger than those seen during training. | https://openreview.net/pdf/80ed58845ccc4912c64aeee73748354bf61b6a13.pdf |
Ancestral protein sequence reconstruction using a tree-structured Ornstein-Uhlenbeck variational autoencoder | https://openreview.net/forum?id=FZoZ7a31GCW | https://openreview.net/forum?id=FZoZ7a31GCW | Lys Sanz Moreta,Ola Rønning,Ahmad Salim Al-Sibahi,Jotun Hein,Douglas Theobald,Thomas Hamelryck | ICLR 2022,Poster | We introduce a deep generative model for representation learning of biological sequences that, unlike existing models, explicitly represents the evolutionary process. The model makes use of a tree-structured Ornstein-Uhlenbeck process, obtained from a given phylogenetic tree, as an informative prior for a variational autoencoder. We show the model performs well on the task of ancestral sequence reconstruction of single protein families. Our results and ablation studies indicate that the explicit representation of evolution using a suitable tree-structured prior has the potential to improve representation learning of biological sequences considerably. Finally, we briefly discuss extensions of the model to genomic-scale data sets and the case of a latent phylogenetic tree. | https://openreview.net/pdf/375306f629eb94307d85be89f055c937f95bde83.pdf |
Training Structured Neural Networks Through Manifold Identification and Variance Reduction | https://openreview.net/forum?id=mdUYT5QV0O | https://openreview.net/forum?id=mdUYT5QV0O | Zih-Syuan Huang,Ching-pei Lee | ICLR 2022,Poster | This paper proposes an algorithm, RMDA, for training neural networks (NNs) with a regularization term for promoting desired structures. RMDA does not incur computation additional to proximal SGD with momentum, and achieves variance reduction without requiring the objective function to be of the finite-sum form. Through the tool of manifold identification from nonlinear optimization, we prove that after a finite number of iterations, all iterates of RMDA possess a desired structure identical to that induced by the regularizer at the stationary point of asymptotic convergence, even in the presence of engineering tricks like data augmentation that complicate the training process. Experiments on training NNs with structured sparsity confirm that variance reduction is necessary for such an identification, and show that RMDA thus significantly outperforms existing methods for this task. For unstructured sparsity, RMDA also outperforms a state-of-the-art pruning method, validating the benefits of training structured NNs through regularization. Implementation of RMDA is available at https://www.github.com/zihsyuan1214/rmda. | https://openreview.net/pdf/f5bc435bcb3593d7f03a33d1dc39cffc4ff125da.pdf |
The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization | https://openreview.net/forum?id=KBQP4A_J1K | https://openreview.net/forum?id=KBQP4A_J1K | Róbert Csordás,Kazuki Irie,Jürgen Schmidhuber | ICLR 2022,Poster | Despite progress across a broad range of applications, Transformers have limited success in systematic generalization. The situation is especially frustrating in the case of algorithmic tasks, where they often fail to find intuitive solutions that route relevant information to the right node/operation at the right time in the grid represented by Transformer columns. To facilitate the learning of useful control flow, we propose two modifications to the Transformer architecture, copy gate and geometric attention. Our novel Neural Data Router (NDR) achieves 100% length generalization accuracy on the classic compositional table lookup task, as well as near-perfect accuracy on the simple arithmetic task and a new variant of ListOps testing for generalization across computational depths. NDR’s attention and gating patterns tend to be interpretable as an intuitive form of neural routing | https://openreview.net/pdf/0a8ae186717b6e3ecc30dea384724b288d4060b6.pdf |
On the Limitations of Multimodal VAEs | https://openreview.net/forum?id=w-CPUXXrAj | https://openreview.net/forum?id=w-CPUXXrAj | Imant Daunhawer,Thomas M. Sutter,Kieran Chin-Cheong,Emanuele Palumbo,Julia E Vogt | ICLR 2022,Poster | Multimodal variational autoencoders (VAEs) have shown promise as efficient generative models for weakly-supervised data. Yet, despite their advantage of weak supervision, they exhibit a gap in generative quality compared to unimodal VAEs, which are completely unsupervised. In an attempt to explain this gap, we uncover a fundamental limitation that applies to a large family of mixture-based multimodal VAEs. We prove that the sub-sampling of modalities enforces an undesirable upper bound on the multimodal ELBO and thereby limits the generative quality of the respective models. Empirically, we showcase the generative quality gap on both synthetic and real data and present the tradeoffs between different variants of multimodal VAEs. We find that none of the existing approaches fulfills all desired criteria of an effective multimodal generative model when applied on more complex datasets than those used in previous benchmarks. In summary, we identify, formalize, and validate fundamental limitations of VAE-based approaches for modeling weakly-supervised data and discuss implications for real-world applications. | https://openreview.net/pdf/e25daec9628954edce262e1cda172567415510fc.pdf |
Recursive Disentanglement Network | https://openreview.net/forum?id=CSfcOznpDY | https://openreview.net/forum?id=CSfcOznpDY | Yixuan Chen,Yubin Shi,Dongsheng Li,Yujiang Wang,Mingzhi Dong,Yingying Zhao,Robert P. Dick,Qin Lv,Fan Yang,Li Shang | ICLR 2022,Poster | Disentangled feature representation is essential for data-efficient learning. The feature space of deep models is inherently compositional. Existing $\beta$-VAE-based methods, which only apply disentanglement regularization to the resulting embedding space of deep models, cannot effectively regularize such compositional feature space, resulting in unsatisfactory disentangled results. In this paper, we formulate the compositional disentanglement learning problem from an information-theoretic perspective and propose a recursive disentanglement network (RecurD) that propagates regulatory inductive bias recursively across the compositional feature space during disentangled representation learning. Experimental studies demonstrate that RecurD outperforms $\beta$-VAE and several of its state-of-the-art variants on disentangled representation learning and enables more data-efficient downstream machine learning tasks. | https://openreview.net/pdf/dcb1062c5fcfc89c6726a5cb4916e3e228a85716.pdf |
ADAVI: Automatic Dual Amortized Variational Inference Applied To Pyramidal Bayesian Models | https://openreview.net/forum?id=CgIEctmcXx1 | https://openreview.net/forum?id=CgIEctmcXx1 | Louis Rouillard,Demian Wassermann | ICLR 2022,Poster | Frequently, population studies feature pyramidally-organized data represented using Hierarchical Bayesian Models (HBM) enriched with plates. These models can become prohibitively large in settings such as neuroimaging, where a sample is composed of a functional MRI signal measured on 300 brain locations, across 4 measurement sessions, and 30 subjects, resulting in around 1 million latent parameters. Such high dimensionality hampers the usage of modern, expressive flow-based techniques. To infer parameter posterior distributions in this challenging class of problems, we designed a novel methodology that automatically produces a variational family dual to a target HBM. This variational family, represented as a neural network, consists in the combination of an attention-based hierarchical encoder feeding summary statistics to a set of normalizing flows. Our automatically-derived neural network exploits exchangeability in the plate-enriched HBM and factorizes its parameter space. The resulting architecture reduces by orders of magnitude its parameterization with respect to that of a typical flow-based representation, while maintaining expressivity. Our method performs inference on the specified HBM in an amortized setup: once trained, it can readily be applied to a new data sample to compute the parameters' full posterior. We demonstrate the capability and scalability of our method on simulated data, as well as a challenging high-dimensional brain parcellation experiment. We also open up several questions that lie at the intersection between normalizing flows, SBI, structured Variational Inference, and inference amortization. | https://openreview.net/pdf/5e3287e6e246a5cf89ad2b2824e72a35f115d662.pdf |
Distributionally Robust Models with Parametric Likelihood Ratios | https://openreview.net/forum?id=a34GrNaYEcS | https://openreview.net/forum?id=a34GrNaYEcS | Paul Michel,Tatsunori Hashimoto,Graham Neubig | ICLR 2022,Poster | As machine learning models are deployed ever more broadly, it becomes increasingly important that they are not only able to perform well on their training distribution, but also yield accurate predictions when confronted with distribution shift. The Distributionally Robust Optimization (DRO) framework proposes to address this issue by training models to minimize their expected risk under a collection of distributions, to imitate test-time shifts. This is most commonly achieved by instance-level re-weighting of the training objective to emulate the likelihood ratio with possible test distributions, which allows for estimating their empirical risk via importance sampling (assuming that they are subpopulations of the training distribution). However, re-weighting schemes in the literature are usually limited due to the difficulty of keeping the optimization problem tractable and the complexity of enforcing normalization constraints. In this paper, we show that three simple ideas -- mini-batch level normalization, a KL penalty and simultaneous gradient updates -- allow us to train models with DRO using a broader class of parametric likelihood ratios. In a series of experiments on both image and text classification benchmarks, we find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches, and that the method performs reliably well with little hyper-parameter tuning. | https://openreview.net/pdf/6da76f4b4c34dc213335f4873bea59a8c0f40ec9.pdf |
Constrained Physical-Statistics Models for Dynamical System Identification and Prediction | https://openreview.net/forum?id=gbe1zHyA73 | https://openreview.net/forum?id=gbe1zHyA73 | Jérémie DONA,Marie Déchelle,patrick gallinari,Marina Levy | ICLR 2022,Poster | Modeling dynamical systems combining prior physical knowledge and machine learning (ML) is promising in scientific problems when the underlying processes are not fully understood, e.g. when the dynamics is partially known. A common practice to identify the respective parameters of the physical and ML components is to formulate the problem as supervised learning on observed trajectories. However, this formulation leads to an infinite number of possible decompositions. To solve this ill-posedness, we reformulate the learning problem by introducing an upper bound on the prediction error of a physical-statistical model. This allows us to control the contribution of both the physical and statistical components to the overall prediction. This framework generalizes several existing hybrid schemes proposed in the literature. We provide theoretical guarantees on the well-posedness of our formulation along with a proof of convergence in a simple affine setting. For more complex dynamics, we validate our framework experimentally. | https://openreview.net/pdf/523bf746dd928f5b2c204455c61d5a91ca94c02d.pdf |
Doubly Adaptive Scaled Algorithm for Machine Learning Using Second-Order Information | https://openreview.net/forum?id=HCelXXcSEuH | https://openreview.net/forum?id=HCelXXcSEuH | Majid Jahani,Sergey Rusakov,Zheng Shi,Peter Richtárik,Michael W. Mahoney,Martin Takac | ICLR 2022,Poster | We present a novel adaptive optimization algorithm for large-scale machine learning problems. Equipped with a low-cost estimate of local curvature and Lipschitz smoothness, our method dynamically adapts the search direction and step-size. The search direction contains gradient information preconditioned by a well-scaled diagonal preconditioning matrix that captures the local curvature information. Our methodology does not require the tedious task of learning rate tuning, as the learning rate is updated automatically without adding an extra hyper-parameter. We provide convergence guarantees on a comprehensive collection of optimization problems, including convex, strongly convex, and nonconvex problems, in both deterministic and stochastic regimes. We also conduct an extensive empirical evaluation on standard machine learning problems, justifying our algorithm's versatility and demonstrating its strong performance compared to other state-of-the-art first-order and second-order methods. | https://openreview.net/pdf/329537bd0e1ff27ec646590f4e792d301baae526.pdf |
Understanding approximate and unrolled dictionary learning for pattern recovery | https://openreview.net/forum?id=rI0LYgGeYaw | https://openreview.net/forum?id=rI0LYgGeYaw | Benoît Malézieux,Thomas Moreau,Matthieu Kowalski | ICLR 2022,Poster | Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals. Alternating minimization (AM) is standard for the underlying optimization, where gradient descent steps alternate with sparse coding procedures. The major drawback of this method is its prohibitive computational cost, making it unpractical on large real-world data sets. This work studies an approximate formulation of dictionary learning based on unrolling and compares it to alternating minimization to find the best trade-off between speed and precision. We analyze the asymptotic behavior and convergence rate of gradients estimates in both methods. We show that unrolling performs better on the support of the inner problem solution and during the first iterations. Finally, we apply unrolling on pattern learning in magnetoencephalography (MEG) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method. | https://openreview.net/pdf/e456e2f1ce949acca23fba4b8e8661e5415f672a.pdf |
Constraining Linear-chain CRFs to Regular Languages | https://openreview.net/forum?id=jbrgwbv8nD | https://openreview.net/forum?id=jbrgwbv8nD | Sean Papay,Roman Klinger,Sebastian Pado | ICLR 2022,Poster | A major challenge in structured prediction is to represent the interdependencies within output structures. When outputs are structured as sequences, linear-chain conditional random fields (CRFs) are a widely used model class which can learn local dependencies in the output. However, the CRF's Markov assumption makes it impossible for CRFs to represent distributions with nonlocal dependencies, and standard CRFs are unable to respect nonlocal constraints of the data (such as global arity constraints on output labels). We present a generalization of CRFs that can enforce a broad class of constraints, including nonlocal ones, by specifying the space of possible output structures as a regular language $\mathcal{L}$. The resulting regular-constrained CRF (RegCCRF) has the same formal properties as a standard CRF, but assigns zero probability to all label sequences not in $\mathcal{L}$. Notably, RegCCRFs can incorporate their constraints during training, while related models only enforce constraints during decoding. We prove that constrained training is never worse than constrained decoding, and show empirically that it can be substantially better in practice. Additionally, we demonstrate a practical benefit on downstream tasks by incorporating a RegCCRF into a deep neural model for semantic role labeling, exceeding state-of-the-art results on a standard dataset. | https://openreview.net/pdf/978e824fd3601a2093e96071691ad08ccd066da4.pdf |
Dive Deeper Into Integral Pose Regression | https://openreview.net/forum?id=vHVcB-ak3Si | https://openreview.net/forum?id=vHVcB-ak3Si | Kerui Gu,Linlin Yang,Angela Yao | ICLR 2022,Poster | Integral pose regression combines an implicit heatmap with end-to-end training for human body and hand pose estimation. Unlike detection-based heatmap methods, which decode final joint positions from the heatmap with a non-differentiable argmax operation, integral regression methods apply a differentiable expectation operation. This paper offers a deep dive into the inference and back-propagation of integral pose regression to better understand the differences in performance and training compared to detection-based methods. For inference, we give theoretical support as to why expectation should always be better than the argmax operation, i.e. integral regression should always outperform detection. Yet, in practice, this is observed only in hard cases because the heatmap activation for regression shrinks in easy cases. We then experimentally show that activation shrinkage is one of the leading causes for integral regression's inferior performance. For back-propagation, we theoretically and empirically analyze the gradients to explain the slow training speed of integral regression. Based on these findings, we incorporate the supervision of a spatial prior to speed up training and improve performance. | https://openreview.net/pdf/1c30c07fe0d8b037287227beefbc0d1263b462f5.pdf |
Evidential Turing Processes | https://openreview.net/forum?id=84NMXTHYe- | https://openreview.net/forum?id=84NMXTHYe- | Melih Kandemir,Abdullah Akgül,Manuel Haussmann,Gozde Unal | ICLR 2022,Poster | A probabilistic classifier with reliable predictive uncertainties i) fits successfully to the target domain data, ii) provides calibrated class probabilities in difficult regions of the target domain (e.g. class overlap), and iii) accurately identifies queries coming out of the target domain and rejects them. We introduce an original combination of Evidential Deep Learning, Neural Processes, and Neural Turing Machines capable of providing all three essential properties mentioned above for total uncertainty quantification. We observe our method on three image classification benchmarks to consistently improve the in-domain uncertainty quantification, out-of-domain detection, and robustness against input perturbations with one single model. Our unified solution delivers an implementation-friendly and computationally efficient recipe for safety clearance and provides intellectual economy to an investigation of algorithmic roots of epistemic awareness in deep neural nets. | https://openreview.net/pdf/78e3a3224d06a68626d3e18fe724144646c74064.pdf |
Noisy Feature Mixup | https://openreview.net/forum?id=vJb4I2ANmy | https://openreview.net/forum?id=vJb4I2ANmy | Soon Hoe Lim,N. Benjamin Erichson,Francisco Utrera,Winnie Xu,Michael W. Mahoney | ICLR 2022,Poster | We introduce Noisy Feature Mixup (NFM), an inexpensive yet effective method for data augmentation that combines the best of interpolation based training and noise injection schemes. Rather than training with convex combinations of pairs of examples and their labels, we use noise-perturbed convex combinations of pairs of data points in both input and feature space. This method includes mixup and manifold mixup as special cases, but it has additional advantages, including better smoothing of decision boundaries and enabling improved model robustness. We provide theory to understand this as well as the implicit regularization effects of NFM. Our theory is supported by empirical results, demonstrating the advantage of NFM, as compared to mixup and manifold mixup. We show that residual networks and vision transformers trained with NFM have favorable trade-offs between predictive accuracy on clean data and robustness with respect to various types of data perturbation across a range of computer vision benchmark datasets. | https://openreview.net/pdf/4a18925f14f56e9ae86ffe5ebb83e50a2d418c34.pdf |
Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently | https://openreview.net/forum?id=moHCzz6D5H3 | https://openreview.net/forum?id=moHCzz6D5H3 | Xiaohan Chen,Jason Zhang,Zhangyang Wang | ICLR 2022,Poster | Sparse neural networks (NNs) are intensively investigated in literature due to their appeal in saving storage, memory, and computational costs. A recent work (Ramanujan et al., 2020) showed that, different from conventional pruning-and-finetuning pipeline, there exist hidden subnetworks in randomly initialized NNs that have good performance without training the weights. However, such "hidden subnetworks" have mediocre performances and require an expensive edge-popup algorithm to search for them. In this work, we define an extended class of subnetworks in randomly initialized NNs called disguised subnetworks, which are not only "hidden" in the random networks but also "disguised" -- hence can only be "unmasked" with certain transformations on weights. We argue that the unmasking process plays an important role in enlarging the capacity of the subnetworks and thus grants two major benefits: (i) the disguised subnetworks easily outperform the hidden counterparts; (ii) the unmasking process helps to relax the quality requirement on the sparse subnetwork mask so that the expensive edge-popup algorithm can be replaced with more efficient alternatives. On top of this new concept, we propose a novel two-stage algorithm that plays a Peek-a-Boo (PaB) game to identify the disguised subnetworks with a combination of two operations: (1) searching efficiently for a subnetwork at random initialization; (2) unmasking the disguise by learning to transform the resulting subnetwork's remaining weights. Furthermore, we show that the unmasking process can be efficiently implemented (a) without referring to any latent weights or scores; and (b) by only leveraging approximated gradients, so that the whole training algorithm is computationally light. Extensive experiments with several large models (ResNet-18, ResNet-50, and WideResNet-28) and datasets (CIFAR-10, CIFAR-100 and ImageNet) demonstrate the competency of PaB over edge-popup and other counterparts. Our codes are available at: https://github.com/VITA-Group/Peek-a-Boo. | https://openreview.net/pdf/60b7fbc376713b122304e2e0530ea7290974d364.pdf |
How Well Does Self-Supervised Pre-Training Perform with Streaming Data? | https://openreview.net/forum?id=EwqEx5ipbOu | https://openreview.net/forum?id=EwqEx5ipbOu | Dapeng Hu,Shipeng Yan,Qizhengqiu Lu,Lanqing HONG,Hailin Hu,Yifan Zhang,Zhenguo Li,Xinchao Wang,Jiashi Feng | ICLR 2022,Poster | Prior works on self-supervised pre-training focus on the joint training scenario, where massive unlabeled data are assumed to be given as input all at once, and only then is a learner trained. Unfortunately, such a problem setting is often impractical if not infeasible since many real-world tasks rely on sequential learning, e.g., data are decentralized or collected in a streaming fashion. In this paper, we conduct the first thorough and dedicated investigation on self-supervised pre-training with streaming data, aiming to shed light on the model behavior under this overlooked setup. Specifically, we pre-train over 500 models on four categories of pre-training streaming data from ImageNet and DomainNet and evaluate them on three types of downstream tasks and 12 different downstream datasets. Our studies show that, somehow beyond our expectation, with simple data replay or parameter regularization, sequential self-supervised pre-training turns out to be an efficient alternative for joint pre-training, as the performances of the former are mostly on par with those of the latter. Moreover, catastrophic forgetting, a common issue in sequential supervised learning, is much alleviated in sequential self-supervised learning (SSL), which is well justified through our comprehensive empirical analysis on representations and the sharpness of minima in the loss landscape. Our findings, therefore, suggest that, in practice, for SSL, the cumbersome joint training can be replaced mainly by sequential learning, which in turn enables a much broader spectrum of potential application scenarios. | https://openreview.net/pdf/a00f82e4a4b8bc53140104602610c56f5dea871e.pdf |
Subspace Regularizers for Few-Shot Class Incremental Learning | https://openreview.net/forum?id=boJy41J-tnQ | https://openreview.net/forum?id=boJy41J-tnQ | Afra Feyza Akyürek,Ekin Akyürek,Derry Wijaya,Jacob Andreas | ICLR 2022,Poster | Few-shot class incremental learning---the problem of updating a trained classifier to discriminate among an expanded set of classes with limited labeled data---is a key challenge for machine learning systems deployed in non-stationary environments. Existing approaches to the problem rely on complex model architectures and training procedures that are difficult to tune and re-use. In this paper, we present an extremely simple approach that enables the use of ordinary logistic regression classifiers for few-shot incremental learning. The key to this approach is a new family of \textit{subspace regularization} schemes that encourage weight vectors for new classes to lie close to the subspace spanned by the weights of existing classes. When combined with pretrained convolutional feature extractors, logistic regression models trained with subspace regularization outperform specialized, state-of-the-art approaches to few-shot incremental image classification by up to 23\% on the \textit{mini}ImageNet dataset. Because of its simplicity, subspace regularization can be straightforwardly configured to incorporate additional background information about the new classes (including class names and descriptions specified in natural language); this offers additional control over the trade-off between existing and new classes. Our results show that simple geometric regularization of class representations offers an effective tool for continual learning. | https://openreview.net/pdf/23bc15dae0714cd2d58e51d9c64892ca38433f7d.pdf |
Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms | https://openreview.net/forum?id=OtEDS2NWhqa | https://openreview.net/forum?id=OtEDS2NWhqa | Simin Hong,Anthony Cohn,David Crossland Hogg | ICLR 2022,Poster | Graph neural networks (GNNs) are widely used in regression and classification problems applied to text, in areas such as sentiment analysis and medical decision-making processes. We propose a novel form for node attributes within a GNN based model that captures node-specific embeddings for every word in the vocabulary. This provides a global representation at each node, coupled with node-level updates according to associations among words in a transcript. We demonstrate the efficacy of the approach by augmenting the accuracy of measuring major depressive disorder (MDD). Prior research has sought to make a diagnostic prediction of depression levels from patient data using several modalities, including audio, video, and text. On the DAIC-WOZ benchmark, our method outperforms state-of-art methods by a substantial margin, including those using multiple modalities. Moreover, we also evaluate the performance of our novel model on a Twitter sentiment dataset. We show that our model outperforms a general GNN model by leveraging our novel 2-D node attributes. These results demonstrate the generality of the proposed method. | https://openreview.net/pdf/9dbfdd98e925440e87789286618df5f2ae9e27ba.pdf |
Actor-Critic Policy Optimization in a Large-Scale Imperfect-Information Game | https://openreview.net/forum?id=DTXZqTNV5nW | https://openreview.net/forum?id=DTXZqTNV5nW | Haobo Fu,Weiming Liu,Shuang Wu,Yijia Wang,Tao Yang,Kai Li,Junliang Xing,Bin Li,Bo Ma,QIANG FU,Yang Wei | ICLR 2022,Poster | The deep policy gradient method has demonstrated promising results in many large-scale games, where the agent learns purely from its own experience. Yet, policy gradient methods with self-play suffer convergence problems to a Nash Equilibrium (NE) in multi-agent situations. Counterfactual regret minimization (CFR) has a convergence guarantee to a NE in 2-player zero-sum games, but it usually needs domain-specific abstractions to deal with large-scale games. Inheriting merits from both methods, in this paper we extend the actor-critic algorithm framework in deep reinforcement learning to tackle a large-scale 2-player zero-sum imperfect-information game, 1-on-1 Mahjong, whose information set size and game length are much larger than poker. The proposed algorithm, named Actor-Critic Hedge (ACH), modifies the policy optimization objective from originally maximizing the discounted returns to minimizing a type of weighted cumulative counterfactual regret. This modification is achieved by approximating the regret via a deep neural network and minimizing the regret via generating self-play policies using Hedge. ACH is theoretically justified as it is derived from a neural-based weighted CFR, for which we prove the convergence to a NE under certain conditions. Experimental results on the proposed 1-on-1 Mahjong benchmark and benchmarks from the literature demonstrate that ACH outperforms related state-of-the-art methods. Also, the agent obtained by ACH defeats a human champion in 1-on-1 Mahjong. | https://openreview.net/pdf/6fe3b02efc57f5d0f92998d2d9f16fbf729ade8f.pdf |
Policy Gradients Incorporating the Future | https://openreview.net/forum?id=EHaUTlm2eHg | https://openreview.net/forum?id=EHaUTlm2eHg | David Venuto,Elaine Lau,Doina Precup,Ofir Nachum | ICLR 2022,Poster | Reasoning about the future -- understanding how decisions in the present time affect outcomes in the future -- is one of the central challenges for reinforcement learning (RL), especially in highly-stochastic or partially observable environments. While predicting the future directly is hard, in this work we introduce a method that allows an agent to ``look into the future'' without explicitly predicting it. Namely, we propose to allow an agent, during its training on past experience, to observe what \emph{actually} happened in the future at that time, while enforcing an information bottleneck to avoid the agent overly relying on this privileged information. Coupled with recent advances in variational inference and a latent-variable autoregressive model, this gives our agent the ability to utilize rich and \emph{useful} information about the future trajectory dynamics in addition to the present. Our method, Policy Gradients Incorporating the Future (PGIF), is easy to implement and versatile, being applicable to virtually any policy gradient algorithm. We apply our proposed method to a number of off-the-shelf RL algorithms and show that PGIF is able to achieve higher reward faster in a variety of online and offline RL domains, as well as sparse-reward and partially observable environments. | https://openreview.net/pdf/1bc7c8d13a1713bf5e86827cb71b07a2d36496ad.pdf |
Gradient Information Matters in Policy Optimization by Back-propagating through Model | https://openreview.net/forum?id=rzvOQrnclO0 | https://openreview.net/forum?id=rzvOQrnclO0 | Chongchong Li,Yue Wang,Wei Chen,Yuting Liu,Zhi-Ming Ma,Tie-Yan Liu | ICLR 2022,Poster | Model-based reinforcement learning provides an efficient mechanism to find the optimal policy by interacting with the learned environment. In addition to treating the learned environment like a black-box simulator, a more effective way to use the model is to exploit its differentiability. Such methods require the gradient information of the learned environment model when calculating the policy gradient. However, since the error of gradient is not considered in the model learning phase, there is no guarantee for the model's accuracy. To address this problem, we first analyze the convergence rate for the policy optimization methods when the policy gradient is calculated using the learned environment model. The theoretical results show that the model gradient error matters in the policy optimization phase. Then we propose a two-model-based learning method to control the prediction error and the gradient error. We separate the different roles of these two models at the model learning phase and coordinate them at the policy optimization phase. After proposing the method, we introduce the directional derivative projection policy optimization (DDPPO) algorithm as a practical implementation to find the optimal policy. Finally, we empirically demonstrate the proposed algorithm has better sample efficiency when achieving a comparable or better performance on benchmark continuous control tasks. | https://openreview.net/pdf/f6890ab3f174fcc59f2441755d0529b472b382da.pdf |
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning | https://openreview.net/forum?id=xm6YD62D1Ub | https://openreview.net/forum?id=xm6YD62D1Ub | Adrien Bardes,Jean Ponce,Yann LeCun | ICLR 2022,Poster | Recent self-supervised methods for image representation learning maximize the agreement between embedding vectors produced by encoders fed with different views of the same image. The main challenge is to prevent a collapse in which the encoders produce constant or non-informative vectors. We introduce VICReg (Variance-Invariance-Covariance Regularization), a method that explicitly avoids the collapse problem with two regularization terms applied to both embeddings separately: (1) a term that maintains the variance of each embedding dimension above a threshold, (2) a term that decorrelates each pair of variables. Unlike most other approaches to the same problem, VICReg does not require techniques such as: weight sharing between the branches, batch normalization, feature-wise normalization, output quantization, stop gradient, memory banks, etc., and achieves results on par with the state of the art on several downstream tasks. In addition, we show that our variance regularization term stabilizes the training of other methods and leads to performance improvements. | https://openreview.net/pdf/25a14f6fde1bb9ddf5881d141f200e0c3aaa0ccb.pdf |
High Probability Generalization Bounds with Fast Rates for Minimax Problems | https://openreview.net/forum?id=gI7feJ9yXPz | https://openreview.net/forum?id=gI7feJ9yXPz | Shaojie Li,Yong Liu | ICLR 2022,Poster | Minimax problems are receiving an increasing amount of attention in a wide range of applications in machine learning (ML), for instance, reinforcement learning, robust optimization, adversarial learning, and distributed computing, to mention but a few. Current studies focus on the fundamental understanding of general minimax problems with an emphasis on convergence behavior. As a comparison, there is far less work to study the generalization performance. Additionally, existing generalization bounds are almost all derived in expectation, and the high probability bounds are all presented in the slow order $\mathcal{O}(1/\sqrt{n})$, where $n$ is the sample size. In this paper, we provide improved generalization analyses and obtain sharper high probability generalization bounds for most existing generalization measures of minimax problems. We then use the improved learning bounds to establish high probability generalization bounds with fast rates for classical empirical saddle point (ESP) solution and several popular gradient-based optimization algorithms, including gradient descent ascent (GDA), stochastic gradient descent ascent (SGDA), proximal point method (PPM), extra-gradient (EG), and optimistic gradient descent ascent (OGDA). In summary, we provide a systematical analysis of sharper generalization bounds of minimax problems. | https://openreview.net/pdf/e5876716f378d51a9cbe5f9d74f719c455bd4377.pdf |
SUMNAS: Supernet with Unbiased Meta-Features for Neural Architecture Search | https://openreview.net/forum?id=Z8FzvVU6_Kj | https://openreview.net/forum?id=Z8FzvVU6_Kj | Hyeonmin Ha,Ji-Hoon Kim,Semin Park,Byung-Gon Chun | ICLR 2022,Poster | One-shot Neural Architecture Search (NAS) usually constructs an over-parameterized network, which we call a supernet, and typically adopts sharing parameters among the sub-models to improve computational efficiency. One-shot NAS often repeatedly samples sub-models from the supernet and trains them to optimize the shared parameters. However, this training strategy suffers from multi-model forgetting. Training a sampled sub-model overrides the previous knowledge learned by the other sub-models, resulting in an unfair performance evaluation between the sub-models. We propose Supernet with Unbiased Meta-Features for Neural Architecture Search (SUMNAS), a supernet learning strategy based on meta-learning to tackle the knowledge forgetting issue. During the training phase, we explicitly address the multi-model forgetting problem and help the supernet learn unbiased meta-features, independent from the sampled sub-models. Once training is over, sub-models can be instantly compared to get the overall ranking or the best sub-model. Our evaluation on the NAS-Bench-201 and MobileNet-based search space demonstrate that SUMNAS shows improved ranking ability and finds architectures whose performance is on par with existing state-of-the-art NAS algorithms. | https://openreview.net/pdf/82215472636d191a93f9ff0e73c2fb07893068f4.pdf |
Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting | https://openreview.net/forum?id=_XNtisL32jv | https://openreview.net/forum?id=_XNtisL32jv | Shikuang Deng,Yuhang Li,Shanghang Zhang,Shi Gu | ICLR 2022,Poster | Recently, brain-inspired spiking neuron networks (SNNs) have attracted widespread research interest because of their event-driven and energy-efficient characteristics. It is difficult to efficiently train deep SNNs due to the non-differentiability of its activation function, which disables the typically used gradient descent approaches for traditional artificial neural networks (ANNs). Although the adoption of surrogate gradient (SG) formally allows for the back-propagation of losses, the discrete spiking mechanism actually differentiates the loss landscape of SNNs from that of ANNs, failing the surrogate gradient methods to achieve comparable accuracy as for ANNs. In this paper, we first analyze why the current direct training approach with surrogate gradient results in SNNs with poor generalizability. Then we introduce the temporal efficient training (TET) approach to compensate for the loss of momentum in the gradient descent with SG so that the training process can converge into flatter minima with better generalizability. Meanwhile, we demonstrate that TET improves the temporal scalability of SNN and induces a temporal inheritable training for acceleration. Our method consistently outperforms the SOTA on all reported mainstream datasets, including CIFAR-10/100 and ImageNet. Remarkably on DVS-CIFAR10, we obtained 83% top-1 accuracy, over 10% improvement compared to existing state of the art. | https://openreview.net/pdf/36b73d733683265023d3e40a225095942a71eef4.pdf |
Reliable Adversarial Distillation with Unreliable Teachers | https://openreview.net/forum?id=u6TRGdzhfip | https://openreview.net/forum?id=u6TRGdzhfip | Jianing Zhu,Jiangchao Yao,Bo Han,Jingfeng Zhang,Tongliang Liu,Gang Niu,Jingren Zhou,Jianliang Xu,Hongxia Yang | ICLR 2022,Poster | In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students. Therefore, in this paper, we propose reliable introspective adversarial distillation (IAD) where students partially instead of fully trust their teachers. Specifically, IAD distinguishes between three cases given a query of a natural data (ND) and the corresponding adversarial data (AD): (a) if a teacher is good at AD, its SL is fully trusted; (b) if a teacher is good at ND but not AD, its SL is partially trusted and the student also takes its own SL into account; (c) otherwise, the student only relies on its own SL. Experiments demonstrate the effectiveness of IAD for improving upon teachers in terms of adversarial robustness. | https://openreview.net/pdf/eac811eb67d636dd867f0cf3166da4e08c31d495.pdf |
Neural Program Synthesis with Query | https://openreview.net/forum?id=NyJ2KIN8P17 | https://openreview.net/forum?id=NyJ2KIN8P17 | Di Huang,Rui Zhang,Xing Hu,Xishan Zhang,Pengwei Jin,Nan Li,Zidong Du,Qi Guo,Yunji Chen | ICLR 2022,Poster | Aiming to find a program satisfying the user intent given input-output examples, program synthesis has attracted increasing interest in the area of machine learning. Despite the promising performance of existing methods, most of their success comes from the privileged information of well-designed input-output examples. However, providing such input-output examples is unrealistic because it requires the users to have the ability to describe the underlying program with a few input-output examples under the training distribution. In this work, we propose a query-based framework that trains a query neural network to generate informative input-output examples automatically and interactively from a large query space. The quality of the query depends on the amount of the mutual information between the query and the corresponding program, which can guide the optimization of the query framework. To estimate the mutual information more accurately, we introduce the functional space (F-space) which models the relevance between the input-output examples and the programs in a differentiable way. We evaluate the effectiveness and generalization of the proposed query-based framework on the Karel task and the list processing task. Experimental results show that the query-based framework can generate informative input-output examples which achieve and even outperform well-designed input-output examples. | https://openreview.net/pdf/e5b3abde2c30abab84561a7eb8d74c79cdb1ddec.pdf |
Delaunay Component Analysis for Evaluation of Data Representations | https://openreview.net/forum?id=HTVch9AMPa | https://openreview.net/forum?id=HTVch9AMPa | Petra Poklukar,Vladislav Polianskii,Anastasiia Varava,Florian T. Pokorny,Danica Kragic Jensfelt | ICLR 2022,Poster | Advanced representation learning techniques require reliable and general evaluation methods. Recently, several algorithms based on the common idea of geometric and topological analysis of a manifold approximated from the learned data representations have been proposed. In this work, we introduce Delaunay Component Analysis (DCA) -- an evaluation algorithm which approximates the data manifold using a more suitable neighbourhood graph called Delaunay graph. This provides a reliable manifold estimation even for challenging geometric arrangements of representations such as clusters with varying shape and density as well as outliers, which is where existing methods often fail. Furthermore, we exploit the nature of Delaunay graphs and introduce a framework for assessing the quality of individual novel data representations. We experimentally validate the proposed DCA method on representations obtained from neural networks trained with contrastive objective, supervised and generative models, and demonstrate various use cases of our extended single point evaluation framework. | https://openreview.net/pdf/3df46bb46fbcb4d72d6c49b08889d954321cc9c6.pdf |
Visual hyperacuity with moving sensor and recurrent neural computations | https://openreview.net/forum?id=p0rCmDEN_- | https://openreview.net/forum?id=p0rCmDEN_- | Alexander Rivkind,Or Ram,Eldad Assa,Michael Kreiserman,Ehud Ahissar | ICLR 2022,Poster | Dynamical phenomena, such as recurrent neuronal activity and perpetual motion of the eye, are typically overlooked in models of bottom-up visual perception. Recent experiments suggest that tiny inter-saccadic eye motion ("fixational drift") enhances visual acuity beyond the limit imposed by the density of retinal photoreceptors. Here we hypothesize that such an enhancement is enabled by recurrent neuronal computations in early visual areas. Specifically, we explore a setting involving a low-resolution dynamical sensor that moves with respect to a static scene, with drift-like tiny steps. This setting mimics a dynamical eye viewing objects in perceptually-challenging conditions. The dynamical sensory input is classified by a convolutional neural network with recurrent connectivity added to its lower layers, in analogy to recurrent connectivity in early visual areas. Applying our system to CIFAR-10 and CIFAR-100 datasets down-sampled via 8x8 sensor, we found that (i) classification accuracy, which is drastically reduced by this down-sampling, is mostly restored to its 32x32 baseline level when using a moving sensor and recurrent connectivity, (ii) in this setting, neurons in the early layers exhibit a wide repertoire of selectivity patterns, spanning the spatiotemporal selectivity space, with neurons preferring different combinations of spatial and temporal patterning, and (iii) curved sensor's trajectories improve visual acuity compared to straight trajectories, echoing recent experimental findings involving eye-tracking in challenging conditions. Our work sheds light on the possible role of recurrent connectivity in early vision as well as the roles of fixational drift and temporal-frequency selective cells in the visual system. It also proposes a solution for artificial image recognition in settings with limited resolution and multiple time samples, such as in edge AI applications. | https://openreview.net/pdf/74d601c5743a4f1b7e2e0c33b63ef39f695a6c4a.pdf |
Partial Wasserstein Adversarial Network for Non-rigid Point Set Registration | https://openreview.net/forum?id=2ggNjUisGyr | https://openreview.net/forum?id=2ggNjUisGyr | Ziming Wang,Nan Xue,Ling Lei,Gui-Song Xia | ICLR 2022,Poster | Given two point sets, the problem of registration is to recover a transformation that matches one set to the other. This task is challenging due to the presence of a large number of outliers, the unknown non-rigid deformations and the large sizes of the point sets. To obtain strong robustness against outliers, we formulate the registration problem as a partial distribution matching (PDM) problem, where the goal is to partially match the distributions represented by point sets in a metric space. To handle large point sets, we propose a scalable PDM algorithm by utilizing the efficient partial Wasserstein-1 (PW) discrepancy. Specifically, we derive the Kantorovich-Rubinstein duality for the PW discrepancy, and show that its gradient can be explicitly computed. Based on these results, we propose a partial Wasserstein adversarial network (PWAN), which is able to approximate the PW discrepancy by a neural network and minimize it by gradient descent. In addition, PWAN incorporates an efficient coherence regularizer for non-rigid transformations to avoid unrealistic deformations. We evaluate PWAN on practical point set registration tasks, and show that it is robust, scalable and performs more favorably than the state-of-the-art methods. | https://openreview.net/pdf/aa0f00aa3ffa33879529f0a8702f8a59097997a7.pdf |
Quantitative Performance Assessment of CNN Units via Topological Entropy Calculation | https://openreview.net/forum?id=xFOyMwWPkz | https://openreview.net/forum?id=xFOyMwWPkz | Yang Zhao,Hao Zhang | ICLR 2022,Poster | Identifying the status of individual network units is critical for understanding the mechanism of convolutional neural networks (CNNs). However, it is still challenging to reliably give a general indication of unit status, especially for units in different network models. To this end, we propose a novel method for quantitatively clarifying the status of a single unit in a CNN using algebraic topological tools. Unit status is indicated via the calculation of a topology-based entropy, called feature entropy, which measures the degree of chaos of the global spatial pattern hidden in the unit for a category. In this way, feature entropy can provide an accurate indication of status for units in different networks under diverse situations such as weight-rescaling operations. Further, we show that feature entropy decreases as the layer goes deeper and shares an almost simultaneous trend with the loss during training. We show that, by investigating the feature entropy of units on training data alone, one can discriminate between networks with different generalization abilities from the viewpoint of the effectiveness of their feature representations. | https://openreview.net/pdf/20b7b30269e6c1a3819b2bb2a73bf9a836ccd1b6.pdf |
Imitation Learning by Reinforcement Learning | https://openreview.net/forum?id=1zwleytEpYx | https://openreview.net/forum?id=1zwleytEpYx | Kamil Ciosek | ICLR 2022,Poster | Imitation learning algorithms learn a policy from demonstrations of expert behavior. We show that, for deterministic experts, imitation learning can be done by reduction to reinforcement learning with a stationary reward. Our theoretical analysis both certifies the recovery of expert reward and bounds the total variation distance between the expert and the imitation learner, showing a link to adversarial imitation learning. We conduct experiments which confirm that our reduction works well in practice for continuous control tasks. | https://openreview.net/pdf/18385148f0590e0d9a4ea379bd07c26c43414141.pdf |
On-Policy Model Errors in Reinforcement Learning | https://openreview.net/forum?id=81e1aeOt-sd | https://openreview.net/forum?id=81e1aeOt-sd | Lukas Froehlich,Maksym Lefarov,Melanie Zeilinger,Felix Berkenkamp | ICLR 2022,Poster | Model-free reinforcement learning algorithms can compute policy gradients given sampled environment transitions, but require large amounts of data. In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal. In this paper, we present a novel method that combines real-world data and a learned model in order to get the best of both worlds. The core idea is to exploit the real-world data for on-policy predictions and use the learned model only to generalize to different actions. Specifically, we use the data as time-dependent on-policy correction terms on top of a learned model, to retain the ability to generate data without accumulating errors over long prediction horizons. We motivate this method theoretically and show that it counteracts an error term for model-based policy improvement. Experiments on MuJoCo- and PyBullet-benchmarks show that our method can drastically improve existing model-based approaches without introducing additional tuning parameters. | https://openreview.net/pdf/a579d018b07ad6fa046ecc55697be2a1ea96eace.pdf |
TAPEX: Table Pre-training via Learning a Neural SQL Executor | https://openreview.net/forum?id=O50443AsCP | https://openreview.net/forum?id=O50443AsCP | Qian Liu,Bei Chen,Jiaqi Guo,Morteza Ziyadi,Zeqi Lin,Weizhu Chen,Jian-Guang Lou | ICLR 2022,Poster | Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes the improvements on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks. Our code can be found at https://github.com/microsoft/Table-Pretraining. | https://openreview.net/pdf/9abc11a326d0ad12abb958697c1ab8e0a585e62b.pdf |
DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning | https://openreview.net/forum?id=9SDQB3b68K | https://openreview.net/forum?id=9SDQB3b68K | Jinxin Liu,Zhang Hongyin,Donglin Wang | ICLR 2022,Poster | Offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. However, such a formulation is inevitably offline-data-hungry and, in practice, collecting a large offline dataset for one specific task over one specific environment is also costly and laborious. In this paper, we thus 1) formulate the offline dynamics adaptation by using (source) offline data collected from another dynamics to relax the requirement for the extensive (target) offline data, 2) characterize the dynamics shift problem in which prior offline methods do not scale well, and 3) derive a simple dynamics-aware reward augmentation (DARA) framework from both model-free and model-based offline settings. Specifically, DARA emphasizes learning from those source transition pairs that are adaptive for the target environment and mitigates the offline dynamics shift by characterizing state-action-next-state pairs instead of the typical state-action distribution sketched by prior offline RL methods. The experimental evaluation demonstrates that DARA, by augmenting rewards in the source offline dataset, can acquire an adaptive policy for the target environment and yet significantly reduce the requirement of target offline data. With only modest amounts of target offline data, our method consistently outperforms prior offline RL methods in both simulated and real-world tasks. | https://openreview.net/pdf/b68b505b8daf0a76ad9d4d5e4cd43976ba9864db.pdf |
Explaining Point Processes by Learning Interpretable Temporal Logic Rules | https://openreview.net/forum?id=P07dq7iSAGr | https://openreview.net/forum?id=P07dq7iSAGr | Shuang Li,Mingquan Feng,Lu Wang,Abdelmajid Essofi,Yufeng Cao,Junchi Yan,Le Song | ICLR 2022,Poster | We propose a principled method to learn a set of human-readable logic rules to explain temporal point processes. We assume that the generative mechanisms underlying the temporal point processes are governed by a set of first-order temporal logic rules, as a compact representation of domain knowledge. Our method formulates the rule discovery process from noisy event data as a maximum likelihood problem, and designs an efficient and tractable branch-and-price algorithm to progressively search for new rules and expand existing rules. The proposed algorithm alternates between the rule generation stage and the rule evaluation stage, and uncovers the most important collection of logic rules within a fixed time limit for both synthetic and real event data. In a real healthcare application, we also had human experts (i.e., doctors) verify the learned temporal logic rules and provide further improvements. These expert-revised interpretable rules lead to a point process model which outperforms previous state-of-the-art methods for symptom prediction, in both occurrence times and types. | https://openreview.net/pdf/a417bc93488c1842edd3369524cfa9125d192f04.pdf |
On Robust Prefix-Tuning for Text Classification | https://openreview.net/forum?id=eBCmOocUejf | https://openreview.net/forum?id=eBCmOocUejf | Zonghan Yang,Yang Liu | ICLR 2022,Poster | Recently, prefix-tuning has gained increasing attention as a parameter-efficient finetuning method for large-scale pretrained language models. The method keeps the pretrained models fixed and only updates the prefix token parameters for each downstream task. Despite being lightweight and modular, prefix-tuning still lacks robustness to textual adversarial attacks. However, most currently developed defense techniques necessitate auxiliary model update and storage, which inevitably hamper the modularity and low storage of prefix-tuning. In this work, we propose a robust prefix-tuning framework that preserves the efficiency and modularity of prefix-tuning. The core idea of our framework is leveraging the layerwise activations of the language model by correctly-classified training data as the standard for additional prefix finetuning. During the test phase, an extra batch-level prefix is tuned for each batch and added to the original prefix for robustness enhancement. Extensive experiments on three text classification benchmarks show that our framework substantially improves robustness over several strong baselines against five textual attacks of different types while maintaining comparable accuracy on clean texts. We also interpret our robust prefix-tuning framework from the optimal control perspective and pose several directions for future research. | https://openreview.net/pdf/02dfcabb44949137b40a59f94715b5caa4b12231.pdf |
Learning Graphon Mean Field Games and Approximate Nash Equilibria | https://openreview.net/forum?id=0sgntlpKDOz | https://openreview.net/forum?id=0sgntlpKDOz | Kai Cui,Heinz Koeppl | ICLR 2022,Poster | Recent advances at the intersection of dense large graph limits and mean field games have begun to enable the scalable analysis of a broad class of dynamical sequential games with large numbers of agents. So far, results have been largely limited to graphon mean field systems with continuous-time diffusive or jump dynamics, typically without control and with little focus on computational methods. We propose a novel discrete-time formulation for graphon mean field games as the limit of non-linear dense graph Markov games with weak interaction. On the theoretical side, we give extensive and rigorous existence and approximation properties of the graphon mean field solution in sufficiently large systems. On the practical side we provide general learning schemes for graphon mean field equilibria by either introducing agent equivalence classes or reformulating the graphon mean field system as a classical mean field system. By repeatedly finding a regularized optimal control solution and its generated mean field, we successfully obtain plausible approximate Nash equilibria in otherwise infeasible large dense graph games with many agents. Empirically, we are able to demonstrate on a number of examples that the finite-agent behavior comes increasingly close to the mean field behavior for our computed equilibria as the graph or system size grows, verifying our theory. More generally, we successfully apply policy gradient reinforcement learning in conjunction with sequential Monte Carlo methods. | https://openreview.net/pdf/b4f2ad24930753086ac1a6b4fea2f45e13771c21.pdf |
Measuring CLEVRness: Black-box Testing of Visual Reasoning Models | https://openreview.net/forum?id=UtGtoS4CYU | https://openreview.net/forum?id=UtGtoS4CYU | Spyridon Mouselinos,Henryk Michalewski,Mateusz Malinowski | ICLR 2022,Poster | How can we measure the reasoning capabilities of intelligent systems? Visual question answering provides a convenient framework for testing the model's abilities by interrogating the model through questions about the scene. However, despite scores of various visual QA datasets and architectures, which sometimes yield even super-human performance, the question of whether those architectures can actually reason remains open to debate. To answer this, we extend the visual question answering framework and propose the following behavioral test in the form of a two-player game. We consider black-box neural models of CLEVR. These models are trained on a diagnostic dataset benchmarking reasoning. Next, we train an adversarial player that re-configures the scene to fool the CLEVR model. We show that CLEVR models, which otherwise could perform at a ``human-level'', can easily be fooled by our agent. Our results cast doubt on whether data-driven approaches can reason without exploiting the numerous biases that are often present in those datasets. Finally, we also propose a controlled experiment measuring the efficiency with which such models learn and perform reasoning. | https://openreview.net/pdf/3eb3b1766e7d5addbbef3045662e6ad378427ae1.pdf |
Exploiting Class Activation Value for Partial-Label Learning | https://openreview.net/forum?id=qqdXHUGec9h | https://openreview.net/forum?id=qqdXHUGec9h | Fei Zhang,Lei Feng,Bo Han,Tongliang Liu,Gang Niu,Tao Qin,Masashi Sugiyama | ICLR 2022,Poster | Partial-label learning (PLL) solves the multi-class classification problem, where each training instance is assigned a set of candidate labels that include the true label. Recent advances showed that PLL can be compatible with deep neural networks, which achieved state-of-the-art performance. However, most of the existing deep PLL methods focus on designing proper training objectives under various assumptions on the collected data, which may limit their performance when the collected data cannot satisfy the adopted assumptions. In this paper, we propose to exploit the learned intrinsic representation of the model to identify the true label in the training process, which does not rely on any assumptions on the collected data. We make two key contributions. As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, could surprisingly be utilized to make accurate predictions for selecting the true label from candidate labels. Unfortunately, as CAM is confined to image inputs with convolutional neural networks, we are yet unable to directly leverage CAM to address the PLL problem with general inputs and models. Thus, as the second contribution, we propose the class activation value (CAV), which has properties similar to those of CAM while being versatile across various types of inputs and models. Building upon CAV, we propose a novel method named CAV Learning (CAVL) that selects the true label by the class with the maximum CAV for model training. Extensive experiments on various datasets demonstrate that our proposed CAVL method achieves state-of-the-art performance. | https://openreview.net/pdf/c3254feff27af6a4191d7d320290953ea6656e5d.pdf |
Givens Coordinate Descent Methods for Rotation Matrix Learning in Trainable Embedding Indexes | https://openreview.net/forum?id=9-Rfew334N | https://openreview.net/forum?id=9-Rfew334N | Yunjiang Jiang,Han Zhang,Yiming Qiu,Yun Xiao,Bo Long,Wen-Yun Yang | ICLR 2022,Poster | Product quantization (PQ), coupled with a space rotation, is widely used in modern approximate nearest neighbor (ANN) search systems to significantly compress the disk storage for embeddings and speed up the inner product computation. Existing rotation learning methods, however, minimize quantization distortion for fixed embeddings, which are not applicable to an end-to-end training scenario where embeddings are updated constantly. In this paper, based on geometric intuitions from Lie group theory, in particular the special orthogonal group SO(n), we propose a family of block Givens coordinate descent algorithms to learn rotation matrices that are provably convergent on any convex objective. Compared to the state-of-the-art SVD method, the Givens algorithms are much more parallelizable, reducing runtime by orders of magnitude on modern GPUs, and converge more stably according to experimental studies. They further improve upon vanilla product quantization significantly in an end-to-end training scenario. | https://openreview.net/pdf/8b198d22f6cd67d27c168c61da339c81ac7d363d.pdf |
cosFormer: Rethinking Softmax In Attention | https://openreview.net/forum?id=Bl8CQrx2Up4 | https://openreview.net/forum?id=Bl8CQrx2Up4 | Zhen Qin,Weixuan Sun,Hui Deng,Dongxu Li,Yunshen Wei,Baohong Lv,Junjie Yan,Lingpeng Kong,Yiran Zhong | ICLR 2022,Poster | Transformer has shown great successes in natural language processing, computer vision, and audio processing. As one of its core components, the softmax attention helps to capture long-range dependencies yet prohibits its scale-up due to the quadratic space and time complexity with respect to the sequence length. Kernel methods are often adopted to reduce the complexity by approximating the softmax operator. Nevertheless, due to the approximation errors, their performances vary across different tasks/corpora and suffer crucial performance drops when compared with the vanilla softmax attention. In this paper, we propose a linear transformer called cosFormer that can achieve comparable or better accuracy to the vanilla transformer in both causal and cross attentions. cosFormer is based on two key properties of softmax attention: i). non-negativeness of the attention matrix; ii). a non-linear re-weighting scheme that can concentrate the distribution of the attention matrix. As its linear substitute, cosFormer fulfills these properties with a linear operator and a cosine-based distance re-weighting mechanism. Extensive experiments on language modeling and text understanding tasks demonstrate the effectiveness of our method. We further examine our method on long sequences and achieve state-of-the-art performance on the Long-Range Arena benchmark. The source code is available at https://github.com/OpenNLPLab/cosFormer. | https://openreview.net/pdf/8d5626cec27b9e7c1a7e9c6ad0ba3b4e20fa74f9.pdf |
FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations | https://openreview.net/forum?id=htWIlvDcY8 | https://openreview.net/forum?id=htWIlvDcY8 | Lingjie Mei,Jiayuan Mao,Ziqi Wang,Chuang Gan,Joshua B. Tenenbaum | ICLR 2022,Poster | We present a meta-learning framework for learning new visual concepts quickly, from just one or a few examples, guided by multiple naturally occurring data streams: simultaneously looking at images, reading sentences that describe the objects in the scene, and interpreting supplemental sentences that relate the novel concept with other concepts. The learned concepts support downstream applications, such as answering questions by reasoning about unseen images. Our model, namely FALCON, represents individual visual concepts, such as colors and shapes, as axis-aligned boxes in a high-dimensional space (the ``box embedding space''). Given an input image and its paired sentence, our model first resolves the referential expression in the sentence and associates the novel concept with particular objects in the scene. Next, our model interprets supplemental sentences to relate the novel concept with other known concepts, such as ``X has property Y'' or ``X is a kind of Y''. Finally, it infers an optimal box embedding for the novel concept that jointly 1) maximizes the likelihood of the observed instances in the image, and 2) satisfies the relationships between the novel concepts and the known ones. We demonstrate the effectiveness of our model on both synthetic and real-world datasets. | https://openreview.net/pdf/074074edfbe3b59bf1651653fcf8002522df2588.pdf |
HyAR: Addressing Discrete-Continuous Action Reinforcement Learning via Hybrid Action Representation | https://openreview.net/forum?id=64trBbOhdGU | https://openreview.net/forum?id=64trBbOhdGU | Boyan Li,Hongyao Tang,YAN ZHENG,Jianye HAO,Pengyi Li,Zhen Wang,Zhaopeng Meng,LI Wang | ICLR 2022,Poster | Discrete-continuous hybrid action space is a natural setting in many practical problems, such as robot control and game AI. However, most previous Reinforcement Learning (RL) works only demonstrate the success in controlling with either discrete or continuous action space, while seldom taking into account the hybrid action space. One naive way to address hybrid action RL is to convert the hybrid action space into a unified homogeneous action space by discretization or continualization, so that conventional RL algorithms can be applied. However, this ignores the underlying structure of hybrid action space and also induces the scalability issue and additional approximation difficulties, thus leading to degenerated results. In this paper, we propose Hybrid Action Representation (HyAR) to learn a compact and decodable latent representation space for the original hybrid action space. HyAR constructs the latent space and embeds the dependence between discrete action and continuous parameter via an embedding table and a conditional Variational Auto-Encoder (VAE). To further improve the effectiveness, the action representation is trained to be semantically smooth through unsupervised environmental dynamics prediction. Finally, the agent learns its policy with conventional DRL algorithms in the learned representation space and interacts with the environment by decoding the hybrid action embeddings to the original action space. We evaluate HyAR in a variety of environments with discrete-continuous action space. The results demonstrate the superiority of HyAR when compared with previous baselines, especially for high-dimensional action spaces. | https://openreview.net/pdf/21005c7fdccb6d12eff7a0a9deba69320225bd53.pdf |
Transferable Adversarial Attack based on Integrated Gradients | https://openreview.net/forum?id=DesNW4-5ai9 | https://openreview.net/forum?id=DesNW4-5ai9 | Yi Huang,Adams Wai-Kin Kong | ICLR 2022,Poster | The vulnerability of deep neural networks to adversarial examples has drawn tremendous attention from the community. Three approaches, optimizing standard objective functions, exploiting attention maps, and smoothing decision surfaces, are commonly used to craft adversarial examples. By tightly integrating the three approaches, we propose a new and simple algorithm named Transferable Attack based on Integrated Gradients (TAIG) in this paper, which can find highly transferable adversarial examples for black-box attacks. Unlike previous methods using multiple computational terms or combining with other methods, TAIG integrates the three approaches into one single term. Two versions of TAIG that compute their integrated gradients on a straight-line path and a random piecewise linear path are studied. Both versions offer strong transferability and can seamlessly work together with the previous methods. Experimental results demonstrate that TAIG outperforms the state-of-the-art methods. | https://openreview.net/pdf/1586562e5f640c6d1b803ae91c947824cbefde02.pdf |
How to deal with missing data in supervised deep learning? | https://openreview.net/forum?id=J7b4BCtDm4 | https://openreview.net/forum?id=J7b4BCtDm4 | Niels Bruun Ipsen,Pierre-Alexandre Mattei,Jes Frellsen | ICLR 2022,Poster | The issue of missing data in supervised learning has been largely overlooked, especially in the deep learning community. We investigate strategies to adapt neural architectures for handling missing values. Here, we focus on regression and classification problems where the features are assumed to be missing at random. Of particular interest are schemes that allow reusing as-is a neural discriminative architecture. To address supervised deep learning with missing values, we propose to marginalize over missing values in a joint model of covariates and outcomes. Thereby, we leverage both the flexibility of deep generative models to describe the distribution of the covariates and the power of purely discriminative models to make predictions. More precisely, a deep latent variable model can be learned jointly with the discriminative model, using importance-weighted variational inference, essentially using importance sampling to mimic averaging over multiple imputations. In low-capacity regimes, or when the discriminative model has a strong inductive bias, we find that our hybrid generative/discriminative approach generally outperforms single imputation methods. | https://openreview.net/pdf/7b8b526fb7f29d7a4173ec3ad4d7c9cd67e17a63.pdf |
Topological Graph Neural Networks | https://openreview.net/forum?id=oxxUMeFwEHd | https://openreview.net/forum?id=oxxUMeFwEHd | Max Horn,Edward De Brouwer,Michael Moor,Yves Moreau,Bastian Rieck,Karsten Borgwardt | ICLR 2022,Poster | Graph neural networks (GNNs) are a powerful architecture for tackling graph learning tasks, yet have been shown to be oblivious to eminent substructures such as cycles. We present TOGL, a novel layer that incorporates global topological information of a graph using persistent homology. TOGL can be easily integrated into any type of GNN and is strictly more expressive (in terms of the Weisfeiler–Lehman graph isomorphism test) than message-passing GNNs. Augmenting GNNs with TOGL leads to improved predictive performance for graph and node classification tasks, both on synthetic data sets, which can be classified by humans using their topology but not by ordinary GNNs, and on real-world data. | https://openreview.net/pdf/8c27790ab47c50f8661bee7b4b27becf68c62532.pdf |
Learning Value Functions from Undirected State-only Experience | https://openreview.net/forum?id=6Pe99Juo9gd | https://openreview.net/forum?id=6Pe99Juo9gd | Matthew Chang,Arjun Gupta,Saurabh Gupta | ICLR 2022,Poster | This paper tackles the problem of learning value functions from undirected state-only experience (state transitions without action labels i.e. (s,s',r) tuples). We first theoretically characterize the applicability of Q-learning in this setting. We show that tabular Q-learning in discrete Markov decision processes (MDPs) learns the same value function under any arbitrary refinement of the action space. This theoretical result motivates the design of Latent Action Q-learning or LAQ, an offline RL method that can learn effective value functions from state-only experience. Latent Action Q-learning (LAQ) learns value functions using Q-learning on discrete latent actions obtained through a latent-variable future prediction model. We show that LAQ can recover value functions that have high correlation with value functions learned using ground truth actions. Value functions learned using LAQ lead to sample efficient acquisition of goal-directed behavior, can be used with domain-specific low-level controllers, and facilitate transfer across embodiments. Our experiments in 5 environments ranging from 2D grid world to 3D visual navigation in realistic environments demonstrate the benefits of LAQ over simpler alternatives, imitation learning oracles, and competing methods. | https://openreview.net/pdf/165580641e9ae16c0919513d98c7a95e8f701683.pdf |
The Boltzmann Policy Distribution: Accounting for Systematic Suboptimality in Human Models | https://openreview.net/forum?id=_l_QjPGN5ye | https://openreview.net/forum?id=_l_QjPGN5ye | Cassidy Laidlaw,Anca Dragan | ICLR 2022,Poster | Models of human behavior for prediction and collaboration tend to fall into two categories: ones that learn from large amounts of data via imitation learning, and ones that assume human behavior to be noisily-optimal for some reward function. The former are very useful, but only when it is possible to gather a lot of human data in the target environment and distribution. The advantage of the latter type, which includes Boltzmann rationality, is the ability to make accurate predictions in new environments without extensive data when humans are actually close to optimal. However, these models fail when humans exhibit systematic suboptimality, i.e. when their deviations from optimal behavior are not independent, but instead consistent over time. Our key insight is that systematic suboptimality can be modeled by predicting policies, which couple action choices over time, instead of trajectories. We introduce the Boltzmann policy distribution (BPD), which serves as a prior over human policies and adapts via Bayesian inference to capture systematic deviations by observing human actions during a single episode. The BPD is difficult to compute and represent because policies lie in a high-dimensional continuous space, but we leverage tools from generative and sequence modeling to enable efficient sampling and inference. We show that the BPD enables prediction of human behavior and human-AI collaboration equally as well as imitation learning-based human models while using far less data. | https://openreview.net/pdf/441fdfcfbe0339bb96b0292455eb5acb04f4676e.pdf |
WeakM3D: Towards Weakly Supervised Monocular 3D Object Detection | https://openreview.net/forum?id=ahi2XSHpAUZ | https://openreview.net/forum?id=ahi2XSHpAUZ | Liang Peng,Senbo Yan,Boxi Wu,Zheng Yang,Xiaofei He,Deng Cai | ICLR 2022,Poster | Monocular 3D object detection is one of the most challenging tasks in 3D scene understanding. Due to the ill-posed nature of monocular imagery, existing monocular 3D detection methods highly rely on training with the manually annotated 3D box labels on the LiDAR point clouds. This annotation process is very laborious and expensive. To dispense with the reliance on 3D box labels, in this paper we explore the weakly supervised monocular 3D detection. Specifically, we first detect 2D boxes on the image. Then, we adopt the generated 2D boxes to select corresponding RoI LiDAR points as the weak supervision. Eventually, we adopt a network to predict 3D boxes which can tightly align with associated RoI LiDAR points. This network is learned by minimizing our newly-proposed 3D alignment loss between the 3D box estimates and the corresponding RoI LiDAR points. We will illustrate the potential challenges of the above learning problem and resolve these challenges by introducing several effective designs into our method. Codes are available at https://github.com/SPengLiang/WeakM3D. | https://openreview.net/pdf/0c6d24738da03503a65d924f29e0a8a8a96e1b49.pdf |
Exploring Memorization in Adversarial Training | https://openreview.net/forum?id=7gE9V9GBZaI | https://openreview.net/forum?id=7gE9V9GBZaI | Yinpeng Dong,Ke Xu,Xiao Yang,Tianyu Pang,Zhijie Deng,Hang Su,Jun Zhu | ICLR 2022,Poster | Deep learning models have a propensity for fitting the entire training set even with random labels, which requires memorization of every training sample. In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of the adversarially trained models. We first demonstrate that deep networks have sufficient capacity to memorize adversarial examples of training data with completely random labels, but not all AT algorithms can converge under this extreme circumstance. Our study of AT with random labels motivates further analyses on the convergence and generalization of AT. We find that some AT approaches suffer from a gradient instability issue and the recently suggested complexity measures cannot explain robust generalization by considering models trained on random labels. Furthermore, we identify a significant drawback of memorization in AT: it can result in robust overfitting. We then propose a new mitigation algorithm motivated by detailed memorization analyses. Extensive experiments on various datasets validate the effectiveness of the proposed method. | https://openreview.net/pdf/c33e24a96109ee75779a6d6cbb2bb2f7df3ddadd.pdf |
Disentanglement Analysis with Partial Information Decomposition | https://openreview.net/forum?id=pETy-HVvGtt | https://openreview.net/forum?id=pETy-HVvGtt | Seiya Tokui,Issei Sato | ICLR 2022,Poster | We propose a framework to analyze how multivariate representations disentangle ground-truth generative factors. A quantitative analysis of disentanglement has been based on metrics designed to compare how one variable explains each generative factor. Current metrics, however, may fail to detect entanglement that involves more than two variables, e.g., representations that duplicate and rotate generative factors in high dimensional spaces. In this work, we establish a framework to analyze information sharing in a multivariate representation with Partial Information Decomposition and propose a new disentanglement metric. This framework enables us to understand disentanglement in terms of uniqueness, redundancy, and synergy. We develop an experimental protocol to assess how increasingly entangled representations are evaluated with each metric and confirm that the proposed metric correctly responds to entanglement. Through experiments on variational autoencoders, we find that models with similar disentanglement scores have a variety of characteristics in entanglement, for each of which a distinct strategy may be required to obtain a disentangled representation. | https://openreview.net/pdf/9951cc9e8f5ebea47555522be7bf51da71054709.pdf |
Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image | https://openreview.net/forum?id=U8pbd00cCWB | https://openreview.net/forum?id=U8pbd00cCWB | Shizhan Zhu,Sayna Ebrahimi,Angjoo Kanazawa,Trevor Darrell | ICLR 2022,Poster | Implicit shape models are promising 3D representations for modeling arbitrary locations, with Signed Distance Functions (SDFs) particularly suitable for clear mesh surface reconstruction. Existing approaches for single object reconstruction impose supervision signals based on the loss of the signed distance value from all locations in a scene, posing difficulties when extending to real-world scenarios. The spatial gradient of the signed distance field, rather than the SDF value itself, has not been typically employed as a source of supervision for single-view reconstruction, in part due to the difficulty of differentiably sampling a spatial gradient from the feature map. In this study, we derive a novel closed-form gradient sampling solution for Differentiable Gradient Sampling (DGS) that enables backpropagation of the loss of the spatial gradient back to the feature map pixels, thus allowing the imposition of the loss efficiently on the spatial gradient. As a result, we achieve high-quality single view indoor scene reconstruction results learning directly from a real-world scanned dataset (e.g. ScannetV2). Our model also performs well when generalizing to unseen images downloaded directly from the internet (Fig. 1). We comfortably advanced the state-of-the-art results with several established datasets including ShapeNet and ScannetV2; extensive quantitative analysis confirmed that our proposed DGS module plays an essential role in achieving this performance improvement. Full codes are available in MaskedURL. | https://openreview.net/pdf/8c96aef9f871676b7212944a243a58434b44fc1a.pdf |
Learning Continuous Environment Fields via Implicit Functions | https://openreview.net/forum?id=3ILxkQ7yElm | https://openreview.net/forum?id=3ILxkQ7yElm | Xueting Li,Shalini De Mello,Xiaolong Wang,Ming-Hsuan Yang,Jan Kautz,Sifei Liu | ICLR 2022,Poster | We propose a novel scene representation that encodes reaching distance -- the distance between any position in the scene to a goal along a feasible trajectory. We demonstrate that this environment field representation can directly guide the dynamic behaviors of agents in 2D mazes or 3D indoor scenes. Our environment field is a continuous representation and learned via a neural implicit function using discretely sampled training data. We showcase its application for agent navigation in 2D mazes, and human trajectory prediction in 3D indoor environments. To produce physically plausible and natural trajectories for humans, we additionally learn a generative model that predicts regions where humans commonly appear, and enforce the environment field to be defined within such regions. Extensive experiments demonstrate that the proposed method can generate both feasible and plausible trajectories efficiently and accurately. | https://openreview.net/pdf/bb9cdfc2e84adef6cb9610f21715cf048acaeaa4.pdf |
Causal Contextual Bandits with Targeted Interventions | https://openreview.net/forum?id=F5Em8ASCosV | https://openreview.net/forum?id=F5Em8ASCosV | Chandrasekar Subramanian,Balaraman Ravindran | ICLR 2022,Poster | We study a contextual bandit setting where the learning agent has the ability to perform interventions on targeted subsets of the population, apart from possessing qualitative causal side-information. This novel formalism captures intricacies in real-world scenarios such as software product experimentation where targeted experiments can be conducted. However, this fundamentally changes the set of options that the agent has, compared to standard contextual bandit settings, necessitating new techniques. This is also the first work that integrates causal side-information in a contextual bandit setting, where the agent aims to learn a policy that maps contexts to arms (as opposed to just identifying one best arm). We propose a new algorithm, which we show empirically performs better than baselines on experiments that use purely synthetic data and on real world-inspired experiments. We also prove a bound on regret that theoretically guards performance. | https://openreview.net/pdf/b77f8b2b6b86bc81bdb006750510124a95769c01.pdf |
Sound and Complete Neural Network Repair with Minimality and Locality Guarantees | https://openreview.net/forum?id=xS8AMYiEav3 | https://openreview.net/forum?id=xS8AMYiEav3 | Feisi Fu,Wenchao Li | ICLR 2022,Poster | We present a novel methodology for repairing neural networks that use ReLU activation functions. Unlike existing methods that rely on modifying the weights of a neural network which can induce a global change in the function space, our approach applies only a localized change in the function space while still guaranteeing the removal of the buggy behavior. By leveraging the piecewise linear nature of ReLU networks, our approach can efficiently construct a patch network tailored to the linear region where the buggy input resides, which when combined with the original network, provably corrects the behavior on the buggy input. Our method is both sound and complete -- the repaired network is guaranteed to fix the buggy input, and a patch is guaranteed to be found for any buggy input. Moreover, our approach preserves the continuous piecewise linear nature of ReLU networks, automatically generalizes the repair to all the points including other undetected buggy inputs inside the repair region, is minimal in terms of changes in the function space, and guarantees that outputs on inputs away from the repair region are unaltered. On several benchmarks, we show that our approach significantly outperforms existing methods in terms of locality and limiting negative side effects. | https://openreview.net/pdf/33795da3be90ef32dd95bcf374b18d0f4f957766.pdf |
Blaschke Product Neural Networks (BPNN): A Physics-Infused Neural Network for Phase Retrieval of Meromorphic Functions | https://openreview.net/forum?id=JJxiD-kg-oK | https://openreview.net/forum?id=JJxiD-kg-oK | Juncheng Dong,Simiao Ren,Yang Deng,Omar Khatib,Jordan Malof,Mohammadreza Soltani,Willie Padilla,Vahid Tarokh | ICLR 2022,Poster | Numerous physical systems are described by ordinary or partial differential equations whose solutions are given by holomorphic or meromorphic functions in the complex domain. In many cases, only the magnitude of these functions are observed on various points on the purely imaginary $j\omega$-axis since coherent measurement of their phases is often expensive. However, it is desirable to retrieve the lost phases from the magnitudes when possible. To this end, we propose a physics-infused deep neural network based on the Blaschke products for phase retrieval. Inspired by the Helson and Sarason Theorem, we recover coefficients of a rational function of Blaschke products using a Blaschke Product Neural Network (BPNN), based upon the magnitude observations as input. The resulting rational function is then used for phase retrieval. We compare the BPNN to conventional deep neural networks (NNs) on several phase retrieval problems, comprising both synthetic and contemporary real-world problems (e.g., metamaterials for which data collection requires substantial expertise and is time consuming). On each phase retrieval problem, we compare against a population of conventional NNs of varying size and hyperparameter settings. Even without any hyper-parameter search, we find that BPNNs consistently outperform the population of optimized NNs in scarce data scenarios, and do so despite being much smaller models. The results can in turn be applied to calculate the refractive index of metamaterials, which is an important problem in emerging areas of material science. | https://openreview.net/pdf/76b216b1b8df2b17e46022888e66cde431a1fb26.pdf |
Automated Self-Supervised Learning for Graphs | https://openreview.net/forum?id=rFbR4Fv-D6- | https://openreview.net/forum?id=rFbR4Fv-D6- | Wei Jin,Xiaorui Liu,Xiangyu Zhao,Yao Ma,Neil Shah,Jiliang Tang | ICLR 2022,Poster | Graph self-supervised learning has gained increasing attention due to its capacity to learn expressive node representations. Many pretext tasks, or loss functions, have been designed from distinct perspectives. However, we observe that different pretext tasks affect downstream tasks differently across datasets, which suggests that searching pretext tasks is crucial for graph self-supervised learning. Different from existing works focusing on designing single pretext tasks, this work aims to investigate how to automatically leverage multiple pretext tasks effectively. Nevertheless, evaluating representations derived from multiple pretext tasks without direct access to ground truth labels makes this problem challenging. To address this obstacle, we make use of a key principle of many real-world graphs, i.e., homophily, or the principle that ``like attracts like,'' as the guidance to effectively search various self-supervised pretext tasks. We provide theoretical understanding and empirical evidence to justify the flexibility of homophily in this search task. Then we propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks. By evaluating the framework on 7 real-world datasets, our experimental results show that AutoSSL can significantly boost the performance on downstream tasks including node clustering and node classification compared with training under individual tasks. | https://openreview.net/pdf/f5713df2ae7c42f22843d36f1aae8a36c6010b6d.pdf |
Creating Training Sets via Weak Indirect Supervision | https://openreview.net/forum?id=m8uJvVgwRci | https://openreview.net/forum?id=m8uJvVgwRci | Jieyu Zhang,Bohan Wang,Xiangchen Song,Yujing Wang,Yaming Yang,Jing Bai,Alexander Ratner | ICLR 2022,Poster | Creating labeled training sets has become one of the major roadblocks in machine learning. To address this, recent Weak Supervision (WS) frameworks synthesize training labels from multiple potentially noisy supervision sources. However, existing frameworks are restricted to supervision sources that share the same output space as the target task. To extend the scope of usable sources, we formulate Weak Indirect Supervision (WIS), a new research problem for automatically synthesizing training labels based on indirect supervision sources that have different output label spaces. To overcome the challenge of mismatched output spaces, we develop a probabilistic modeling approach, PLRM, which uses user-provided label relations to model and leverage indirect supervision sources. Moreover, we provide a theoretically-principled test of the distinguishability of PLRM for unseen labels, along with a generalization bound. On both image and text classification tasks as well as an industrial advertising application, we demonstrate the advantages of PLRM by outperforming baselines by a margin of 2%-9%. | https://openreview.net/pdf/46411f07ed6843d6bacf30073524429547e8d250.pdf |
Do Not Escape From the Manifold: Discovering the Local Coordinates on the Latent Space of GANs | https://openreview.net/forum?id=aTzMi4yV_RO | https://openreview.net/forum?id=aTzMi4yV_RO | Jaewoong Choi,Junho Lee,Changyeon Yoon,Jung Ho Park,Geonho Hwang,Myungjoo Kang | ICLR 2022,Poster | The discovery of the disentanglement properties of the latent space in GANs motivated a lot of research to find the semantically meaningful directions on it. In this paper, we suggest that the disentanglement property is closely related to the geometry of the latent space. In this regard, we propose an unsupervised method for finding the semantic-factorizing directions on the intermediate latent space of GANs based on the local geometry. Intuitively, our proposed method, called $\textit{Local Basis}$, finds the principal variation of the latent space in the neighborhood of the base latent variable. Experimental results show that the local principal variation corresponds to the semantic factorization and traversing along it provides strong robustness to image traversal. Moreover, we suggest an explanation for the limited success in finding the global traversal directions in the latent space, especially $\mathcal{W}$-space of StyleGAN2. We show that $\mathcal{W}$-space is warped globally by comparing the local geometry, discovered from Local Basis, through the metric on Grassmannian Manifold. The global warpage implies that the latent space is not well-aligned globally and therefore the global traversal directions are bound to show limited success on it. | https://openreview.net/pdf/35267777d7fe835879ddec7c83aecd9d170d070d.pdf |
GradSign: Model Performance Inference with Theoretical Insights | https://openreview.net/forum?id=HObMhrCeAAF | https://openreview.net/forum?id=HObMhrCeAAF | Zhihao Zhang,Zhihao Jia | ICLR 2022,Poster | A key challenge in neural architecture search (NAS) is quickly inferring the predictive performance of a broad spectrum of networks to discover statistically accurate and computationally efficient ones. We refer to this task as model performance inference (MPI). The current practice for efficient MPI is gradient-based methods that leverage the gradients of a network at initialization to infer its performance. However, existing gradient-based methods rely only on heuristic metrics and lack the necessary theoretical foundations to consolidate their designs. We propose GradSign, an accurate, simple, and flexible metric for model performance inference with theoretical insights. The key idea behind GradSign is a quantity Ψ to analyze the sample-wise optimization landscape of different networks. Theoretically, we show that Ψ is an upper bound for both the training and true population losses of a neural network under reasonable assumptions. However, it is computationally prohibitive to directly calculate Ψ for modern neural networks. To address this challenge, we design GradSign, an accurate and simple approximation of Ψ using the gradients of a network evaluated at a random initialization state. Evaluation on seven NAS benchmarks across three training datasets shows that GradSign generalizes well to real-world networks and consistently outperforms state-of-the-art gradient-based methods for MPI evaluated by Spearman’s ρ and Kendall’s Tau. Additionally, we integrate GradSign into four existing NAS algorithms and show that the GradSign-assisted NAS algorithms outperform their vanilla counterparts by improving the accuracies of best-discovered networks by up to 0.3%, 1.1%, and 1.0% on three real-world tasks. Code is available at https://github.com/JackFram/GradSign | https://openreview.net/pdf/f7fb958fd531503a7fafea7cd2c68e5507b12e75.pdf |
You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks | https://openreview.net/forum?id=hpBTIv2uy_E | https://openreview.net/forum?id=hpBTIv2uy_E | Eli Chien,Chao Pan,Jianhao Peng,Olgica Milenkovic | ICLR 2022,Poster | Hypergraphs are used to model higher-order interactions amongst agents and there exist many practically relevant instances of hypergraph datasets. To enable the efficient processing of hypergraph data, several hypergraph neural network platforms have been proposed for learning hypergraph properties and structure, with a special focus on node classification tasks. However, almost all existing methods use heuristic propagation rules and offer suboptimal performance on benchmarking datasets. We propose AllSet, a new hypergraph neural network paradigm that represents a highly general framework for (hyper)graph neural networks and for the first time implements hypergraph neural network layers as compositions of two multiset functions that can be efficiently learned for each task and each dataset. The proposed AllSet framework also for the first time integrates Deep Sets and Set Transformers with hypergraph neural networks for the purpose of learning multiset functions and therefore allows for significant modeling flexibility and high expressive power. To evaluate the performance of AllSet, we conduct the most extensive experiments to date involving ten known benchmarking datasets and three newly curated datasets that represent significant challenges for hypergraph node classification. The results demonstrate that our method has the unique ability to either match or outperform all other hypergraph neural networks across the tested datasets: As an example, the performance improvements over existing methods and a new method based on heterogeneous graph neural networks are close to $4\%$ on the Yelp and Zoo datasets, and $3\%$ on the Walmart dataset. | https://openreview.net/pdf/bb6cb5f99fd0b8d93c19d31dbda4a1b5eeda57a3.pdf |
Synchromesh: Reliable Code Generation from Pre-trained Language Models | https://openreview.net/forum?id=KmtVD97J43e | https://openreview.net/forum?id=KmtVD97J43e | Gabriel Poesia,Alex Polozov,Vu Le,Ashish Tiwari,Gustavo Soares,Christopher Meek,Sumit Gulwani | ICLR 2022,Poster | Large pre-trained language models have been used to generate code, providing a flexible interface for synthesizing programs from natural language specifications. However, they often violate syntactic and semantic rules of their output language, limiting their practical usability. In this paper, we propose Synchromesh: a framework for substantially improving the reliability of pre-trained models for code generation. Synchromesh comprises two components. First, it retrieves few-shot examples from a training bank using Target Similarity Tuning (TST), a novel method for semantic example selection. TST learns to recognize utterances that describe similar target programs despite differences in surface natural language features. Then, Synchromesh feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding (CSD): a general framework for constraining the output to a set of valid programs in the target language. CSD leverages constraints on partial outputs to sample complete correct programs, and needs neither re-training nor fine-tuning of the language model. We evaluate our methods by synthesizing code from natural language descriptions using GPT-3 and Codex in three real-world languages: SQL queries, Vega-Lite visualizations and SMCalFlow programs. These domains showcase rich constraints that CSD is able to enforce, including syntax, scoping and typing rules. Across all languages, we observe complementary gains from CSD and TST in prediction accuracy and in effectively preventing parsing, type and run-time errors. | https://openreview.net/pdf/6ff098333e70afec46ebe0a90baf01256cacc43c.pdf |
Learning curves for continual learning in neural networks: Self-knowledge transfer and forgetting | https://openreview.net/forum?id=tFgdrQbbaa | https://openreview.net/forum?id=tFgdrQbbaa | Ryo Karakida,Shotaro Akaho | ICLR 2022,Poster | Sequential training from task to task is becoming one of the major objects of study in deep learning applications such as continual learning and transfer learning. Nevertheless, it remains unclear under what conditions the trained model's performance improves or deteriorates. To deepen our understanding of sequential training, this study provides a theoretical analysis of generalization performance in a solvable case of continual learning. We consider neural networks in the neural tangent kernel (NTK) regime that continually learn target functions from task to task, and investigate the generalization by using an established statistical mechanical analysis of kernel ridge-less regression. We first show characteristic transitions from positive to negative transfer. More similar targets above a specific critical value can achieve positive knowledge transfer for the subsequent task while catastrophic forgetting occurs even with very similar targets. Next, we investigate a variant of continual learning which supposes the same target function in multiple tasks. Even for the same target, the trained model shows some transfer and forgetting depending on the sample size of each task. We can guarantee that the generalization error monotonically decreases from task to task for equal sample sizes while unbalanced sample sizes deteriorate the generalization. We refer to this improvement and deterioration as self-knowledge transfer and forgetting, respectively, and empirically confirm them in realistic training of deep neural networks as well. | https://openreview.net/pdf/496372cd3a258b357f5e999076aa2f2a991ca5c8.pdf |
Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning | https://openreview.net/forum?id=xLfAgCroImw | https://openreview.net/forum?id=xLfAgCroImw | Yatao Bian,Yu Rong,Tingyang Xu,Jiaxiang Wu,Andreas Krause,Junzhou Huang | ICLR 2022,Poster | Valuation problems, such as feature interpretation, data valuation and model valuation for ensembles, become increasingly more important in many machine learning applications. Such problems are commonly solved by well-known game-theoretic criteria, such as Shapley value or Banzhaf value. In this work, we present a novel energy-based treatment for cooperative games, with a theoretical justification by the maximum entropy framework. Surprisingly, by conducting variational inference of the energy-based model, we recover various game-theoretic valuation criteria through conducting one-step fixed point iteration for maximizing the mean-field ELBO objective. This observation also verifies the rationality of existing criteria, as they are all attempting to decouple the correlations among the players through the mean-field approach. By running fixed point iteration for multiple steps, we achieve a trajectory of the valuations, among which we define the valuation with the best conceivable decoupling error as the Variational Index. We prove that under uniform initializations, these variational valuations all satisfy a set of game-theoretic axioms. We experimentally demonstrate that the proposed Variational Index enjoys lower decoupling error and better valuation performance on certain synthetic and real-world valuation problems. | https://openreview.net/pdf/52a282d00203dfc39e85be560be1315b7d3ea1e3.pdf |
Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage | https://openreview.net/forum?id=tyrJsbKAe6 | https://openreview.net/forum?id=tyrJsbKAe6 | Masatoshi Uehara,Wen Sun | ICLR 2022,Poster | We study model-based offline Reinforcement Learning with general function approximation without a full coverage assumption on the offline data distribution. We present an algorithm named Constrained Pessimistic Policy Optimization (CPPO) which leverages a general function class and uses a constraint over the models to encode pessimism. Under the assumption that the ground truth model belongs to our function class (i.e., realizability in the function class), CPPO has a PAC guarantee with offline data only providing partial coverage, i.e., it can learn a policy that competes against any policy covered by the offline data. We then demonstrate that this algorithmic framework can be applied to many specialized Markov Decision Processes where the additional structural assumptions can further refine the concept of partial coverage. Two notable examples are: (1) low-rank MDP with representation learning where the partial coverage condition is defined using a relative condition number measured by the unknown ground truth feature representation; (2) factored MDP where the partial coverage condition is defined using density-ratio based concentrability coefficients associated with individual factors. | https://openreview.net/pdf/d15bd4853bbe7787fe5fd4a9ffa5c164ac6e2d90.pdf |
Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods | https://openreview.net/forum?id=1ugNpm7W6E | https://openreview.net/forum?id=1ugNpm7W6E | Wenqing Zheng,Edward W Huang,Nikhil Rao,Sumeet Katariya,Zhangyang Wang,Karthik Subbian | ICLR 2022,Poster | Graph Neural Networks (GNNs) have achieved state-of-the-art performance in node classification, regression, and recommendation tasks. GNNs work well when rich and high-quality connections are available. However, their effectiveness is often jeopardized in many real-world graphs in which node degrees have power-law distributions. The extreme case of this situation, where a node may have no neighbors, is called Strict Cold Start (SCS). SCS forces the prediction to rely completely on the node's own features. We propose Cold Brew, a teacher-student distillation approach to address the SCS and noisy-neighbor challenges for GNNs. We also introduce feature contribution ratio (FCR), a metric to quantify the behavior of inductive GNNs to solve SCS. We experimentally show that FCR disentangles the contributions of different graph data components and helps select the best architecture for SCS generalization. We further demonstrate the superior performance of Cold Brew on several public benchmark and proprietary e-commerce datasets, where many nodes have either very few or noisy connections. Our source code is available at https://github.com/amazon-research/gnn-tail-generalization. | https://openreview.net/pdf/889df57fc4b767c52628e65826f4c6260f1947f6.pdf |
NASI: Label- and Data-agnostic Neural Architecture Search at Initialization | https://openreview.net/forum?id=v-v1cpNNK_v | https://openreview.net/forum?id=v-v1cpNNK_v | Yao Shu,Shaofeng Cai,Zhongxiang Dai,Beng Chin Ooi,Bryan Kian Hsiang Low | ICLR 2022,Poster | Recent years have witnessed a surging interest in Neural Architecture Search (NAS). Various algorithms have been proposed to improve the search efficiency and effectiveness of NAS, i.e., to reduce the search cost and improve the generalization performance of the selected architectures, respectively. However, the search efficiency of these algorithms is severely limited by the need for model training during the search process. To overcome this limitation, we propose a novel NAS algorithm called NAS at Initialization (NASI) that exploits the capability of a Neural Tangent Kernel in being able to characterize the performance of candidate architectures at initialization, hence allowing model training to be completely avoided to boost the search efficiency. Besides the improved search efficiency, NASI also achieves competitive search effectiveness on various datasets like CIFAR-10/100 and ImageNet. Further, NASI is shown to be label- and data-agnostic under mild conditions, which guarantees the transferability of architectures selected by our NASI over different datasets. | https://openreview.net/pdf/403d8d24b5d2e39b6c55d43bda2f3c36566a45ea.pdf |
How to Train Your MAML to Excel in Few-Shot Classification | https://openreview.net/forum?id=49h_IkpJtaE | https://openreview.net/forum?id=49h_IkpJtaE | Han-Jia Ye,Wei-Lun Chao | ICLR 2022,Poster | Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms nowadays. Nevertheless, its performance on few-shot classification is far behind many recent algorithms dedicated to the problem. In this paper, we point out several key facets of how to train MAML to excel in few-shot classification. First, we find that MAML needs a large number of gradient steps in its inner loop update, which contradicts its common usage in few-shot classification. Second, we find that MAML is sensitive to the class label assignments during meta-testing. Concretely, MAML meta-trains the initialization of an $N$-way classifier. These $N$ ways, during meta-testing, then have "$N!$" different permutations to be paired with a few-shot task of $N$ novel classes. We find that these permutations lead to a huge variance of accuracy, making MAML unstable in few-shot classification. Third, we investigate several approaches to make MAML permutation-invariant, among which meta-training a single vector to initialize all the $N$ weight vectors in the classification head performs the best. On benchmark datasets like MiniImageNet and TieredImageNet, our approach, which we name UNICORN-MAML, performs on a par with or even outperforms many recent few-shot classification algorithms, without sacrificing MAML's simplicity. | https://openreview.net/pdf/959d2177bd1cb26a51379f81a0acdd1335afe8e3.pdf |
Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games | https://openreview.net/forum?id=xy_2w3J3kH | https://openreview.net/forum?id=xy_2w3J3kH | Dingyang Chen,Yile Li,Qi Zhang | ICLR 2022,Poster | Recent success in cooperative multi-agent reinforcement learning (MARL) relies on centralized training and policy sharing. Centralized training eliminates the issue of non-stationarity in MARL yet induces large communication costs, and policy sharing is empirically crucial to efficient learning in certain tasks yet lacks theoretical justification. In this paper, we formally characterize a subclass of cooperative Markov games where agents exhibit a certain form of homogeneity such that policy sharing provably incurs no suboptimality. This enables us to develop the first consensus-based decentralized actor-critic method where the consensus update is applied to both the actors and the critics while ensuring convergence. We also develop practical algorithms based on our decentralized actor-critic method to reduce the communication cost during training, while still yielding policies comparable with centralized training. | https://openreview.net/pdf/53792786d99df961509a8372e2fe61fafba57a92.pdf |
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer | https://openreview.net/forum?id=vh-0sUt8HlG | https://openreview.net/forum?id=vh-0sUt8HlG | Sachin Mehta,Mohammad Rastegari | ICLR 2022,Poster | Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Our source code is open-source and available at: https://github.com/apple/ml-cvnets | https://openreview.net/pdf/bbdc0b90daf74d7fa54066200459a863a1f5c4e0.pdf |
Spatial Graph Attention and Curiosity-driven Policy for Antiviral Drug Discovery | https://openreview.net/forum?id=kavTY__jxp | https://openreview.net/forum?id=kavTY__jxp | Yulun Wu,Nicholas Choma,Andrew Deru Chen,Mikaela Cashman,Erica Teixeira Prates,Veronica G Melesse Vergara,Manesh B Shah,Austin Clyde,Thomas Brettin,Wibe Albert de Jong,Neeraj Kumar,Martha S Head,Rick L. Stevens,Peter Nugent,Daniel A Jacobson,James B Brown | ICLR 2022,Poster | We developed Distilled Graph Attention Policy Network (DGAPN), a reinforcement learning model to generate novel graph-structured chemical representations that optimize user-defined objectives by efficiently navigating a physically constrained domain. The framework is examined on the task of generating molecules that are designed to bind, noncovalently, to functional sites of SARS-CoV-2 proteins. We present a spatial Graph Attention (sGAT) mechanism that leverages self-attention over both node and edge attributes as well as encoding the spatial structure --- this capability is of considerable interest in synthetic biology and drug discovery. An attentional policy network is introduced to learn the decision rules for a dynamic, fragment-based chemical environment, and state-of-the-art policy gradient techniques are employed to train the network with stability. Exploration is driven by the stochasticity of the action space design and the innovation reward bonuses learned and proposed by random network distillation. In experiments, our framework achieved outstanding results compared to state-of-the-art algorithms, while reducing the complexity of paths to chemical synthesis. | https://openreview.net/pdf/b93cd097bbeb37eeacd8b044df00fd6143a092c6.pdf |
Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks | https://openreview.net/forum?id=OnpFa95RVqs | https://openreview.net/forum?id=OnpFa95RVqs | Arber Zela,Julien Niklas Siems,Lucas Zimmer,Jovita Lukasik,Margret Keuper,Frank Hutter | ICLR 2022,Poster | The most significant barrier to the advancement of Neural Architecture Search (NAS) is its demand for large computational resources, which hinders scientifically sound empirical evaluations of NAS methods. Tabular NAS benchmarks have alleviated this problem substantially, making it possible to properly evaluate NAS methods in seconds on commodity machines. However, an unintended consequence of tabular NAS benchmarks has been a focus on extremely small architectural search spaces since their construction relies on exhaustive evaluations of the space. This leads to unrealistic results that do not transfer to larger spaces. To overcome this fundamental limitation, we propose a methodology to create cheap NAS surrogate benchmarks for arbitrary search spaces. We exemplify this approach by creating surrogate NAS benchmarks on the existing tabular NAS-Bench-101 and on two widely used NAS search spaces with up to $10^{21}$ architectures ($10^{13}$ times larger than any previous tabular NAS benchmark). We show that surrogate NAS benchmarks can model the true performance of architectures better than tabular benchmarks (at a small fraction of the cost), that they lead to faithful estimates of how well different NAS methods work on the original non-surrogate benchmark, and that they can generate new scientific insight. We open-source all our code and believe that surrogate NAS benchmarks are an indispensable tool to extend scientifically sound work on NAS to large and exciting search spaces. | https://openreview.net/pdf/db2ebd6947a358b40d1236000da44615a7f04605.pdf |
Certified Robustness for Deep Equilibrium Models via Interval Bound Propagation | https://openreview.net/forum?id=y1PXylgrXZ | https://openreview.net/forum?id=y1PXylgrXZ | Colin Wei,J Zico Kolter | ICLR 2022,Poster | Deep equilibrium layers (DEQs) have demonstrated promising performance and are competitive with standard explicit models on many benchmarks. However, little is known about certifying robustness for these models. Inspired by interval bound propagation (IBP), we propose the IBP-MonDEQ layer, a DEQ layer whose robustness can be verified by computing upper and lower interval bounds on the output. Our key insights are that these interval bounds can be obtained as the fixed-point solution to an IBP-inspired equilibrium equation, and furthermore, that this solution always exists and is unique when the layer obeys a certain parameterization. This fixed point can be interpreted as the result of applying IBP to an infinitely deep, weight-tied neural network, which may be of independent interest, as IBP bounds are typically unstable for deeper networks. Our empirical comparison reveals that models with IBP-MonDEQ layers can achieve comparable $\ell_{\infty}$ certified robustness to similarly-sized fully explicit networks. | https://openreview.net/pdf/56e9b880fdd57082fed8b007163cdddb3d9c50e1.pdf |
Crystal Diffusion Variational Autoencoder for Periodic Material Generation | https://openreview.net/forum?id=03RLpj-tc_ | https://openreview.net/forum?id=03RLpj-tc_ | Tian Xie,Xiang Fu,Octavian-Eugen Ganea,Regina Barzilay,Tommi S. Jaakkola | ICLR 2022,Poster | Generating the periodic structure of stable materials is a long-standing challenge for the material design community. This task is difficult because stable materials only exist in a low-dimensional subspace of all possible periodic arrangements of atoms: 1) the coordinates must lie in the local energy minimum defined by quantum mechanics, and 2) global stability also requires the structure to follow the complex, yet specific bonding preferences between different atom types. Existing methods fail to incorporate these factors and often lack proper invariances. We propose a Crystal Diffusion Variational Autoencoder (CDVAE) that captures the physical inductive bias of material stability. By learning from the data distribution of stable materials, the decoder generates materials in a diffusion process that moves atomic coordinates towards a lower energy state and updates atom types to satisfy bonding preferences between neighbors. Our model also explicitly encodes interactions across periodic boundaries and respects permutation, translation, rotation, and periodic invariances. We significantly outperform past methods in three tasks: 1) reconstructing the input structure, 2) generating valid, diverse, and realistic materials, and 3) generating materials that optimize a specific property. We also provide several standard datasets and evaluation metrics for the broader machine learning community. | https://openreview.net/pdf/95e16c7859a352bb2fbe73d3777141e66abbd9bf.pdf |
Task Affinity with Maximum Bipartite Matching in Few-Shot Learning | https://openreview.net/forum?id=u2GZOiUTbt | https://openreview.net/forum?id=u2GZOiUTbt | Cat Phuoc Le,Juncheng Dong,Mohammadreza Soltani,Vahid Tarokh | ICLR 2022,Poster | We propose an asymmetric affinity score for representing the complexity of utilizing the knowledge of one task for learning another one. Our method is based on the maximum bipartite matching algorithm and utilizes the Fisher Information matrix. We provide theoretical analyses demonstrating that the proposed score is mathematically well-defined, and subsequently use the affinity score to propose a novel algorithm for the few-shot learning problem. In particular, using this score, we find relevant training data labels to the test data and leverage the discovered relevant data for episodically fine-tuning a few-shot model. Results on various few-shot benchmark datasets demonstrate the efficacy of the proposed approach by improving the classification accuracy over the state-of-the-art methods even when using smaller models. | https://openreview.net/pdf/9f6c1ee6225b1fcb5afc57472b4f1fa49e81df37.pdf |
Latent Image Animator: Learning to Animate Images via Latent Space Navigation | https://openreview.net/forum?id=7r6kDq0mK_ | https://openreview.net/forum?id=7r6kDq0mK_ | Yaohui Wang,Di Yang,Francois Bremond,Antitza Dantcheva | ICLR 2022,Poster | Due to the remarkable progress of deep generative models, animating images has become increasingly efficient, whereas associated results have become increasingly realistic. Current animation-approaches commonly exploit structure representation extracted from driving videos. Such structure representation is instrumental in transferring motion from driving videos to still images. However, such approaches fail in case the source image and driving video encompass large appearance variation. Moreover, the extraction of structure information requires additional modules that endow the animation-model with increased complexity. Deviating from such models, we here introduce the Latent Image Animator (LIA), a self-supervised autoencoder that evades need for structure representation. LIA is streamlined to animate images by linear navigation in the latent space. Specifically, motion in generated video is constructed by linear displacement of codes in the latent space. Towards this, we learn a set of orthogonal motion directions simultaneously, and use their linear combination, in order to represent any displacement in the latent space. Extensive quantitative and qualitative analysis suggests that our model systematically and significantly outperforms state-of-art methods on VoxCeleb, Taichi and TED-talk datasets w.r.t. generated quality. | https://openreview.net/pdf/4c9867f27fdc26664f6abfab9127dd3e7da49c11.pdf |
Know Thyself: Transferable Visual Control Policies Through Robot-Awareness | https://openreview.net/forum?id=o0ehFykKVtr | https://openreview.net/forum?id=o0ehFykKVtr | Edward S. Hu,Kun Huang,Oleh Rybkin,Dinesh Jayaraman | ICLR 2022,Poster | Training visual control policies from scratch on a new robot typically requires generating large amounts of robot-specific data. How might we leverage data previously collected on another robot to reduce or even completely remove this need for robot-specific data? We propose a "robot-aware control" paradigm that achieves this by exploiting readily available knowledge about the robot. We then instantiate this in a robot-aware model-based RL policy by training modular dynamics models that couple a transferable, robot-aware world dynamics module with a robot-specific, potentially analytical, robot dynamics module. This also enables us to set up visual planning costs that separately consider the robot agent and the world. Our experiments on tabletop manipulation tasks with simulated and real robots demonstrate that these plug-in improvements dramatically boost the transferability of visual model-based RL policies, even permitting zero-shot transfer of visual manipulation skills onto new robots. Project website: https://www.seas.upenn.edu/~hued/rac | https://openreview.net/pdf/f710bc40ef26e24b70f811b0f03ad548956e27ae.pdf |
Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | https://openreview.net/forum?id=KJggliHbs8 | https://openreview.net/forum?id=KJggliHbs8 | Eli Chien,Wei-Cheng Chang,Cho-Jui Hsieh,Hsiang-Fu Yu,Jiong Zhang,Olgica Milenkovic,Inderjit S Dhillon | ICLR 2022,Poster | Learning on graphs has attracted significant attention in the learning community due to numerous real-world applications. In particular, graph neural networks (GNNs), which take \emph{numerical} node features and graph structure as inputs, have been shown to achieve state-of-the-art performance on various graph-related learning tasks. Recent works exploring the correlation between numerical node features and graph structure via self-supervised learning have paved the way for further performance improvements of GNNs. However, methods used for extracting numerical node features from \emph{raw data} are still \emph{graph-agnostic} within standard GNN pipelines. This practice is sub-optimal as it prevents one from fully utilizing potential correlations between graph topology and node attributes. To mitigate this issue, we propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT). GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information, and scales to large datasets. We also provide a theoretical analysis that justifies the use of XMC over link prediction and motivates integrating XR-Transformers, a powerful method for solving XMC problems, into the GIANT framework. We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets: For example, we improve the accuracy of the top-ranked method GAMLP from $68.25\%$ to $69.67\%$, SGC from $63.29\%$ to $66.10\%$ and MLP from $47.24\%$ to $61.10\%$ on the ogbn-papers100M dataset by leveraging GIANT. | https://openreview.net/pdf/431334fb15f28e23e02e4a1cd1513fef6cacd2b0.pdf |
Spherical Message Passing for 3D Molecular Graphs | https://openreview.net/forum?id=givsRXsOt9r | https://openreview.net/forum?id=givsRXsOt9r | Yi Liu,Limei Wang,Meng Liu,Yuchao Lin,Xuan Zhang,Bora Oztekin,Shuiwang Ji | ICLR 2022,Poster | We consider representation learning of 3D molecular graphs in which each atom is associated with a spatial position in 3D. This is an under-explored area of research, and a principled message passing framework is currently lacking. In this work, we conduct analyses in the spherical coordinate system (SCS) for the complete identification of 3D graph structures. Based on such observations, we propose the spherical message passing (SMP) as a novel and powerful scheme for 3D molecular learning. SMP dramatically reduces training complexity, enabling it to perform efficiently on large-scale molecules. In addition, SMP is capable of distinguishing almost all molecular structures, and the uncovered cases may not exist in practice. Based on meaningful physically-based representations of 3D information, we further propose the SphereNet for 3D molecular learning. Experimental results demonstrate that the use of meaningful 3D information in SphereNet leads to significant performance improvements in prediction tasks. Our results also demonstrate the advantages of SphereNet in terms of capability, efficiency, and scalability. | https://openreview.net/pdf/7601c409e12322f92ca5dd6beeaaab909fdd13b1.pdf |
Fairness Guarantees under Demographic Shift | https://openreview.net/forum?id=wbPObLm6ueA | https://openreview.net/forum?id=wbPObLm6ueA | Stephen Giguere,Blossom Metevier,Bruno Castro da Silva,Yuriy Brun,Philip S. Thomas,Scott Niekum | ICLR 2022,Poster | Recent studies have demonstrated that using machine learning for social applications can lead to injustice in the form of racist, sexist, and otherwise unfair and discriminatory outcomes. To address this challenge, recent machine learning algorithms have been designed to limit the likelihood such unfair behaviors will occur. However, these approaches typically assume the data used for training is representative of what will be encountered once the model is deployed, thus limiting their usefulness. In particular, if certain subgroups of the population become more or less probable after the model is deployed (a phenomenon we call demographic shift), the fairness assurances provided by prior algorithms are often invalid. We consider the impact of demographic shift and present a class of algorithms, called Shifty algorithms, that provide high-confidence behavioral guarantees that hold under demographic shift. Shifty is the first technique of its kind and demonstrates an effective strategy for designing algorithms to overcome the challenges demographic shift poses. We evaluate Shifty-ttest, an implementation of Shifty based on Student’s $t$-test, and, using a real-world data set of university entrance exams and subsequent student success, show that the models output by our algorithm avoid unfair bias under demographic shift, unlike existing methods. Our experiments demonstrate that our algorithm’s high-confidence fairness guarantees are valid in practice and that our algorithm is an effective tool for training models that are fair when demographic shift occurs. | https://openreview.net/pdf/d40552fbf306f7f0a8081e70e59da5be9f462a23.pdf |
Fooling Explanations in Text Classifiers | https://openreview.net/forum?id=j3krplz_4w6 | https://openreview.net/forum?id=j3krplz_4w6 | Adam Ivankay,Ivan Girardi,Chiara Marchiori,Pascal Frossard | ICLR 2022,Poster | State-of-the-art text classification models are becoming increasingly reliant on deep neural networks (DNNs). Due to their black-box nature, faithful and robust explanation methods need to accompany classifiers for deployment in real-life scenarios. However, it has been shown that explanation methods in vision applications are susceptible to local, imperceptible perturbations that can significantly alter the explanations without changing the predicted classes. We show here that the existence of such perturbations extends to text classifiers as well. Specifically, we introduce TextExplanationFooler (TEF), a novel explanation attack algorithm that alters text input samples imperceptibly so that the outcome of widely-used explanation methods changes considerably while leaving classifier predictions unchanged. We evaluate the attribution robustness estimation performance of TEF on five text classification datasets, utilizing three DNN architectures and a transformer architecture for each dataset. By significantly decreasing the correlation between unchanged and perturbed input attributions, we show that all models and explanation methods are susceptible to TEF perturbations. Moreover, we evaluate how the perturbations transfer to other model architectures and attribution methods, finding better than random performance in scenarios where the exact attacked model and explanation method are unknown. Finally, we introduce a semi-universal attack that is able to compute fast, computationally light perturbations with no knowledge of the attacked classifier nor explanation method. Overall, our work shows that explanations in text classifiers are fragile and users need to carefully address their robustness before relying on them in critical applications. | https://openreview.net/pdf/1625b3e98423b8b43cb565c202827783d512083c.pdf |
On the Learning and Learnability of Quasimetrics | https://openreview.net/forum?id=y0VvIg25yk | https://openreview.net/forum?id=y0VvIg25yk | Tongzhou Wang,Phillip Isola | ICLR 2022,Poster | Our world is full of asymmetries. Gravity and wind can make reaching a place easier than coming back. Social artifacts such as genealogy charts and citation graphs are inherently directed. In reinforcement learning and control, optimal goal-reaching strategies are rarely reversible (symmetrical). Distance functions supported on these asymmetrical structures are called quasimetrics. Despite their common appearance, little research has been done on the learning of quasimetrics. Our theoretical analysis reveals that a common class of learning algorithms, including unconstrained multilayer perceptrons (MLPs), provably fails to learn a quasimetric consistent with training data. In contrast, our proposed Poisson Quasimetric Embedding (PQE) is the first quasimetric learning formulation that both is learnable with gradient-based optimization and enjoys strong performance guarantees. Experiments on random graphs, social graphs, and offline Q-learning demonstrate its effectiveness over many common baselines. | https://openreview.net/pdf/e5214f2935d36f9a385665491f63d55204633f1a.pdf |
Learning Prototype-oriented Set Representations for Meta-Learning | https://openreview.net/forum?id=WH6u2SvlLp4 | https://openreview.net/forum?id=WH6u2SvlLp4 | Dan dan Guo,Long Tian,Minghe Zhang,Mingyuan Zhou,Hongyuan Zha | ICLR 2022,Poster | Learning from set-structured data is a fundamental problem that has recently attracted increasing attention, where a series of summary networks are introduced to deal with the set input. In fact, many meta-learning problems can be treated as set-input tasks. Most existing summary networks aim to design different architectures for the input set in order to enforce permutation invariance. However, scant attention has been paid to the common cases where different sets in a meta distribution are closely related and share certain statistical properties. Viewing each set as a distribution over a set of global prototypes, this paper provides a novel prototype-oriented optimal transport (POT) framework to improve existing summary networks. To learn the distribution over the global prototypes, we minimize its regularized optimal transport distance to the set empirical distribution over data points, providing a natural unsupervised way to improve the summary network. Since our plug-and-play framework can be applied to many meta learning problems, we further instantiate it to the cases of few-shot classification and implicit meta generative modeling. Extensive experiments demonstrate that our framework significantly improves the existing summary networks on learning more powerful summary statistics from sets and can be successfully integrated into metric-based few-shot classification and generative modeling applications, providing a promising tool for addressing set-input and meta-learning problems. | https://openreview.net/pdf/68183379fa9bdd81c24c017f6e6ab5b9720d1c15.pdf |
Embedded-model flows: Combining the inductive biases of model-free deep learning and explicit probabilistic modeling | https://openreview.net/forum?id=9pEJSVfDbba | https://openreview.net/forum?id=9pEJSVfDbba | Gianluigi Silvestri,Emily Fertig,Dave Moore,Luca Ambrogioni | ICLR 2022,Poster | Normalizing flows have shown great success as general-purpose density estimators. However, many real world applications require the use of domain-specific knowledge, which normalizing flows cannot readily incorporate. We propose embedded-model flows (EMF), which alternate general-purpose transformations with structured layers that embed domain-specific inductive biases. These layers are automatically constructed by converting user-specified differentiable probabilistic models into equivalent bijective transformations. We also introduce gated structured layers, which allow bypassing the parts of the models that fail to capture the statistics of the data. We demonstrate that EMFs can be used to induce desirable properties such as multimodality and continuity. Furthermore, we show that EMFs enable a high performance form of variational inference where the structure of the prior model is embedded in the variational architecture. In our experiments, we show that this approach outperforms a large number of alternative methods in common structured inference problems. | https://openreview.net/pdf/ea5ccb4619c2399f3a652126f47196d475ee7c9c.pdf |
A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning | https://openreview.net/forum?id=YRq0ZUnzKoZ | https://openreview.net/forum?id=YRq0ZUnzKoZ | Jiaxian Guo,Mingming Gong,Dacheng Tao | ICLR 2022,Poster | The generalization of model-based reinforcement learning (MBRL) methods to environments with unseen transition dynamics is an important yet challenging problem. Existing methods try to extract environment-specified information $Z$ from past transition segments to make the dynamics prediction model generalizable to different dynamics. However, because environments are not labelled, the extracted information inevitably contains redundant information unrelated to the dynamics in transition segments and thus fails to maintain a crucial property of $Z$: $Z$ should be similar in the same environment and dissimilar in different ones. As a result, the learned dynamics prediction function will deviate from the true one, which undermines the generalization ability. To tackle this problem, we introduce an interventional prediction module to estimate the probability of two estimated $\hat{z}_i, \hat{z}_j$ belonging to the same environment. Furthermore, by utilizing the $Z$'s invariance within a single environment, a relational head is proposed to enforce the similarity between $\hat{Z}$ from the same environment. As a result, the redundant information will be reduced in $\hat{Z}$. We empirically show that $\hat{Z}$ estimated by our method enjoy less redundant information than previous methods, and such $\hat{Z}$ can significantly reduce dynamics prediction errors and improve the performance of model-based RL methods on zero-shot new environments with unseen dynamics. The codes of this method are available at \url{https://github.com/CR-Gjx/RIA}. | https://openreview.net/pdf/a6c6a600f9e89fe92c0e2d8df1d09d0a78dd39ad.pdf |
Critical Points in Quantum Generative Models | https://openreview.net/forum?id=2f1z55GVQN | https://openreview.net/forum?id=2f1z55GVQN | Eric Ricardo Anschuetz | ICLR 2022,Poster | One of the most important properties of neural networks is the clustering of local minima of the loss function near the global minimum, enabling efficient training. Though generative models implemented on quantum computers are known to be more expressive than their traditional counterparts, it has empirically been observed that these models experience a transition in the quality of their local minima. Namely, below some critical number of parameters, all local minima are far from the global minimum in function value; above this critical parameter count, all local minima are good approximators of the global minimum. Furthermore, for a certain class of quantum generative models, this transition has empirically been observed to occur at parameter counts exponentially large in the problem size, meaning practical training of these models is out of reach. Here, we give the first proof of this transition in trainability, specializing to this latter class of quantum generative model. We use techniques inspired by those used to study the loss landscapes of classical neural networks. We also verify that our analytic results hold experimentally even at modest model sizes. | https://openreview.net/pdf/425fdfa2b1be0636b4b3ab1636a8eaac0ea179ea.pdf |
VOS: Learning What You Don't Know by Virtual Outlier Synthesis | https://openreview.net/forum?id=TW7d65uYu5M | https://openreview.net/forum?id=TW7d65uYu5M | Xuefeng Du,Zhaoning Wang,Mu Cai,Yixuan Li | ICLR 2022,Poster | Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Previous approaches rely on real outlier datasets for model regularization, which can be costly and sometimes infeasible to obtain in practice. In this paper, we present VOS, a novel framework for OOD detection by adaptively synthesizing virtual outliers that can meaningfully regularize the model's decision boundary during training. Specifically, VOS samples virtual outliers from the low-likelihood region of the class-conditional distribution estimated in the feature space. Alongside, we introduce a novel unknown-aware training objective, which contrastively shapes the uncertainty space between the ID data and synthesized outlier data. VOS achieves competitive performance on both object detection and image classification models, reducing the FPR95 by up to 9.36% compared to the previous best method on object detectors. Code is available at https://github.com/deeplearning-wisc/vos. | https://openreview.net/pdf/6faea9b02b55e27b924aea7fe1a92365b3b12a27.pdf |
Trust Region Policy Optimisation in Multi-Agent Reinforcement Learning | https://openreview.net/forum?id=EcGGFkNTxdJ | https://openreview.net/forum?id=EcGGFkNTxdJ | Jakub Grudzien Kuba,Ruiqing Chen,Muning Wen,Ying Wen,Fanglei Sun,Jun Wang,Yaodong Yang | ICLR 2022,Poster | Trust region methods rigorously enabled reinforcement learning (RL) agents to learn monotonically improving policies, leading to superior performance on a variety of tasks. Unfortunately, when it comes to multi-agent reinforcement learning (MARL), the property of monotonic improvement may not simply apply; this is because agents, even in cooperative games, could have conflicting directions of policy updates. As a result, achieving a guaranteed improvement on the joint policy where each agent acts individually remains an open challenge. In this paper, we extend the theory of trust region learning to MARL. Central to our findings are the multi-agent advantage decomposition lemma and the sequential policy update scheme. Based on these, we develop Heterogeneous-Agent Trust Region Policy Optimisation (HATRPO) and Heterogeneous-Agent Proximal Policy Optimisation (HAPPO) algorithms. Unlike many existing MARL algorithms, HATRPO/HAPPO do not need agents to share parameters, nor do they need any restrictive assumptions on decomposability of the joint value function. Most importantly, we justify in theory the monotonic improvement property of HATRPO/HAPPO. We evaluate the proposed methods on a series of Multi-Agent MuJoCo and StarCraftII tasks. Results show that HATRPO and HAPPO significantly outperform strong baselines such as IPPO, MAPPO and MADDPG on all tested tasks, thereby establishing a new state of the art. | https://openreview.net/pdf/3909fb38c37d6d25dca74d884b891baf99754ff3.pdf |
Unsupervised Disentanglement with Tensor Product Representations on the Torus | https://openreview.net/forum?id=neqU3HWDgE | https://openreview.net/forum?id=neqU3HWDgE | Michael Rotman,Amit Dekel,Shir Gur,Yaron Oz,Lior Wolf | ICLR 2022,Poster | The current methods for learning representations with auto-encoders almost exclusively employ vectors as the latent representations. In this work, we propose to employ a tensor product structure for this purpose. This way, the obtained representations are naturally disentangled. In contrast to the conventional variational methods, which are targeted toward normally distributed features, the latent space in our representation is distributed uniformly over a set of unit circles. We argue that the torus structure of the latent space captures the generative factors effectively. We employ recent tools for measuring unsupervised disentanglement, and in an extensive set of experiments demonstrate the advantage of our method in terms of disentanglement, completeness, and informativeness. The code for our proposed method is available at https://github.com/rotmanmi/Unsupervised-Disentanglement-Torus. | https://openreview.net/pdf/c3d91cb3b118da4fa03135f118cc0fdc619e5210.pdf |
Anomaly Detection for Tabular Data with Internal Contrastive Learning | https://openreview.net/forum?id=_hszZbt46bT | https://openreview.net/forum?id=_hszZbt46bT | Tom Shenkar,Lior Wolf | ICLR 2022,Poster | We consider the task of finding out-of-class samples in tabular data, where little can be assumed on the structure of the data. In order to capture the structure of the samples of the single training class, we learn mappings that maximize the mutual information between each sample and the part that is masked out. The mappings are learned by employing a contrastive loss, which considers only one sample at a time. Once learned, we can score a test sample by measuring whether the learned mappings lead to a small contrastive loss using the masked parts of this sample. Our experiments show that our method leads by a sizable accuracy gap in comparison to the literature and that the same default set of hyperparameters provides state-of-the-art results across benchmarks. | https://openreview.net/pdf/067e5071eee6bb0a62c48953456a5c12e7469b55.pdf |
LIGS: Learnable Intrinsic-Reward Generation Selection for Multi-Agent Learning | https://openreview.net/forum?id=CpTuR2ECuW | https://openreview.net/forum?id=CpTuR2ECuW | David Henry Mguni,Taher Jafferjee,Jianhong Wang,Nicolas Perez-Nieves,Oliver Slumbers,Feifei Tong,Yang Li,Jiangcheng Zhu,Yaodong Yang,Jun Wang | ICLR 2022,Poster | Efficient exploration is important for reinforcement learners (RL) to achieve high rewards. In multi-agent systems, coordinated exploration and behaviour is critical for agents to jointly achieve optimal outcomes. In this paper, we introduce a new general framework for improving coordination and performance of multi-agent reinforcement learners (MARL). Our framework, named Learnable Intrinsic-Reward Generation Selection algorithm (LIGS) introduces an adaptive learner, Generator that observes the agents and learns to construct intrinsic rewards online that coordinate the agents’ joint exploration and joint behaviour. Using a novel combination of reinforcement learning (RL) and switching controls, LIGS determines the best states to learn to add intrinsic rewards which leads to a highly efficient learning process. LIGS can subdivide complex tasks making them easier to solve and enables systems of RL agents to quickly solve environments with sparse rewards. LIGS can seamlessly adopt existing multi-agent RL algorithms and our theory shows that it ensures convergence to joint policies that deliver higher system performance. We demonstrate the superior performance of the LIGS framework in challenging tasks in Foraging and StarCraft II and show LIGS is capable of tackling tasks previously unsolvable by MARL methods. | https://openreview.net/pdf/e236eaa5e5c72be5b36faf18dc589c2d09c9f470.pdf |
Bayesian Modeling and Uncertainty Quantification for Learning to Optimize: What, Why, and How | https://openreview.net/forum?id=EVVadRFRgL7 | https://openreview.net/forum?id=EVVadRFRgL7 | Yuning You,Yue Cao,Tianlong Chen,Zhangyang Wang,Yang Shen | ICLR 2022,Poster | Optimizing an objective function with uncertainty awareness is well-known to improve the accuracy and confidence of optimization solutions. Meanwhile, another relevant but very different question remains yet open: how to model and quantify the uncertainty of an optimization algorithm (a.k.a., optimizer) itself? To close such a gap, the prerequisite is to consider the optimizers as sampled from a distribution, rather than a few prefabricated and fixed update rules. We first take the novel angle to consider the algorithmic space of optimizers, and provide definitions for the optimizer prior and likelihood, that intrinsically determine the posterior and therefore uncertainty. We then leverage the recent advance of learning to optimize (L2O) for the space parameterization, with the end-to-end training pipeline built via variational inference, referred to as uncertainty-aware L2O (UA-L2O). Our study represents the first effort to recognize and quantify the uncertainty of the optimization algorithm. The extensive numerical results show that, UA-L2O achieves superior uncertainty calibration with accurate confidence estimation and tight confidence intervals, suggesting the improved posterior estimation thanks to considering optimizer uncertainty. Intriguingly, UA-L2O even improves optimization performances for two out of three test functions, the loss function in data privacy attack, and four of five cases of the energy function in protein docking. Our codes are released at https://github.com/Shen-Lab/Bayesian-L2O. | https://openreview.net/pdf/19e1837e276ae2dc367c9117f1039ab41b0e7bc4.pdf |