Datasets:

title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Domino: Discovering Systematic Errors with Cross-Modal Embeddings | https://openreview.net/forum?id=FPCMqjI0jXN | https://openreview.net/forum?id=FPCMqjI0jXN | Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,Jean-Benoit Delbrouck,Christopher Lee-Messer,Jared Dunnmon,James Zou,Christopher Re | ICLR 2022,Oral | Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. In order to address this issue, recent studies have proposed automated slice discovery methods (SDMs), which leverage learned model representations to mine input data for slices on which a model performs poorly. To be useful to a practitioner, these methods must identify slices that are both underperforming and coherent (i.e. united by a human-understandable concept). However, no quantitative evaluation framework currently exists for rigorously assessing SDMs with respect to these criteria. Additionally, prior qualitative evaluations have shown that SDMs often identify slices that are incoherent. In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data). Then, motivated by the recent development of powerful cross-modal representation learning approaches, we present Domino, an SDM that leverages cross-modal embeddings and a novel error-aware mixture model to discover and describe coherent slices. We find that Domino accurately identifies 36% of the 1,235 slices in our framework -- a 12 percentage point improvement over prior methods. Further, Domino is the first SDM that can provide natural language descriptions of identified slices, correctly generating the exact name of the slice in 35% of settings. | https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf |
Natural Language Descriptions of Deep Visual Features | https://openreview.net/forum?id=NudBMY-tzDr | https://openreview.net/forum?id=NudBMY-tzDr | Evan Hernandez,Sarah Schwettmann,David Bau,Teona Bagashvili,Antonio Torralba,Jacob Andreas | ICLR 2022,Oral | Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual information-guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. These descriptions obtain high agreement with human-generated feature descriptions across a diverse set of model architectures and tasks, and can aid in understanding and controlling learned models. We highlight three applications of natural language neuron descriptions. First, we use MILAN for analysis, characterizing the distribution and importance of neurons selective for attribute, category, and relational information in vision models. Second, we use MILAN for auditing, surfacing neurons sensitive to human faces in datasets designed to obscure them. Finally, we use MILAN for editing, improving robustness in an image classifier by deleting neurons sensitive to text features spuriously correlated with class labels. | https://openreview.net/pdf/842234024e58a8d5073a88b3c04282011b8e20a7.pdf |
Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization | https://openreview.net/forum?id=tYRrOdSnVUy | https://openreview.net/forum?id=tYRrOdSnVUy | Lixu Wang,Shichao Xu,Ruiqi Xu,Xiao Wang,Qi Zhu | ICLR 2022,Oral | As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on the aforementioned datasets. | https://openreview.net/pdf/cc0b829e495ebd36c4e0dcce6f5d044ad4dce58d.pdf |
Neural Structured Prediction for Inductive Node Classification | https://openreview.net/forum?id=YWNAX0caEjI | https://openreview.net/forum?id=YWNAX0caEjI | Meng Qu,Huiyu Cai,Jian Tang | ICLR 2022,Oral | This paper studies node classification in the inductive setting, i.e., aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs. This problem has been extensively studied with graph neural networks (GNNs) by learning effective node representations, as well as traditional structured prediction methods for modeling the structured output of node labels, e.g., conditional random fields (CRFs). In this paper, we present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds. SPN defines flexible potential functions of CRFs with GNNs. However, learning such a model is nontrivial as it involves optimizing a maximin game with high-cost inference. Inspired by the underlying connection between joint and marginal distributions defined by Markov networks, we propose to solve an approximate version of the optimization problem as a proxy, which yields a near-optimal solution, making learning more efficient. Extensive experiments on two settings show that our approach outperforms many competitive baselines. | https://openreview.net/pdf/df1b628202430dff01a7eeed5b5e5a2e703d1bad.pdf |
A New Perspective on "How Graph Neural Networks Go Beyond Weisfeiler-Lehman?" | https://openreview.net/forum?id=uxgg9o7bI_3 | https://openreview.net/forum?id=uxgg9o7bI_3 | Asiri Wijesinghe,Qing Wang | ICLR 2022,Oral | We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, this enables a general solution to inject structural properties of graphs into a message-passing aggregation scheme of GNNs. As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. Then, we theoretically characterize how message-passing GNNs can be designed to be more expressive than the Weisfeiler Lehman test. To elaborate this characterization, we propose a novel neural model, called GraphSNN, and prove that this model is strictly more expressive than the Weisfeiler Lehman test in distinguishing graph structures. We empirically verify the strength of our model on different graph learning tasks. It is shown that our model consistently improves the state-of-the-art methods on the benchmark tasks without sacrificing computational simplicity and efficiency. | https://openreview.net/pdf/376e7da3d7f86a2bd40cd51fadfc278e94372443.pdf |
Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond | https://openreview.net/forum?id=LdlwbBP2mlq | https://openreview.net/forum?id=LdlwbBP2mlq | Chulhee Yun,Shashank Rajput,Suvrit Sra | ICLR 2022,Oral | In distributed learning, local SGD (also known as federated averaging) and its simple baseline minibatch SGD are widely studied optimization methods. Most existing analyses of these methods assume independent and unbiased gradient estimates obtained via with-replacement sampling. In contrast, we study shuffling-based variants: minibatch and local Random Reshuffling, which draw stochastic gradients without replacement and are thus closer to practice. For smooth functions satisfying the Polyak-Łojasiewicz condition, we obtain convergence bounds (in the large epoch regime) which show that these shuffling-based variants converge faster than their with-replacement counterparts. Moreover, we prove matching lower bounds showing that our convergence analysis is tight. Finally, we propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings. | https://openreview.net/pdf/1669f6cc32c853b0d69068b7ed1a230ce3f321d0.pdf |
The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions | https://openreview.net/forum?id=Z7Lk2cQEG8a | https://openreview.net/forum?id=Z7Lk2cQEG8a | Yifei Wang,Jonathan Lacotte,Mert Pilanci | ICLR 2022,Oral | We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints. Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces. Given the set of solutions of our convex optimization program, we show how to construct exactly the entire set of optimal neural networks. We provide a detailed characterization of this optimal set and its invariant transformations. As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem; (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss; (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set; and (iv) we characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys. Overall, we provide a rich framework for studying the landscape of neural network training loss through convexity. | https://openreview.net/pdf/9733b1623c23b45535cc2c126e6fb496e55e8049.pdf |
Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics | https://openreview.net/forum?id=RQLLzMCefQu | https://openreview.net/forum?id=RQLLzMCefQu | Yonathan Efroni,Dipendra Misra,Akshay Krishnamurthy,Alekh Agarwal,John Langford | ICLR 2022,Oral | Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information from raw observations and subsequently plan efficiently. However, such approaches can fail in the presence of temporally correlated noise in the observations, a phenomenon that is common in practice. We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP (EX-BMDP), for rich observation RL. We start by establishing several negative results, by highlighting failure cases of prior representation learning based approaches. Then, we introduce the Predictive Path Elimination (PPE) algorithm, which learns a generalization of inverse dynamics and is provably sample and computationally efficient in EX-BMDPs when the endogenous state dynamics are near deterministic. The sample complexity of PPE depends polynomially on the size of the latent endogenous state space while not directly depending on the size of the observation space, nor the exogenous state space. We provide experiments on challenging exploration problems which show that our approach works empirically. | https://openreview.net/pdf/310151127bcaaee206f6987dfe48a6f9a49ae848.pdf |
Bootstrapped Meta-Learning | https://openreview.net/forum?id=b-ny3x071E5 | https://openreview.net/forum?id=b-ny3x071E5 | Sebastian Flennerhag,Yannick Schroecker,Tom Zahavy,Hado van Hasselt,David Silver,Satinder Singh | ICLR 2022,Oral | Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the metric can be used to control meta-optimisation. Meanwhile, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. We achieve a new state of the art for model-free agents on the Atari ALE benchmark and demonstrate that it yields both performance and efficiency gains in multi-task meta-learning. Finally, we explore how bootstrapping opens up new possibilities and find that it can meta-learn efficient exploration in an epsilon-greedy Q-learning agent - without backpropagating through the update rule. | https://openreview.net/pdf/0eccd48eddcbf9cfc77b50cb0e97fb58937aee70.pdf |
Coordination Among Neural Modules Through a Shared Global Workspace | https://openreview.net/forum?id=XzTtHjgPDsT | https://openreview.net/forum?id=XzTtHjgPDsT | Anirudh Goyal,Aniket Rajiv Didolkar,Alex Lamb,Kartikeya Badola,Nan Rosemary Ke,Nasim Rahaman,Jonathan Binas,Charles Blundell,Michael Curtis Mozer,Yoshua Bengio | ICLR 2022,Oral | Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are modeled via pairwise interactions: Transformers make use of self-attention to incorporate information from other positions and object-centric architectures make use of graph neural networks to model interactions among entities. We consider how to improve on pairwise interactions in terms of global coordination and a coherent, integrated representation that can be used for downstream tasks. In cognitive science, a global workspace architecture has been proposed in which functionally specialized components share information through a common, bandwidth-limited communication channel. We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments. The proposed method includes a shared workspace through which communication among different specialist modules takes place, but due to limits on the communication bandwidth, specialist modules must compete for access. We show that capacity limitations have a rational basis in that (1) they encourage specialization and compositionality and (2) they facilitate the synchronization of otherwise independent specialists. | https://openreview.net/pdf/19aac83e8824498df7b9d1e6952523f7c068218b.pdf |
Data-Efficient Graph Grammar Learning for Molecular Generation | https://openreview.net/forum?id=l4IHywGq6a | https://openreview.net/forum?id=l4IHywGq6a | Minghao Guo,Veronika Thost,Beichen Li,Payel Das,Jie Chen,Wojciech Matusik | ICLR 2022,Oral | The problem of molecular generation has received significant attention recently. Existing methods are typically based on deep neural networks and require training on large datasets with tens of thousands of samples. In practice, however, the size of class-specific chemical datasets is usually limited (e.g., dozens of samples) due to labor-intensive experimentation and data collection. Another major challenge is to generate only physically synthesizable molecules. This is a non-trivial task for neural network-based generative models since the relevant chemical knowledge can only be extracted and generalized from the limited training data. In this work, we propose a data-efficient generative model that can be learned from datasets with orders of magnitude smaller sizes than common benchmarks. At the heart of this method is a learnable graph grammar that generates molecules from a sequence of production rules. Without any human assistance, these production rules are automatically constructed from training data. Furthermore, additional chemical knowledge can be incorporated into the model by further grammar optimization. Our learned graph grammar yields state-of-the-art results on generating high-quality molecules for three monomer datasets that contain only ${\sim}20$ samples each. Our approach also achieves remarkable performance in a challenging polymer generation task with $only$ $117$ training samples and is competitive against existing methods using $81$k data points. | https://openreview.net/pdf/c17b0db09f98b3279ad677650f18acbf907883ce.pdf |
Poisoning and Backdooring Contrastive Learning | https://openreview.net/forum?id=iC4UHbQ01Mp | https://openreview.net/forum?id=iC4UHbQ01Mp | Nicholas Carlini,Andreas Terzis | ICLR 2022,Oral | Multimodal contrastive learning methods like CLIP train on noisy and uncurated training datasets. This is cheaper than labeling datasets manually, and even improves out-of-distribution robustness. We show that this practice makes backdoor and poisoning attacks a significant threat. By poisoning just 0.01% of a dataset (e.g., just 300 images of the 3 million-example Conceptual Captions dataset), we can cause the model to misclassify test images by overlaying a small patch. Targeted poisoning attacks, whereby the model misclassifies a particular test input with an adversarially-desired label, are even easier, requiring control of 0.0001% of the dataset (e.g., just three out of the 3 million images). Our attacks call into question whether training on noisy and uncurated Internet scrapes is desirable. | https://openreview.net/pdf/abd77f0543a72cd26da355efc5680de233f120af.pdf |
Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path | https://openreview.net/forum?id=w1UbdvWH_R3 | https://openreview.net/forum?id=w1UbdvWH_R3 | X.Y. Han,Vardan Papyan,David L. Donoho | ICLR 2022,Oral | The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last-layer features collapse to their class-means, both classifiers and class-means collapse to the same Simplex Equiangular Tight Frame, and classifier behavior collapses to the nearest-class-mean decision rule. Recent works demonstrated that deep nets trained with mean squared error (MSE) loss perform comparably to those trained with CE. As a preliminary, we empirically establish that NC emerges in such MSE-trained deep nets as well through experiments on three canonical networks and five benchmark datasets. We provide, in a Google Colab notebook, PyTorch code for reproducing MSE-NC and CE-NC: https://colab.research.google.com/github/neuralcollapse/neuralcollapse/blob/main/neuralcollapse.ipynb. The analytically-tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss, inspiring us to leverage MSE loss towards the theoretical investigation of NC. We develop three main contributions: (I) We show a new decomposition of the MSE loss into (A) terms directly interpretable through the lens of NC and which assume the last-layer classifier is exactly the least-squares classifier; and (B) a term capturing the deviation from this least-squares classifier. (II) We exhibit experiments on canonical datasets and networks demonstrating that term-(B) is negligible during training. This motivates us to introduce a new theoretical construct: the central path, where the linear classifier stays MSE-optimal for feature activations throughout the dynamics. (III) By studying renormalized gradient flow along the central path, we derive exact dynamics that predict NC. | https://openreview.net/pdf/75799bbe466f7240935655cbfaa930c9628a915e.pdf |
Weighted Training for Cross-Task Learning | https://openreview.net/forum?id=ltM1RMZntpu | https://openreview.net/forum?id=ltM1RMZntpu | Shuxiao Chen,Koby Crammer,Hangfeng He,Dan Roth,Weijie J Su | ICLR 2022,Oral | In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuning, and enjoys non-asymptotic learning-theoretic guarantees. The effectiveness of TAWT is corroborated through extensive experiments with BERT on four sequence tagging tasks in natural language processing (NLP), including part-of-speech (PoS) tagging, chunking, predicate detection, and named entity recognition (NER). As a byproduct, the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning, such as the choice of the source data and the impact of fine-tuning. | https://openreview.net/pdf/579ed2f74ecc130396039eae33e13de66b8de08b.pdf |
iLQR-VAE : control-based learning of input-driven dynamics with applications to neural data | https://openreview.net/forum?id=wRODLDHaAiW | https://openreview.net/forum?id=wRODLDHaAiW | Marine Schimel,Ta-Chu Kao,Kristopher T Jensen,Guillaume Hennequin | ICLR 2022,Oral | Understanding how neural dynamics give rise to behaviour is one of the most fundamental questions in systems neuroscience. To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding. As recordings are typically performed in localized circuits that form only a part of the wider implicated network, it is important to simultaneously learn the local dynamics and infer any unobserved external input that might drive them. Here, we introduce iLQR-VAE, a novel control-based approach to variational inference in nonlinear dynamical systems, capable of learning latent dynamics, initial conditions, and ongoing external inputs. As in recent deep learning approaches, our method is based on an input-driven sequential variational autoencoder (VAE). The main novelty lies in the use of the powerful iterative linear quadratic regulator algorithm (iLQR) in the recognition model. Optimization of the standard evidence lower-bound requires differentiating through iLQR solutions, which is made possible by recent advances in differentiable control. Importantly, having the recognition model be implicitly defined by the generative model greatly reduces the number of free parameters and allows for flexible, high-quality inference. This makes it possible for instance to evaluate the model on a single long trial after training on smaller chunks. We demonstrate the effectiveness of iLQR-VAE on a range of synthetic systems, with autonomous as well as input-driven dynamics. We further apply it to neural and behavioural recordings in non-human primates performing two different reaching tasks, and show that iLQR-VAE yields high-quality kinematic reconstructions from the neural data. | https://openreview.net/pdf/c4b2a10a835b79e5cbaff71f6577c29236e964b5.pdf |
Extending the WILDS Benchmark for Unsupervised Adaptation | https://openreview.net/forum?id=z7p2V6KROOV | https://openreview.net/forum?id=z7p2V6KROOV | Shiori Sagawa,Pang Wei Koh,Tony Lee,Irena Gao,Sang Michael Xie,Kendrick Shen,Ananya Kumar,Weihua Hu,Michihiro Yasunaga,Henrik Marklund,Sara Beery,Etienne David,Ian Stavness,Wei Guo,Jure Leskovec,Kate Saenko,Tatsunori Hashimoto,Sergey Levine,Chelsea Finn,Percy Liang | ICLR 2022,Oral | Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well. However, existing distribution shift benchmarks with unlabeled data do not reflect the breadth of scenarios that arise in real-world applications. In this work, we present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data that would be realistically obtainable in deployment. These datasets span a wide range of applications (from histology to wildlife conservation), tasks (classification, regression, and detection), and modalities (photos, satellite images, microscope slides, text, molecular graphs). The update maintains consistency with the original WILDS benchmark by using identical labeled training, validation, and test sets, as well as identical evaluation metrics. We systematically benchmark state-of-the-art methods that use unlabeled data, including domain-invariant, self-training, and self-supervised methods, and show that their success on WILDS is limited. To facilitate method development, we provide an open-source package that automates data loading and contains the model architectures and methods used in this paper. Code and leaderboards are available at https://wilds.stanford.edu. | https://openreview.net/pdf/16bc69d47c7ff67867bfc50009d6b9fc5043a00f.pdf |
Asymmetry Learning for Counterfactually-invariant Classification in OOD Tasks | https://openreview.net/forum?id=avgclFZ221l | https://openreview.net/forum?id=avgclFZ221l | S Chandra Mouli,Bruno Ribeiro | ICLR 2022,Oral | Generalizing from observed to new related environments (out-of-distribution) is central to the reliability of classifiers. However, most classifiers fail to predict label $Y$ from input $X$ when the change in environment is due to a (stochastic) input transformation $T^\text{te} \circ X'$ not observed in training, as in training we observe $T^\text{tr} \circ X'$, where $X'$ is a hidden variable. This work argues that when the transformations in train $T^\text{tr}$ and test $T^\text{te}$ are (arbitrary) symmetry transformations induced by a collection of known $m$ equivalence relations, the task of finding a robust OOD classifier can be defined as finding the simplest causal model that defines a causal connection between the target labels and the symmetry transformations that are associated with label changes. We then propose a new learning paradigm, asymmetry learning, that identifies which symmetries the classifier must break in order to correctly predict $Y$ in both train and test. Asymmetry learning performs a causal model search that, under certain identifiability conditions, finds classifiers that perform equally well in-distribution and out-of-distribution. Finally, we show how to learn counterfactually-invariant representations with asymmetry learning in two physics tasks. | https://openreview.net/pdf/f15da1dc02ded9aba4a26e8ade750b28429da30f.pdf |
Comparing Distributions by Measuring Differences that Affect Decision Making | https://openreview.net/forum?id=KB5onONJIAU | https://openreview.net/forum?id=KB5onONJIAU | Shengjia Zhao,Abhishek Sinha,Yutong He,Aidan Perreault,Jiaming Song,Stefano Ermon | ICLR 2022,Oral | Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics. We propose a new class of discrepancies based on the optimal loss for a decision task -- two distributions are different if the optimal decision loss is higher on their mixture than on each individual distribution. By suitably choosing the decision task, this generalizes the Jensen-Shannon divergence and the maximum mean discrepancy family. We apply our approach to two-sample tests, and on various benchmarks, we achieve superior test power compared to competing methods. In addition, a modeler can directly specify their preferences when comparing distributions through the decision loss. We apply this property to understanding the effects of climate change on different social and economic activities, evaluating sample quality, and selecting features targeting different decision tasks. | https://openreview.net/pdf/e99719a7a6796b569cc6afdf6f42024d0df2fbea.pdf |
MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling | https://openreview.net/forum?id=UseMOjWENv | https://openreview.net/forum?id=UseMOjWENv | Yusong Wu,Ethan Manilow,Yi Deng,Rigel Swavely,Kyle Kastner,Tim Cooijmans,Aaron Courville,Cheng-Zhi Anna Huang,Jesse Engel | ICLR 2022,Oral | Musical expression requires control of both which notes are played and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a 3-level hierarchy (notes, performance, synthesis) that affords individuals the option to intervene at each level, or utilize trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes for a note sequence, independently manipulate the attributes of a given performance, and as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy, with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools to empower individuals across a diverse range of musical experience. | https://openreview.net/pdf/e26b385d95d67af36d02a385047be6f7a0d6f47b.pdf |
Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling | https://openreview.net/forum?id=N0n_QyQ5lBF | https://openreview.net/forum?id=N0n_QyQ5lBF | Bo Wan,Wenjuan Han,Zilong Zheng,Tinne Tuytelaars | ICLR 2022,Oral | We introduce a new task, unsupervised vision-language (VL) grammar induction. Given an image-caption pair, the goal is to extract a shared hierarchical structure for both image and language simultaneously. We argue that such structured output, grounded in both modalities, is a clear step towards the high-level understanding of multimodal information. Besides challenges existing in conventional visually grounded grammar induction tasks, VL grammar induction requires a model to capture contextual semantics and perform a fine-grained alignment. To address these challenges, we propose a novel method, CLIORA, which constructs a shared vision-language constituency tree structure with context-dependent semantics for all possible phrases in different levels of the tree. It computes a matching score between each constituent and image region, trained via contrastive learning. It integrates two levels of fusion, namely at feature-level and at score-level, so as to allow fine-grained alignment. We introduce a new evaluation metric for VL grammar induction, CCRA, and show a 3.3% improvement over a strong baseline on Flickr30k Entities. We also evaluate our model via two derived tasks, i.e., language grammar induction and phrase grounding, and improve over the state-of-the-art for both. | https://openreview.net/pdf/5c104842d13e8d6efd55b6d7c04f4373a39eae18.pdf |
PiCO: Contrastive Label Disambiguation for Partial Label Learning | https://openreview.net/forum?id=EhYjZy6e1gJ | https://openreview.net/forum?id=EhYjZy6e1gJ | Haobo Wang,Ruixuan Xiao,Yixuan Li,Lei Feng,Gang Niu,Gang Chen,Junbo Zhao | ICLR 2022,Oral | Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL---representation learning and label disambiguation---in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Extensive experiments demonstrate that PiCO significantly outperforms the current state-of-the-art approaches in PLL and even achieves comparable results to fully supervised learning. Code and data available: https://github.com/hbzju/PiCO. | https://openreview.net/pdf/f9275b96d741f229db4e61a15ce5f2a499c9ee67.pdf |
Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting | https://openreview.net/forum?id=0EXmFzUn5I | https://openreview.net/forum?id=0EXmFzUn5I | Shizhan Liu,Hang Yu,Cong Liao,Jianguo Li,Weiyao Lin,Alex X. Liu,Schahram Dustdar | ICLR 2022,Oral | Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time. In practice, the challenge is to build a flexible but parsimonious model that can capture a wide range of temporal dependencies. In this paper, we propose Pyraformer by exploring the multiresolution representation of the time series. Specifically, we introduce the pyramidal attention module (PAM) in which the inter-scale tree structure summarizes features at different resolutions and the intra-scale neighboring connections model the temporal dependencies of different ranges. Under mild conditions, the maximum length of the signal traversing path in Pyraformer is a constant (i.e., $\mathcal O(1)$) with regard to the sequence length $L$, while its time and space complexity scale linearly with $L$. Extensive numerical results show that Pyraformer typically achieves the highest prediction accuracy in both single-step and long-range forecasting tasks with the least amount of time and memory consumption, especially when the sequence is long. | https://openreview.net/pdf/2ac159853cd001bbca6a8a12da497c8013914b31.pdf |
Expressiveness and Approximation Properties of Graph Neural Networks | https://openreview.net/forum?id=wIzUeM3TAU | https://openreview.net/forum?id=wIzUeM3TAU | Floris Geerts,Juan L Reutter | ICLR 2022,Oral | Characterizing the separation power of graph neural networks (GNNs) provides an understanding of their limitations for graph learning tasks. Results regarding separation power are, however, usually geared at specific GNNs architectures, and tools for understanding arbitrary GNN architectures are generally lacking. We provide an elegant way to easily obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which have become the yardstick to measure the separation power of GNNs. The crux is to view GNNs as expressions in a procedural tensor language describing the computations in the layers of the GNNs. Then, by a simple analysis of the obtained expressions, in terms of the number of indexes used and the nesting depth of summations, bounds on the separation power in terms of the WL-tests readily follow. We use tensor language to define Higher-Order Message-Passing Neural Networks (or k-MPNNs), a natural extension of MPNNs. Furthermore, the tensor language point of view allows for the derivation of universality results for classes of GNNs in a natural way. Our approach provides a toolbox with which GNN architecture designers can analyze the separation power of their GNNs, without needing to know the intricacies of the WL-tests. We also provide insights in what is needed to boost the separation power of GNNs. | https://openreview.net/pdf/9d0fe7ff08261aae56611b7f670de9875c2a9cd9.pdf |
Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space | https://openreview.net/forum?id=1L0C5ROtFp | https://openreview.net/forum?id=1L0C5ROtFp | Steeven JANNY,Fabien Baradel,Natalia Neverova,Madiha Nadri,Greg Mori,Christian Wolf | ICLR 2022,Oral | Learning causal relationships in high-dimensional data (images, videos) is a hard task, as they are often defined on low dimensional manifolds and must be extracted from complex signals dominated by appearance, lighting, textures and also spurious correlations in the data. We present a method for learning counterfactual reasoning of physical processes in pixel space, which requires the prediction of the impact of interventions on initial conditions. Going beyond the identification of structural relationships, we deal with the challenging problem of forecasting raw video over long horizons. Our method does not require the knowledge or supervision of any ground truth positions or other object or scene properties. Our model learns and acts on a suitable hybrid latent representation based on a combination of dense features, sets of 2D keypoints and an additional latent vector per keypoint. We show that this better captures the dynamics of physical processes than purely dense or sparse representations. We introduce a new challenging and carefully designed counterfactual benchmark for predictions in pixel space and outperform strong baselines in physics-inspired ML and video prediction. | https://openreview.net/pdf/cbd75b662eaa377753b892113b221d062f26511e.pdf |
BEiT: BERT Pre-Training of Image Transformers | https://openreview.net/forum?id=p-BhZSz59o4 | https://openreview.net/forum?id=p-BhZSz59o4 | Hangbo Bao,Li Dong,Songhao Piao,Furu Wei | ICLR 2022,Oral | We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image patches (such as 16 x 16 pixels), and visual tokens (i.e., discrete tokens). We first ``tokenize'' the original image into visual tokens. Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods. | https://openreview.net/pdf/1be2cb0e0edf9af45f8ef450b802b459897cec3d.pdf |
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution | https://openreview.net/forum?id=UYneFzXSJWh | https://openreview.net/forum?id=UYneFzXSJWh | Ananya Kumar,Aditi Raghunathan,Robbie Matthew Jones,Tengyu Ma,Percy Liang | ICLR 2022,Oral | When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer---the "head"). It is well known that fine-tuning leads to better accuracy in-distribution (ID). However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large. On 10 distribution shift datasets (BREEDS-Living17, BREEDS-Entity30, DomainNet, CIFAR $\to$ STL, CIFAR-10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), fine-tuning obtains on average 2% higher accuracy ID but 7% lower accuracy OOD than linear probing. We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head---this is because while fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features. Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. Empirically, LP-FT outperforms both fine-tuning and linear probing on the above datasets (1% better ID, 10% better OOD than full fine-tuning). | https://openreview.net/pdf/5d8a4ae4492042b22b07eabc7a9abcfa517f419c.pdf |
StyleAlign: Analysis and Applications of Aligned StyleGAN Models | https://openreview.net/forum?id=Qg2vi4ZbHM9 | https://openreview.net/forum?id=Qg2vi4ZbHM9 | Zongze Wu,Yotam Nitzan,Eli Shechtman,Dani Lischinski | ICLR 2022,Oral | In this paper, we perform an in-depth study of the properties and applications of aligned generative models.
We refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. Several works already utilize some basic properties of aligned StyleGAN models to perform image-to-image translation. Here, we perform the first detailed exploration of model alignment, also focusing on StyleGAN. First, we empirically analyze aligned models and provide answers to important questions regarding their nature. In particular, we find that the child model's latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing. We further show that zero-shot vision tasks may be performed in the child domain, while relying exclusively on supervision in the parent domain. We demonstrate qualitatively and quantitatively that our approach yields state-of-the-art results, while requiring only simple fine-tuning and inversion. | https://openreview.net/pdf/a75f48f49713ac38baaaee51cb3273177975f96b.pdf |
Variational Inference for Discriminative Learning with Generative Modeling of Feature Incompletion | https://openreview.net/forum?id=qnQN4yr6FJz | https://openreview.net/forum?id=qnQN4yr6FJz | Kohei Miyaguchi,Takayuki Katsuki,Akira Koseki,Toshiya Iwamori | ICLR 2022,Oral | We are concerned with the problem of distributional prediction with incomplete features: The goal is to estimate the distribution of target variables given feature vectors with some of the elements missing. A typical approach to this problem is to perform missing-value imputation and regression, simultaneously or sequentially, which we call the generative approach. Another approach is to perform regression after appropriately encoding missing values into the feature, which we call the discriminative approach. In comparison, the generative approach is more robust to the feature corruption while the discriminative approach is more favorable to maximize the performance of prediction.
In this study, we propose a hybrid method to take the best of both worlds. Our method utilizes the black-box variational inference framework so that it can be applied to a wide variety of modern machine learning models, including the variational autoencoders. We also confirmed the effectiveness of the proposed method empirically.
| https://openreview.net/pdf/537474668e8264be0d7e7963ad009564621ad25e.pdf |
Efficiently Modeling Long Sequences with Structured State Spaces | https://openreview.net/forum?id=uYLFoz1vlAC | https://openreview.net/forum?id=uYLFoz1vlAC | Albert Gu,Karan Goel,Christopher Re | ICLR 2022,Oral | A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they still struggle to scale to very long sequences of $10000$ or more steps. A promising recent approach proposed modeling sequences by simulating the fundamental state space model (SSM) \( x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) \), and showed that for appropriate choices of the state matrix \( A \), this system could handle long-range dependencies mathematically and empirically. However, this method has prohibitive computation and memory requirements, rendering it infeasible as a general sequence modeling solution. We propose the Structured State Space sequence model (S4) based on a new parameterization for the SSM, and show that it can be computed much more efficiently than prior approaches while preserving their theoretical strengths. Our technique involves conditioning \( A \) with a low-rank correction, allowing it to be diagonalized stably and reducing the SSM to the well-studied computation of a Cauchy kernel. 
S4 achieves strong empirical results across a diverse range of established benchmarks, including (i) 91\% accuracy on sequential CIFAR-10 with no data augmentation or auxiliary losses, on par with a larger 2-D ResNet, (ii) substantially closing the gap to Transformers on image and language modeling tasks, while performing generation $60\times$ faster, and (iii) SoTA on every task from the Long Range Arena benchmark, including solving the challenging Path-X task of length 16k that all prior work fails on, while being as efficient as all competitors. | https://openreview.net/pdf/a8eedf494f6698cb467c310c59d3ea6488546805.pdf |
Large Language Models Can Be Strong Differentially Private Learners | https://openreview.net/forum?id=bVuP3ltATMz | https://openreview.net/forum?id=bVuP3ltATMz | Xuechen Li,Florian Tramer,Percy Liang,Tatsunori Hashimoto | ICLR 2022,Oral | Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead.
We show that this performance drop can be mitigated with (1) the use of large pretrained language models; (2) non-standard hyperparameters that suit DP optimization; and (3) fine-tuning objectives which are aligned with the pretraining procedure.
With the above, we obtain NLP models that outperform state-of-the-art DP-trained models under the same privacy budget and strong non-private baselines---by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora.
To address the computational challenge of running DP-SGD with large Transformers, we propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any linear layer in the model.
The technique enables privately training Transformers with almost the same memory cost as non-private training at a modest run-time overhead.
Contrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), empirical results reveal that private learning with pretrained language models tends to not suffer from dimension-dependent performance degradation.
Code to reproduce results can be found at https://github.com/lxuechen/private-transformers.
| https://openreview.net/pdf/d88e1e721c4085b8a6403837f45b8c483ad0225b.pdf |
GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation | https://openreview.net/forum?id=PzcvxEMzvQC | https://openreview.net/forum?id=PzcvxEMzvQC | Minkai Xu,Lantao Yu,Yang Song,Chence Shi,Stefano Ermon,Jian Tang | ICLR 2022,Oral | Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery. Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamics where heated particles will diffuse from original states to a noise distribution, in this paper, we propose a novel generative model named GeoDiff for molecular conformation prediction. GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain. Modeling such a generation process is however very challenging as the likelihood of conformations should be roto-translational invariant. We theoretically show that Markov chains evolving with equivariant Markov kernels can induce an invariant distribution by design, and further propose building blocks for the Markov kernels to preserve the desirable equivariance property. The whole framework can be efficiently trained in an end-to-end fashion by optimizing a weighted variational lower bound to the (conditional) likelihood. Experiments on multiple benchmarks show that GeoDiff is superior or comparable to existing state-of-the-art approaches, especially on large molecules. | https://openreview.net/pdf/d6be0299d7f2d2bf947d450fffe98c8395458c75.pdf |
Frame Averaging for Invariant and Equivariant Network Design | https://openreview.net/forum?id=zIUyj55nXR | https://openreview.net/forum?id=zIUyj55nXR | Omri Puny,Matan Atzmon,Edward J. Smith,Ishan Misra,Aditya Grover,Heli Ben-Hamu,Yaron Lipman | ICLR 2022,Oral | Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network architectures that respect these symmetries while being expressive and computationally efficient. For example, Euclidean motion invariant/equivariant graph or point cloud neural networks.
We introduce Frame Averaging (FA), a highly general purpose and systematic framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types. Our framework builds on the well known group averaging operator that guarantees invariance or equivariance but is intractable. In contrast, we observe that for many important classes of symmetries, this operator can be replaced with an averaging operator over a small subset of the group elements, called a frame. We show that averaging over a frame guarantees exact invariance or equivariance while often being much simpler to compute than averaging over the entire group. Furthermore, we prove that FA-based models have maximal expressive power in a broad setting and in general preserve the expressive power of their backbone architectures. Using frame averaging, we propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs. We demonstrate the practical effectiveness of FA on several applications including point cloud normal estimation, beyond $2$-WL graph separation, and $n$-body dynamics prediction, achieving state-of-the-art results in all of these benchmarks. | https://openreview.net/pdf/d7849f0ef0f911d06889785dc7116564d5342442.pdf |
Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation | https://openreview.net/forum?id=oapKSVM2bcj | https://openreview.net/forum?id=oapKSVM2bcj | Alex Rogozhnikov | ICLR 2022,Oral | Tensor computations underlie modern scientific computing and deep learning.
A number of tensor frameworks have emerged, varying in execution model, hardware support, memory management, model definition, etc.
However, tensor operations in all frameworks follow the same paradigm.
Recent neural network architectures demonstrate demand for higher expressiveness of tensor operations.
The current paradigm is not suited to write readable, reliable, or easy-to-modify code for multidimensional tensor manipulations.
Moreover, some commonly used operations do not provide sufficient checks and can break a tensor structure.
These mistakes are elusive as no tools or tests can detect them.
Independently, API discrepancies complicate code transfer between frameworks.
We propose einops notation: a uniform and generic way to manipulate tensor structure that significantly improves code readability and flexibility by focusing on the structure of input and output tensors.
We implement einops notation in a Python package that efficiently supports multiple widely used frameworks and provides framework-independent minimalist API for tensor manipulations. | https://openreview.net/pdf/d568f6e36eaa377888611b8e0d84076777edc330.pdf |
A Fine-Grained Analysis on Distribution Shift | https://openreview.net/forum?id=Dl4LetuLdyK | https://openreview.net/forum?id=Dl4LetuLdyK | Olivia Wiles,Sven Gowal,Florian Stimberg,Sylvestre-Alvise Rebuffi,Ira Ktena,Krishnamurthy Dj Dvijotham,Ali Taylan Cemgil | ICLR 2022,Oral | Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods grouped into five categories across both synthetic and real-world datasets. Overall, we train more than 85K models. Our experimental framework can be easily extended to include new methods, shifts, and datasets. We find, unlike previous work (Gulrajani & Lopez-Paz, 2021), that progress has been made over a standard ERM baseline; in particular, pretraining and augmentations (learned or heuristic) offer large gains in many cases. However, the best methods are not consistent over different datasets and shifts. We will open source our experimental framework, allowing future work to evaluate new methods over multiple shifts to obtain a more complete picture of a method's effectiveness.
Code is available at github.com/deepmind/distribution_shift_framework.
| https://openreview.net/pdf/6be366539738706234ad0b104ed82361a3c5f6e7.pdf |
Open-Set Recognition: A Good Closed-Set Classifier is All You Need | https://openreview.net/forum?id=5hLP5JY9S2d | https://openreview.net/forum?id=5hLP5JY9S2d | Sagar Vaze,Kai Han,Andrea Vedaldi,Andrew Zisserman | ICLR 2022,Oral | The ability to identify whether or not a test sample belongs to one of the semantic classes in a classifier's training set is critical to practical deployment of the model. This task is termed open-set recognition (OSR) and has received significant attention in recent years. In this paper, we first demonstrate that the ability of a classifier to make the 'none-of-above' decision is highly correlated with its accuracy on the closed-set classes. We find that this relationship holds across loss objectives and architectures, and further demonstrate the trend both on the standard OSR benchmarks as well as on a large-scale ImageNet evaluation. Second, we use this correlation to boost the performance of the maximum softmax probability OSR 'baseline' by improving its closed-set accuracy, and with this strong baseline achieve state-of-the-art on a number of OSR benchmarks. Similarly, we boost the performance of the existing state-of-the-art method by improving its closed-set accuracy, but the resulting discrepancy with the strong baseline is marginal. Our third contribution is to present the 'Semantic Shift Benchmark' (SSB), which better respects the task of detecting semantic novelty, as opposed to low-level distributional shifts as tackled by neighbouring machine learning fields. On this new evaluation, we again demonstrate that there is negligible difference between the strong baseline and the existing state-of-the-art. Code available at: https://github.com/sgvaze/osr_closed_set_all_you_need. | https://openreview.net/pdf/a9e422d293a936fe65575b5e1ea6a86549b84bca.pdf |
Learning Strides in Convolutional Neural Networks | https://openreview.net/forum?id=M752z9FKJP | https://openreview.net/forum?id=M752z9FKJP | Rachid Riad,Olivier Teboul,David Grangier,Neil Zeghidour | ICLR 2022,Oral | Convolutional neural networks typically contain several downsampling operators, such as strided convolutions or pooling layers, that progressively reduce the resolution of intermediate representations. This provides some shift-invariance while reducing the computational complexity of the whole architecture. A critical hyperparameter of such layers is their stride: the integer factor of downsampling. As strides are not differentiable, finding the best configuration either requires cross-validation or discrete optimization (e.g. architecture search), which rapidly become prohibitive as the search space grows exponentially with the number of downsampling layers. Hence, exploring this search space by gradient descent would allow finding better configurations at a lower computational cost. This work introduces DiffStride, the first downsampling layer with learnable strides. Our layer learns the size of a cropping mask in the Fourier domain, that effectively performs resizing in a differentiable way. Experiments on audio and image classification show the generality and effectiveness of our solution: we use DiffStride as a drop-in replacement to standard downsampling layers and outperform them. In particular, we show that introducing our layer into a ResNet-18 architecture allows keeping consistent high performance on CIFAR10, CIFAR100 and ImageNet even when training starts from poor random stride configurations. Moreover, formulating strides as learnable variables allows us to introduce a regularization term that controls the computational complexity of the architecture. We show how this regularization allows trading off accuracy for efficiency on ImageNet. | https://openreview.net/pdf/1bc01ea49b5a288387ac5de300847b1d6690d940.pdf |
Understanding over-squashing and bottlenecks on graphs via curvature | https://openreview.net/forum?id=7UmjRGzp-A | https://openreview.net/forum?id=7UmjRGzp-A | Jake Topping,Francesco Di Giovanni,Benjamin Paul Chamberlain,Xiaowen Dong,Michael M. Bronstein | ICLR 2022,Oral | Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on the input graph. Recent works pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phenomenon, referred to as 'over-squashing', has been heuristically attributed to graph bottlenecks where the number of $k$-hop neighbors grows rapidly with $k$. We provide a precise description of the over-squashing phenomenon in GNNs and analyze how it arises from bottlenecks in the graph. For this purpose, we introduce a new edge-based combinatorial curvature and prove that negatively curved edges are responsible for the over-squashing issue. We also propose and experimentally test a curvature-based graph rewiring method to alleviate the over-squashing. | https://openreview.net/pdf/f6b974eac8792a0d8d59633044276dabbf9d01c9.pdf |
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | https://openreview.net/forum?id=8c50f-DoWAu | https://openreview.net/forum?id=8c50f-DoWAu | Vadim Popov,Ivan Vovk,Vladimir Gogoryan,Tasnima Sadekova,Mikhail Sergeevich Kudinov,Jiansheng Wei | ICLR 2022,Oral | Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when both source and target speakers do not belong to the training dataset. We present a scalable high-quality solution based on diffusion probabilistic modeling and demonstrate its superior quality compared to state-of-the-art one-shot voice conversion approaches. Moreover, focusing on real-time applications, we investigate general principles which can make diffusion models faster while keeping synthesis quality at a high level. As a result, we develop a novel Stochastic Differential Equations solver suitable for various diffusion model types and generative tasks as shown through empirical studies and justify it by theoretical analysis. | https://openreview.net/pdf/468145b46e459c5ba69e7017b6ef4eaece277e94.pdf |
Resolving Training Biases via Influence-based Data Relabeling | https://openreview.net/forum?id=EskfH0bwNVn | https://openreview.net/forum?id=EskfH0bwNVn | Shuming Kong,Yanyan Shen,Linpeng Huang | ICLR 2022,Oral | The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model’s predictions. Recent studies on \emph{data resampling} have employed influence functions to identify \emph{harmful} training samples that will degrade model's test performance. They have shown that discarding or downweighting the identified harmful training samples is an effective way to resolve training biases. In this work, we move one step forward and propose an influence-based relabeling framework named RDIA for reusing harmful training samples toward better model performance. To achieve this, we use influence functions to estimate how relabeling a training sample would affect model's test performance and further develop a novel relabeling function R. We theoretically prove that applying R to relabel harmful training samples allows the model to achieve lower test loss than simply discarding them for any classification tasks using cross-entropy loss. Extensive experiments on ten real-world datasets demonstrate RDIA outperforms the state-of-the-art data resampling methods and improves model's robustness against label noise. | https://openreview.net/pdf/64c51657be7bb5a9efecafe39344c719ccb4d394.pdf |
Representational Continuity for Unsupervised Continual Learning | https://openreview.net/forum?id=9Hrka5PA7LW | https://openreview.net/forum?id=9Hrka5PA7LW | Divyam Madaan,Jaehong Yoon,Yuanchun Li,Yunxin Liu,Sung Ju Hwang | ICLR 2022,Oral | Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advances are restricted to supervised continual learning (SCL) scenarios. Consequently, they are not scalable to real-world applications where the data distribution is often biased and unannotated. In this work, we focus on unsupervised continual learning (UCL), where we learn the feature representations on an unlabelled sequence of tasks and show that reliance on annotated data is not necessary for continual learning. We conduct a systematic study analyzing the learned feature representations and show that unsupervised visual representations are surprisingly more robust to catastrophic forgetting, consistently achieve better performance, and generalize better to out-of-distribution tasks than SCL. Furthermore, we find that UCL achieves a smoother loss landscape through qualitative analysis of the learned representations and learns meaningful feature representations. Additionally, we propose Lifelong Unsupervised Mixup (LUMP), a simple yet effective technique that interpolates between the current task and previous tasks' instances to alleviate catastrophic forgetting for unsupervised representations. | https://openreview.net/pdf/947f2c6dc3cd63a83d402bf9cbaddf42e362709e.pdf |
Vision-Based Manipulators Need to Also See from Their Hands | https://openreview.net/forum?id=RJkAHKp7kNZ | https://openreview.net/forum?id=RJkAHKp7kNZ | Kyle Hsu,Moo Jin Kim,Rafael Rafailov,Jiajun Wu,Chelsea Finn | ICLR 2022,Oral | We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation. | https://openreview.net/pdf/bf5308ad68220347e7cbf2dcbedbf7bb4e0a21b1.pdf |
Meta-Learning with Fewer Tasks through Task Interpolation | https://openreview.net/forum?id=ajXWF7bVR8d | https://openreview.net/forum?id=ajXWF7bVR8d | Huaxiu Yao,Linjun Zhang,Chelsea Finn | ICLR 2022,Oral | Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge. However, the bottleneck of current meta-learning algorithms is the requirement of a large number of meta-training tasks, which may not be accessible in real-world scenarios. To address the challenge that available tasks may not densely sample the space of tasks, we propose to augment the task set through interpolation. By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels. Under both gradient-based and metric-based meta-learning settings, our theoretical analysis shows MLTI corresponds to a data-adaptive meta-regularization and further improves the generalization. Empirically, in our experiments on eight datasets from diverse domains including image recognition, pose prediction, molecule property prediction, and medical image classification, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies. | https://openreview.net/pdf/ebbfc5841da414394c96beeba92500546061461a.pdf |
DISCOVERING AND EXPLAINING THE REPRESENTATION BOTTLENECK OF DNNS | https://openreview.net/forum?id=iRCUlgmdfHJ | https://openreview.net/forum?id=iRCUlgmdfHJ | Huiqi Deng,Qihan Ren,Hao Zhang,Quanshi Zhang | ICLR 2022,Oral | This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interactions. We discover that a DNN is more likely to encode both too simple and too complex interactions, but usually fails to learn interactions of intermediate complexity. Such a phenomenon is widely shared by different DNNs for different tasks. This phenomenon indicates a cognition gap between DNNs and humans, and we call it a representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose losses to encourage/penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities. The code is available at https://github.com/Nebularaid2000/bottleneck. | https://openreview.net/pdf/e470657e4d47a20411713a973ed0282f87c9f9a9.pdf |
Sparse Communication via Mixed Distributions | https://openreview.net/forum?id=WAid50QschI | https://openreview.net/forum?id=WAid50QschI | António Farinhas,Wilker Aziz,Vlad Niculae,Andre Martins | ICLR 2022,Oral | Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call "mixed random variables.'' Our starting point is a new "direct sum'' base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic ("sample-and-project'’) and an intrinsic one (based on face stratification). We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational auto-encoders with mixed latent variables. | https://openreview.net/pdf/f8c966f98befffb0bfbd9af921a4e4dd831d549f.pdf |
Finetuned Language Models are Zero-Shot Learners | https://openreview.net/forum?id=gEZrGCozdqR | https://openreview.net/forum?id=gEZrGCozdqR | Jason Wei,Maarten Bosma,Vincent Zhao,Kelvin Guu,Adams Wei Yu,Brian Lester,Nan Du,Andrew M. Dai,Quoc V Le | ICLR 2022,Oral | This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning—finetuning language models on a collection of datasets described via instructions—substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning. | https://openreview.net/pdf/16b50405ab1e3ac1e2f76190ee62a48c496c568d.pdf |
F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization | https://openreview.net/forum?id=_CfpJazzXT2 | https://openreview.net/forum?id=_CfpJazzXT2 | Qing Jin,Jian Ren,Richard Zhuang,Sumant Hanumante,Zhengang Li,Zhiyu Chen,Yanzhi Wang,Kaiyuan Yang,Sergey Tulyakov | ICLR 2022,Oral | Neural network quantization is a promising compression technique to reduce memory footprint and save energy consumption, potentially leading to real-time inference. However, there is a performance gap between quantized and full-precision models. To reduce it, existing quantization approaches require high-precision INT32 or full-precision multiplication during inference for scaling or dequantization. This introduces a noticeable cost in terms of memory, speed, and required energy. To tackle these issues, we present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication. To derive our method, we first discuss the advantages of fixed-point multiplication with different formats of fixed-point numbers and study the statistical behavior of the associated fixed-point numbers. Second, based on the statistical and algorithmic analysis, we apply different fixed-point formats for weights and activations of different layers. We introduce a novel algorithm to automatically determine the right format for each layer during training. Third, we analyze a previous quantization algorithm, parameterized clipping activation (PACT), and reformulate it using fixed-point arithmetic. Finally, we unify the recently proposed method for quantization fine-tuning and our fixed-point approach to show the potential of our method. We verify F8Net on ImageNet for MobileNet V1/V2 and ResNet18/50. Our approach achieves comparable or better performance when compared not only to existing quantization techniques with INT32 multiplication or floating-point arithmetic, but also to the full-precision counterparts, achieving state-of-the-art performance. | https://openreview.net/pdf/aed69dd0c10990a2c4948e6d230de04c5719fb7d.pdf |
Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design | https://openreview.net/forum?id=UcDUxjPYWSr | https://openreview.net/forum?id=UcDUxjPYWSr | Ye Yuan,Yuda Song,Zhengyi Luo,Wen Sun,Kris M. Kitani | ICLR 2022,Oral | An agent's functionality is largely determined by its design, i.e., skeletal structure and joint attributes (e.g., length, size, strength). However, finding the optimal agent design for a given function is extremely challenging since the problem is inherently combinatorial and the design space is prohibitively large. Additionally, it can be costly to evaluate each candidate design which requires solving for its optimal controller. To tackle these problems, our key idea is to incorporate the design procedure of an agent into its decision-making process. Specifically, we learn a conditional policy that, in an episode, first applies a sequence of transform actions to modify an agent's skeletal structure and joint attributes, and then applies control actions under the new design. To handle a variable number of joints across designs, we use a graph-based policy where each graph node represents a joint and uses message passing with its neighbors to output joint-specific actions. Using policy gradient methods, our approach enables joint optimization of agent design and control as well as experience sharing across different designs, which improves sample efficiency substantially. Experiments show that our approach, Transform2Act, outperforms prior methods significantly in terms of convergence speed and final performance. Notably, Transform2Act can automatically discover plausible designs similar to giraffes, squids, and spiders. Code and videos are available at https://sites.google.com/view/transform2act. | https://openreview.net/pdf/511a5c95afacad18125605721a8d1e530c07018b.pdf |
ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics | https://openreview.net/forum?id=s03AQxehtd_ | https://openreview.net/forum?id=s03AQxehtd_ | Boris N. Oreshkin,Florent Bocquelet,Felix G. Harvey,Bay Raitt,Dominic Laflamme | ICLR 2022,Oral | Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To solve this problem, we propose a novel neural architecture that combines residual connections with prototype encoding of a partially specified pose to create a new complete pose from the learned latent space. We show that our architecture outperforms a baseline based on Transformer, both in terms of accuracy and computational efficiency. Additionally, we develop a user interface to integrate our neural model in Unity, a real-time 3D development platform. Furthermore, we introduce two new datasets representing the static human pose modeling problem, based on high-quality human motion capture data, which will be released publicly along with model code. | https://openreview.net/pdf/72eadcfe21558f0be18ff071adc50adc3ae85e5e.pdf |
Hyperparameter Tuning with Renyi Differential Privacy | https://openreview.net/forum?id=-70L8lpp9DF | https://openreview.net/forum?id=-70L8lpp9DF | Nicolas Papernot,Thomas Steinke | ICLR 2022,Oral | For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fine tune the value of the training algorithm’s hyperparameters. In this work, we first illustrate how simply setting hyperparameters based on non-private training runs can leak private information. Motivated by this observation, we then provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy. Our results improve and extend the work of Liu and Talwar (STOC 2019). Our analysis supports our previous observation that tuning hyperparameters does indeed leak private information, but we prove that, under certain assumptions, this leakage is modest, as long as each candidate training run needed to select hyperparameters is itself differentially private. | https://openreview.net/pdf/8832d0e112b9fd6c5c8f0be8a093625e4de6e337.pdf |
Real-Time Neural Voice Camouflage | https://openreview.net/forum?id=qj1IZ-6TInc | https://openreview.net/forum?id=qj1IZ-6TInc | Mia Chiquier,Chengzhi Mao,Carl Vondrick | ICLR 2022,Oral | Automatic speech recognition systems have created exciting possibilities for applications; however, they also enable opportunities for systematic eavesdropping. We propose a method to camouflage a person's voice from these systems without inconveniencing the conversation between people in the room. Standard adversarial attacks are not effective in real-time streaming situations because the characteristics of the signal will have changed by the time the attack is executed. We introduce predictive adversarial attacks, which achieve real-time performance by forecasting the attack vector that will be the most effective in the future. Under real-time constraints, our method jams the established speech recognition system DeepSpeech 3.9x more than online projected gradient descent as measured through word error rate, and 6.6x more as measured through character error rate. We furthermore demonstrate that our approach is practically effective in realistic environments with complex scene geometries. | https://openreview.net/pdf/e2b96a38db73636bfa51d5ee4097373ddda15329.pdf |
CycleMLP: A MLP-like Architecture for Dense Prediction | https://openreview.net/forum?id=NMEceG4v69Y | https://openreview.net/forum?id=NMEceG4v69Y | Shoufa Chen,Enze Xie,Chongjian GE,Runjian Chen,Ding Liang,Ping Luo | ICLR 2022,Oral | This paper presents a simple MLP-like architecture, CycleMLP, which is a versatile backbone for visual recognition and dense prediction. Compared to modern MLP architectures, e.g., MLP-Mixer, ResMLP, and gMLP, whose architectures are tied to image size and thus are infeasible for object detection and segmentation, CycleMLP has two advantages. (1) It can cope with various image sizes. (2) It achieves linear computational complexity in image size by using local windows. In contrast, previous MLPs have $O(N^2)$ computations due to fully spatial connections. We build a family of models that surpass existing MLPs and even state-of-the-art Transformer-based models, e.g., Swin Transformer, while using fewer parameters and FLOPs. We expand the MLP-like models' applicability, making them a versatile backbone for dense prediction tasks. CycleMLP achieves competitive results on object detection, instance segmentation, and semantic segmentation. In particular, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on the ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on the ImageNet-C dataset. | https://openreview.net/pdf/0ff0f728cbc430b36ea84288793e887e216cff59.pdf |
Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models | https://openreview.net/forum?id=0xiJLKH-ufZ | https://openreview.net/forum?id=0xiJLKH-ufZ | Fan Bao,Chongxuan Li,Jun Zhu,Bo Zhang | ICLR 2022,Oral | Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, the inference of DPMs is expensive since it generally needs to iterate over thousands of timesteps. A key problem in the inference is to estimate the variance in each timestep of the reverse process. In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building upon it, we propose \textit{Analytic-DPM}, a training-free inference framework that estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip the estimate for a better result. Empirically, our analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and meanwhile enjoys a $20\times$ to $80\times$ speed up. | https://openreview.net/pdf/541cdc9e000367bb0bd3fc42201573ed434094c8.pdf |
RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation | https://openreview.net/forum?id=uSE03demja | https://openreview.net/forum?id=uSE03demja | Pingchuan Ma,Tao Du,Joshua B. Tenenbaum,Wojciech Matusik,Chuang Gan | ICLR 2022,Oral | This work considers identifying parameters characterizing a physical system's dynamic motion directly from a video whose rendering configurations are inaccessible. Existing solutions require massive training data or lack generalizability to unknown rendering configurations. We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem. Our core idea is to train a rendering-invariant state-prediction (RISP) network that transforms image differences into state differences independent of rendering configurations, e.g., lighting, shadows, or material reflectance. To train this predictor, we formulate a new loss on rendering variances using gradients from differentiable rendering. Moreover, we present an efficient, second-order method to compute the gradients of this loss, allowing it to be integrated seamlessly into modern deep learning frameworks. We evaluate our method in rigid-body and deformable-body simulation environments using four tasks: state estimation, system identification, imitation learning, and visuomotor control. We further demonstrate the efficacy of our approach on a real-world example: inferring the state and action sequences of a quadrotor from a video of its motion sequences. Compared with existing methods, our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations. | https://openreview.net/pdf/999353870633727a2d50bc5b4ee873b50401eba7.pdf |
The Information Geometry of Unsupervised Reinforcement Learning | https://openreview.net/forum?id=3wU2UX0voE | https://openreview.net/forum?id=3wU2UX0voE | Benjamin Eysenbach,Ruslan Salakhutdinov,Sergey Levine | ICLR 2022,Oral | How can a reinforcement learning (RL) agent prepare to solve downstream tasks if those tasks are not known a priori? One approach is unsupervised skill discovery, a class of algorithms that learn a set of policies without access to a reward function. Such algorithms bear a close resemblance to representation learning algorithms (e.g., contrastive learning) in supervised learning, in that both are pretraining algorithms that maximize some approximation to a mutual information objective. While prior work has shown that the set of skills learned by such methods can accelerate downstream RL tasks, prior work offers little analysis into whether these skill learning algorithms are optimal, or even what notion of optimality would be appropriate to apply to them. In this work, we show that unsupervised skill discovery algorithms based on mutual information maximization do not learn skills that are optimal for every possible reward function. However, we show that the distribution over skills provides an optimal initialization minimizing regret against adversarially-chosen reward functions, assuming a certain type of adaptation procedure. Our analysis also provides a geometric perspective on these skill learning methods. | https://openreview.net/pdf/4709236cdf10497a057511e94fe99f87770c5bf6.pdf |
Language modeling via stochastic processes | https://openreview.net/forum?id=pMQwKL1yctf | https://openreview.net/forum?id=pMQwKL1yctf | Rose E Wang,Esin Durmus,Noah Goodman,Tatsunori Hashimoto | ICLR 2022,Oral | Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent stochastic process. TC does this by learning a representation which maps the dynamics of how text changes in a document to the dynamics of a stochastic process of interest. Using this representation, the language model can generate text by first implicitly generating a document plan via a stochastic process, and then generating text that is consistent with this latent plan. Compared to domain-specific methods and fine-tuning GPT2 across a variety of text domains, TC improves performance on text infilling and discourse coherence. On long text generation settings, TC preserves the text structure both in terms of ordering (up to +40% better) and text length consistency (up to +17% better). Human evaluators also prefer TC's output 28.6% more than the baselines. | https://openreview.net/pdf/ceeec650a60b1f87ad4dda26ecd02c9df0e3ed9d.pdf |
Learning to Downsample for Segmentation of Ultra-High Resolution Images | https://openreview.net/forum?id=HndgQudNb91 | https://openreview.net/forum?id=HndgQudNb91 | Chen Jin,Ryutaro Tanno,Thomy Mertzanidou,Eleftheria Panagiotaki,Daniel C. Alexander | ICLR 2022,Poster | Many computer vision systems require low-cost segmentation algorithms based on deep learning, either because of the enormous size of input images or limited computational budget. Common solutions uniformly downsample the input images to meet memory constraints, assuming all pixels are equally informative. In this work, we demonstrate that this assumption can harm the segmentation performance because the segmentation difficulty varies spatially (see Figure 1 “Uniform”). We combat this problem by introducing a learnable downsampling module, which can be optimised together with the given segmentation model in an end-to-end fashion. We formulate the problem of training such downsampling module as optimisation of sampling density distributions over the input images given their low-resolution views. To defend against degenerate solutions (e.g. over-sampling trivial regions like the backgrounds), we propose a regularisation term that encourages the sampling locations to concentrate around the object boundaries. We find the downsampling module learns to sample more densely at difficult locations, thereby improving the segmentation performance (see Figure 1 "Ours"). Our experiments on benchmarks of high-resolution street view, aerial and medical images demonstrate substantial improvements in terms of efficiency-and-accuracy trade-off compared to both uniform downsampling and two recent advanced downsampling techniques. | https://openreview.net/pdf/d2ade7120315e0521c4b97b593c4a2ebd44b0652.pdf |
Variational Neural Cellular Automata | https://openreview.net/forum?id=7fFO4cMBx_9 | https://openreview.net/forum?id=7fFO4cMBx_9 | Rasmus Berg Palm,Miguel González Duque,Shyam Sudhakaran,Sebastian Risi | ICLR 2022,Poster | In nature, the process of cellular growth and differentiation has led to an amazing diversity of organisms --- algae, starfish, giant sequoia, tardigrades, and orcas are all created by the same generative process. Inspired by the incredible diversity of this biological generative process, we propose a generative model, the Variational Neural Cellular Automata (VNCA), which is loosely inspired by the biological processes of cellular growth and differentiation. Unlike previous related works, the VNCA is a proper probabilistic generative model, and we evaluate it according to best practices. We find that the VNCA learns to reconstruct samples well and that despite its relatively few parameters and simple local-only communication, the VNCA can learn to generate a large variety of output from information encoded in a common vector format. While there is a significant gap to the current state-of-the-art in terms of generative modeling performance, we show that the VNCA can learn a purely self-organizing generative process of data. Additionally, the self-organizing nature bestows the VNCA with some inherent robustness against perturbations in the early stages of growth. | https://openreview.net/pdf/abec641c2a0c18536da3345e5cd92d673d90b69d.pdf |
Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation | https://openreview.net/forum?id=FKp8-pIRo3y | https://openreview.net/forum?id=FKp8-pIRo3y | Todor Davchev,Oleg Olegovich Sushkov,Jean-Baptiste Regli,Stefan Schaal,Yusuf Aytar,Markus Wulfmeier,Jon Scholz | ICLR 2022,Poster | Complex sequential tasks in continuous-control settings often require agents to successfully traverse a set of ``narrow passages'' in their state space. Solving such tasks with a sparse reward in a sample-efficient manner poses a challenge to modern reinforcement learning (RL) due to the associated long-horizon nature of the problem and the lack of sufficient positive signal during learning. Various tools have been applied to address this challenge. When available, large sets of demonstrations can guide agent exploration. Hindsight relabelling on the other hand does not require additional sources of information. However, existing strategies explore based on task-agnostic goal distributions, which can render the solution of long-horizon tasks impractical. In this work, we extend hindsight relabelling mechanisms to guide exploration along task-specific distributions implied by a small set of successful demonstrations. We evaluate the approach on four complex, single and dual arm, robotics manipulation tasks against strong suitable baselines. The method requires far fewer demonstrations to solve all tasks and achieves a significantly higher overall performance as task complexity increases. Finally, we investigate the robustness of the proposed solution with respect to the quality of input representations and the number of demonstrations. | https://openreview.net/pdf/524d4c3cacc5ff7803cd7061b33991511fee7db7.pdf |
L0-Sparse Canonical Correlation Analysis | https://openreview.net/forum?id=KntaNRo6R48 | https://openreview.net/forum?id=KntaNRo6R48 | Ofir Lindenbaum,Moshe Salhov,Amir Averbuch,Yuval Kluger | ICLR 2022,Poster | Canonical Correlation Analysis (CCA) models are powerful for studying the associations between two sets of variables. The canonically correlated representations, termed \textit{canonical variates} are widely used in unsupervised learning to analyze unlabeled multi-modal registered datasets. Despite their success, CCA models may break (or overfit) if the number of variables in either of the modalities exceeds the number of samples. Moreover, often a significant fraction of the variables measures modality-specific information, and thus removing them is beneficial for identifying the \textit{canonically correlated variates}. Here, we propose $\ell_0$-CCA, a method for learning correlated representations based on sparse subsets of variables from two observed modalities. Sparsity is obtained by multiplying the input variables by stochastic gates, whose parameters are learned together with the CCA weights via an $\ell_0$-regularized correlation loss. We further propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets. We demonstrate the efficacy of the method using several synthetic and real examples. Most notably, by gating nuisance input variables, our approach improves the extracted representations compared to other linear, non-linear and sparse CCA-based models. | https://openreview.net/pdf/69ae8c04ac43812f7523f009313daec68f09ea3d.pdf |
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? | https://openreview.net/forum?id=B7ZbqNLDn-_ | https://openreview.net/forum?id=B7ZbqNLDn-_ | Sheikh Shams Azam,Seyyedali Hosseinalipour,Qiang Qiu,Christopher Brinton | ICLR 2022,Poster | In this paper, we question the rationale behind propagating large numbers of parameters through a distributed system during federated learning. We start by examining the rank characteristics of the subspace spanned by gradients (i.e., the gradient-space) in centralized model training, and observe that the gradient-space often consists of a few leading principal components accounting for an overwhelming majority (95-99%) of the explained variance. Motivated by this, we propose the "Look-back Gradient Multiplier" (LBGM) algorithm, which utilizes this low-rank property of the gradient-space in federated learning. Operationally, LBGM recycles the gradients between model update rounds to significantly reduce the number of parameters to be propagated through the system. We analytically characterize the convergence behavior of LBGM, revealing the nature of the trade-off between communication savings and model performance. Our subsequent experimental results demonstrate the improvement LBGM obtains on communication overhead compared to federated learning baselines. Additionally, we show that LBGM is a general plug-and-play algorithm that can be used standalone or stacked on top of existing sparsification techniques for distributed model training. | https://openreview.net/pdf/76e2433c08e957e7f19a49e6815d0f6b52da92cd.pdf |
Is Homophily a Necessity for Graph Neural Networks? | https://openreview.net/forum?id=ucASPPD9GKN | https://openreview.net/forum?id=ucASPPD9GKN | Yao Ma,Xiaorui Liu,Neil Shah,Jiliang Tang | ICLR 2022,Poster | Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks. When applied to semi-supervised node classification, GNNs are widely believed to work well due to the homophily assumption (``like attracts like''), and fail to generalize to heterophilous graphs where dissimilar nodes connect. Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion. In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs. This motivates us to reconsider whether homophily is truly necessary for good GNN performance. We find that this claim is not quite true, and in fact, GCNs can achieve strong performance on heterophilous graphs under certain conditions. Our work carefully characterizes these conditions and provides supporting theoretical understanding and empirical observations. Finally, we examine existing heterophilous graphs benchmarks and reconcile how the GCN (under)performs on them based on this understanding. | https://openreview.net/pdf/dba6b2a528efebfb036a0b908ecfc59201204429.pdf |
DEGREE: Decomposition Based Explanation for Graph Neural Networks | https://openreview.net/forum?id=Ve0Wth3ptT_ | https://openreview.net/forum?id=Ve0Wth3ptT_ | Qizhang Feng,Ninghao Liu,Fan Yang,Ruixiang Tang,Mengnan Du,Xia Hu | ICLR 2022,Poster | Graph Neural Networks (GNNs) are gaining extensive attention for their application in graph data. However, the black-box nature of GNNs prevents users from understanding and trusting the models, thus hampering their applicability. While explaining GNNs remains a challenge, most existing methods fall into approximation-based and perturbation-based approaches, which suffer from faithfulness problems and unnatural artifacts, respectively. To tackle these problems, we propose DEGREE (Decomposition based Explanation for GRaph nEural nEtworks) to provide a faithful explanation for GNN predictions. By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction. Based on this, we further design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods. The efficiency of our algorithm can be further improved by utilizing GNN characteristics. Finally, we conduct quantitative and qualitative experiments on synthetic and real-world datasets to demonstrate the effectiveness of DEGREE on node classification and graph classification tasks. | https://openreview.net/pdf/fd7de8640028480fa9fe56dd9ed7bcad9182bf31.pdf |
Improving Mutual Information Estimation with Annealed and Energy-Based Bounds | https://openreview.net/forum?id=T0B9AoM_bFg | https://openreview.net/forum?id=T0B9AoM_bFg | Rob Brekelmans,Sicong Huang,Marzyeh Ghassemi,Greg Ver Steeg,Roger Baker Grosse,Alireza Makhzani | ICLR 2022,Poster | Mutual information (MI) is a fundamental quantity in information theory and machine learning. However, direct estimation of MI is intractable, even if the true joint probability density for the variables of interest is known, as it involves estimating a potentially high-dimensional log partition function. In this work, we present a unifying view of existing MI bounds from the perspective of importance sampling, and propose three novel bounds based on this approach. Since a tight MI bound without density information requires a sample size exponential in the true MI, we assume either a single marginal or the full joint density information is known. In settings where the full joint density is available, we propose Multi-Sample Annealed Importance Sampling (AIS) bounds on MI, which we demonstrate can tightly estimate large values of MI in our experiments. In settings where only a single marginal distribution is known, we propose Generalized IWAE (GIWAE) and MINE-AIS bounds. Our GIWAE bound unifies variational and contrastive bounds in a single framework that generalizes InfoNCE, IWAE, and Barber-Agakov bounds. Our MINE-AIS method improves upon existing energy-based methods such as MINE-DV and MINE-F by directly optimizing a tighter lower bound on MI. MINE-AIS uses MCMC sampling to estimate gradients for training and Multi-Sample AIS for evaluating the bound. Our methods are particularly suitable for evaluating MI in deep generative models, since explicit forms of the marginal or joint densities are often available. 
We evaluate our bounds on estimating the MI of VAEs and GANs trained on the MNIST and CIFAR datasets, and showcase significant gains over existing bounds in these challenging settings with high ground truth MI. | https://openreview.net/pdf/a68f8e4bbad21f5599f372c94827c5f596c6555b.pdf |
Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods | https://openreview.net/forum?id=bp-LJ4y_XC | https://openreview.net/forum?id=bp-LJ4y_XC | Xueyuan She,Saurabh Dash,Saibal Mukhopadhyay | ICLR 2022,Poster | A dynamical system of spiking neurons with only feedforward connections can classify spatiotemporal patterns without recurrent connections. However, the theoretical construct of a feedforward spiking neural network (SNN) for approximating a temporal sequence remains unclear, making it challenging to optimize SNN architectures for learning complex spatiotemporal patterns. In this work, we establish a theoretical framework to understand and improve sequence approximation using a feedforward SNN. Our framework shows that a feedforward SNN with one neuron per layer and skip-layer connections can approximate the mapping function between any arbitrary pair of input and output spike trains on a compact domain. Moreover, we prove that heterogeneous neurons with varying dynamics and skip-layer connections improve sequence approximation using feedforward SNN. Consequently, we propose SNN architectures incorporating the preceding constructs that are trained using supervised backpropagation-through-time (BPTT) and unsupervised spike-timing-dependent plasticity (STDP) algorithms for classification of spatiotemporal data. A dual-search-space Bayesian optimization method is developed to optimize architecture and parameters of the proposed SNN with heterogeneous neuron dynamics and skip-layer connections. | https://openreview.net/pdf/043f00a3e618d0c71bbd79dffbdfdaf6d9fd4d1b.pdf |
Diverse Client Selection for Federated Learning via Submodular Maximization | https://openreview.net/forum?id=nwKXyFvaUm | https://openreview.net/forum?id=nwKXyFvaUm | Ravikumar Balakrishnan,Tian Li,Tianyi Zhou,Nageen Himayat,Virginia Smith,Jeff Bilmes | ICLR 2022,Poster | In every communication round of federated learning, a random subset of clients communicate their model updates back to the server which then aggregates them all. The optimal size of this subset is not known and several studies have shown that typically random selection does not perform very well in terms of convergence, learning efficiency and fairness. We, in this paper, propose to select a small diverse subset of clients, namely those carrying representative gradient information, and we transmit only these updates to the server. Our aim is for updating via only a subset to approximate updating via aggregating all client information. We achieve this by choosing a subset that maximizes a submodular facility location function defined over gradient space. We introduce “federated averaging with diverse client selection (DivFL)”. We provide a thorough analysis of its convergence in the heterogeneous setting and apply it both to synthetic and to real datasets. Empirical results show several benefits to our approach including improved learning efficiency, faster convergence and also more uniform (i.e., fair) performance across clients. We further show a communication-efficient version of DivFL that can still outperform baselines on the above metrics. | https://openreview.net/pdf/4d539789e55d133a96781cda576be4ab34ec5982.pdf |
From Intervention to Domain Transportation: A Novel Perspective to Optimize Recommendation | https://openreview.net/forum?id=jT1EwXu-4hj | https://openreview.net/forum?id=jT1EwXu-4hj | Da Xu,Yuting Ye,Chuanwei Ruan,Evren Korpeoglu,Sushant Kumar,Kannan Achan | ICLR 2022,Poster | The interventional nature of recommendation has attracted increasing attention in recent years. It particularly motivates researchers to formulate learning and evaluating recommendation as causal inference and data missing-not-at-random problems. However, few take seriously the consequence of violating the critical assumption of overlapping, which we prove can significantly threaten the validity and interpretation of the outcome. We find a critical piece missing in the current understanding of information retrieval (IR) systems: as interventions, recommendation not only affects the already observed data, but it also interferes with the target domain (distribution) of interest. We then rephrase optimizing recommendation as finding an intervention that best transports the patterns it learns from the observed domain to its intervention domain. Towards this end, we use domain transportation to characterize the learning-intervention mechanism of recommendation. We design a principled transportation-constraint risk minimization objective and convert it to a two-player minimax game.
We prove the consistency, generalization, and excessive risk bounds for the proposed objective, and elaborate how they compare to the current results. Finally, we carry out extensive real-data and semi-synthetic experiments to demonstrate the advantage of our approach, and launch online testing with a real-world IR system. | https://openreview.net/pdf/22322b458fd437ff0b3cf13debd29cc381b25ccc.pdf |
Variational Predictive Routing with Nested Subjective Timescales | https://openreview.net/forum?id=JxFgJbZ-wft | https://openreview.net/forum?id=JxFgJbZ-wft | Alexey Zakharov,Qinghai Guo,Zafeirios Fountas | ICLR 2022,Poster | Discovery and learning of an underlying spatiotemporal hierarchy in sequential data is an important topic for machine learning. Despite this, little work has been done to explore hierarchical generative models that can flexibly adapt their layerwise representations in response to datasets with different temporal dynamics. Here, we present Variational Predictive Routing (VPR) – a neural probabilistic inference system that organizes latent representations of video features in a temporal hierarchy, based on their rates of change, thus modeling continuous data as a hierarchical renewal process. By employing an event detection mechanism that relies solely on the system’s latent representations (without the need of a separate model), VPR is able to dynamically adjust its internal state following changes in the observed features, promoting an optimal organisation of representations across the levels of the model’s latent hierarchy. Using several video datasets, we show that VPR is able to detect event boundaries, disentangle spatiotemporal features across its hierarchy, adapt to the dynamics of the data, and produce accurate time-agnostic rollouts of the future. Our approach integrates insights from neuroscience and introduces a framework with high potential for applications in model-based reinforcement learning, where flexible and informative state-space rollouts are of particular interest. | https://openreview.net/pdf/712c74938a55973dd0b3f46e154fc0696194b578.pdf |
Sample and Computation Redistribution for Efficient Face Detection | https://openreview.net/forum?id=RhB1AdoFfGE | https://openreview.net/forum?id=RhB1AdoFfGE | Jia Guo,Jiankang Deng,Alexandros Lattas,Stefanos Zafeiriou | ICLR 2022,Poster | Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these observations, we introduce two simple but effective methods: (1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head of the model; and (2) Sample Redistribution (SR), which augments training samples for the most needed stages. The proposed Sample and Computation Redistribution for Face Detection (SCRFD) is implemented by a random search in a meticulously designed search space. Extensive experiments conducted on WIDER FACE demonstrate the state-of-the-art accuracy-efficiency trade-off for the proposed SCRFD family across a wide range of compute regimes. In particular, SCRFD-34GF outperforms the best competitor, TinaFace, by $4.78\%$ (AP at hard set) while being more than 3$\times$ faster on GPUs with VGA-resolution images. Code is available at: https://github.com/deepinsight/insightface/tree/master/detection/scrfd. | https://openreview.net/pdf/d7b9dd38011f418b1c66bb378aef38a25d8c9bf5.pdf |
Sound Adversarial Audio-Visual Navigation | https://openreview.net/forum?id=NkZq4OEYN- | https://openreview.net/forum?id=NkZq4OEYN- | Yinfeng Yu,Wenbing Huang,Fuchun Sun,Changan Chen,Yikai Wang,Xiaohong Liu | ICLR 2022,Poster | The audio-visual navigation task requires an agent to find a sound source in a realistic, unmapped 3D environment by utilizing egocentric audio-visual observations. Existing audio-visual navigation works assume a clean environment that solely contains the target sound, which, however, would not be suitable in most real-world applications due to unexpected sound noise or intentional interference. In this work, we design an acoustically complex environment in which, besides the target sound, there exists a sound attacker playing a zero-sum game with the agent. More specifically, the attacker can move and change the volume and category of the sound to hinder the agent from finding the sounding object, while the agent tries to dodge the attack and navigate to the goal under the intervention. Under certain constraints on the attacker, we can improve the robustness of the agent towards unexpected sound attacks in audio-visual navigation. For better convergence, we develop a joint training mechanism by employing the property of a centralized critic with decentralized actors. Experiments on two real-world 3D scan datasets, Replica and Matterport3D, verify the effectiveness and the robustness of the agent trained under our designed environment when transferred to the clean environment or the one containing sound attackers with random policy. Project: https://yyf17.github.io/SAAVN . | https://openreview.net/pdf/892cdd541646cc28a0880494951fbd89079c2a3d.pdf |
Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations | https://openreview.net/forum?id=12RoR2o32T | https://openreview.net/forum?id=12RoR2o32T | Aahlad Manas Puli,Lily H Zhang,Eric Karl Oermann,Rajesh Ranganath | ICLR 2022,Poster | In many prediction problems, spurious correlations are induced by a changing relationship between the label and a nuisance variable that is also correlated with the covariates. For example, in classifying animals in natural images, the background, which is a nuisance, can predict the type of animal. This nuisance-label relationship does not always hold, and the performance of a model trained under one such relationship may be poor on data with a different nuisance-label relationship. To build predictive models that perform well regardless of the nuisance-label relationship, we develop Nuisance-Randomized Distillation (NURD). We introduce the nuisance-randomized distribution, a distribution where the nuisance and the label are independent. Under this distribution, we define the set of representations such that conditioning on any member, the nuisance and the label remain independent. We prove that the representations in this set always perform better than chance, while representations outside of this set may not. NURD finds a representation from this set that is most informative of the label under the nuisance-randomized distribution, and we prove that this representation achieves the highest performance regardless of the nuisance-label relationship. We evaluate NURD on several tasks including chest X-ray classification where, using non-lung patches as the nuisance, NURD produces models that predict pneumonia under strong spurious correlations. | https://openreview.net/pdf/7128d52f12e20439db2d07083f3de3995967bb53.pdf |
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis | https://openreview.net/forum?id=OM_lYiHXiCL | https://openreview.net/forum?id=OM_lYiHXiCL | Junfeng Guo,Ang Li,Cong Liu | ICLR 2022,Poster | Deep neural networks (DNNs) have been shown to be vulnerable to backdoor attacks. A backdoor can be embedded in a target DNN by injecting a backdoor trigger into the training examples, causing the target DNN to misclassify any input attached with the trigger. Recent backdoor detection methods often require access to the original poisoned training data, the parameters of the target DNN, or the predictive confidence for each given input, which are impractical in many real-world applications, e.g., on-device deployed DNNs. We address the black-box hard-label backdoor detection problem, where the DNN is fully black-box and only its final output label is accessible. We approach this problem from the optimization perspective and show that the objective of backdoor detection is bounded by an adversarial objective. Further theoretical and empirical studies reveal that this adversarial objective leads to a solution with a highly skewed distribution; a singularity is often observed in the adversarial map of a backdoor-infected example, which we call the adversarial singularity phenomenon. Based on this observation, we propose the adversarial extreme value analysis (AEVA) algorithm to detect backdoors in black-box neural networks. The AEVA algorithm is based on an extreme value analysis of the adversarial map, computed via Monte Carlo gradient estimation due to the black-box hard-label constraint. Evidenced by extensive experiments across three popular tasks and backdoor attacks, our approach is shown to be effective in detecting backdoor attacks under black-box hard-label scenarios. | https://openreview.net/pdf/b8ad85b4ddd615a5abac4d7c1d5713fc92b9f0e9.pdf |
Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum | https://openreview.net/forum?id=5ECQL05ub0J | https://openreview.net/forum?id=5ECQL05ub0J | Kirby Banman,Garnet Liam Peet-Pare,Nidhi Hegde,Alona Fyshe,Martha White | ICLR 2022,Poster | Most convergence guarantees for stochastic gradient descent with momentum (SGDm) rely on iid sampling. Yet, SGDm is often used outside this regime, in settings with temporally correlated input samples such as continual learning and reinforcement learning. Existing work has shown that SGDm with a decaying step-size can converge under Markovian temporal correlation. In this work, we show that SGDm under covariate shift with a fixed step-size can be unstable and diverge. In particular, we show SGDm under covariate shift is a parametric oscillator, and so can suffer from a phenomenon known as resonance. We approximate the learning system as a time varying system of ordinary differential equations, and leverage existing theory to characterize the system's divergence/convergence as resonant/nonresonant modes. The theoretical result is limited to the linear setting with periodic covariate shift, so we empirically supplement this result to show that resonance phenomena persist even under non-periodic covariate shift, nonlinear dynamics with neural networks, and optimizers other than SGDm. | https://openreview.net/pdf/967691b8c1cb517500d87dfd7dbf7dd6293c0e89.pdf |
Top-label calibration and multiclass-to-binary reductions | https://openreview.net/forum?id=WqoBaaPHS- | https://openreview.net/forum?id=WqoBaaPHS- | Chirag Gupta,Aaditya Ramdas | ICLR 2022,Poster | We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label---the top-label---is calibrated, conditioned on the top-label. This conditioning is essential for practical utility of the calibration property, since the top-label is always reported and we must condition on what is reported. However, the popular notion of confidence calibration erroneously skips this conditioning. Furthermore, we outline a multiclass-to-binary (M2B) reduction framework that unifies confidence, top-label, and class-wise calibration, among others. As its name suggests, M2B works by reducing multiclass calibration to different binary calibration problems; various types of multiclass calibration can then be achieved using simple binary calibration routines. We instantiate the M2B framework with the well-studied histogram binning (HB) binary calibrator, and prove that the overall procedure is multiclass calibrated without making any assumptions on the underlying data distribution. In an empirical evaluation with four deep net architectures on CIFAR-10 and CIFAR-100, we find that the M2B + HB procedure achieves lower top-label and class-wise calibration error than other approaches such as temperature scaling. Code for this work is available at https://github.com/aigen/df-posthoc-calibration. | https://openreview.net/pdf/a580ad8d84d1a31adcccb9f9e2102c3b503121df.pdf |
Anisotropic Random Feature Regression in High Dimensions | https://openreview.net/forum?id=JfaWawZ8BmX | https://openreview.net/forum?id=JfaWawZ8BmX | Gabriel Mel,Jeffrey Pennington | ICLR 2022,Poster | In contrast to standard statistical wisdom, modern learning algorithms typically find their best performance in the overparameterized regime in which the model has many more parameters than needed to fit the training data. A growing number of recent works have shown that random feature models can offer a detailed theoretical explanation for this unexpected behavior, but typically these analyses have utilized isotropic distributional assumptions on the underlying data generation process, thereby failing to provide a realistic characterization of real-world models that are designed to identify and harness the structure in natural data. In this work, we examine the high-dimensional asymptotics of random feature regression in the presence of structured data, allowing for arbitrary input correlations and arbitrary alignment between the data and the weights of the target function. We define a partial order on the space of weight-data alignments and prove that generalization performance improves in response to stronger alignment. We also clarify several previous observations in the literature by distinguishing the behavior of the sample-wise and parameter-wise learning curves, finding that sample-wise multiple descent can occur at scales dictated by the eigenstructure of the data covariance, but that parameter-wise multiple descent is limited to double descent, although strong anisotropy can induce additional signatures such as wide plateaus and steep cliffs. Finally, these signatures are related to phase transitions in the spectrum of the feature kernel matrix, and unlike the double descent peak, persist even under optimal regularization. | https://openreview.net/pdf/bc2ddad146bd93609c8510aac28ae824072d1832.pdf |
Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future | https://openreview.net/forum?id=L01Nn_VJ9i | https://openreview.net/forum?id=L01Nn_VJ9i | Harshavardhan Kamarthi,Alexander Rodríguez,B. Aditya Prakash | ICLR 2022,Poster | For real-time forecasting in domains like public health and macroeconomics, data collection is a non-trivial and demanding task. Often after being initially released, it undergoes several revisions later (maybe due to human or technical constraints) - as a result, it may take weeks until the data reaches a stable value. This so-called ‘backfill’ phenomenon and its effect on model performance have been barely addressed in the prior literature. In this paper, we introduce the multi-variate backfill problem using COVID-19 as the motivating example.
We construct a detailed dataset composed of relevant signals over the past year of the pandemic.
We then systematically characterize several patterns in backfill dynamics and leverage our observations to formulate a novel problem and neural framework, Back2Future, that aims to refine a given model's predictions in real-time. Our extensive experiments demonstrate that our method refines the performance of a diverse set of top models for COVID-19 forecasting and GDP growth forecasting. Specifically, we show that Back2Future refined top COVID-19 models by 6.65% to 11.24% and yielded an 18% improvement over non-trivial baselines. In addition, we show that our model improves model evaluation too; hence policy-makers can better understand the true accuracy of forecasting models in real-time. | https://openreview.net/pdf/5ff5a41a0773c6764d009a86a74cce3dd35e8ec3.pdf |
Approximation and Learning with Deep Convolutional Models: a Kernel Perspective | https://openreview.net/forum?id=lrocYB-0ST2 | https://openreview.net/forum?id=lrocYB-0ST2 | Alberto Bietti | ICLR 2022,Poster | The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this through the lens of kernel methods, by considering simple hierarchical kernels with two or three convolution and pooling layers, inspired by convolutional kernel networks. These achieve good empirical performance on standard vision datasets, while providing a precise description of their functional space that yields new insights on their inductive bias. We show that the RKHS consists of additive models of interaction terms between patches, and that its norm encourages spatial similarities between these terms through pooling layers. We then provide generalization bounds which illustrate how pooling and patches yield improved sample complexity guarantees when the target function presents such regularities. | https://openreview.net/pdf/35eeb8c9531f39eb14e07db8fb296d38b7f1a369.pdf |
Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning | https://openreview.net/forum?id=vgqS1vkkCbE | https://openreview.net/forum?id=vgqS1vkkCbE | Dhruv Shah,Peng Xu,Yao Lu,Ted Xiao,Alexander T Toshev,Sergey Levine,brian ichter | ICLR 2022,Poster | Reinforcement learning can train policies that effectively perform complex tasks. However for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and chaining lower-level skills. Hierarchical reinforcement learning aims to enable this by providing a bank of low-level skills as action abstractions. Hierarchies can further improve on this by abstracting the space states as well. We posit that a suitable state abstraction should depend on the capabilities of the available lower-level policies. We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill. These value functions capture the affordances of the scene, thus forming a representation that compactly abstracts task relevant information and robustly ignores distractors. Empirical evaluations for maze-solving and robotic manipulation tasks demonstrate that our approach improves long-horizon performance and enables better zero-shot generalization than alternative model-free and model-based methods. | https://openreview.net/pdf/c49d03d6fc757e37898cc5399159de2e30589146.pdf |
Fast Regression for Structured Inputs | https://openreview.net/forum?id=gNp54NxHUPJ | https://openreview.net/forum?id=gNp54NxHUPJ | Raphael A Meyer,Cameron N Musco,Christopher P Musco,David Woodruff,Samson Zhou | ICLR 2022,Poster | We study the $\ell_p$ regression problem, which requires finding $\mathbf{x}\in\mathbb R^{d}$ that minimizes $\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_p$ for a matrix $\mathbf{A}\in\mathbb R^{n \times d}$ and response vector $\mathbf{b}\in\mathbb R^{n}$. There has been recent interest in developing subsampling methods for this problem that can outperform standard techniques when $n$ is very large. However, all known subsampling approaches have run time that depends exponentially on $p$, typically, $d^{\mathcal{O}(p)}$, which can be prohibitively expensive.
We improve on this work by showing that for a large class of common \emph{structured matrices}, such as combinations of low-rank matrices, sparse matrices, and Vandermonde matrices, there are subsampling based methods for $\ell_p$ regression that depend polynomially on $p$. For example, we give an algorithm for $\ell_p$ regression on Vandermonde matrices that runs in time $\mathcal{O}(n\log^3 n+(dp^2)^{0.5+\omega}\cdot\text{polylog}\,n)$, where $\omega$ is the exponent of matrix multiplication. The polynomial dependence on $p$ crucially allows our algorithms to extend naturally to efficient algorithms for $\ell_\infty$ regression, via approximation of $\ell_\infty$ by $\ell_{\mathcal{O}(\log n)}$. Of practical interest, we also develop a new subsampling algorithm for $\ell_p$ regression for arbitrary matrices, which is simpler than previous approaches for $p \ge 4$. | https://openreview.net/pdf/a76864e8c343a5dcb3414cc8caa6fc2fdd2afc19.pdf |
CrossBeam: Learning to Search in Bottom-Up Program Synthesis | https://openreview.net/forum?id=qhC8mr2LEKq | https://openreview.net/forum?id=qhC8mr2LEKq | Kensen Shi,Hanjun Dai,Kevin Ellis,Charles Sutton | ICLR 2022,Poster | Many approaches to program synthesis perform a search within an enormous space of programs to find one that satisfies a given specification. Prior works have used neural models to guide combinatorial search algorithms, but such approaches still explore a huge portion of the search space and quickly become intractable as the size of the desired program increases. To tame the search space blowup, we propose training a neural model to learn a hands-on search policy for bottom-up synthesis, instead of relying on a combinatorial search algorithm. Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs, taking into account the search history and partial program executions. Motivated by work in structured prediction on learning to search, CrossBeam is trained on-policy using data extracted from its own bottom-up searches on training tasks. We evaluate CrossBeam in two very different domains, string manipulation and logic programming. We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
| https://openreview.net/pdf/d098dde7689c9940303ddd8c11f5f44e8b866692.pdf |
PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning | https://openreview.net/forum?id=M6M8BEmd6dq | https://openreview.net/forum?id=M6M8BEmd6dq | Seng Pei Liew,Tsubasa Takahashi,Michihiko Ueno | ICLR 2022,Poster | We propose a new framework of synthesizing data using deep generative models in a differentially private manner.
Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion, such that training deep generative models is possible without re-using the original data.
Hence, no extra privacy costs or model constraints are incurred, in contrast to popular gradient sanitization approaches, which, among other issues, cause degradation in privacy guarantees as the training iteration increases.
We demonstrate a realization of our framework by making use of the characteristic function and an adversarial re-weighting objective, which are of independent interest as well.
Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy. | https://openreview.net/pdf/3efedef6ce8396ae22861cd7154606c25bd31e95.pdf |
Divisive Feature Normalization Improves Image Recognition Performance in AlexNet | https://openreview.net/forum?id=aOX3a9q3RVV | https://openreview.net/forum?id=aOX3a9q3RVV | Michelle Miller,SueYeon Chung,Kenneth D. Miller | ICLR 2022,Poster | Local divisive normalization provides a phenomenological description of many nonlinear response properties of neurons across visual cortical areas. To gain insight into the utility of this operation, we studied the effects on AlexNet of a local divisive normalization between features, with learned parameters. Developing features were arranged in a line topology, with the influence between features determined by an exponential function of the distance between them. We compared an AlexNet model with no normalization or with canonical normalizations (Batch, Group, Layer) to the same models with divisive normalization added. Divisive normalization always improved performance for models with batch or group or no normalization, generally by 1-2 percentage points, on both the CIFAR-100 and ImageNet databases. To gain insight into mechanisms underlying the improved performance, we examined several aspects of network representations. In the early layers both canonical and divisive normalizations reduced manifold capacities and increased average dimension of the individual categorical manifolds. In later layers the capacity was higher and manifold dimension lower for models roughly in order of their performance improvement. Examining the sparsity of activations across a given layer, divisive normalization layers increased sparsity, while the canonical normalization layers decreased it. Nonetheless, in the final layer, the sparseness of activity increased in the order of no normalization, divisive, combined, and canonical. We also investigated how the receptive fields (RFs) in the first convolutional layer (where RFs are most interpretable) change with normalization.
Divisive normalization enhanced RF Fourier power at low wavelengths, while divisive+canonical enhanced power at mid (batch, group) or low (layer) wavelengths, compared to canonical alone or no normalization. In conclusion, divisive normalization enhances image recognition performance, most strongly when combined with canonical normalization, and in doing so it reduces manifold capacity and sparsity in early layers while increasing them in final layers, and increases low- or mid-wavelength power in the first-layer receptive fields. | https://openreview.net/pdf/452011d69839dd4fa39ba4bec882b24cb5bb2649.pdf |
Evaluating Distributional Distortion in Neural Language Modeling | https://openreview.net/forum?id=bTteFbU99ye | https://openreview.net/forum?id=bTteFbU99ye | Benjamin LeBrun,Alessandro Sordoni,Timothy J. O'Donnell | ICLR 2022,Poster | A fundamental characteristic of natural language is the high rate at which speakers produce novel expressions. Because of this novelty, a heavy-tail of rare events accounts for a significant amount of the total probability mass of distributions in language (Baayen, 2001). Standard language modeling metrics such as perplexity quantify the performance of language models (LM) in aggregate. As a result, we have relatively little understanding of whether neural LMs accurately estimate the probability of sequences in this heavy-tail of rare events. To address this gap, we develop a controlled evaluation scheme which uses generative models trained on natural data as artificial languages from which we can exactly compute sequence probabilities. Training LMs on generations from these artificial languages, we compare the sequence-level probability estimates given by LMs to the true probabilities in the target language. Our experiments reveal that LSTM and Transformer language models (i) systematically underestimate the probability of sequences drawn from the target language, and (ii) do so more severely for less-probable sequences. Investigating where this probability mass went, (iii) we find that LMs tend to overestimate the probability of ill formed (perturbed) sequences. In addition, we find that this underestimation behaviour (iv) is weakened, but not eliminated by greater amounts of training data, and (v) is exacerbated for target distributions with lower entropy. | https://openreview.net/pdf/c22ea9d1df97b96c390eb350b4c09eb8e2388128.pdf |
MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining | https://openreview.net/forum?id=r5qumLiYwf9 | https://openreview.net/forum?id=r5qumLiYwf9 | Ahmed Imtiaz Humayun,Randall Balestriero,Richard Baraniuk | ICLR 2022,Poster | Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold, and data distribution on that manifold. However, training samples are often obtained based on preferences, costs, or convenience, producing artifacts in the empirical data distribution (e.g. the large fraction of smiling faces in the CelebA dataset or the large fraction of dark-haired individuals in FFHQ). {\em These inconsistencies will be reproduced when sampling from the trained DGN, which has far-reaching potential implications for fairness, data augmentation, anomaly detection, domain adaptation, and beyond.} In response, we develop a differential geometry based sampler -coined MaGNET- that, given any trained DGN, produces samples that are uniformly distributed on the learned manifold. We prove theoretically and empirically that our technique produces a uniform distribution on the manifold regardless of the training set distribution. We perform a range of experiments on various datasets and DGNs. One of them considers the state-of-the-art StyleGAN2 trained on FFHQ dataset, where uniform sampling via MaGNET increases distribution precision \& recall by 4.12\% \& 3.01\% and decreases gender bias by 41.2\%, without requiring labels or retraining. | https://openreview.net/pdf/e9c0ccdf7ecc11a5666ac100d75f89816ce7c0f7.pdf |
Neural Contextual Bandits with Deep Representation and Shallow Exploration | https://openreview.net/forum?id=xnYACQquaGV | https://openreview.net/forum?id=xnYACQquaGV | Pan Xu,Zheng Wen,Handong Zhao,Quanquan Gu | ICLR 2022,Poster | We study neural contextual bandits, a general class of contextual bandits, where each context-action pair is associated with a raw feature vector, but the specific reward generating function is unknown. We propose a novel learning algorithm that transforms the raw feature vector using the last hidden layer of a deep ReLU neural network (deep representation learning), and uses an upper confidence bound (UCB) approach to explore in the last linear layer (shallow exploration). We prove that under standard assumptions, our proposed algorithm achieves $\tilde{O}(\sqrt{T})$ finite-time regret, where $T$ is the learning time horizon. Compared with existing neural contextual bandit algorithms, our approach is computationally much more efficient since it only needs to explore in the last layer of the deep neural network. | https://openreview.net/pdf/c6ee94e7fd22670895280aaf06535b6373d428eb.pdf |
PI3NN: Out-of-distribution-aware Prediction Intervals from Three Neural Networks | https://openreview.net/forum?id=NoB8YgRuoFU | https://openreview.net/forum?id=NoB8YgRuoFU | Siyan Liu,Pei Zhang,Dan Lu,Guannan Zhang | ICLR 2022,Poster | We propose a novel prediction interval (PI) method for uncertainty quantification, which addresses three major issues with the state-of-the-art PI methods. First, existing PI methods require retraining of neural networks (NNs) for every given confidence level and suffer from the crossing issue in calculating multiple PIs. Second, they usually rely on customized loss functions with extra sensitive hyperparameters for which fine tuning is required to achieve a well-calibrated PI. Third, they usually underestimate uncertainties of out-of-distribution (OOD) samples leading to over-confident PIs. Our PI3NN method calculates PIs from linear combinations of three NNs, each of which is independently trained using the standard mean squared error loss. The coefficients of the linear combinations are computed using root-finding algorithms to ensure tight PIs for a given confidence level. We theoretically prove that PI3NN can calculate PIs for a series of confidence levels without retraining NNs and it completely avoids the crossing issue. Additionally, PI3NN does not introduce any unusual hyperparameters resulting in a stable performance. Furthermore, we address OOD identification challenge by introducing an initialization scheme which provides reasonably larger PIs of the OOD samples than those of the in-distribution samples. Benchmark and real-world experiments show that our method outperforms several state-of-the-art approaches with respect to predictive uncertainty quality, robustness, and OOD samples identification. | https://openreview.net/pdf/84a3741f26e65df3c7b232779bcfb5dac283d41e.pdf |
Discriminative Similarity for Data Clustering | https://openreview.net/forum?id=kj0_45Y4r9i | https://openreview.net/forum?id=kj0_45Y4r9i | Yingzhen Yang,Ping Li | ICLR 2022,Poster | Similarity-based clustering methods separate data into clusters according to the pairwise similarity between the data, and the pairwise similarity is crucial for their performance. In this paper, we propose {\em Clustering by Discriminative Similarity (CDS)}, a novel method which learns discriminative similarity for data clustering. CDS learns an unsupervised similarity-based classifier from each data partition, and searches for the optimal partition of the data by minimizing the generalization error of the learnt classifiers associated with the data partitions. By generalization analysis via Rademacher complexity, the generalization error bound for the unsupervised similarity-based classifier is expressed as the sum of discriminative similarity between the data from different classes. It is proved that the derived discriminative similarity can also be induced by the integrated squared error bound for kernel density classification. In order to evaluate the performance of the proposed discriminative similarity, we propose a new clustering method using a kernel as the similarity function, CDS via unsupervised kernel classification (CDSK), with its effectiveness demonstrated by experimental results. | https://openreview.net/pdf/b159fb24355dd1bf64f74a757973bbc8cc96d57e.pdf |
It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation | https://openreview.net/forum?id=q4tZR1Y-UIs | https://openreview.net/forum?id=q4tZR1Y-UIs | Yuqing Du,Pieter Abbeel,Aditya Grover | ICLR 2022,Poster | We are interested in training general-purpose reinforcement learning agents that can solve a wide variety of goals. Training such agents efficiently requires automatic generation of a goal curriculum. This is challenging as it requires (a) exploring goals of increasing difficulty, while ensuring that the agent (b) is exposed to a diverse set of goals in a sample efficient manner and (c) does not catastrophically forget previously solved goals. We propose Curriculum Self Play (CuSP), an automated goal generation framework that seeks to satisfy these desiderata by virtue of a multi-player game with 4 agents. We extend the asymmetric curricula learning in PAIRED (Dennis et al., 2020) to a symmetrized game that carefully balances cooperation and competition between two off-policy student learners and two regret-maximizing teachers. CuSP additionally introduces entropic goal coverage and accounts for the non-stationary nature of the students, allowing us to automatically induce a curriculum that balances progressive exploration with anti-catastrophic exploitation. We demonstrate that our method succeeds at generating an effective curricula of goals for a range of control tasks, outperforming other methods at zero-shot test-time generalization to novel out-of-distribution goals. | https://openreview.net/pdf/68a6237e79699c723ce9c9c39537422391df3e2b.pdf |
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing | https://openreview.net/forum?id=HOjLHrlZhmx | https://openreview.net/forum?id=HOjLHrlZhmx | Fan Wu,Linyi Li,Zijian Huang,Yevgeniy Vorobeychik,Ding Zhao,Bo Li | ICLR 2022,Poster | As reinforcement learning (RL) has achieved great success and been even adopted in safety-critical domains such as autonomous vehicles, a range of empirical studies have been conducted to improve its robustness against adversarial attacks. However, how to certify its robustness with theoretical guarantees still remains challenging. In this paper, we present the first unified framework CROP (Certifying Robust Policies for RL) to provide robustness certification on both action and reward levels. In particular, we propose two robustness certification criteria: robustness of per-state actions and lower bound of cumulative rewards. We then develop a local smoothing algorithm for policies derived from Q-functions to guarantee the robustness of actions taken along the trajectory; we also develop a global smoothing algorithm for certifying the lower bound of a finite-horizon cumulative reward, as well as a novel local smoothing algorithm to perform adaptive search in order to obtain tighter reward certification. Empirically, we apply CROP to evaluate several existing empirically robust RL algorithms, including adversarial training and different robust regularization, in four environments (two representative Atari games, Highway, and CartPole). Furthermore, by evaluating these algorithms against adversarial attacks, we demonstrate that our certifications are often tight. All experiment results are available at website https://crop-leaderboard.github.io. | https://openreview.net/pdf/b79f87ced196c2a5a13ca10bae3d39a8924b08b8.pdf |
Neural Link Prediction with Walk Pooling | https://openreview.net/forum?id=CCu6RcUMwK0 | https://openreview.net/forum?id=CCu6RcUMwK0 | Liming Pan,Cheng Shi,Ivan Dokmanić | ICLR 2022,Poster | Graph neural networks achieve high accuracy in link prediction by jointly leveraging graph topology and node attributes. Topology, however, is represented indirectly; state-of-the-art methods based on subgraph classification label nodes with distance to the target link, so that, although topological information is present, it is tempered by pooling. This makes it challenging to leverage features like loops and motifs associated with network formation mechanisms. We propose a link prediction algorithm based on a new pooling scheme called WalkPool. WalkPool combines the expressivity of topological heuristics with the feature-learning ability of neural networks. It summarizes a putative link by random walk probabilities of adjacent paths. Instead of extracting transition probabilities from the original graph, it computes the transition matrix of a ``predictive'' latent graph by applying attention to learned features; this may be interpreted as feature-sensitive topology fingerprinting. WalkPool can leverage unsupervised node features or be combined with GNNs and trained end-to-end. It outperforms state-of-the-art methods on all common link prediction benchmarks, both homophilic and heterophilic, with and without node attributes. Applying WalkPool to a set of unsupervised GNNs significantly improves prediction accuracy, suggesting that it may be used as a general-purpose graph pooling scheme. | https://openreview.net/pdf/ad031c5e836c55357e2f13cdb18fa502a7eecc80.pdf |
On the Convergence of Certified Robust Training with Interval Bound Propagation | https://openreview.net/forum?id=YeShU5mLfLt | https://openreview.net/forum?id=YeShU5mLfLt | Yihan Wang,Zhouxing Shi,Quanquan Gu,Cho-Jui Hsieh | ICLR 2022,Poster | Interval Bound Propagation (IBP) is so far the base of state-of-the-art methods for training neural networks with certifiable robustness guarantees when potential adversarial perturbations present, while the convergence of IBP training remains unknown in existing literature. In this paper, we present a theoretical analysis on the convergence of IBP training. With an overparameterized assumption, we analyze the convergence of IBP robust training. We show that when using IBP training to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent can linearly converge to zero robust training error with a high probability if we have sufficiently small perturbation radius and large network width. | https://openreview.net/pdf/4e7f7f34a6f11b062e283b3a04324bb373e39067.pdf |
Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators | https://openreview.net/forum?id=sX3XaHwotOg | https://openreview.net/forum?id=sX3XaHwotOg | Yu Meng,Chenyan Xiong,Payal Bajaj,saurabh tiwary,Paul N. Bennett,Jiawei Han,Xia Song | ICLR 2022,Poster | We present a new framework AMOS that pretrains text encoders with an Adversarial learning curriculum via a Mixture Of Signals from multiple auxiliary generators. Following ELECTRA-style pretraining, the main encoder is trained as a discriminator to detect replaced tokens generated by auxiliary masked language models (MLMs). Different from ELECTRA which trains one MLM as the generator, we jointly train multiple MLMs of different sizes to provide training signals at various levels of difficulty. To push the discriminator to learn better with challenging replaced tokens, we learn mixture weights over the auxiliary MLMs' outputs to maximize the discriminator loss by backpropagating the gradient from the discriminator via Gumbel-Softmax. For better pretraining efficiency, we propose a way to assemble multiple MLMs into one unified auxiliary model. AMOS outperforms ELECTRA and recent state-of-the-art pretrained models by about 1 point on the GLUE benchmark for BERT base-sized models. | https://openreview.net/pdf/4127a755f1e5ee998e6423f7a8d734f9e88b8cab.pdf |
Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations | https://openreview.net/forum?id=0jP2n0YFmKG | https://openreview.net/forum?id=0jP2n0YFmKG | Anuroop Sriram,Abhishek Das,Brandon M Wood,Siddharth Goyal,C. Lawrence Zitnick | ICLR 2022,Poster | Recent progress in Graph Neural Networks (GNNs) for modeling atomic simulations has the potential to revolutionize catalyst discovery, which is a key step in making progress towards the energy breakthroughs needed to combat climate change. However, the GNNs that have proven most effective for this task are memory intensive as they model higher-order interactions in the graphs such as those between triplets or quadruplets of atoms, making it challenging to scale these models. In this paper, we introduce Graph Parallelism, a method to distribute input graphs across multiple GPUs, enabling us to train very large GNNs with hundreds of millions or billions of parameters. We empirically evaluate our method by scaling up the recently proposed DimeNet++ and GemNet models by over an order of magnitude in the number of parameters. On the large-scale Open Catalyst 2020 (OC20) dataset, these graph-parallelized models lead to relative improvements of 1) 15% on the force MAE metric on the S2EF task and 2) 21% on the AFbT metric on the IS2RS task, establishing new state-of-the-art results. | https://openreview.net/pdf/d00345679f2290baeabb225428516fad14fea79e.pdf |
Understanding and Leveraging Overparameterization in Recursive Value Estimation | https://openreview.net/forum?id=shbAgEsk3qM | https://openreview.net/forum?id=shbAgEsk3qM | Chenjun Xiao,Bo Dai,Jincheng Mei,Oscar A Ramirez,Ramki Gummadi,Chris Harris,Dale Schuurmans | ICLR 2022,Poster | The theory of function approximation in reinforcement learning (RL) typically considers low capacity representations that incur a tradeoff between approximation error, stability and generalization. Current deep architectures, however, operate in an overparameterized regime where approximation error is not necessarily a bottleneck. To better understand the utility of deep models in RL we present an analysis of recursive value estimation using \emph{overparameterized} linear representations that provides useful, transferable findings. First, we show that classical updates such as temporal difference (TD) learning or fitted-value-iteration (FVI) converge to \emph{different} fixed points than residual minimization (RM) in the overparameterized linear case. We then develop a unified interpretation of overparameterized linear value estimation as minimizing the Euclidean norm of the weights subject to alternative constraints. A practical consequence is that RM can be modified by a simple alteration of the backup targets to obtain the same fixed points as FVI and TD (when they converge), while universally ensuring stability. Further, we provide an analysis of the generalization error of these methods, demonstrating per iterate bounds on the value prediction error of FVI, and fixed point bounds for TD and RM.
Given this understanding, we then develop new algorithmic tools for improving recursive value estimation with deep models.
In particular, we extract two regularizers that penalize out-of-span top-layer weights and co-linearity in top-layer features respectively. Empirically we find that these regularizers dramatically improve the stability of TD and FVI, while allowing RM to match and even sometimes surpass their generalization performance with assured stability. | https://openreview.net/pdf/c5131ad5930c1a9f32ede673f284175158a75792.pdf |
Optimization and Adaptive Generalization of Three layer Neural Networks | https://openreview.net/forum?id=dPyRNUlttBv | https://openreview.net/forum?id=dPyRNUlttBv | Khashayar Gatmiry,Stefanie Jegelka,Jonathan Kelner | ICLR 2022,Poster | While there has been substantial recent work studying generalization of neural networks,
the ability of deep nets in automating the process of feature extraction still evades a thorough mathematical understanding.
As a step toward this goal, we analyze learning and generalization of a three-layer neural network with ReLU activations in a regime that goes beyond the linear approximation of the network, and is hence not captured by the common Neural Tangent Kernel. We show that despite nonconvexity of the empirical loss, a variant of SGD converges in polynomially many iterations to a good solution that generalizes. In particular, our generalization bounds are adaptive: they automatically optimize over a family of kernels that includes the Neural Tangent Kernel, to provide the tightest bound. | https://openreview.net/pdf/086ce10c9607a92d59635b0ac0f1f0bd8c86ae5b.pdf |
Non-Parallel Text Style Transfer with Self-Parallel Supervision | https://openreview.net/forum?id=-TSe5o7STVR | https://openreview.net/forum?id=-TSe5o7STVR | Ruibo Liu,Chongyang Gao,Chenyan Jia,Guangxuan Xu,Soroush Vosoughi | ICLR 2022,Poster | The performance of existing text style transfer models is severely limited by the non-parallel datasets on which the models are trained. In non-parallel datasets, no direct mapping exists between sentences of the source and target style; the style transfer models thus only receive weak supervision of the target sentences during training, which often leads the model to discard too much style-independent information, or utterly fail to transfer the style.
In this work, we propose LaMer, a novel text style transfer framework based on large-scale language models. LaMer first mines the roughly parallel expressions in the non-parallel datasets with scene graphs, and then employs MLE training, followed by imitation learning refinement, to leverage the intrinsic parallelism within the data. On two benchmark tasks (sentiment & formality transfer) and a newly proposed challenging task (political stance transfer), our model achieves qualitative advances in transfer accuracy, content preservation, and fluency. Further empirical and human evaluations demonstrate that our model not only makes training more efficient, but also generates more readable and diverse expressions than previous models. | https://openreview.net/pdf/7858e341aa92c11991455a43e9a78c35ee4655a2.pdf |
Can an Image Classifier Suffice For Action Recognition? | https://openreview.net/forum?id=qhkFX-HLuHV | https://openreview.net/forum?id=qhkFX-HLuHV | Quanfu Fan,Chun-Fu Chen,Rameswar Panda | ICLR 2022,Poster | We explore a new perspective on video understanding by casting the video recognition problem as an image recognition task. Our approach rearranges input video frames into super images, which allow for training an image classifier directly to fulfill the task of action recognition, in exactly the same way as image classification. With such a simple idea, we show that transformer-based image classifiers alone can suffice for action recognition. In particular, our approach demonstrates strong and promising performance against SOTA methods on several public datasets including Kinetics400, Moments In Time, Something-Something V2 (SSV2), Jester and Diving48. We also experiment with the prevalent ResNet image classifiers in computer vision to further validate our idea. The results on both Kinetics400 and SSV2 are comparable to some of the best-performed CNN approaches based on spatio-temporal modeling. Our source codes and models are available at \url{https://github.com/IBM/sifar-pytorch}. | https://openreview.net/pdf/30716aa30d9fbd5e0f9a95e4c0e1255607ab8bc4.pdf |
Interacting Contour Stochastic Gradient Langevin Dynamics | https://openreview.net/forum?id=IK9ap6nxXr2 | https://openreview.net/forum?id=IK9ap6nxXr2 | Wei Deng,Siqi Liang,Botao Hao,Guang Lin,Faming Liang | ICLR 2022,Poster | We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions. We show that ICSGLD can be theoretically more efficient than a single-chain CSGLD with an equivalent computational budget. We also present a novel random-field function, which facilitates the estimation of self-adapting parameters in big data and obtains free mode explorations. Empirically, we compare the proposed algorithm with popular benchmark methods for posterior sampling. The numerical results show a great potential of ICSGLD for large-scale uncertainty estimation tasks. | https://openreview.net/pdf/bf454b672f7afe0c72e3a83029c7238309a1b4a0.pdf |
NeuPL: Neural Population Learning | https://openreview.net/forum?id=MIX3fJkl_1 | https://openreview.net/forum?id=MIX3fJkl_1 | Siqi Liu,Luke Marris,Daniel Hennes,Josh Merel,Nicolas Heess,Thore Graepel | ICLR 2022,Poster | Learning in strategy games (e.g. StarCraft, poker) requires the discovery of diverse policies. This is often achieved by iteratively training new policies against existing ones, growing a policy population that is robust to exploit. This iterative approach suffers from two issues in real-world games: a) under finite budget, approximate best-response operators at each iteration needs truncating, resulting in under-trained good-responses populating the population; b) repeated learning of basic skills at each iteration is wasteful and becomes intractable in the presence of increasingly strong opponents. In this work, we propose Neural Population Learning (NeuPL) as a solution to both issues. NeuPL offers convergence guarantees to a population of best-responses under mild assumptions. By representing a population of policies within a single conditional model, NeuPL enables transfer learning across policies. Empirically, we show the generality, improved performance and efficiency of NeuPL across several test domains. Most interestingly, we show that novel strategies become more accessible, not less, as the neural population expands. | https://openreview.net/pdf/eeeb391c4885267d9c80ba3a8ea3dfd9e9ea8832.pdf |
DeSKO: Stability-Assured Robust Control with a Deep Stochastic Koopman Operator | https://openreview.net/forum?id=hniLRD_XCA | https://openreview.net/forum?id=hniLRD_XCA | Minghao Han,Jacob Euler-Rolle,Robert K. Katzschmann | ICLR 2022,Poster | The Koopman operator theory linearly describes nonlinear dynamical systems in a high-dimensional functional space and it allows to apply linear control methods to highly nonlinear systems. However, the Koopman operator does not account for any uncertainty in dynamical systems, causing it to perform poorly in real-world applications.
Therefore, we propose a deep stochastic Koopman operator (DeSKO) model in a robust learning control framework to guarantee stability of nonlinear stochastic systems. The DeSKO model captures a dynamical system's uncertainty by inferring a distribution of observables. We use the inferred distribution to design a robust, stabilizing closed-loop controller for a dynamical system. Modeling and control experiments on several advanced control benchmarks show that our framework is more robust and scalable than state-of-the-art deep Koopman operators and reinforcement learning methods. Tested control benchmarks include a soft robotic arm, a legged robot, and a biological gene regulatory network. We also demonstrate that this robust control method resists previously unseen uncertainties, such as external disturbances, with a magnitude of up to five times the maximum control input. Our approach opens up new possibilities in learning control for high-dimensional nonlinear systems while robustly managing internal or external uncertainty. | https://openreview.net/pdf/862602026e43c103de39be4295ff8f7288f3acf2.pdf |
Neural Network Approximation based on Hausdorff distance of Tropical Zonotopes | https://openreview.net/forum?id=oiZJwC_fyS | https://openreview.net/forum?id=oiZJwC_fyS | Panagiotis Misiakos,Georgios Smyrnis,George Retsinas,Petros Maragos | ICLR 2022,Poster | In this work we theoretically contribute to neural network approximation by providing a novel tropical geometrical viewpoint to structured neural network compression. In particular, we show that the approximation error between two neural networks with ReLU activations and one hidden layer depends on the Hausdorff distance of the tropical zonotopes of the networks. This theorem comes as a first step towards a purely geometrical interpretation of neural network approximation. Based on this theoretical contribution, we propose geometrical methods that employ the K-means algorithm to compress the fully connected parts of ReLU activated deep neural networks. We analyze the error bounds of our algorithms theoretically based on our approximation theorem and evaluate them empirically on neural network compression. Our experiments follow a proof-of-concept strategy and indicate that our geometrical tools achieve improved performance over relevant tropical geometry techniques and can be competitive against non-tropical methods. | https://openreview.net/pdf/e09efd74b974abec052126ca4cbb787b04fd3265.pdf |
ICLR 2022 (International Conference on Learning Representations) Accepted Paper Meta Info Dataset
This dataset was collected from the ICLR 2022 OpenReview website (https://openreview.net/group?id=ICLR.cc/2022/Conference#tab-accept-oral) as well as the arXiv listings on the DeepNLP paper arxiv page (http://www.deepnlp.org/content/paper/iclr2022). Researchers interested in analyzing ICLR 2022 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper accepted at the ICLR 2022 conference. To explore more AI & robotics papers (NeurIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to navigate the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine to find deployed AI apps and agents (http://www.deepnlp.org/search/agent) in your domain.
Meta Information of JSON File
{
"title": "Domino: Discovering Systematic Errors with Cross-Modal Embeddings",
"url": "https://openreview.net/forum?id=FPCMqjI0jXN",
"detail_url": "https://openreview.net/forum?id=FPCMqjI0jXN",
"authors": "Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,Jean-Benoit Delbrouck,Christopher Lee-Messer,Jared Dunnmon,James Zou,Christopher Re",
"tags": "ICLR 2022,Oral",
"abstract": "Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. In order to address this issue, recent studies have proposed automated slice discovery methods (SDMs), which leverage learned model representations to mine input data for slices on which a model performs poorly. To be useful to a practitioner, these methods must identify slices that are both underperforming and coherent (i.e. united by a human-understandable concept). However, no quantitative evaluation framework currently exists for rigorously assessing SDMs with respect to these criteria. Additionally, prior qualitative evaluations have shown that SDMs often identify slices that are incoherent. In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data).\nThen, motivated by the recent development of powerful cross-modal representation learning approaches, we present Domino, an SDM that leverages cross-modal embeddings and a novel error-aware mixture model to discover and describe coherent slices. We find that Domino accurately identifies 36% of the 1,235 slices in our framework -- a 12 percentage point improvement over prior methods. Further, Domino is the first SDM that can provide natural language descriptions of identified slices, correctly generating the exact name of the slice in 35% of settings. ",
"pdf": "https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf"
}
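As a minimal sketch of working with a record in this schema (assuming the cleaned-up files contain objects with exactly the fields shown above), the standard `json` module is enough; note that `authors` and `tags` are comma-separated strings rather than lists, so they need splitting before analysis:

```python
import json

# A record following the dataset card's schema; the abstract is shortened
# here for brevity.
record_json = """{
  "title": "Domino: Discovering Systematic Errors with Cross-Modal Embeddings",
  "url": "https://openreview.net/forum?id=FPCMqjI0jXN",
  "detail_url": "https://openreview.net/forum?id=FPCMqjI0jXN",
  "authors": "Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,Jean-Benoit Delbrouck,Christopher Lee-Messer,Jared Dunnmon,James Zou,Christopher Re",
  "tags": "ICLR 2022,Oral",
  "abstract": "Machine learning models that achieve high overall accuracy often make systematic errors...",
  "pdf": "https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf"
}"""

paper = json.loads(record_json)

# "authors" and "tags" are flat comma-separated strings in this dataset.
authors = [a.strip() for a in paper["authors"].split(",")]
venue, presentation = [t.strip() for t in paper["tags"].split(",")]

print(len(authors))   # number of authors on the paper
print(venue)          # "ICLR 2022"
print(presentation)   # "Oral" or "Poster"
```

From here, aggregating over all rows (e.g. counting Oral vs. Poster papers, or author frequencies) is a straightforward loop over the parsed records.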
Related
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex
AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Question
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews