{"title": "Domino: Discovering Systematic Errors with Cross-Modal Embeddings", "url": "https://openreview.net/forum?id=FPCMqjI0jXN", "detail_url": "https://openreview.net/forum?id=FPCMqjI0jXN", "authors": "Sabri Eyuboglu,Maya Varma,Khaled Kamal Saab,Jean-Benoit Delbrouck,Christopher Lee-Messer,Jared Dunnmon,James Zou,Christopher Re", "tags": "ICLR 2022,Oral", "abstract": "Machine learning models that achieve high overall accuracy often make systematic errors on important subsets (or slices) of data. Identifying underperforming slices is particularly challenging when working with high-dimensional inputs (e.g. images, audio), where important slices are often unlabeled. In order to address this issue, recent studies have proposed automated slice discovery methods (SDMs), which leverage learned model representations to mine input data for slices on which a model performs poorly. To be useful to a practitioner, these methods must identify slices that are both underperforming and coherent (i.e. united by a human-understandable concept). However, no quantitative evaluation framework currently exists for rigorously assessing SDMs with respect to these criteria. Additionally, prior qualitative evaluations have shown that SDMs often identify slices that are incoherent. In this work, we address these challenges by first designing a principled evaluation framework that enables a quantitative comparison of SDMs across 1,235 slice discovery settings in three input domains (natural images, medical images, and time-series data).\nThen, motivated by the recent development of powerful cross-modal representation learning approaches, we present Domino, an SDM that leverages cross-modal embeddings and a novel error-aware mixture model to discover and describe coherent slices. We find that Domino accurately identifies 36% of the 1,235 slices in our framework -- a 12 percentage point improvement over prior methods. Further, Domino is the first SDM that can provide natural language descriptions of identified slices, correctly generating the exact name of the slice in 35% of settings. ", "pdf": "https://openreview.net/pdf/a5ca838a35d810400cfa090453cd85abe02ab6b0.pdf"} {"title": "Natural Language Descriptions of Deep Visual Features", "url": "https://openreview.net/forum?id=NudBMY-tzDr", "detail_url": "https://openreview.net/forum?id=NudBMY-tzDr", "authors": "Evan Hernandez,Sarah Schwettmann,David Bau,Teona Bagashvili,Antonio Torralba,Jacob Andreas", "tags": "ICLR 2022,Oral", "abstract": "Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope, labeling only a small subset of neurons and behaviors in any network. Is a richer characterization of neuron-level computation possible? We introduce a procedure (called MILAN, for mutual information-guided linguistic annotation of neurons) that automatically labels neurons with open-ended, compositional, natural language descriptions. Given a neuron, MILAN generates a description by searching for a natural language string that maximizes pointwise mutual information with the image regions in which the neuron is active. MILAN produces fine-grained descriptions that capture categorical, relational, and logical structure in learned features. 
These descriptions obtain high agreement with human-generated feature descriptions across a diverse set of model architectures and tasks, and can aid in understanding and controlling learned models. We highlight three applications of natural language neuron descriptions. First, we use MILAN for analysis, characterizing the distribution and importance of neurons selective for attribute, category, and relational information in vision models. Second, we use MILAN for auditing, surfacing neurons sensitive to human faces in datasets designed to obscure them. Finally, we use MILAN for editing, improving robustness in an image classifier by deleting neurons sensitive to text features spuriously correlated with class labels.", "pdf": "https://openreview.net/pdf/842234024e58a8d5073a88b3c04282011b8e20a7.pdf"} {"title": "Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization", "url": "https://openreview.net/forum?id=tYRrOdSnVUy", "detail_url": "https://openreview.net/forum?id=tYRrOdSnVUy", "authors": "Lixu Wang,Shichao Xu,Ruiqi Xu,Xiao Wang,Qi Zhu", "tags": "ICLR 2022,Oral", "abstract": "As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel approach that captures the exclusive data representation in the learned model and restricts the model generalization ability to certain domains. This approach provides effective solutions to both model verification and authorization. Specifically: 1) For ownership verification, watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods. By comparison, our NTL-based ownership verification provides robust resistance to state-of-the-art watermark removal methods, as shown in extensive experiments with 6 removal approaches over the digits, CIFAR10 & STL10, and VisDA datasets. 2) For usage authorization, prior solutions focus on authorizing specific users to access the model, but authorized users can still apply the model to any data without restriction. Our NTL-based authorization approach instead provides data-centric protection, which we call applicability authorization, by significantly degrading the performance of the model on unauthorized data. Its effectiveness is also shown through experiments on aforementioned datasets. ", "pdf": "https://openreview.net/pdf/cc0b829e495ebd36c4e0dcce6f5d044ad4dce58d.pdf"} {"title": "Neural Structured Prediction for Inductive Node Classification", "url": "https://openreview.net/forum?id=YWNAX0caEjI", "detail_url": "https://openreview.net/forum?id=YWNAX0caEjI", "authors": "Meng Qu,Huiyu Cai,Jian Tang", "tags": "ICLR 2022,Oral", "abstract": "This paper studies node classification in the inductive setting, i.e., aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs. This problem has been extensively studied with graph neural networks (GNNs) by learning effective node representations, as well as traditional structured prediction methods for modeling the structured output of node labels, e.g., conditional random fields (CRFs). In this paper, we present a new approach called the Structured Proxy Network (SPN), which combines the advantages of both worlds. SPN defines flexible potential functions of CRFs with GNNs. 
However, learning such a model is nontrivial as it involves optimizing a maximin game with high-cost inference. Inspired by the underlying connection between joint and marginal distributions defined by Markov networks, we propose to solve an approximate version of the optimization problem as a proxy, which yields a near-optimal solution, making learning more efficient. Extensive experiments on two settings show that our approach outperforms many competitive baselines.", "pdf": "https://openreview.net/pdf/df1b628202430dff01a7eeed5b5e5a2e703d1bad.pdf"} {"title": "A New Perspective on \"How Graph Neural Networks Go Beyond Weisfeiler-Lehman?\"", "url": "https://openreview.net/forum?id=uxgg9o7bI_3", "detail_url": "https://openreview.net/forum?id=uxgg9o7bI_3", "authors": "Asiri Wijesinghe,Qing Wang", "tags": "ICLR 2022,Oral", "abstract": "We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, this enables a general solution to inject structural properties of graphs into a message-passing aggregation scheme of GNNs. As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. Then, we theoretically characterize how message-passing GNNs can be designed to be more expressive than the Weisfeiler Lehman test. To elaborate this characterization, we propose a novel neural model, called GraphSNN, and prove that this model is strictly more expressive than the Weisfeiler Lehman test in distinguishing graph structures. We empirically verify the strength of our model on different graph learning tasks. It is shown that our model consistently improves the state-of-the-art methods on the benchmark tasks without sacrificing computational simplicity and efficiency.", "pdf": "https://openreview.net/pdf/376e7da3d7f86a2bd40cd51fadfc278e94372443.pdf"} {"title": "Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond", "url": "https://openreview.net/forum?id=LdlwbBP2mlq", "detail_url": "https://openreview.net/forum?id=LdlwbBP2mlq", "authors": "Chulhee Yun,Shashank Rajput,Suvrit Sra", "tags": "ICLR 2022,Oral", "abstract": "In distributed learning, local SGD (also known as federated averaging) and its simple baseline minibatch SGD are widely studied optimization methods. Most existing analyses of these methods assume independent and unbiased gradient estimates obtained via with-replacement sampling. In contrast, we study shuffling-based variants: minibatch and local Random Reshuffling, which draw stochastic gradients without replacement and are thus closer to practice. For smooth functions satisfying the Polyak-\u0141ojasiewicz condition, we obtain convergence bounds (in the large epoch regime) which show that these shuffling-based variants converge faster than their with-replacement counterparts. Moreover, we prove matching lower bounds showing that our convergence analysis is tight. 
Finally, we propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings.", "pdf": "https://openreview.net/pdf/1669f6cc32c853b0d69068b7ed1a230ce3f321d0.pdf"} {"title": "The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions", "url": "https://openreview.net/forum?id=Z7Lk2cQEG8a", "detail_url": "https://openreview.net/forum?id=Z7Lk2cQEG8a", "authors": "Yifei Wang,Jonathan Lacotte,Mert Pilanci", "tags": "ICLR 2022,Oral", "abstract": "We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints. Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into convex spaces. Given the set of solutions of our convex optimization program, we show how to construct exactly the entire set of optimal neural networks. We provide a detailed characterization of this optimal set and its invariant transformations. As additional consequences of our convex perspective, (i) we establish that Clarke stationary points found by stochastic gradient descent correspond to the global optimum of a subsampled convex problem (ii) we provide a polynomial-time algorithm for checking if a neural network is a global minimum of the training loss (iii) we provide an explicit construction of a continuous path between any neural network and the global minimum of its sublevel set and (iv) characterize the minimal size of the hidden layer so that the neural network optimization landscape has no spurious valleys.\nOverall, we provide a rich framework for studying the landscape of neural network training loss through convexity.", "pdf": "https://openreview.net/pdf/9733b1623c23b45535cc2c126e6fb496e55e8049.pdf"} {"title": "Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics", "url": "https://openreview.net/forum?id=RQLLzMCefQu", "detail_url": "https://openreview.net/forum?id=RQLLzMCefQu", "authors": "Yonathan Efroni,Dipendra Misra,Akshay Krishnamurthy,Alekh Agarwal,John Langford", "tags": "ICLR 2022,Oral", "abstract": "Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information from raw observations and subsequently plan efficiently. However, such approaches can fail in the presence of temporally correlated noise in the observations, a phenomenon that is common in practice. We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP (EX-BMDP), for rich observation RL. We start by establishing several negative results, by highlighting failure cases of prior representation learning based approaches. Then, we introduce the Predictive Path Elimination (PPE) algorithm, that learns a generalization of inverse dynamics and is provably sample and computationally efficient in EX-BMDPs when the endogenous state dynamics are near deterministic. 
The sample complexity of PPE depends polynomially on the size of the latent endogenous state space while not directly depending on the size of the observation space, nor the exogenous state space. We provide experiments on challenging exploration problems which show that our approach works empirically. ", "pdf": "https://openreview.net/pdf/310151127bcaaee206f6987dfe48a6f9a49ae848.pdf"} {"title": "Bootstrapped Meta-Learning", "url": "https://openreview.net/forum?id=b-ny3x071E5", "detail_url": "https://openreview.net/forum?id=b-ny3x071E5", "authors": "Sebastian Flennerhag,Yannick Schroecker,Tom Zahavy,Hado van Hasselt,David Silver,Satinder Singh", "tags": "ICLR 2022,Oral", "abstract": "Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric. Focusing on meta-learning with gradients, we establish conditions that guarantee performance improvements and show that the metric can be used to control meta-optimisation. Meanwhile, the bootstrapping mechanism can extend the effective meta-learning horizon without requiring backpropagation through all updates. We achieve a new state-of-the-art for model-free agents on the Atari ALE benchmark and demonstrate that it yields both performance and efficiency gains in multi-task meta-learning. Finally, we explore how bootstrapping opens up new possibilities and find that it can meta-learn efficient exploration in an epsilon-greedy Q-learning agent - without backpropagating through the update rule.", "pdf": "https://openreview.net/pdf/0eccd48eddcbf9cfc77b50cb0e97fb58937aee70.pdf"} {"title": "Coordination Among Neural Modules Through a Shared Global Workspace", "url": "https://openreview.net/forum?id=XzTtHjgPDsT", "detail_url": "https://openreview.net/forum?id=XzTtHjgPDsT", "authors": "Anirudh Goyal,Aniket Rajiv Didolkar,Alex Lamb,Kartikeya Badola,Nan Rosemary Ke,Nasim Rahaman,Jonathan Binas,Charles Blundell,Michael Curtis Mozer,Yoshua Bengio", "tags": "ICLR 2022,Oral", "abstract": " Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are modeled via pairwise interactions: Transformers make use of self-attention to incorporate information from other positions and object-centric architectures make use of graph neural networks to model interactions among entities. We consider how to improve on pairwise interactions in terms of global coordination and a coherent, integrated representation that can be used for downstream tasks. In cognitive science, a global workspace architecture has been proposed in which functionally specialized components share information through a common, bandwidth-limited communication channel. We explore the use of such a communication channel in the context of deep learning for modeling the structure of complex environments. 
The proposed method includes a shared workspace through which communication among different specialist modules takes place but due to limits on the communication bandwidth, specialist modules must compete for access. We show that capacity limitations have a rational basis in that (1) they encourage specialization and compositionality and (2) they facilitate the synchronization of otherwise independent specialists.\n", "pdf": "https://openreview.net/pdf/19aac83e8824498df7b9d1e6952523f7c068218b.pdf"} {"title": "Data-Efficient Graph Grammar Learning for Molecular Generation", "url": "https://openreview.net/forum?id=l4IHywGq6a", "detail_url": "https://openreview.net/forum?id=l4IHywGq6a", "authors": "Minghao Guo,Veronika Thost,Beichen Li,Payel Das,Jie Chen,Wojciech Matusik", "tags": "ICLR 2022,Oral", "abstract": "The problem of molecular generation has received significant attention recently. Existing methods are typically based on deep neural networks and require training on large datasets with tens of thousands of samples. In practice, however, the size of class-specific chemical datasets is usually limited (e.g., dozens of samples) due to labor-intensive experimentation and data collection. Another major challenge is to generate only physically synthesizable molecules. This is a non-trivial task for neural network-based generative models since the relevant chemical knowledge can only be extracted and generalized from the limited training data. In this work, we propose a data-efficient generative model that can be learned from datasets with orders of magnitude smaller sizes than common benchmarks. At the heart of this method is a learnable graph grammar that generates molecules from a sequence of production rules. Without any human assistance, these production rules are automatically constructed from training data. Furthermore, additional chemical knowledge can be incorporated into the model by further grammar optimization. Our learned graph grammar yields state-of-the-art results on generating high-quality molecules for three monomer datasets that contain only ${\\sim}20$ samples each. Our approach also achieves remarkable performance in a challenging polymer generation task with $only$ $117$ training samples and is competitive against existing methods using $81$k data points.\n", "pdf": "https://openreview.net/pdf/c17b0db09f98b3279ad677650f18acbf907883ce.pdf"} {"title": "Poisoning and Backdooring Contrastive Learning", "url": "https://openreview.net/forum?id=iC4UHbQ01Mp", "detail_url": "https://openreview.net/forum?id=iC4UHbQ01Mp", "authors": "Nicholas Carlini,Andreas Terzis", "tags": "ICLR 2022,Oral", "abstract": "Multimodal contrastive learning methods like CLIP train on noisy and uncurated training datasets. This is cheaper than labeling datasets manually, and even improves out-of-distribution robustness. We show that this practice makes backdoor and poisoning attacks a significant threat. By poisoning just 0.01% of a dataset (e.g., just 300 images of the 3 million-example Conceptual Captions dataset), we can cause the model to misclassify test images by overlaying a small patch. Targeted poisoning attacks, whereby the model misclassifies a particular test input with an adversarially-desired label, are even easier requiring control of 0.0001% of the dataset (e.g., just three out of the 3 million images). 
Our attacks call into question whether training on noisy and uncurated Internet scrapes is desirable.", "pdf": "https://openreview.net/pdf/abd77f0543a72cd26da355efc5680de233f120af.pdf"} {"title": "Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path", "url": "https://openreview.net/forum?id=w1UbdvWH_R3", "detail_url": "https://openreview.net/forum?id=w1UbdvWH_R3", "authors": "X.Y. Han,Vardan Papyan,David L. Donoho", "tags": "ICLR 2022,Oral", "abstract": "The recently discovered Neural Collapse (NC) phenomenon occurs pervasively in today's deep net training paradigm of driving cross-entropy (CE) loss towards zero. During NC, last-layer features collapse to their class-means, both classifiers and class-means collapse to the same Simplex Equiangular Tight Frame, and classifier behavior collapses to the nearest-class-mean decision rule. Recent works demonstrated that deep nets trained with mean squared error (MSE) loss perform comparably to those trained with CE. As a preliminary, we empirically establish that NC emerges in such MSE-trained deep nets as well through experiments on three canonical networks and five benchmark datasets. We provide, in a Google Colab notebook, PyTorch code for reproducing MSE-NC and CE-NC: https://colab.research.google.com/github/neuralcollapse/neuralcollapse/blob/main/neuralcollapse.ipynb. The analytically-tractable MSE loss offers more mathematical opportunities than the hard-to-analyze CE loss, inspiring us to leverage MSE loss towards the theoretical investigation of NC. We develop three main contributions: (I) We show a new decomposition of the MSE loss into (A) terms directly interpretable through the lens of NC and which assume the last-layer classifier is exactly the least-squares classifier; and (B) a term capturing the deviation from this least-squares classifier. (II) We exhibit experiments on canonical datasets and networks demonstrating that term-(B) is negligible during training. This motivates us to introduce a new theoretical construct: the central path, where the linear classifier stays MSE-optimal for feature activations throughout the dynamics. (III) By studying renormalized gradient flow along the central path, we derive exact dynamics that predict NC.", "pdf": "https://openreview.net/pdf/75799bbe466f7240935655cbfaa930c9628a915e.pdf"} {"title": "Weighted Training for Cross-Task Learning", "url": "https://openreview.net/forum?id=ltM1RMZntpu", "detail_url": "https://openreview.net/forum?id=ltM1RMZntpu", "authors": "Shuxiao Chen,Koby Crammer,Hangfeng He,Dan Roth,Weijie J Su", "tags": "ICLR 2022,Oral", "abstract": "In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuning, and enjoys non-asymptotic learning-theoretic guarantees. The effectiveness of TAWT is corroborated through extensive experiments with BERT on four sequence tagging tasks in natural language processing (NLP), including part-of-speech (PoS) tagging, chunking, predicate detection, and named entity recognition (NER). 
As a byproduct, the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning, such as the choice of the source data and the impact of fine-tuning.", "pdf": "https://openreview.net/pdf/579ed2f74ecc130396039eae33e13de66b8de08b.pdf"} {"title": "iLQR-VAE : control-based learning of input-driven dynamics with applications to neural data", "url": "https://openreview.net/forum?id=wRODLDHaAiW", "detail_url": "https://openreview.net/forum?id=wRODLDHaAiW", "authors": "Marine Schimel,Ta-Chu Kao,Kristopher T Jensen,Guillaume Hennequin", "tags": "ICLR 2022,Oral", "abstract": "Understanding how neural dynamics give rise to behaviour is one of the most fundamental questions in systems neuroscience. To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding. As recordings are typically performed in localized circuits that form only a part of the wider implicated network, it is important to simultaneously learn the local dynamics and infer any unobserved external input that might drive them. Here, we introduce iLQR-VAE, a novel control-based approach to variational inference in nonlinear dynamical systems, capable of learning both latent dynamics, initial conditions, and ongoing external inputs. As in recent deep learning approaches, our method is based on an input-driven sequential variational autoencoder (VAE). The main novelty lies in the use of the powerful iterative linear quadratic regulator algorithm (iLQR) in the recognition model. Optimization of the standard evidence lower-bound requires differentiating through iLQR solutions, which is made possible by recent advances in differentiable control. Importantly, having the recognition model be implicitly defined by the generative model greatly reduces the number of free parameters and allows for flexible, high-quality inference. This makes it possible for instance to evaluate the model on a single long trial after training on smaller chunks. We demonstrate the effectiveness of iLQR-VAE on a range of synthetic systems, with autonomous as well as input-driven dynamics. We further apply it to neural and behavioural recordings in non-human primates performing two different reaching tasks, and show that iLQR-VAE yields high-quality kinematic reconstructions from the neural data. ", "pdf": "https://openreview.net/pdf/c4b2a10a835b79e5cbaff71f6577c29236e964b5.pdf"} {"title": "Extending the WILDS Benchmark for Unsupervised Adaptation", "url": "https://openreview.net/forum?id=z7p2V6KROOV", "detail_url": "https://openreview.net/forum?id=z7p2V6KROOV", "authors": "Shiori Sagawa,Pang Wei Koh,Tony Lee,Irena Gao,Sang Michael Xie,Kendrick Shen,Ananya Kumar,Weihua Hu,Michihiro Yasunaga,Henrik Marklund,Sara Beery,Etienne David,Ian Stavness,Wei Guo,Jure Leskovec,Kate Saenko,Tatsunori Hashimoto,Sergey Levine,Chelsea Finn,Percy Liang", "tags": "ICLR 2022,Oral", "abstract": "Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from distributions beyond the source distribution as well. 
However, existing distribution shift benchmarks with unlabeled data do not reflect the breadth of scenarios that arise in real-world applications. In this work, we present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data that would be realistically obtainable in deployment. These datasets span a wide range of applications (from histology to wildlife conservation), tasks (classification, regression, and detection), and modalities (photos, satellite images, microscope slides, text, molecular graphs). The update maintains consistency with the original WILDS benchmark by using identical labeled training, validation, and test sets, as well as identical evaluation metrics. We systematically benchmark state-of-the-art methods that use unlabeled data, including domain-invariant, self-training, and self-supervised methods, and show that their success on WILDS is limited. To facilitate method development, we provide an open-source package that automates data loading and contains the model architectures and methods used in this paper. Code and leaderboards are available at https://wilds.stanford.edu.", "pdf": "https://openreview.net/pdf/16bc69d47c7ff67867bfc50009d6b9fc5043a00f.pdf"} {"title": "Asymmetry Learning for Counterfactually-invariant Classification in OOD Tasks", "url": "https://openreview.net/forum?id=avgclFZ221l", "detail_url": "https://openreview.net/forum?id=avgclFZ221l", "authors": "S Chandra Mouli,Bruno Ribeiro", "tags": "ICLR 2022,Oral", "abstract": "Generalizing from observed to new related environments (out-of-distribution) is central to the reliability of classifiers. However, most classifiers fail to predict label $Y$ from input $X$ when the change in environment is due to a (stochastic) input transformation $T^\\text{te} \\circ X'$ not observed in training, as in training we observe $T^\\text{tr} \\circ X'$, where $X'$ is a hidden variable. This work argues that when the transformations in train $T^\\text{tr}$ and test $T^\\text{te}$ are (arbitrary) symmetry transformations induced by a collection of known $m$ equivalence relations, the task of finding a robust OOD classifier can be defined as finding the simplest causal model that defines a causal connection between the target labels and the symmetry transformations that are associated with label changes. We then propose a new learning paradigm, asymmetry learning, that identifies which symmetries the classifier must break in order to correctly predict $Y$ in both train and test. Asymmetry learning performs a causal model search that, under certain identifiability conditions, finds classifiers that perform equally well in-distribution and out-of-distribution. Finally, we show how to learn counterfactually-invariant representations with asymmetry learning in two physics tasks.", "pdf": "https://openreview.net/pdf/f15da1dc02ded9aba4a26e8ade750b28429da30f.pdf"} {"title": "Comparing Distributions by Measuring Differences that Affect Decision Making", "url": "https://openreview.net/forum?id=KB5onONJIAU", "detail_url": "https://openreview.net/forum?id=KB5onONJIAU", "authors": "Shengjia Zhao,Abhishek Sinha,Yutong He,Aidan Perreault,Jiaming Song,Stefano Ermon", "tags": "ICLR 2022,Oral", "abstract": "Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics. 
We propose a new class of discrepancies based on the optimal loss for a decision task -- two distributions are different if the optimal decision loss is higher on their mixture than on each individual distribution. By suitably choosing the decision task, this generalizes the Jensen-Shannon divergence and the maximum mean discrepancy family. We apply our approach to two-sample tests, and on various benchmarks, we achieve superior test power compared to competing methods. In addition, a modeler can directly specify their preferences when comparing distributions through the decision loss. We apply this property to understanding the effects of climate change on different social and economic activities, evaluating sample quality, and selecting features targeting different decision tasks.", "pdf": "https://openreview.net/pdf/e99719a7a6796b569cc6afdf6f42024d0df2fbea.pdf"} {"title": "MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling", "url": "https://openreview.net/forum?id=UseMOjWENv", "detail_url": "https://openreview.net/forum?id=UseMOjWENv", "authors": "Yusong Wu,Ethan Manilow,Yi Deng,Rigel Swavely,Kyle Kastner,Tim Cooijmans,Aaron Courville,Cheng-Zhi Anna Huang,Jesse Engel", "tags": "ICLR 2022,Oral", "abstract": "Musical expression requires control of both what notes are played, and how they are performed. Conventional audio synthesizers provide detailed expressive controls, but at the cost of realism. Black-box neural audio synthesis and concatenative samplers can produce realistic audio, but have few mechanisms for control. In this work, we introduce MIDI-DDSP, a hierarchical model of musical instruments that enables both realistic neural audio synthesis and detailed user control. Starting from interpretable Differentiable Digital Signal Processing (DDSP) synthesis parameters, we infer musical notes and high-level properties of their expressive performance (such as timbre, vibrato, dynamics, and articulation). This creates a 3-level hierarchy (notes, performance, synthesis) that affords individuals the option to intervene at each level, or utilize trained priors (performance given notes, synthesis given performance) for creative assistance. Through quantitative experiments and listening tests, we demonstrate that this hierarchy can reconstruct high-fidelity audio, accurately predict performance attributes for a note sequence, independently manipulate the attributes of a given performance, and as a complete system, generate realistic audio from a novel note sequence. By utilizing an interpretable hierarchy, with multiple levels of granularity, MIDI-DDSP opens the door to assistive tools to empower individuals across a diverse range of musical experience.", "pdf": "https://openreview.net/pdf/e26b385d95d67af36d02a385047be6f7a0d6f47b.pdf"} {"title": "Unsupervised Vision-Language Grammar Induction with Shared Structure Modeling", "url": "https://openreview.net/forum?id=N0n_QyQ5lBF", "detail_url": "https://openreview.net/forum?id=N0n_QyQ5lBF", "authors": "Bo Wan,Wenjuan Han,Zilong Zheng,Tinne Tuytelaars", "tags": "ICLR 2022,Oral", "abstract": "We introduce a new task, unsupervised vision-language (VL) grammar induction. Given an image-caption pair, the goal is to extract a shared hierarchical structure for both image and language simultaneously. We argue that such structured output, grounded in both modalities, is a clear step towards the high-level understanding of multimodal information. 
Besides challenges existing in conventional visually grounded grammar induction tasks, VL grammar induction requires a model to capture contextual semantics and perform a fine-grained alignment. To address these challenges, we propose a novel method, CLIORA, which constructs a shared vision-language constituency tree structure with context-dependent semantics for all possible phrases in different levels of the tree. It computes a matching score between each constituent and image region, trained via contrastive learning. It integrates two levels of fusion, namely at feature-level and at score-level, so as to allow fine-grained alignment. We introduce a new evaluation metric for VL grammar induction, CCRA, and show a 3.3% improvement over a strong baseline on Flickr30k Entities. We also evaluate our model via two derived tasks, i.e., language grammar induction and phrase grounding, and improve over the state-of-the-art for both.", "pdf": "https://openreview.net/pdf/5c104842d13e8d6efd55b6d7c04f4373a39eae18.pdf"} {"title": "PiCO: Contrastive Label Disambiguation for Partial Label Learning", "url": "https://openreview.net/forum?id=EhYjZy6e1gJ", "detail_url": "https://openreview.net/forum?id=EhYjZy6e1gJ", "authors": "Haobo Wang,Ruixuan Xiao,Yixuan Li,Lei Feng,Gang Niu,Gang Chen,Junbo Zhao", "tags": "ICLR 2022,Oral", "abstract": "Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL---representation learning and label disambiguation---in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial, and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Extensive experiments demonstrate that PiCO significantly outperforms the current state-of-the-art approaches in PLL and even achieves comparable results to fully supervised learning. Code and data available: https://github.com/hbzju/PiCO.", "pdf": "https://openreview.net/pdf/f9275b96d741f229db4e61a15ce5f2a499c9ee67.pdf"} {"title": "Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting", "url": "https://openreview.net/forum?id=0EXmFzUn5I", "detail_url": "https://openreview.net/forum?id=0EXmFzUn5I", "authors": "Shizhan Liu,Hang Yu,Cong Liao,Jianguo Li,Weiyao Lin,Alex X. Liu,Schahram Dustdar", "tags": "ICLR 2022,Oral", "abstract": "Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time. In practice, the challenge is to build a flexible but parsimonious model that can capture a wide range of temporal dependencies. In this paper, we propose Pyraformer by exploring the multiresolution representation of the time series. 
Specifically, we introduce the pyramidal attention module (PAM) in which the inter-scale tree structure summarizes features at different resolutions and the intra-scale neighboring connections model the temporal dependencies of different ranges. Under mild conditions, the maximum length of the signal traversing path in Pyraformer is a constant (i.e., $\\mathcal O(1)$) with regard to the sequence length $L$, while its time and space complexity scale linearly with $L$. Extensive numerical results show that Pyraformer typically achieves the highest prediction accuracy in both single-step and long-range forecasting tasks with the least amount of time and memory consumption, especially when the sequence is long.", "pdf": "https://openreview.net/pdf/2ac159853cd001bbca6a8a12da497c8013914b31.pdf"} {"title": "Expressiveness and Approximation Properties of Graph Neural Networks", "url": "https://openreview.net/forum?id=wIzUeM3TAU", "detail_url": "https://openreview.net/forum?id=wIzUeM3TAU", "authors": "Floris Geerts,Juan L Reutter", "tags": "ICLR 2022,Oral", "abstract": "Characterizing the separation power of graph neural networks (GNNs) provides an understanding of their limitations for graph learning tasks. Results regarding separation power are, however, usually geared at specific GNN architectures, and tools for understanding arbitrary GNN architectures are generally lacking. We provide an elegant way to easily obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which have become the yardstick to measure the separation power of GNNs. The crux is to view GNNs as expressions in a procedural tensor language describing the computations in the layers of the GNNs. Then, by a simple analysis of the obtained expressions, in terms of the number of indexes used and the nesting depth of summations, bounds on the separation power in terms of the WL-tests readily follow. We use tensor language to define Higher-Order Message-Passing Neural Networks (or k-MPNNs), a natural extension of MPNNs. Furthermore, the tensor language point of view allows for the derivation of universality results for classes of GNNs in a natural way. Our approach provides a toolbox with which GNN architecture designers can analyze the separation power of their GNNs, without needing to know the intricacies of the WL-tests. We also provide insights into what is needed to boost the separation power of GNNs.", "pdf": "https://openreview.net/pdf/9d0fe7ff08261aae56611b7f670de9875c2a9cd9.pdf"} {"title": "Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space", "url": "https://openreview.net/forum?id=1L0C5ROtFp", "detail_url": "https://openreview.net/forum?id=1L0C5ROtFp", "authors": "Steeven JANNY,Fabien Baradel,Natalia Neverova,Madiha Nadri,Greg Mori,Christian Wolf", "tags": "ICLR 2022,Oral", "abstract": "Learning causal relationships in high-dimensional data (images, videos) is a hard task, as they are often defined on low dimensional manifolds and must be extracted from complex signals dominated by appearance, lighting, textures and also spurious correlations in the data. We present a method for learning counterfactual reasoning of physical processes in pixel space, which requires the prediction of the impact of interventions on initial conditions. Going beyond the identification of structural relationships, we deal with the challenging problem of forecasting raw video over long horizons. 
Our method does not require the knowledge or supervision of any ground truth positions or other object or scene properties. Our model learns and acts on a suitable hybrid latent representation based on a combination of dense features, sets of 2D keypoints and an additional latent vector per keypoint. We show that this better captures the dynamics of physical processes than purely dense or sparse representations. We introduce a new challenging and carefully designed counterfactual benchmark for predictions in pixel space and outperform strong baselines in physics-inspired ML and video prediction.", "pdf": "https://openreview.net/pdf/cbd75b662eaa377753b892113b221d062f26511e.pdf"} {"title": "BEiT: BERT Pre-Training of Image Transformers", "url": "https://openreview.net/forum?id=p-BhZSz59o4", "detail_url": "https://openreview.net/forum?id=p-BhZSz59o4", "authors": "Hangbo Bao,Li Dong,Songhao Piao,Furu Wei", "tags": "ICLR 2022,Oral", "abstract": "We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image patches (such as 16 x 16 pixels), and visual tokens (i.e., discrete tokens). We first ``tokenize'' the original image into visual tokens. Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods.", "pdf": "https://openreview.net/pdf/1be2cb0e0edf9af45f8ef450b802b459897cec3d.pdf"} {"title": "Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution", "url": "https://openreview.net/forum?id=UYneFzXSJWh", "detail_url": "https://openreview.net/forum?id=UYneFzXSJWh", "authors": "Ananya Kumar,Aditi Raghunathan,Robbie Matthew Jones,Tengyu Ma,Percy Liang", "tags": "ICLR 2022,Oral", "abstract": "When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer---the \"head\"). It is well known that fine-tuning leads to better accuracy in-distribution (ID). However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large. On 10 distribution shift datasets (BREEDS-Living17, BREEDS-Entity30, DomainNet, CIFAR $\\to$ STL, CIFAR-10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), fine-tuning obtains on average 2% higher accuracy ID but 7% lower accuracy OOD than linear probing. We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head---this is because while fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features. 
Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. Empirically, LP-FT outperforms both fine-tuning and linear probing on the above datasets (1% better ID, 10% better OOD than full fine-tuning).", "pdf": "https://openreview.net/pdf/5d8a4ae4492042b22b07eabc7a9abcfa517f419c.pdf"} {"title": "StyleAlign: Analysis and Applications of Aligned StyleGAN Models", "url": "https://openreview.net/forum?id=Qg2vi4ZbHM9", "detail_url": "https://openreview.net/forum?id=Qg2vi4ZbHM9", "authors": "Zongze Wu,Yotam Nitzan,Eli Shechtman,Dani Lischinski", "tags": "ICLR 2022,Oral", "abstract": "In this paper, we perform an in-depth study of the properties and applications of aligned generative models.\nWe refer to two models as aligned if they share the same architecture, and one of them (the child) is obtained from the other (the parent) via fine-tuning to another domain, a common practice in transfer learning. Several works already utilize some basic properties of aligned StyleGAN models to perform image-to-image translation. Here, we perform the first detailed exploration of model alignment, also focusing on StyleGAN. First, we empirically analyze aligned models and provide answers to important questions regarding their nature. In particular, we find that the child model's latent spaces are semantically aligned with those of the parent, inheriting incredibly rich semantics, even for distant data domains such as human faces and churches. Second, equipped with this better understanding, we leverage aligned models to solve a diverse set of tasks. In addition to image translation, we demonstrate fully automatic cross-domain image morphing. We further show that zero-shot vision tasks may be performed in the child domain, while relying exclusively on supervision in the parent domain. We demonstrate qualitatively and quantitatively that our approach yields state-of-the-art results, while requiring only simple fine-tuning and inversion. ", "pdf": "https://openreview.net/pdf/a75f48f49713ac38baaaee51cb3273177975f96b.pdf"} {"title": "Variational Inference for Discriminative Learning with Generative Modeling of Feature Incompletion", "url": "https://openreview.net/forum?id=qnQN4yr6FJz", "detail_url": "https://openreview.net/forum?id=qnQN4yr6FJz", "authors": "Kohei Miyaguchi,Takayuki Katsuki,Akira Koseki,Toshiya Iwamori", "tags": "ICLR 2022,Oral", "abstract": "We are concerned with the problem of distributional prediction with incomplete features: The goal is to estimate the distribution of target variables given feature vectors with some of the elements missing. A typical approach to this problem is to perform missing-value imputation and regression, simultaneously or sequentially, which we call the generative approach. Another approach is to perform regression after appropriately encoding missing values into the feature, which we call the discriminative approach. In comparison, the generative approach is more robust to the feature corruption while the discriminative approach is more favorable to maximize the performance of prediction. \nIn this study, we propose a hybrid method to take the best of both worlds. Our method utilizes the black-box variational inference framework so that it can be applied to a wide variety of modern machine learning models, including the variational autoencoders. 
We also confirmed the effectiveness of the proposed method empirically.\n", "pdf": "https://openreview.net/pdf/537474668e8264be0d7e7963ad009564621ad25e.pdf"} {"title": "Efficiently Modeling Long Sequences with Structured State Spaces", "url": "https://openreview.net/forum?id=uYLFoz1vlAC", "detail_url": "https://openreview.net/forum?id=uYLFoz1vlAC", "authors": "Albert Gu,Karan Goel,Christopher Re", "tags": "ICLR 2022,Oral", "abstract": "A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they still struggle to scale to very long sequences of $10000$ or more steps. A promising recent approach proposed modeling sequences by simulating the fundamental state space model (SSM) \\( x'(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t) \\), and showed that for appropriate choices of the state matrix \\( A \\), this system could handle long-range dependencies mathematically and empirically. However, this method has prohibitive computation and memory requirements, rendering it infeasible as a general sequence modeling solution. We propose the Structured State Space sequence model (S4) based on a new parameterization for the SSM, and show that it can be computed much more efficiently than prior approaches while preserving their theoretical strengths. Our technique involves conditioning \\( A \\) with a low-rank correction, allowing it to be diagonalized stably and reducing the SSM to the well-studied computation of a Cauchy kernel. S4 achieves strong empirical results across a diverse range of established benchmarks, including (i) 91\\% accuracy on sequential CIFAR-10 with no data augmentation or auxiliary losses, on par with a larger 2-D ResNet, (ii) substantially closing the gap to Transformers on image and language modeling tasks, while performing generation $60\\times$ faster (iii) SoTA on every task from the Long Range Arena benchmark, including solving the challenging Path-X task of length 16k that all prior work fails on, while being as efficient as all competitors.", "pdf": "https://openreview.net/pdf/a8eedf494f6698cb467c310c59d3ea6488546805.pdf"} {"title": "Large Language Models Can Be Strong Differentially Private Learners", "url": "https://openreview.net/forum?id=bVuP3ltATMz", "detail_url": "https://openreview.net/forum?id=bVuP3ltATMz", "authors": "Xuechen Li,Florian Tramer,Percy Liang,Tatsunori Hashimoto", "tags": "ICLR 2022,Oral", "abstract": "Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead.\nWe show that this performance drop can be mitigated with (1) the use of large pretrained language models; (2) non-standard hyperparameters that suit DP optimization; and (3) fine-tuning objectives which are aligned with the pretraining procedure.\nWith the above, we obtain NLP models that outperform state-of-the-art DP-trained models under the same privacy budget and strong non-private baselines---by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora. 
\nTo address the computational challenge of running DP-SGD with large Transformers, we propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any linear layer in the model. \nThe technique enables privately training Transformers with almost the same memory cost as non-private training at a modest run-time overhead. \nContrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension) empirical results reveal that private learning with pretrained language models tends to not suffer from dimension-dependent performance degradation.\nCode to reproduce results can be found at https://github.com/lxuechen/private-transformers.\n", "pdf": "https://openreview.net/pdf/d88e1e721c4085b8a6403837f45b8c483ad0225b.pdf"} {"title": "GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation", "url": "https://openreview.net/forum?id=PzcvxEMzvQC", "detail_url": "https://openreview.net/forum?id=PzcvxEMzvQC", "authors": "Minkai Xu,Lantao Yu,Yang Song,Chence Shi,Stefano Ermon,Jian Tang", "tags": "ICLR 2022,Oral", "abstract": "Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery. Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamics where heated particles will diffuse from original states to a noise distribution, in this paper, we propose a novel generative model named GeoDiff for molecular conformation prediction. GeoDiff treats each atom as a particle and learns to directly reverse the diffusion process (i.e., transforming from a noise distribution to stable conformations) as a Markov chain. Modeling such a generation process is however very challenging as the likelihood of conformations should be roto-translational invariant. We theoretically show that Markov chains evolving with equivariant Markov kernels can induce an invariant distribution by design, and further propose building blocks for the Markov kernels to preserve the desirable equivariance property. The whole framework can be efficiently trained in an end-to-end fashion by optimizing a weighted variational lower bound to the (conditional) likelihood. Experiments on multiple benchmarks show that GeoDiff is superior or comparable to existing state-of-the-art approaches, especially on large molecules. ", "pdf": "https://openreview.net/pdf/d6be0299d7f2d2bf947d450fffe98c8395458c75.pdf"} {"title": "Frame Averaging for Invariant and Equivariant Network Design", "url": "https://openreview.net/forum?id=zIUyj55nXR", "detail_url": "https://openreview.net/forum?id=zIUyj55nXR", "authors": "Omri Puny,Matan Atzmon,Edward J. Smith,Ishan Misra,Aditya Grover,Heli Ben-Hamu,Yaron Lipman", "tags": "ICLR 2022,Oral", "abstract": "Many machine learning tasks involve learning functions that are known to be invariant or equivariant to certain symmetries of the input data. However, it is often challenging to design neural network architectures that respect these symmetries while being expressive and computationally efficient. For example, Euclidean motion invariant/equivariant graph or point cloud neural networks. \nWe introduce Frame Averaging (FA), a highly general purpose and systematic framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types. 
Our framework builds on the well known group averaging operator that guarantees invariance or equivariance but is intractable. In contrast, we observe that for many important classes of symmetries, this operator can be replaced with an averaging operator over a small subset of the group elements, called a frame. We show that averaging over a frame guarantees exact invariance or equivariance while often being much simpler to compute than averaging over the entire group. Furthermore, we prove that FA-based models have maximal expressive power in a broad setting and in general preserve the expressive power of their backbone architectures. Using frame averaging, we propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs. We demonstrate the practical effectiveness of FA on several applications including point cloud normal estimation, beyond $2$-WL graph separation, and $n$-body dynamics prediction, achieving state-of-the-art results in all of these benchmarks.", "pdf": "https://openreview.net/pdf/d7849f0ef0f911d06889785dc7116564d5342442.pdf"} {"title": "Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation", "url": "https://openreview.net/forum?id=oapKSVM2bcj", "detail_url": "https://openreview.net/forum?id=oapKSVM2bcj", "authors": "Alex Rogozhnikov", "tags": "ICLR 2022,Oral", "abstract": "Tensor computations underlie modern scientific computing and deep learning.\nA number of tensor frameworks emerged varying in execution model, hardware support, memory management, model definition, etc.\nHowever, tensor operations in all frameworks follow the same paradigm.\nRecent neural network architectures demonstrate demand for higher expressiveness of tensor operations.\nThe current paradigm is not suited to write readable, reliable, or easy-to-modify code for multidimensional tensor manipulations. \nMoreover, some commonly used operations do not provide sufficient checks and can break a tensor structure.\nThese mistakes are elusive as no tools or tests can detect them.\nIndependently, API discrepancies complicate code transfer between frameworks.\nWe propose einops notation: a uniform and generic way to manipulate tensor structure, that significantly improves code readability and flexibility by focusing on the structure of input and output tensors.\nWe implement einops notation in a Python package that efficiently supports multiple widely used frameworks and provides framework-independent minimalist API for tensor manipulations.", "pdf": "https://openreview.net/pdf/d568f6e36eaa377888611b8e0d84076777edc330.pdf"} {"title": "A Fine-Grained Analysis on Distribution Shift", "url": "https://openreview.net/forum?id=Dl4LetuLdyK", "detail_url": "https://openreview.net/forum?id=Dl4LetuLdyK", "authors": "Olivia Wiles,Sven Gowal,Florian Stimberg,Sylvestre-Alvise Rebuffi,Ira Ktena,Krishnamurthy Dj Dvijotham,Ali Taylan Cemgil", "tags": "ICLR 2022,Oral", "abstract": "Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. 
We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods grouped into five categories across both synthetic and real-world datasets. Overall, we train more than 85K models. Our experimental framework can be easily extended to include new methods, shifts, and datasets. We find, unlike previous work (Gulrajani & Lopez-Paz, 2021), that progress has been made over a standard ERM baseline; in particular, pretraining and augmentations (learned or heuristic) offer large gains in many cases. However, the best methods are not consistent over different datasets and shifts. We will open source our experimental framework, allowing future work to evaluate new methods over multiple shifts to obtain a more complete picture of a method's effectiveness. \nCode is available at github.com/deepmind/distribution_shift_framework.\n", "pdf": "https://openreview.net/pdf/6be366539738706234ad0b104ed82361a3c5f6e7.pdf"} {"title": "Open-Set Recognition: A Good Closed-Set Classifier is All You Need", "url": "https://openreview.net/forum?id=5hLP5JY9S2d", "detail_url": "https://openreview.net/forum?id=5hLP5JY9S2d", "authors": "Sagar Vaze,Kai Han,Andrea Vedaldi,Andrew Zisserman", "tags": "ICLR 2022,Oral", "abstract": "The ability to identify whether or not a test sample belongs to one of the semantic classes in a classifier's training set is critical to practical deployment of the model. This task is termed open-set recognition (OSR) and has received significant attention in recent years. In this paper, we first demonstrate that the ability of a classifier to make the 'none-of-above' decision is highly correlated with its accuracy on the closed-set classes. We find that this relationship holds across loss objectives and architectures, and further demonstrate the trend both on the standard OSR benchmarks as well as on a large-scale ImageNet evaluation. Second, we use this correlation to boost the performance of the maximum softmax probability OSR 'baseline' by improving its closed-set accuracy, and with this strong baseline achieve state-of-the-art on a number of OSR benchmarks. Similarly, we boost the performance of the existing state-of-the-art method by improving its closed-set accuracy, but the resulting discrepancy with the strong baseline is marginal. Our third contribution is to present the 'Semantic Shift Benchmark' (SSB), which better respects the task of detecting semantic novelty, as opposed to low-level distributional shifts as tackled by neighbouring machine learning fields. On this new evaluation, we again demonstrate that there is negligible difference between the strong baseline and the existing state-of-the-art. Code available at: https://github.com/sgvaze/osr_closed_set_all_you_need.", "pdf": "https://openreview.net/pdf/a9e422d293a936fe65575b5e1ea6a86549b84bca.pdf"} {"title": "Learning Strides in Convolutional Neural Networks", "url": "https://openreview.net/forum?id=M752z9FKJP", "detail_url": "https://openreview.net/forum?id=M752z9FKJP", "authors": "Rachid Riad,Olivier Teboul,David Grangier,Neil Zeghidour", "tags": "ICLR 2022,Oral", "abstract": "Convolutional neural networks typically contain several downsampling operators, such as strided convolutions or pooling layers, that progressively reduce the resolution of intermediate representations. This provides some shift-invariance while reducing the computational complexity of the whole architecture. A critical hyperparameter of such layers is their stride: the integer factor of downsampling. 
As strides are not differentiable, finding the best configuration either requires cross-validation or discrete optimization (e.g. architecture search), which rapidly become prohibitive as the search space grows exponentially with the number of downsampling layers. Hence, exploring this search space by gradient descent would allow finding better configurations at a lower computational cost. This work introduces DiffStride, the first downsampling layer with learnable strides. Our layer learns the size of a cropping mask in the Fourier domain, that effectively performs resizing in a differentiable way. Experiments on audio and image classification show the generality and effectiveness of our solution: we use DiffStride as a drop-in replacement to standard downsampling layers and outperform them. In particular, we show that introducing our layer into a ResNet-18 architecture allows keeping consistent high performance on CIFAR10, CIFAR100 and ImageNet even when training starts from poor random stride configurations. Moreover, formulating strides as learnable variables allows us to introduce a regularization term that controls the computational complexity of the architecture. We show how this regularization allows trading off accuracy for efficiency on ImageNet.", "pdf": "https://openreview.net/pdf/1bc01ea49b5a288387ac5de300847b1d6690d940.pdf"} {"title": "Understanding over-squashing and bottlenecks on graphs via curvature", "url": "https://openreview.net/forum?id=7UmjRGzp-A", "detail_url": "https://openreview.net/forum?id=7UmjRGzp-A", "authors": "Jake Topping,Francesco Di Giovanni,Benjamin Paul Chamberlain,Xiaowen Dong,Michael M. Bronstein", "tags": "ICLR 2022,Oral", "abstract": "Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on the input graph. Recent works pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phenomenon, referred to as 'over-squashing', has been heuristically attributed to graph bottlenecks where the number of $k$-hop neighbors grows rapidly with $k$. We provide a precise description of the over-squashing phenomenon in GNNs and analyze how it arises from bottlenecks in the graph. For this purpose, we introduce a new edge-based combinatorial curvature and prove that negatively curved edges are responsible for the over-squashing issue. We also propose and experimentally test a curvature-based graph rewiring method to alleviate the over-squashing.", "pdf": "https://openreview.net/pdf/f6b974eac8792a0d8d59633044276dabbf9d01c9.pdf"} {"title": "Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme", "url": "https://openreview.net/forum?id=8c50f-DoWAu", "detail_url": "https://openreview.net/forum?id=8c50f-DoWAu", "authors": "Vadim Popov,Ivan Vovk,Vladimir Gogoryan,Tasnima Sadekova,Mikhail Sergeevich Kudinov,Jiansheng Wei", "tags": "ICLR 2022,Oral", "abstract": "Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one often referred to as one-shot many-to-many voice conversion consists in copying target voice from only one reference utterance in the most general case when both source and target speakers do not belong to the training dataset. 
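The DiffStride entry above describes downsampling as cropping in the Fourier domain. A minimal numpy sketch of the underlying fixed Fourier crop (the paper's actual contribution, a smooth learnable mask that makes the crop size differentiable, is not reproduced here):

```python
import numpy as np

def fourier_resize(img, out_h, out_w):
    """Resize a 2-D array by keeping only its low frequencies (fixed crop)."""
    H, W = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))              # center the zero frequency
    cy, cx = H // 2, W // 2
    crop = F[cy - out_h // 2: cy + (out_h + 1) // 2,
             cx - out_w // 2: cx + (out_w + 1) // 2]
    # Rescale so intensities are preserved after the inverse transform.
    return (np.fft.ifft2(np.fft.ifftshift(crop)) * (out_h * out_w) / (H * W)).real

img = np.random.rand(32, 32)
small = fourier_resize(img, 16, 16)
print(small.shape)  # (16, 16)
```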
We present a scalable high-quality solution based on diffusion probabilistic modeling and demonstrate its superior quality compared to state-of-the-art one-shot voice conversion approaches. Moreover, focusing on real-time applications, we investigate general principles which can make diffusion models faster while keeping synthesis quality at a high level. As a result, we develop a novel Stochastic Differential Equations solver suitable for various diffusion model types and generative tasks as shown through empirical studies and justify it by theoretical analysis.", "pdf": "https://openreview.net/pdf/468145b46e459c5ba69e7017b6ef4eaece277e94.pdf"} {"title": "Resolving Training Biases via Influence-based Data Relabeling", "url": "https://openreview.net/forum?id=EskfH0bwNVn", "detail_url": "https://openreview.net/forum?id=EskfH0bwNVn", "authors": "Shuming Kong,Yanyan Shen,Linpeng Huang", "tags": "ICLR 2022,Oral", "abstract": "The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model\u2019s predictions. Recent studies on \\emph{data resampling} have employed influence functions to identify \\emph{harmful} training samples that will degrade model's test performance. They have shown that discarding or downweighting the identified harmful training samples is an effective way to resolve training biases. In this work, we move one step forward and propose an influence-based relabeling framework named RDIA for reusing harmful training samples toward better model performance. To achieve this, we use influence functions to estimate how relabeling a training sample would affect model's test performance and further develop a novel relabeling function R. We theoretically prove that applying R to relabel harmful training samples allows the model to achieve lower test loss than simply discarding them for any classification tasks using cross-entropy loss. Extensive experiments on ten real-world datasets demonstrate RDIA outperforms the state-of-the-art data resampling methods and improves model's robustness against label noise. ", "pdf": "https://openreview.net/pdf/64c51657be7bb5a9efecafe39344c719ccb4d394.pdf"} {"title": "Representational Continuity for Unsupervised Continual Learning", "url": "https://openreview.net/forum?id=9Hrka5PA7LW", "detail_url": "https://openreview.net/forum?id=9Hrka5PA7LW", "authors": "Divyam Madaan,Jaehong Yoon,Yuanchun Li,Yunxin Liu,Sung Ju Hwang", "tags": "ICLR 2022,Oral", "abstract": "Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advances are restricted to supervised continual learning (SCL) scenarios. Consequently, they are not scalable to real-world applications where the data distribution is often biased and unannotated. In this work, we focus on unsupervised continual learning (UCL), where we learn the feature representations on an unlabelled sequence of tasks and show that reliance on annotated data is not necessary for continual learning. We conduct a systematic study analyzing the learned feature representations and show that unsupervised visual representations are surprisingly more robust to catastrophic forgetting, consistently achieve better performance, and generalize better to out-of-distribution tasks than SCL. 
Furthermore, through a qualitative analysis of the learned representations, we find that UCL achieves a smoother loss landscape and learns meaningful feature representations. Additionally, we propose Lifelong Unsupervised Mixup (LUMP), a simple yet effective technique that interpolates between the current task and previous tasks' instances to alleviate catastrophic forgetting for unsupervised representations. ", "pdf": "https://openreview.net/pdf/947f2c6dc3cd63a83d402bf9cbaddf42e362709e.pdf"} {"title": "Vision-Based Manipulators Need to Also See from Their Hands", "url": "https://openreview.net/forum?id=RJkAHKp7kNZ", "detail_url": "https://openreview.net/forum?id=RJkAHKp7kNZ", "authors": "Kyle Hsu,Moo Jin Kim,Rafael Rafailov,Jiajun Wu,Chelsea Finn", "tags": "ICLR 2022,Oral", "abstract": "We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. Compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. These benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. However, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. To mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. On six representative manipulation tasks with varying hand-centric observability adapted from the Meta-World benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. While some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.", "pdf": "https://openreview.net/pdf/bf5308ad68220347e7cbf2dcbedbf7bb4e0a21b1.pdf"} {"title": "Meta-Learning with Fewer Tasks through Task Interpolation", "url": "https://openreview.net/forum?id=ajXWF7bVR8d", "detail_url": "https://openreview.net/forum?id=ajXWF7bVR8d", "authors": "Huaxiu Yao,Linjun Zhang,Chelsea Finn", "tags": "ICLR 2022,Oral", "abstract": "Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge. However, the bottleneck of current meta-learning algorithms is the requirement of a large number of meta-training tasks, which may not be accessible in real-world scenarios. To address the challenge that available tasks may not densely sample the space of tasks, we propose to augment the task set through interpolation. By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels. Under both gradient-based and metric-based meta-learning settings, our theoretical analysis shows MLTI corresponds to a data-adaptive meta-regularization and further improves the generalization. 
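MLTI (above) generates new tasks by sampling a pair of existing tasks and interpolating their features and labels. A generic mixup-style sketch of that interpolation step (the paper's concrete variants, e.g. interpolation in hidden-feature space and per-setting label handling, differ in detail):

```python
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=2.0, rng=None):
    """Mix two support sets (features, one-hot labels) into a synthetic task."""
    rng = rng or np.random.default_rng()
    (xa, ya), (xb, yb) = task_a, task_b
    lam = rng.beta(alpha, alpha)              # mixing coefficient, as in mixup
    return lam * xa + (1 - lam) * xb, lam * ya + (1 - lam) * yb

rng = np.random.default_rng(0)
xa, ya = rng.normal(size=(10, 32)), np.eye(5)[rng.integers(5, size=10)]
xb, yb = rng.normal(size=(10, 32)), np.eye(5)[rng.integers(5, size=10)]
x_new, y_new = interpolate_tasks((xa, ya), (xb, yb), rng=rng)
print(x_new.shape, y_new.shape)   # (10, 32) (10, 5)
```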
Empirically, in our experiments on eight datasets from diverse domains including image recognition, pose prediction, molecule property prediction, and medical image classification, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.", "pdf": "https://openreview.net/pdf/ebbfc5841da414394c96beeba92500546061461a.pdf"} {"title": "DISCOVERING AND EXPLAINING THE REPRESENTATION BOTTLENECK OF DNNS", "url": "https://openreview.net/forum?id=iRCUlgmdfHJ", "detail_url": "https://openreview.net/forum?id=iRCUlgmdfHJ", "authors": "Huiqi Deng,Qihan Ren,Hao Zhang,Quanshi Zhang", "tags": "ICLR 2022,Oral", "abstract": "This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interactions. We discover that a DNN is more likely to encode both too simple and too complex interactions, but usually fails to learn interactions of intermediate complexity. Such a phenomenon is widely shared by different DNNs for different tasks. This phenomenon indicates a cognition gap between DNNs and humans, and we call it a representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose losses to encourage/penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities. The code is available at https://github.com/Nebularaid2000/bottleneck.", "pdf": "https://openreview.net/pdf/e470657e4d47a20411713a973ed0282f87c9f9a9.pdf"} {"title": "Sparse Communication via Mixed Distributions", "url": "https://openreview.net/forum?id=WAid50QschI", "detail_url": "https://openreview.net/forum?id=WAid50QschI", "authors": "Ant\u00f3nio Farinhas,Wilker Aziz,Vlad Niculae,Andre Martins", "tags": "ICLR 2022,Oral", "abstract": "Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call \"mixed random variables.'' Our starting point is a new \"direct sum'' base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic (\"sample-and-project'\u2019) and an intrinsic one (based on face stratification). 
We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational auto-encoders with mixed latent variables.", "pdf": "https://openreview.net/pdf/f8c966f98befffb0bfbd9af921a4e4dd831d549f.pdf"} {"title": "Finetuned Language Models are Zero-Shot Learners", "url": "https://openreview.net/forum?id=gEZrGCozdqR", "detail_url": "https://openreview.net/forum?id=gEZrGCozdqR", "authors": "Jason Wei,Maarten Bosma,Vincent Zhao,Kelvin Guu,Adams Wei Yu,Brian Lester,Nan Du,Andrew M. Dai,Quoc V Le", "tags": "ICLR 2022,Oral", "abstract": "This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning\u2014finetuning language models on a collection of datasets described via instructions\u2014substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that the number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.", "pdf": "https://openreview.net/pdf/16b50405ab1e3ac1e2f76190ee62a48c496c568d.pdf"} {"title": "F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization", "url": "https://openreview.net/forum?id=_CfpJazzXT2", "detail_url": "https://openreview.net/forum?id=_CfpJazzXT2", "authors": "Qing Jin,Jian Ren,Richard Zhuang,Sumant Hanumante,Zhengang Li,Zhiyu Chen,Yanzhi Wang,Kaiyuan Yang,Sergey Tulyakov", "tags": "ICLR 2022,Oral", "abstract": "Neural network quantization is a promising compression technique to reduce memory footprint and save energy consumption, potentially leading to real-time inference. However, there is a performance gap between quantized and full-precision models. To reduce it, existing quantization approaches require high-precision INT32 or full-precision multiplication during inference for scaling or dequantization. This introduces a noticeable cost in terms of memory, speed, and required energy. To tackle these issues, we present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication. To derive our method, we first discuss the advantages of fixed-point multiplication with different formats of fixed-point numbers and study the statistical behavior of the associated fixed-point numbers. Second, based on the statistical and algorithmic analysis, we apply different fixed-point formats for weights and activations of different layers. We introduce a novel algorithm to automatically determine the right format for each layer during training. Third, we analyze a previous quantization algorithm\u2014parameterized clipping activation (PACT)\u2014and reformulate it using fixed-point arithmetic. Finally, we unify the recently proposed method for quantization fine-tuning and our fixed-point approach to show the potential of our method. We verify F8Net on ImageNet for MobileNet V1/V2 and ResNet18/50. 
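The F8Net entry above builds on fixed-point formats with different fractional lengths for weights and activations. A generic fixed-point multiply sketch in standard Q-format arithmetic (not F8Net's format-selection or training procedure):

```python
import numpy as np

def to_fixed(x, frac_bits):
    """Quantize floats to int8 fixed-point with `frac_bits` fractional bits."""
    return np.clip(np.round(x * 2 ** frac_bits), -128, 127).astype(np.int8)

def fixed_mul(a, a_frac, b, b_frac, out_frac):
    """Multiply two fixed-point tensors using only integer arithmetic.

    The int8 x int8 product is held in int32 and carries a_frac + b_frac
    fractional bits; a right shift converts it to the output format."""
    prod = a.astype(np.int32) * b.astype(np.int32)
    return prod >> (a_frac + b_frac - out_frac)

w = to_fixed(np.array([0.75, -0.5]), frac_bits=6)   # weights with 6 fractional bits
x = to_fixed(np.array([1.25, 0.33]), frac_bits=5)   # activations with 5 fractional bits
y = fixed_mul(w, 6, x, 5, out_frac=7)               # result with 7 fractional bits
print(y / 2 ** 7)                                    # ~[0.94, -0.17]
```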
Our approach achieves comparable or better performance when compared not only to existing quantization techniques with INT32 multiplication or floating point arithmetic, but also to the full-precision counterparts, achieving state-of-the-art performance.", "pdf": "https://openreview.net/pdf/aed69dd0c10990a2c4948e6d230de04c5719fb7d.pdf"} {"title": "Transform2Act: Learning a Transform-and-Control Policy for Efficient Agent Design", "url": "https://openreview.net/forum?id=UcDUxjPYWSr", "detail_url": "https://openreview.net/forum?id=UcDUxjPYWSr", "authors": "Ye Yuan,Yuda Song,Zhengyi Luo,Wen Sun,Kris M. Kitani", "tags": "ICLR 2022,Oral", "abstract": "An agent's functionality is largely determined by its design, i.e., skeletal structure and joint attributes (e.g., length, size, strength). However, finding the optimal agent design for a given function is extremely challenging since the problem is inherently combinatorial and the design space is prohibitively large. Additionally, it can be costly to evaluate each candidate design, which requires solving for its optimal controller. To tackle these problems, our key idea is to incorporate the design procedure of an agent into its decision-making process. Specifically, we learn a conditional policy that, in an episode, first applies a sequence of transform actions to modify an agent's skeletal structure and joint attributes, and then applies control actions under the new design. To handle a variable number of joints across designs, we use a graph-based policy where each graph node represents a joint and uses message passing with its neighbors to output joint-specific actions. Using policy gradient methods, our approach enables joint optimization of agent design and control as well as experience sharing across different designs, which improves sample efficiency substantially. Experiments show that our approach, Transform2Act, outperforms prior methods significantly in terms of convergence speed and final performance. Notably, Transform2Act can automatically discover plausible designs similar to giraffes, squids, and spiders. Code and videos are available at https://sites.google.com/view/transform2act.", "pdf": "https://openreview.net/pdf/511a5c95afacad18125605721a8d1e530c07018b.pdf"} {"title": "ProtoRes: Proto-Residual Network for Pose Authoring via Learned Inverse Kinematics", "url": "https://openreview.net/forum?id=s03AQxehtd_", "detail_url": "https://openreview.net/forum?id=s03AQxehtd_", "authors": "Boris N. Oreshkin,Florent Bocquelet,Felix G. Harvey,Bay Raitt,Dominic Laflamme", "tags": "ICLR 2022,Oral", "abstract": "Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To solve this problem, we propose a novel neural architecture that combines residual connections with prototype encoding of a partially specified pose to create a new complete pose from the learned latent space. We show that our architecture outperforms a baseline based on Transformer, both in terms of accuracy and computational efficiency. Additionally, we develop a user interface to integrate our neural model in Unity, a real-time 3D development platform. 
Furthermore, we introduce two new datasets representing the static human pose modeling problem, based on high-quality human motion capture data, which will be released publicly along with model code.", "pdf": "https://openreview.net/pdf/72eadcfe21558f0be18ff071adc50adc3ae85e5e.pdf"} {"title": "Hyperparameter Tuning with Renyi Differential Privacy", "url": "https://openreview.net/forum?id=-70L8lpp9DF", "detail_url": "https://openreview.net/forum?id=-70L8lpp9DF", "authors": "Nicolas Papernot,Thomas Steinke", "tags": "ICLR 2022,Oral", "abstract": "For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fine tune the value of the training algorithm\u2019s hyperparameters. In this work, we first illustrate how simply setting hyperparameters based on non-private training runs can leak private information. Motivated by this observation, we then provide privacy guarantees for hyperparameter search procedures within the framework of Renyi Differential Privacy. Our results improve and extend the work of Liu and Talwar (STOC 2019). Our analysis supports our previous observation that tuning hyperparameters does indeed leak private information, but we prove that, under certain assumptions, this leakage is modest, as long as each candidate training run needed to select hyperparameters is itself differentially private.", "pdf": "https://openreview.net/pdf/8832d0e112b9fd6c5c8f0be8a093625e4de6e337.pdf"} {"title": "Real-Time Neural Voice Camouflage", "url": "https://openreview.net/forum?id=qj1IZ-6TInc", "detail_url": "https://openreview.net/forum?id=qj1IZ-6TInc", "authors": "Mia Chiquier,Chengzhi Mao,Carl Vondrick", "tags": "ICLR 2022,Oral", "abstract": "Automatic speech recognition systems have created exciting possibilities for applications, however they also enable opportunities for systematic eavesdropping.We propose a method to camouflage a person's voice from these systems without inconveniencing the conversation between people in the room. Standard adversarial attacks are not effective in real-time streaming situations because the characteristics of the signal will have changed by the time the attack is executed. We introduce predictive adversarial attacks, which achieves real-time performance by forecasting the attack vector that will be the most effective in the future. Under real-time constraints, our method jams the established speech recognition system DeepSpeech 3.9x more than online projected gradient descent as measured through word error rate, and 6.6x more as measured through character error rate. We furthermore demonstrate our approach is practically effective in realistic environments with complex scene geometries. ", "pdf": "https://openreview.net/pdf/e2b96a38db73636bfa51d5ee4097373ddda15329.pdf"} {"title": "CycleMLP: A MLP-like Architecture for Dense Prediction", "url": "https://openreview.net/forum?id=NMEceG4v69Y", "detail_url": "https://openreview.net/forum?id=NMEceG4v69Y", "authors": "Shoufa Chen,Enze Xie,Chongjian GE,Runjian Chen,Ding Liang,Ping Luo", "tags": "ICLR 2022,Oral", "abstract": "This paper presents a simple MLP-like architecture, CycleMLP, which is a versatile backbone for visual recognition and dense predictions. As compared to modern MLP architectures, e.g. 
, MLP-Mixer, ResMLP, and gMLP, whose architectures are tied to the image size and are thus infeasible for object detection and segmentation, CycleMLP has two advantages. (1) It can cope\nwith various image sizes. (2) It achieves linear computational complexity with respect to image size by using local windows. In contrast, previous MLPs have $O(N^2)$ computations due to fully spatial connections. We build a family of models which surpass existing MLPs and even state-of-the-art Transformer-based models, e.g. Swin Transformer, while using fewer parameters and FLOPs. We expand the MLP-like models\u2019 applicability, making them a versatile backbone for dense prediction tasks. CycleMLP achieves competitive results on object detection, instance segmentation, and semantic segmentation. In particular, CycleMLP-Tiny outperforms Swin-Tiny by 1.3% mIoU on the ADE20K dataset with fewer FLOPs. Moreover, CycleMLP also shows excellent zero-shot robustness on the ImageNet-C dataset.", "pdf": "https://openreview.net/pdf/0ff0f728cbc430b36ea84288793e887e216cff59.pdf"} {"title": "Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models", "url": "https://openreview.net/forum?id=0xiJLKH-ufZ", "detail_url": "https://openreview.net/forum?id=0xiJLKH-ufZ", "authors": "Fan Bao,Chongxuan Li,Jun Zhu,Bo Zhang", "tags": "ICLR 2022,Oral", "abstract": "Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, the inference of DPMs is expensive since it generally needs to iterate over thousands of timesteps. A key problem in the inference is to estimate the variance in each timestep of the reverse process. In this work, we present a surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms w.r.t. its score function. Building upon it, we propose \textit{Analytic-DPM}, a training-free inference framework that estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip the estimate for a better result. Empirically, our Analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and meanwhile enjoys a $20\times$ to $80\times$ speed up.", "pdf": "https://openreview.net/pdf/541cdc9e000367bb0bd3fc42201573ed434094c8.pdf"} {"title": "RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation", "url": "https://openreview.net/forum?id=uSE03demja", "detail_url": "https://openreview.net/forum?id=uSE03demja", "authors": "Pingchuan Ma,Tao Du,Joshua B. Tenenbaum,Wojciech Matusik,Chuang Gan", "tags": "ICLR 2022,Oral", "abstract": "This work considers identifying parameters characterizing a physical system's dynamic motion directly from a video whose rendering configurations are inaccessible. Existing solutions require massive training data or lack generalizability to unknown rendering configurations. We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem. Our core idea is to train a rendering-invariant state-prediction (RISP) network that transforms image differences into state differences independent of rendering configurations, e.g., lighting, shadows, or material reflectance. 
To train this predictor, we formulate a new loss on rendering variances using gradients from differentiable rendering. Moreover, we present an efficient, second-order method to compute the gradients of this loss, allowing it to be integrated seamlessly into modern deep learning frameworks. We evaluate our method in rigid-body and deformable-body simulation environments using four tasks: state estimation, system identification, imitation learning, and visuomotor control. We further demonstrate the efficacy of our approach on a real-world example: inferring the state and action sequences of a quadrotor from a video of its motion sequences. Compared with existing methods, our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations.", "pdf": "https://openreview.net/pdf/999353870633727a2d50bc5b4ee873b50401eba7.pdf"} {"title": "The Information Geometry of Unsupervised Reinforcement Learning", "url": "https://openreview.net/forum?id=3wU2UX0voE", "detail_url": "https://openreview.net/forum?id=3wU2UX0voE", "authors": "Benjamin Eysenbach,Ruslan Salakhutdinov,Sergey Levine", "tags": "ICLR 2022,Oral", "abstract": "How can a reinforcement learning (RL) agent prepare to solve downstream tasks if those tasks are not known a priori? One approach is unsupervised skill discovery, a class of algorithms that learn a set of policies without access to a reward function. Such algorithms bear a close resemblance to representation learning algorithms (e.g., contrastive learning) in supervised learning, in that both are pretraining algorithms that maximize some approximation to a mutual information objective. While prior work has shown that the set of skills learned by such methods can accelerate downstream RL tasks, prior work offers little analysis into whether these skill learning algorithms are optimal, or even what notion of optimality would be appropriate to apply to them. In this work, we show that unsupervised skill discovery algorithms based on mutual information maximization do not learn skills that are optimal for every possible reward function. However, we show that the distribution over skills provides an optimal initialization minimizing regret against adversarially-chosen reward functions, assuming a certain type of adaptation procedure. Our analysis also provides a geometric perspective on these skill learning methods.", "pdf": "https://openreview.net/pdf/4709236cdf10497a057511e94fe99f87770c5bf6.pdf"} {"title": "Language modeling via stochastic processes", "url": "https://openreview.net/forum?id=pMQwKL1yctf", "detail_url": "https://openreview.net/forum?id=pMQwKL1yctf", "authors": "Rose E Wang,Esin Durmus,Noah Goodman,Tatsunori Hashimoto", "tags": "ICLR 2022,Oral", "abstract": "Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent stochastic process. TC does this by learning a representation which maps the dynamics of how text changes in a document to the dynamics of a stochastic process of interest. Using this representation, the language model can generate text by first implicitly generating a document plan via a stochastic process, and then generating text that is consistent with this latent plan. 
Compared to domain-specific methods and fine-tuning GPT2 across a variety of text domains, TC improves performance on text infilling and discourse coherence. On long text generation settings, TC preserves the text structure both in terms of ordering (up to +40% better) and text length consistency (up to +17% better). Human evaluators also prefer TC's output 28.6% more than the baselines.", "pdf": "https://openreview.net/pdf/ceeec650a60b1f87ad4dda26ecd02c9df0e3ed9d.pdf"} {"title": "Learning to Downsample for Segmentation of Ultra-High Resolution Images", "url": "https://openreview.net/forum?id=HndgQudNb91", "detail_url": "https://openreview.net/forum?id=HndgQudNb91", "authors": "Chen Jin,Ryutaro Tanno,Thomy Mertzanidou,Eleftheria Panagiotaki,Daniel C. Alexander", "tags": "ICLR 2022,Poster", "abstract": "Many computer vision systems require low-cost segmentation algorithms based on deep learning, either because of the enormous size of input images or a limited computational budget. Common solutions uniformly downsample the input images to meet memory constraints, assuming all pixels are equally informative. In this work, we demonstrate that this assumption can harm the segmentation performance\nbecause the segmentation difficulty varies spatially (see Figure 1 \u201cUniform\u201d). We combat this problem by introducing a learnable downsampling module, which can be optimised together with the given segmentation model in an end-to-end fashion. We formulate the problem of training such a downsampling module as optimisation of sampling density distributions over the input images given their low-resolution views. To defend against degenerate solutions (e.g. over-sampling trivial regions like the backgrounds), we propose a regularisation term that encourages the sampling locations to concentrate around the object boundaries. We find the downsampling\nmodule learns to sample more densely at difficult locations, thereby improving the segmentation performance (see Figure 1 "Ours"). Our experiments on benchmarks of high-resolution street view, aerial and medical images demonstrate substantial improvements in terms of efficiency-and-accuracy trade-off compared to both uniform downsampling and two recent advanced downsampling techniques.", "pdf": "https://openreview.net/pdf/d2ade7120315e0521c4b97b593c4a2ebd44b0652.pdf"} {"title": "Variational Neural Cellular Automata", "url": "https://openreview.net/forum?id=7fFO4cMBx_9", "detail_url": "https://openreview.net/forum?id=7fFO4cMBx_9", "authors": "Rasmus Berg Palm,Miguel Gonz\u00e1lez Duque,Shyam Sudhakaran,Sebastian Risi", "tags": "ICLR 2022,Poster", "abstract": "In nature, the process of cellular growth and differentiation has led to an amazing diversity of organisms --- algae, starfish, giant sequoia, tardigrades, and orcas are all created by the same generative process.\nInspired by the incredible diversity of this biological generative process, we propose a generative model, the Variational Neural Cellular Automata (VNCA), loosely based on the biological processes of cellular growth and differentiation. Unlike previous related works, the VNCA is a proper probabilistic generative model, and we evaluate it according to best practices. We find that the VNCA learns to reconstruct samples well and that despite its relatively few parameters and simple local-only communication, the VNCA can learn to generate a large variety of outputs from information encoded in a common vector format. 
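The VNCA entry above relies on a cell update with purely local communication. A minimal generic neural-cellular-automata step in PyTorch (the common perceive/update/stochastic-mask rule; VNCA's actual model wraps such a rule inside a variational decoder and differs in specifics):

```python
import torch

class NCAStep(torch.nn.Module):
    """One generic neural-CA update: local perception + residual update with a
    stochastic per-cell mask."""
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        # Depthwise 3x3 conv: each cell only sees its immediate neighborhood.
        self.perceive = torch.nn.Conv2d(channels, 3 * channels, 3, padding=1,
                                        groups=channels, bias=False)
        self.update = torch.nn.Sequential(
            torch.nn.Conv2d(3 * channels, hidden, 1), torch.nn.ReLU(),
            torch.nn.Conv2d(hidden, channels, 1))

    def forward(self, state, fire_rate=0.5):
        dx = self.update(self.perceive(state))
        # Only a random subset of cells updates at each step.
        mask = (torch.rand(state.shape[0], 1, *state.shape[2:]) < fire_rate).float()
        return state + dx * mask

state = torch.zeros(1, 16, 32, 32)
state[:, :, 16, 16] = 1.0          # seed a single "cell"
step = NCAStep()
for _ in range(8):
    state = step(state)
print(state.shape)                 # (1, 16, 32, 32)
```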
While there is a significant gap to the current state-of-the-art in terms of generative modeling performance, we show that the VNCA can learn a purely self-organizing generative process of data. Additionally, the self-organizing nature bestows the VNCA with some inherent robustness against perturbations in the early stages of growth.", "pdf": "https://openreview.net/pdf/abec641c2a0c18536da3345e5cd92d673d90b69d.pdf"} {"title": "Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation", "url": "https://openreview.net/forum?id=FKp8-pIRo3y", "detail_url": "https://openreview.net/forum?id=FKp8-pIRo3y", "authors": "Todor Davchev,Oleg Olegovich Sushkov,Jean-Baptiste Regli,Stefan Schaal,Yusuf Aytar,Markus Wulfmeier,Jon Scholz", "tags": "ICLR 2022,Poster", "abstract": "Complex sequential tasks in continuous-control settings often require agents to successfully traverse a set of ``narrow passages'' in their state space. Solving such tasks with a sparse reward in a sample-efficient manner poses a challenge to modern reinforcement learning (RL) due to the associated long-horizon nature of the problem and the lack of sufficient positive signal during learning. \nVarious tools have been applied to address this challenge. When available, large sets of demonstrations can guide agent exploration. Hindsight relabelling on the other hand does not require additional sources of information. However, existing strategies explore based on task-agnostic goal distributions, which can render the solution of long-horizon tasks impractical. In this work, we extend hindsight relabelling mechanisms to guide exploration along task-specific distributions implied by a small set of successful demonstrations. We evaluate the approach on four complex, single and dual arm, robotics manipulation tasks against strong suitable baselines. The method requires far fewer demonstrations to solve all tasks and achieves a significantly higher overall performance as task complexity increases. Finally, we investigate the robustness of the proposed solution with respect to the quality of input representations and the number of demonstrations.", "pdf": "https://openreview.net/pdf/524d4c3cacc5ff7803cd7061b33991511fee7db7.pdf"} {"title": "L0-Sparse Canonical Correlation Analysis", "url": "https://openreview.net/forum?id=KntaNRo6R48", "detail_url": "https://openreview.net/forum?id=KntaNRo6R48", "authors": "Ofir Lindenbaum,Moshe Salhov,Amir Averbuch,Yuval Kluger", "tags": "ICLR 2022,Poster", "abstract": "Canonical Correlation Analysis (CCA) models are powerful for studying the associations between two sets of variables. The canonically correlated representations, termed \\textit{canonical variates} are widely used in unsupervised learning to analyze unlabeled multi-modal registered datasets. Despite their success, CCA models may break (or overfit) if the number of variables in either of the modalities exceeds the number of samples. Moreover, often a significant fraction of the variables measures modality-specific information, and thus removing them is beneficial for identifying the \\textit{canonically correlated variates}. Here, we propose $\\ell_0$-CCA, a method for learning correlated representations based on sparse subsets of variables from two observed modalities.\nSparsity is obtained by multiplying the input variables by stochastic gates, whose parameters are learned together with the CCA weights via an $\\ell_0$-regularized correlation loss. 
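The $\ell_0$-CCA entry above obtains sparsity by multiplying the inputs with stochastic gates trained under an $\ell_0$-style penalty. A sketch of one common Gaussian-relaxed gate construction (the paper's exact parameterization, and the CCA objective itself, are omitted here):

```python
import torch

class StochasticGates(torch.nn.Module):
    """One gate per input variable; gates relax a discrete keep/drop decision."""
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.full((n_features,), 0.5))
        self.sigma = sigma

    def forward(self, x):
        noise = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + noise, 0.0, 1.0)   # gate in [0, 1]; exactly 0 once mu drifts negative
        return x * z

    def expected_l0(self):
        # P(gate > 0) = Phi(mu / sigma) under the Gaussian noise model.
        return torch.distributions.Normal(0.0, 1.0).cdf(self.mu / self.sigma).sum()

gates = StochasticGates(n_features=20)
x = torch.randn(64, 20)
gated = gates(x)                 # gated inputs would feed the correlation objective
penalty = gates.expected_l0()    # added to the loss with a regularization weight
print(gated.shape, float(penalty))
```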
\nWe further propose $\\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets. We demonstrate the efficacy of the method using several synthetic and real examples. Most notably, by gating nuisance input variables, our approach improves the extracted representations compared to other linear, non-linear and sparse CCA-based models.", "pdf": "https://openreview.net/pdf/69ae8c04ac43812f7523f009313daec68f09ea3d.pdf"} {"title": "Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank?", "url": "https://openreview.net/forum?id=B7ZbqNLDn-_", "detail_url": "https://openreview.net/forum?id=B7ZbqNLDn-_", "authors": "Sheikh Shams Azam,Seyyedali Hosseinalipour,Qiang Qiu,Christopher Brinton", "tags": "ICLR 2022,Poster", "abstract": "In this paper, we question the rationale behind propagating large numbers of parameters through a distributed system during federated learning. We start by examining the rank characteristics of the subspace spanned by gradients (i.e., the gradient-space) in centralized model training, and observe that the gradient-space often consists of a few leading principal components accounting for an overwhelming majority (95-99%) of the explained variance. Motivated by this, we propose the \"Look-back Gradient Multiplier\" (LBGM) algorithm, which utilizes this low-rank property of the gradient-space in federated learning. Operationally, LBGM recycles the gradients between model update rounds to significantly reduce the number of parameters to be propagated through the system. We analytically characterize the convergence behavior of LBGM, revealing the nature of the trade-off between communication savings and model performance. Our subsequent experimental results demonstrate the improvement LBGM obtains on communication overhead compared to federated learning baselines. Additionally, we show that LBGM is a general plug-and-play algorithm that can be used standalone or stacked on top of existing sparsification techniques for distributed model training.", "pdf": "https://openreview.net/pdf/76e2433c08e957e7f19a49e6815d0f6b52da92cd.pdf"} {"title": "Is Homophily a Necessity for Graph Neural Networks?", "url": "https://openreview.net/forum?id=ucASPPD9GKN", "detail_url": "https://openreview.net/forum?id=ucASPPD9GKN", "authors": "Yao Ma,Xiaorui Liu,Neil Shah,Jiliang Tang", "tags": "ICLR 2022,Poster", "abstract": "Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks. When applied to semi-supervised node classification, GNNs are widely believed to work well due to the homophily assumption (``like attracts like''), and fail to generalize to heterophilous graphs where dissimilar nodes connect. Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion. In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs. This motivates us to reconsider whether homophily is truly necessary for good GNN performance. We find that this claim is not quite true, and in fact, GCNs can achieve strong performance on heterophilous graphs under certain conditions. 
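The LBGM entry above starts from the observation that gradients across update rounds span a nearly low-rank subspace. A quick synthetic check of that kind of claim via SVD explained variance (illustrative only, with simulated gradients; not the paper's federated setup):

```python
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 1000))                  # pretend the true gradient subspace is 3-dimensional
grads = rng.normal(size=(50, 3)) @ basis + 0.01 * rng.normal(size=(50, 1000))

centered = grads - grads.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)
print(explained[:5])   # the first few components account for nearly all the variance
```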
Our work carefully characterizes these conditions and provides supporting theoretical understanding and empirical observations. Finally, we examine existing heterophilous graph benchmarks and reconcile how the GCN (under)performs on them based on this understanding.", "pdf": "https://openreview.net/pdf/dba6b2a528efebfb036a0b908ecfc59201204429.pdf"} {"title": "DEGREE: Decomposition Based Explanation for Graph Neural Networks", "url": "https://openreview.net/forum?id=Ve0Wth3ptT_", "detail_url": "https://openreview.net/forum?id=Ve0Wth3ptT_", "authors": "Qizhang Feng,Ninghao Liu,Fan Yang,Ruixiang Tang,Mengnan Du,Xia Hu", "tags": "ICLR 2022,Poster", "abstract": "Graph Neural Networks (GNNs) are gaining extensive attention for their application in graph data. However, the black-box nature of GNNs prevents users from understanding and trusting the models, thus hampering their applicability. While explaining GNNs remains a challenge, most existing methods fall into approximation-based and perturbation-based approaches, which suffer from faithfulness problems and unnatural artifacts, respectively. To tackle these problems, we propose DEGREE (Decomposition based Explanation for GRaph nEural nEtworks) to provide a faithful explanation for GNN predictions. By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction. Based on this, we further design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods. The efficiency of our algorithm can be further improved by utilizing GNN characteristics. Finally, we conduct quantitative and qualitative experiments on synthetic and real-world datasets to demonstrate the effectiveness of DEGREE on node classification and graph classification tasks.", "pdf": "https://openreview.net/pdf/fd7de8640028480fa9fe56dd9ed7bcad9182bf31.pdf"} {"title": "Improving Mutual Information Estimation with Annealed and Energy-Based Bounds", "url": "https://openreview.net/forum?id=T0B9AoM_bFg", "detail_url": "https://openreview.net/forum?id=T0B9AoM_bFg", "authors": "Rob Brekelmans,Sicong Huang,Marzyeh Ghassemi,Greg Ver Steeg,Roger Baker Grosse,Alireza Makhzani", "tags": "ICLR 2022,Poster", "abstract": "Mutual information (MI) is a fundamental quantity in information theory and machine learning. However, direct estimation of MI is intractable, even if the true joint probability density for the variables of interest is known, as it involves estimating a potentially high-dimensional log partition function. In this work, we present a unifying view of existing MI bounds from the perspective of importance sampling, and propose three novel bounds based on this approach. Since a tight MI bound without density information requires a sample size exponential in the true MI, we assume either a single marginal or the full joint density information is known. In settings where the full joint density is available, we propose Multi-Sample Annealed Importance Sampling (AIS) bounds on MI, which we demonstrate can tightly estimate large values of MI in our experiments. In settings where only a single marginal distribution is known, we propose Generalized IWAE (GIWAE) and MINE-AIS bounds. Our GIWAE bound unifies variational and contrastive bounds in a single framework that generalizes InfoNCE, IWAE, and Barber-Agakov bounds. 
Our MINE-AIS method improves upon existing energy-based methods such as MINE-DV and MINE-F by directly optimizing a tighter lower bound on MI. MINE-AIS uses MCMC sampling to estimate gradients for training and Multi-Sample AIS for evaluating the bound. Our methods are particularly suitable for evaluating MI in deep generative models, since explicit forms of the marginal or joint densities are often available. We evaluate our bounds on estimating the MI of VAEs and GANs trained on the MNIST and CIFAR datasets, and showcase significant gains over existing bounds in these challenging settings with high ground truth MI.", "pdf": "https://openreview.net/pdf/a68f8e4bbad21f5599f372c94827c5f596c6555b.pdf"} {"title": "Sequence Approximation using Feedforward Spiking Neural Network for Spatiotemporal Learning: Theory and Optimization Methods", "url": "https://openreview.net/forum?id=bp-LJ4y_XC", "detail_url": "https://openreview.net/forum?id=bp-LJ4y_XC", "authors": "Xueyuan She,Saurabh Dash,Saibal Mukhopadhyay", "tags": "ICLR 2022,Poster", "abstract": "A dynamical system of spiking neurons with only feedforward connections can classify spatiotemporal patterns without recurrent connections. However, the theoretical construct of a feedforward spiking neural network (SNN) for approximating a temporal sequence remains unclear, making it challenging to optimize SNN architectures for learning complex spatiotemporal patterns. In this work, we establish a theoretical framework to understand and improve sequence approximation using a feedforward SNN. Our framework shows that a feedforward SNN with one neuron per layer and skip-layer connections can approximate the mapping function between any arbitrary pairs of input and output spike train on a compact domain. Moreover, we prove that heterogeneous neurons with varying dynamics and skip-layer connections improve sequence approximation using feedforward SNN. Consequently, we propose SNN architectures incorporating the preceding constructs that are trained using supervised backpropagation-through-time (BPTT) and unsupervised spiking-timing-dependent plasticity (STDP) algorithms for classification of spatiotemporal data. A dual-search-space Bayesian optimization method is developed to optimize architecture and parameters of the proposed SNN with heterogeneous neuron dynamics and skip-layer connections. ", "pdf": "https://openreview.net/pdf/043f00a3e618d0c71bbd79dffbdfdaf6d9fd4d1b.pdf"} {"title": "Diverse Client Selection for Federated Learning via Submodular Maximization", "url": "https://openreview.net/forum?id=nwKXyFvaUm", "detail_url": "https://openreview.net/forum?id=nwKXyFvaUm", "authors": "Ravikumar Balakrishnan,Tian Li,Tianyi Zhou,Nageen Himayat,Virginia Smith,Jeff Bilmes", "tags": "ICLR 2022,Poster", "abstract": "In every communication round of federated learning, a random subset of clients communicate their model updates back to the server which then aggregates them all. The optimal size of this subset is not known and several studies have shown that typically random selection does not perform very well in terms of convergence, learning efficiency and fairness. We, in this paper, propose to select a small diverse subset of clients, namely those carrying representative gradient information, and we transmit only these updates to the server. Our aim is for updating via only a subset to approximate updating via aggregating all client information. 
We achieve this by choosing a subset that maximizes a submodular facility location function defined over gradient space. We introduce \u201cfederated averaging with diverse client selection (DivFL)\u201d. We provide a thorough analysis of its convergence in the heterogeneous setting and apply it both to synthetic and to real datasets. Empirical results show several benefits to our approach including improved learning efficiency, faster convergence and also more uniform (i.e., fair) performance across clients. We further show a communication-efficient version of DivFL that can still outperform baselines on the above metrics.", "pdf": "https://openreview.net/pdf/4d539789e55d133a96781cda576be4ab34ec5982.pdf"} {"title": "From Intervention to Domain Transportation: A Novel Perspective to Optimize Recommendation", "url": "https://openreview.net/forum?id=jT1EwXu-4hj", "detail_url": "https://openreview.net/forum?id=jT1EwXu-4hj", "authors": "Da Xu,Yuting Ye,Chuanwei Ruan,Evren Korpeoglu,Sushant Kumar,Kannan Achan", "tags": "ICLR 2022,Poster", "abstract": "The interventional nature of recommendation has attracted increasing attention in recent years. It particularly motivates researchers to formulate learning and evaluating recommendation as causal inference and data missing-not-at-random problems. However, few take seriously the consequence of violating the critical assumption of overlapping, which we prove can significantly threaten the validity and interpretation of the outcome. We find a critical piece missing in the current understanding of information retrieval (IR) systems: as interventions, recommendation not only affects the already observed data, but it also interferes with the target domain (distribution) of interest. We then rephrase optimizing recommendation as finding an intervention that best transports the patterns it learns from the observed domain to its intervention domain. Towards this end, we use domain transportation to characterize the learning-intervention mechanism of recommendation. We design a principled transportation-constraint risk minimization objective and convert it to a two-player minimax game.\nWe prove the consistency, generalization, and excessive risk bounds for the proposed objective, and elaborate how they compare to the current results. Finally, we carry out extensive real-data and semi-synthetic experiments to demonstrate the advantage of our approach, and launch online testing with a real-world IR system.", "pdf": "https://openreview.net/pdf/22322b458fd437ff0b3cf13debd29cc381b25ccc.pdf"} {"title": "Variational Predictive Routing with Nested Subjective Timescales", "url": "https://openreview.net/forum?id=JxFgJbZ-wft", "detail_url": "https://openreview.net/forum?id=JxFgJbZ-wft", "authors": "Alexey Zakharov,Qinghai Guo,Zafeirios Fountas", "tags": "ICLR 2022,Poster", "abstract": "Discovery and learning of an underlying spatiotemporal hierarchy in sequential data is an important topic for machine learning. Despite this, little work has been done to explore hierarchical generative models that can flexibly adapt their layerwise representations in response to datasets with different temporal dynamics. Here, we present Variational Predictive Routing (VPR) \u2013 a neural probabilistic inference system that organizes latent representations of video features in a temporal hierarchy, based on their rates of change, thus modeling continuous data as a hierarchical renewal process. 
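DivFL (above) selects clients by maximizing a submodular facility location function over gradient space. A plain greedy-maximization sketch on cosine similarities of client gradients (generic; DivFL's exact similarity and weighting scheme may differ):

```python
import numpy as np

def greedy_facility_location(similarity, k):
    """Greedily pick k clients maximizing f(S) = sum_i max_{j in S} sim(i, j)."""
    n = similarity.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(k):
        gains = [np.maximum(best_cover, similarity[:, j]).sum() - best_cover.sum()
                 for j in range(n)]                       # marginal gain of adding client j
        j_star = int(np.argmax(gains))
        selected.append(j_star)
        best_cover = np.maximum(best_cover, similarity[:, j_star])
    return selected

grads = np.random.default_rng(1).normal(size=(30, 128))   # one flattened gradient per client
unit = grads / np.linalg.norm(grads, axis=1, keepdims=True)
sim = unit @ unit.T                                        # cosine similarity between clients
print(greedy_facility_location(sim, k=5))
```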
By employing an event detection mechanism that relies solely on the system\u2019s latent representations (without the need of a separate model), VPR is able to dynamically adjust its internal state following changes in the observed features, promoting an optimal organisation of representations across the levels of the model\u2019s latent hierarchy. Using several video datasets, we show that VPR is able to detect event boundaries, disentangle spatiotemporal features across its hierarchy, adapt to the dynamics of the data, and produce accurate time-agnostic rollouts of the future. Our approach integrates insights from neuroscience and introduces a framework with high potential for applications in model-based reinforcement learning, where flexible and informative state-space rollouts are of particular interest.", "pdf": "https://openreview.net/pdf/712c74938a55973dd0b3f46e154fc0696194b578.pdf"} {"title": "Sample and Computation Redistribution for Efficient Face Detection", "url": "https://openreview.net/forum?id=RhB1AdoFfGE", "detail_url": "https://openreview.net/forum?id=RhB1AdoFfGE", "authors": "Jia Guo,Jiankang Deng,Alexandros Lattas,Stefanos Zafeiriou", "tags": "ICLR 2022,Poster", "abstract": "Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these observations, we introduce two simple but effective methods: (1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head of the model; and (2) Sample Redistribution (SR), which augments training samples for the most needed stages. The proposed Sample and Computation Redistribution for Face Detection (SCRFD) is implemented by a random search in a meticulously designed search space. Extensive experiments conducted on WIDER FACE demonstrate the state-of-the-art accuracy-efficiency trade-off for the proposed SCRFD family across a wide range of compute regimes. In particular, SCRFD-34GF outperforms the best competitor, TinaFace, by $4.78\\%$ (AP at hard set) while being more than 3$\\times$ faster on GPUs with VGA-resolution images. Code is available at: https://github.com/deepinsight/insightface/tree/master/detection/scrfd.", "pdf": "https://openreview.net/pdf/d7b9dd38011f418b1c66bb378aef38a25d8c9bf5.pdf"} {"title": "Sound Adversarial Audio-Visual Navigation", "url": "https://openreview.net/forum?id=NkZq4OEYN-", "detail_url": "https://openreview.net/forum?id=NkZq4OEYN-", "authors": "Yinfeng Yu,Wenbing Huang,Fuchun Sun,Changan Chen,Yikai Wang,Xiaohong Liu", "tags": "ICLR 2022,Poster", "abstract": "Audio-visual navigation task requires an agent to find a sound source in a realistic, unmapped 3D environment by utilizing egocentric audio-visual observations. Existing audio-visual navigation works assume a clean environment that solely contains the target sound, which, however, would not be suitable in most real-world applications due to the unexpected sound noise or intentional interference. In this work, we design an acoustically complex environment in which, besides the target sound, there exists a sound attacker playing a zero-sum game with the agent. 
More specifically, the attacker can move and change the volume and category of the sound to make the agent suffer from finding the sounding object while the agent tries to dodge the attack and navigate to the goal under the intervention. Under certain constraints on the attacker, we can improve the robustness of the agent towards unexpected sound attacks in audio-visual navigation. For better convergence, we develop a joint training mechanism by employing the property of a centralized critic with decentralized actors. Experiments on two real-world 3D scan datasets, Replica and Matterport3D, verify the effectiveness and the robustness of the agent trained under our designed environment when transferred to the clean environment or the one containing sound attackers with random policy. Project: https://yyf17.github.io/SAAVN .", "pdf": "https://openreview.net/pdf/892cdd541646cc28a0880494951fbd89079c2a3d.pdf"} {"title": "Out-of-distribution Generalization in the Presence of Nuisance-Induced Spurious Correlations", "url": "https://openreview.net/forum?id=12RoR2o32T", "detail_url": "https://openreview.net/forum?id=12RoR2o32T", "authors": "Aahlad Manas Puli,Lily H Zhang,Eric Karl Oermann,Rajesh Ranganath", "tags": "ICLR 2022,Poster", "abstract": "In many prediction problems, spurious correlations are induced by a changing relationship between the label and a nuisance variable that is also correlated with the covariates. For example, in classifying animals in natural images, the background, which is a nuisance, can predict the type of animal. This nuisance-label relationship does not always hold, and the performance of a model trained under one such relationship may be poor on data with a different nuisance-label relationship. To build predictive models that perform well regardless of the nuisance-label relationship, we develop Nuisance-Randomized Distillation (NURD). We introduce the nuisance-randomized distribution, a distribution where the nuisance and the label are independent. Under this distribution, we define the set of representations such that conditioning on any member, the nuisance and the label remain independent. We prove that the representations in this set always perform better than chance, while representations outside of this set may not. NURD finds a representation from this set that is most informative of the label under the nuisance-randomized distribution, and we prove that this representation achieves the highest performance regardless of the nuisance-label relationship. We evaluate NURD on several tasks including chest X-ray classification where, using non-lung patches as the nuisance, NURD produces models that predict pneumonia under strong spurious correlations.", "pdf": "https://openreview.net/pdf/7128d52f12e20439db2d07083f3de3995967bb53.pdf"} {"title": "AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis", "url": "https://openreview.net/forum?id=OM_lYiHXiCL", "detail_url": "https://openreview.net/forum?id=OM_lYiHXiCL", "authors": "Junfeng Guo,Ang Li,Cong Liu", "tags": "ICLR 2022,Poster", "abstract": "Deep neural networks (DNNs) are proven to be vulnerable to backdoor attacks. A backdoor could be embedded in the target DNNs through injecting a backdoor trigger into the training examples, which can cause the target DNNs to misclassify an input attached with the backdoor trigger. 
Recent backdoor detection methods often require access to the original poisoned training data, the parameters of the target DNNs, or the predictive confidence for each given input, which are impractical in many real-world applications, e.g., on-device deployed DNNs. We address the black-box hard-label backdoor detection problem where the DNN is fully black-box and only its final output label is accessible. We approach this problem from the optimization perspective and show that the objective of backdoor detection is bounded by an adversarial objective. Further theoretical and empirical studies reveal that this adversarial objective leads to a solution with a highly skewed distribution; a singularity is often observed in the adversarial map of a backdoor-infected example, which we call the adversarial singularity phenomenon. Based on this observation, we propose the adversarial extreme value analysis (AEVA) algorithm to detect backdoors in black-box neural networks. The AEVA algorithm is based on an extreme value analysis on the adversarial map, computed from Monte Carlo gradient estimation due to the black-box hard-label constraint. Evidenced by extensive experiments across three popular tasks and backdoor attacks, our approach is shown effective in detecting backdoor attacks under the black-box hard-label scenarios.", "pdf": "https://openreview.net/pdf/b8ad85b4ddd615a5abac4d7c1d5713fc92b9f0e9.pdf"} {"title": "Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum", "url": "https://openreview.net/forum?id=5ECQL05ub0J", "detail_url": "https://openreview.net/forum?id=5ECQL05ub0J", "authors": "Kirby Banman,Garnet Liam Peet-Pare,Nidhi Hegde,Alona Fyshe,Martha White", "tags": "ICLR 2022,Poster", "abstract": "Most convergence guarantees for stochastic gradient descent with momentum (SGDm) rely on iid sampling. Yet, SGDm is often used outside this regime, in settings with temporally correlated input samples such as continual learning and reinforcement learning. Existing work has shown that SGDm with a decaying step-size can converge under Markovian temporal correlation. In this work, we show that SGDm under covariate shift with a fixed step-size can be unstable and diverge. In particular, we show SGDm under covariate shift is a parametric oscillator, and so can suffer from a phenomenon known as resonance. We approximate the learning system as a time varying system of ordinary differential equations, and leverage existing theory to characterize the system's divergence/convergence as resonant/nonresonant modes. The theoretical result is limited to the linear setting with periodic covariate shift, so we empirically supplement this result to show that resonance phenomena persist even under non-periodic covariate shift, nonlinear dynamics with neural networks, and optimizers other than SGDm.", "pdf": "https://openreview.net/pdf/967691b8c1cb517500d87dfd7dbf7dd6293c0e89.pdf"} {"title": "Top-label calibration and multiclass-to-binary reductions", "url": "https://openreview.net/forum?id=WqoBaaPHS-", "detail_url": "https://openreview.net/forum?id=WqoBaaPHS-", "authors": "Chirag Gupta,Aaditya Ramdas", "tags": "ICLR 2022,Poster", "abstract": "We propose a new notion of multiclass calibration called top-label calibration. A classifier is said to be top-label calibrated if the reported probability for the predicted class label---the top-label---is calibrated, conditioned on the top-label. 
This conditioning is essential for practical utility of the calibration property, since the top-label is always reported and we must condition on what is reported. However, the popular notion of confidence calibration erroneously skips this conditioning. Furthermore, we outline a multiclass-to-binary (M2B) reduction framework that unifies confidence, top-label, and class-wise calibration, among others. As its name suggests, M2B works by reducing multiclass calibration to different binary calibration problems; various types of multiclass calibration can then be achieved using simple binary calibration routines. We instantiate the M2B framework with the well-studied histogram binning (HB) binary calibrator, and prove that the overall procedure is multiclass calibrated without making any assumptions on the underlying data distribution. In an empirical evaluation with four deep net architectures on CIFAR-10 and CIFAR-100, we find that the M2B + HB procedure achieves lower top-label and class-wise calibration error than other approaches such as temperature scaling. Code for this work is available at https://github.com/aigen/df-posthoc-calibration.", "pdf": "https://openreview.net/pdf/a580ad8d84d1a31adcccb9f9e2102c3b503121df.pdf"} {"title": "Anisotropic Random Feature Regression in High Dimensions", "url": "https://openreview.net/forum?id=JfaWawZ8BmX", "detail_url": "https://openreview.net/forum?id=JfaWawZ8BmX", "authors": "Gabriel Mel,Jeffrey Pennington", "tags": "ICLR 2022,Poster", "abstract": "In contrast to standard statistical wisdom, modern learning algorithms typically find their best performance in the overparameterized regime in which the model has many more parameters than needed to fit the training data. A growing number of recent works have shown that random feature models can offer a detailed theoretical explanation for this unexpected behavior, but typically these analyses have utilized isotropic distributional assumptions on the underlying data generation process, thereby failing to provide a realistic characterization of real-world models that are designed to identify and harness the structure in natural data. In this work, we examine the high-dimensional asymptotics of random feature regression in the presence of structured data, allowing for arbitrary input correlations and arbitrary alignment between the data and the weights of the target function. We define a partial order on the space of weight-data alignments and prove that generalization performance improves in response to stronger alignment. We also clarify several previous observations in the literature by distinguishing the behavior of the sample-wise and parameter-wise learning curves, finding that sample-wise multiple descent can occur at scales dictated by the eigenstructure of the data covariance, but that parameter-wise multiple descent is limited to double descent, although strong anisotropy can induce additional signatures such as wide plateaus and steep cliffs. Finally, these signatures are related to phase transitions in the spectrum of the feature kernel matrix, and unlike the double descent peak, persist even under optimal regularization.", "pdf": "https://openreview.net/pdf/bc2ddad146bd93609c8510aac28ae824072d1832.pdf"} {"title": "Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future", "url": "https://openreview.net/forum?id=L01Nn_VJ9i", "detail_url": "https://openreview.net/forum?id=L01Nn_VJ9i", "authors": "Harshavardhan Kamarthi,Alexander Rodr\u00edguez,B. 
Aditya Prakash", "tags": "ICLR 2022,Poster", "abstract": "For real-time forecasting in domains like public health and macroeconomics, data collection is a non-trivial and demanding task. Often, after being initially released, it undergoes several revisions later (maybe due to human or technical constraints); as a result, it may take weeks until the data reaches a stable value. This so-called \u2018backfill\u2019 phenomenon and its effect on model performance have been barely addressed in the prior literature. In this paper, we introduce the multi-variate backfill problem using COVID-19 as the motivating example. \nWe construct a detailed dataset composed of relevant signals over the past year of the pandemic. \nWe then systematically characterize several patterns in backfill dynamics and leverage our observations for formulating a novel problem and neural framework, Back2Future, that aims to refine a given model's predictions in real-time. Our extensive experiments demonstrate that our method refines the performance of the diverse set of top models for COVID-19 forecasting and GDP growth forecasting. Specifically, we show that Back2Future refined top COVID-19 models by 6.65% to 11.24% and yielded an 18% improvement over non-trivial baselines. In addition, we show that our model improves model evaluation too; hence policy-makers can better understand the true accuracy of forecasting models in real-time.", "pdf": "https://openreview.net/pdf/5ff5a41a0773c6764d009a86a74cce3dd35e8ec3.pdf"} {"title": "Approximation and Learning with Deep Convolutional Models: a Kernel Perspective", "url": "https://openreview.net/forum?id=lrocYB-0ST2", "detail_url": "https://openreview.net/forum?id=lrocYB-0ST2", "authors": "Alberto Bietti", "tags": "ICLR 2022,Poster", "abstract": "The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this through the lens of kernel methods, by considering simple hierarchical kernels with two or three convolution and pooling layers, inspired by convolutional kernel networks. These achieve good empirical performance on standard vision datasets, while providing a precise description of their functional space that yields new insights on their inductive bias. We show that the RKHS consists of additive models of interaction terms between patches, and that its norm encourages spatial similarities between these terms through pooling layers. We then provide generalization bounds which illustrate how pooling and patches yield improved sample complexity guarantees when the target function presents such regularities.", "pdf": "https://openreview.net/pdf/35eeb8c9531f39eb14e07db8fb296d38b7f1a369.pdf"} {"title": "Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning", "url": "https://openreview.net/forum?id=vgqS1vkkCbE", "detail_url": "https://openreview.net/forum?id=vgqS1vkkCbE", "authors": "Dhruv Shah,Peng Xu,Yao Lu,Ted Xiao,Alexander T Toshev,Sergey Levine,brian ichter", "tags": "ICLR 2022,Poster", "abstract": "Reinforcement learning can train policies that effectively perform complex tasks. However, for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and chaining lower-level skills. Hierarchical reinforcement learning aims to enable this by providing a bank of low-level skills as action abstractions. 
Hierarchies can further improve on this by abstracting the space states as well. We posit that a suitable state abstraction should depend on the capabilities of the available lower-level policies. We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill. These value functions capture the affordances of the scene, thus forming a representation that compactly abstracts task relevant information and robustly ignores distractors. Empirical evaluations for maze-solving and robotic manipulation tasks demonstrate that our approach improves long-horizon performance and enables better zero-shot generalization than alternative model-free and model-based methods.", "pdf": "https://openreview.net/pdf/c49d03d6fc757e37898cc5399159de2e30589146.pdf"} {"title": "Fast Regression for Structured Inputs", "url": "https://openreview.net/forum?id=gNp54NxHUPJ", "detail_url": "https://openreview.net/forum?id=gNp54NxHUPJ", "authors": "Raphael A Meyer,Cameron N Musco,Christopher P Musco,David Woodruff,Samson Zhou", "tags": "ICLR 2022,Poster", "abstract": "We study the $\\ell_p$ regression problem, which requires finding $\\mathbf{x}\\in\\mathbb R^{d}$ that minimizes $\\|\\mathbf{A}\\mathbf{x}-\\mathbf{b}\\|_p$ for a matrix $\\mathbf{A}\\in\\mathbb R^{n \\times d}$ and response vector $\\mathbf{b}\\in\\mathbb R^{n}$. There has been recent interest in developing subsampling methods for this problem that can outperform standard techniques when $n$ is very large. However, all known subsampling approaches have run time that depends exponentially on $p$, typically, $d^{\\mathcal{O}(p)}$, which can be prohibitively expensive. \n\nWe improve on this work by showing that for a large class of common \\emph{structured matrices}, such as combinations of low-rank matrices, sparse matrices, and Vandermonde matrices, there are subsampling based methods for $\\ell_p$ regression that depend polynomially on $p$. For example, we give an algorithm for $\\ell_p$ regression on Vandermonde matrices that runs in time $\\mathcal{O}(n\\log^3 n+(dp^2)^{0.5+\\omega}\\cdot\\text{polylog}\\,n)$, where $\\omega$ is the exponent of matrix multiplication. The polynomial dependence on $p$ crucially allows our algorithms to extend naturally to efficient algorithms for $\\ell_\\infty$ regression, via approximation of $\\ell_\\infty$ by $\\ell_{\\mathcal{O}(\\log n)}$. Of practical interest, we also develop a new subsampling algorithm for $\\ell_p$ regression for arbitrary matrices, which is simpler than previous approaches for $p \\ge 4$.", "pdf": "https://openreview.net/pdf/a76864e8c343a5dcb3414cc8caa6fc2fdd2afc19.pdf"} {"title": "CrossBeam: Learning to Search in Bottom-Up Program Synthesis", "url": "https://openreview.net/forum?id=qhC8mr2LEKq", "detail_url": "https://openreview.net/forum?id=qhC8mr2LEKq", "authors": "Kensen Shi,Hanjun Dai,Kevin Ellis,Charles Sutton", "tags": "ICLR 2022,Poster", "abstract": "Many approaches to program synthesis perform a search within an enormous space of programs to find one that satisfies a given specification. Prior works have used neural models to guide combinatorial search algorithms, but such approaches still explore a huge portion of the search space and quickly become intractable as the size of the desired program increases. To tame the search space blowup, we propose training a neural model to learn a hands-on search policy for bottom-up synthesis, instead of relying on a combinatorial search algorithm. 
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs, taking into account the search history and partial program executions. Motivated by work in structured prediction on learning to search, CrossBeam is trained on-policy using data extracted from its own bottom-up searches on training tasks. We evaluate CrossBeam in two very different domains, string manipulation and logic programming. We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.\n", "pdf": "https://openreview.net/pdf/d098dde7689c9940303ddd8c11f5f44e8b866692.pdf"} {"title": "PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning", "url": "https://openreview.net/forum?id=M6M8BEmd6dq", "detail_url": "https://openreview.net/forum?id=M6M8BEmd6dq", "authors": "Seng Pei Liew,Tsubasa Takahashi,Michihiko Ueno", "tags": "ICLR 2022,Poster", "abstract": "We propose a new framework of synthesizing data using deep generative models in a differentially private manner.\nWithin our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion, such that training deep generative models is possible without re-using the original data.\nHence, no extra privacy costs or model constraints are incurred, in contrast to popular gradient sanitization approaches, which, among other issues, cause degradation in privacy guarantees as the training iteration increases.\nWe demonstrate a realization of our framework by making use of the characteristic function and an adversarial re-weighting objective, which are of independent interest as well.\nOur proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy.", "pdf": "https://openreview.net/pdf/3efedef6ce8396ae22861cd7154606c25bd31e95.pdf"} {"title": "Divisive Feature Normalization Improves Image Recognition Performance in AlexNet", "url": "https://openreview.net/forum?id=aOX3a9q3RVV", "detail_url": "https://openreview.net/forum?id=aOX3a9q3RVV", "authors": "Michelle Miller,SueYeon Chung,Kenneth D. Miller", "tags": "ICLR 2022,Poster", "abstract": "Local divisive normalization provides a phenomenological description of many nonlinear response properties of neurons across visual cortical areas. To gain insight into the utility of this operation, we studied the effects on AlexNet of a local divisive normalization between features, with learned parameters. Developing features were arranged in a line topology, with the influence between features determined by an exponential function of the distance between them. We compared an AlexNet model with no normalization or with canonical normalizations (Batch, Group, Layer) to the same models with divisive normalization added. Divisive normalization always improved performance for models with batch or group or no normalization, generally by 1-2 percentage points, on both the CIFAR-100 and ImageNet databases. To gain insight into mechanisms underlying the improved performance, we examined several aspects of network representations. In the early layers both canonical and divisive normalizations reduced manifold capacities and increased average dimension of the individual categorical manifolds. In later layers the capacity was higher and manifold dimension lower for models roughly in order of their performance improvement. 
Examining the sparsity of activations across a given layer, divisive normalization layers increased sparsity, while the canonical normalization layers decreased it. Nonetheless, in the final layer, the sparseness of activity increased in the order of no normalization, divisive, combined, and canonical. We also investigated how the receptive fields (RFs) in the first convolutional layer (where RFs are most interpretable) change with normalization. Divisive normalization enhanced RF Fourier power at low wavelengths, while divisive+canonical enhanced power at mid (batch, group) or low (layer) wavelengths, compared to canonical alone or no normalization. In conclusion, divisive normalization enhances image recognition performance, most strongly when combined with canonical normalization, and in doing so it reduces manifold capacity and sparsity in early layers while increasing them in final layers, and increases low- or mid-wavelength power in the first-layer receptive fields.", "pdf": "https://openreview.net/pdf/452011d69839dd4fa39ba4bec882b24cb5bb2649.pdf"} {"title": "Evaluating Distributional Distortion in Neural Language Modeling", "url": "https://openreview.net/forum?id=bTteFbU99ye", "detail_url": "https://openreview.net/forum?id=bTteFbU99ye", "authors": "Benjamin LeBrun,Alessandro Sordoni,Timothy J. O'Donnell", "tags": "ICLR 2022,Poster", "abstract": "A fundamental characteristic of natural language is the high rate at which speakers produce novel expressions. Because of this novelty, a heavy-tail of rare events accounts for a significant amount of the total probability mass of distributions in language (Baayen, 2001). Standard language modeling metrics such as perplexity quantify the performance of language models (LM) in aggregate. As a result, we have relatively little understanding of whether neural LMs accurately estimate the probability of sequences in this heavy-tail of rare events. To address this gap, we develop a controlled evaluation scheme which uses generative models trained on natural data as artificial languages from which we can exactly compute sequence probabilities. Training LMs on generations from these artificial languages, we compare the sequence-level probability estimates given by LMs to the true probabilities in the target language. Our experiments reveal that LSTM and Transformer language models (i) systematically underestimate the probability of sequences drawn from the target language, and (ii) do so more severely for less-probable sequences. Investigating where this probability mass went, (iii) we find that LMs tend to overestimate the probability of ill formed (perturbed) sequences. In addition, we find that this underestimation behaviour (iv) is weakened, but not eliminated by greater amounts of training data, and (v) is exacerbated for target distributions with lower entropy.", "pdf": "https://openreview.net/pdf/c22ea9d1df97b96c390eb350b4c09eb8e2388128.pdf"} {"title": "MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining", "url": "https://openreview.net/forum?id=r5qumLiYwf9", "detail_url": "https://openreview.net/forum?id=r5qumLiYwf9", "authors": "Ahmed Imtiaz Humayun,Randall Balestriero,Richard Baraniuk", "tags": "ICLR 2022,Poster", "abstract": "Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold, and data distribution on that manifold. 
However, training samples are often obtained based on preferences, costs, or convenience, producing artifacts in the empirical data distribution (e.g., the large fraction of smiling faces in the CelebA dataset or the large fraction of dark-haired individuals in FFHQ). {\em These inconsistencies will be reproduced when sampling from the trained DGN, which has far-reaching potential implications for fairness, data augmentation, anomaly detection, domain adaptation, and beyond.} In response, we develop a differential geometry based sampler, coined MaGNET, that, given any trained DGN, produces samples that are uniformly distributed on the learned manifold. We prove theoretically and empirically that our technique produces a uniform distribution on the manifold regardless of the training set distribution. We perform a range of experiments on various datasets and DGNs. One of them considers the state-of-the-art StyleGAN2 trained on the FFHQ dataset, where uniform sampling via MaGNET increases distribution precision \& recall by 4.12\% \& 3.01\% and decreases gender bias by 41.2\%, without requiring labels or retraining.", "pdf": "https://openreview.net/pdf/e9c0ccdf7ecc11a5666ac100d75f89816ce7c0f7.pdf"} {"title": "Neural Contextual Bandits with Deep Representation and Shallow Exploration", "url": "https://openreview.net/forum?id=xnYACQquaGV", "detail_url": "https://openreview.net/forum?id=xnYACQquaGV", "authors": "Pan Xu,Zheng Wen,Handong Zhao,Quanquan Gu", "tags": "ICLR 2022,Poster", "abstract": "We study neural contextual bandits, a general class of contextual bandits, where each context-action pair is associated with a raw feature vector, but the specific reward generating function is unknown. We propose a novel learning algorithm that transforms the raw feature vector using the last hidden layer of a deep ReLU neural network (deep representation learning), and uses an upper confidence bound (UCB) approach to explore in the last linear layer (shallow exploration). We prove that under standard assumptions, our proposed algorithm achieves $\tilde{O}(\sqrt{T})$ finite-time regret, where $T$ is the learning time horizon. Compared with existing neural contextual bandit algorithms, our approach is computationally much more efficient since it only needs to explore in the last layer of the deep neural network.", "pdf": "https://openreview.net/pdf/c6ee94e7fd22670895280aaf06535b6373d428eb.pdf"} {"title": "PI3NN: Out-of-distribution-aware Prediction Intervals from Three Neural Networks", "url": "https://openreview.net/forum?id=NoB8YgRuoFU", "detail_url": "https://openreview.net/forum?id=NoB8YgRuoFU", "authors": "Siyan Liu,Pei Zhang,Dan Lu,Guannan Zhang", "tags": "ICLR 2022,Poster", "abstract": "We propose a novel prediction interval (PI) method for uncertainty quantification, which addresses three major issues with the state-of-the-art PI methods. First, existing PI methods require retraining of neural networks (NNs) for every given confidence level and suffer from the crossing issue in calculating multiple PIs. Second, they usually rely on customized loss functions with extra sensitive hyperparameters for which fine tuning is required to achieve a well-calibrated PI. Third, they usually underestimate uncertainties of out-of-distribution (OOD) samples, leading to over-confident PIs. Our PI3NN method calculates PIs from linear combinations of three NNs, each of which is independently trained using the standard mean squared error loss. 
The coefficients of the linear combinations are computed using root-finding algorithms to ensure tight PIs for a given confidence level. We theoretically prove that PI3NN can calculate PIs for a series of confidence levels without retraining NNs and it completely avoids the crossing issue. Additionally, PI3NN does not introduce any unusual hyperparameters resulting in a stable performance. Furthermore, we address OOD identification challenge by introducing an initialization scheme which provides reasonably larger PIs of the OOD samples than those of the in-distribution samples. Benchmark and real-world experiments show that our method outperforms several state-of-the-art approaches with respect to predictive uncertainty quality, robustness, and OOD samples identification.", "pdf": "https://openreview.net/pdf/84a3741f26e65df3c7b232779bcfb5dac283d41e.pdf"} {"title": "Discriminative Similarity for Data Clustering", "url": "https://openreview.net/forum?id=kj0_45Y4r9i", "detail_url": "https://openreview.net/forum?id=kj0_45Y4r9i", "authors": "Yingzhen Yang,Ping Li", "tags": "ICLR 2022,Poster", "abstract": "Similarity-based clustering methods separate data into clusters according to the pairwise similarity between the data, and the pairwise similarity is crucial for their performance. In this paper, we propose {\\em Clustering by Discriminative Similarity (CDS)}, a novel method which learns discriminative similarity for data clustering. CDS learns an unsupervised similarity-based classifier from each data partition, and searches for the optimal partition of the data by minimizing the generalization error of the learnt classifiers associated with the data partitions. By generalization analysis via Rademacher complexity, the generalization error bound for the unsupervised similarity-based classifier is expressed as the sum of discriminative similarity between the data from different classes. It is proved that the derived discriminative similarity can also be induced by the integrated squared error bound for kernel density classification. In order to evaluate the performance of the proposed discriminative similarity, we propose a new clustering method using a kernel as the similarity function, CDS via unsupervised kernel classification (CDSK), with its effectiveness demonstrated by experimental results.", "pdf": "https://openreview.net/pdf/b159fb24355dd1bf64f74a757973bbc8cc96d57e.pdf"} {"title": "It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation", "url": "https://openreview.net/forum?id=q4tZR1Y-UIs", "detail_url": "https://openreview.net/forum?id=q4tZR1Y-UIs", "authors": "Yuqing Du,Pieter Abbeel,Aditya Grover", "tags": "ICLR 2022,Poster", "abstract": "We are interested in training general-purpose reinforcement learning agents that can solve a wide variety of goals. Training such agents efficiently requires automatic generation of a goal curriculum. This is challenging as it requires (a) exploring goals of increasing difficulty, while ensuring that the agent (b) is exposed to a diverse set of goals in a sample efficient manner and (c) does not catastrophically forget previously solved goals. We propose Curriculum Self Play (CuSP), an automated goal generation framework that seeks to satisfy these desiderata by virtue of a multi-player game with 4 agents. 
We extend the asymmetric curricula learning in PAIRED (Dennis et al., 2020) to a symmetrized game that carefully balances cooperation and competition between two off-policy student learners and two regret-maximizing teachers. CuSP additionally introduces entropic goal coverage and accounts for the non-stationary nature of the students, allowing us to automatically induce a curriculum that balances progressive exploration with anti-catastrophic exploitation. We demonstrate that our method succeeds at generating an effective curriculum of goals for a range of control tasks, outperforming other methods at zero-shot test-time generalization to novel out-of-distribution goals.", "pdf": "https://openreview.net/pdf/68a6237e79699c723ce9c9c39537422391df3e2b.pdf"} {"title": "CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing", "url": "https://openreview.net/forum?id=HOjLHrlZhmx", "detail_url": "https://openreview.net/forum?id=HOjLHrlZhmx", "authors": "Fan Wu,Linyi Li,Zijian Huang,Yevgeniy Vorobeychik,Ding Zhao,Bo Li", "tags": "ICLR 2022,Poster", "abstract": "As reinforcement learning (RL) has achieved great success and even been adopted in safety-critical domains such as autonomous vehicles, a range of empirical studies have been conducted to improve its robustness against adversarial attacks. However, how to certify its robustness with theoretical guarantees still remains challenging. In this paper, we present the first unified framework CROP (Certifying Robust Policies for RL) to provide robustness certification on both action and reward levels. In particular, we propose two robustness certification criteria: robustness of per-state actions and lower bound of cumulative rewards. We then develop a local smoothing algorithm for policies derived from Q-functions to guarantee the robustness of actions taken along the trajectory; we also develop a global smoothing algorithm for certifying the lower bound of a finite-horizon cumulative reward, as well as a novel local smoothing algorithm to perform adaptive search in order to obtain tighter reward certification. Empirically, we apply CROP to evaluate several existing empirically robust RL algorithms, including adversarial training and different robust regularization, in four environments (two representative Atari games, Highway, and CartPole). Furthermore, by evaluating these algorithms against adversarial attacks, we demonstrate that our certifications are often tight. All experiment results are available at website https://crop-leaderboard.github.io.", "pdf": "https://openreview.net/pdf/b79f87ced196c2a5a13ca10bae3d39a8924b08b8.pdf"} {"title": "Neural Link Prediction with Walk Pooling", "url": "https://openreview.net/forum?id=CCu6RcUMwK0", "detail_url": "https://openreview.net/forum?id=CCu6RcUMwK0", "authors": "Liming Pan,Cheng Shi,Ivan Dokmani\u0107", "tags": "ICLR 2022,Poster", "abstract": "Graph neural networks achieve high accuracy in link prediction by jointly leveraging graph topology and node attributes. Topology, however, is represented indirectly; state-of-the-art methods based on subgraph classification label nodes with distance to the target link, so that, although topological information is present, it is tempered by pooling. This makes it challenging to leverage features like loops and motifs associated with network formation mechanisms. We propose a link prediction algorithm based on a new pooling scheme called WalkPool. 
WalkPool combines the expressivity of topological heuristics with the feature-learning ability of neural networks. It summarizes a putative link by random walk probabilities of adjacent paths. Instead of extracting transition probabilities from the original graph, it computes the transition matrix of a ``predictive'' latent graph by applying attention to learned features; this may be interpreted as feature-sensitive topology fingerprinting. WalkPool can leverage unsupervised node features or be combined with GNNs and trained end-to-end. It outperforms state-of-the-art methods on all common link prediction benchmarks, both homophilic and heterophilic, with and without node attributes. Applying WalkPool to a set of unsupervised GNNs significantly improves prediction accuracy, suggesting that it may be used as a general-purpose graph pooling scheme. ", "pdf": "https://openreview.net/pdf/ad031c5e836c55357e2f13cdb18fa502a7eecc80.pdf"} {"title": "On the Convergence of Certified Robust Training with Interval Bound Propagation", "url": "https://openreview.net/forum?id=YeShU5mLfLt", "detail_url": "https://openreview.net/forum?id=YeShU5mLfLt", "authors": "Yihan Wang,Zhouxing Shi,Quanquan Gu,Cho-Jui Hsieh", "tags": "ICLR 2022,Poster", "abstract": "Interval Bound Propagation (IBP) is so far the base of state-of-the-art methods for training neural networks with certifiable robustness guarantees when potential adversarial perturbations present, while the convergence of IBP training remains unknown in existing literature. In this paper, we present a theoretical analysis on the convergence of IBP training. With an overparameterized assumption, we analyze the convergence of IBP robust training. We show that when using IBP training to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent can linearly converge to zero robust training error with a high probability if we have sufficiently small perturbation radius and large network width.", "pdf": "https://openreview.net/pdf/4e7f7f34a6f11b062e283b3a04324bb373e39067.pdf"} {"title": "Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators", "url": "https://openreview.net/forum?id=sX3XaHwotOg", "detail_url": "https://openreview.net/forum?id=sX3XaHwotOg", "authors": "Yu Meng,Chenyan Xiong,Payal Bajaj,saurabh tiwary,Paul N. Bennett,Jiawei Han,Xia Song", "tags": "ICLR 2022,Poster", "abstract": "We present a new framework AMOS that pretrains text encoders with an Adversarial learning curriculum via a Mixture Of Signals from multiple auxiliary generators. Following ELECTRA-style pretraining, the main encoder is trained as a discriminator to detect replaced tokens generated by auxiliary masked language models (MLMs). Different from ELECTRA which trains one MLM as the generator, we jointly train multiple MLMs of different sizes to provide training signals at various levels of difficulty. To push the discriminator to learn better with challenging replaced tokens, we learn mixture weights over the auxiliary MLMs' outputs to maximize the discriminator loss by backpropagating the gradient from the discriminator via Gumbel-Softmax. For better pretraining efficiency, we propose a way to assemble multiple MLMs into one unified auxiliary model. 
AMOS outperforms ELECTRA and recent state-of-the-art pretrained models by about 1 point on the GLUE benchmark for BERT base-sized models.", "pdf": "https://openreview.net/pdf/4127a755f1e5ee998e6423f7a8d734f9e88b8cab.pdf"} {"title": "Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations", "url": "https://openreview.net/forum?id=0jP2n0YFmKG", "detail_url": "https://openreview.net/forum?id=0jP2n0YFmKG", "authors": "Anuroop Sriram,Abhishek Das,Brandon M Wood,Siddharth Goyal,C. Lawrence Zitnick", "tags": "ICLR 2022,Poster", "abstract": "Recent progress in Graph Neural Networks (GNNs) for modeling atomic simulations has the potential to revolutionize catalyst discovery, which is a key step in making progress towards the energy breakthroughs needed to combat climate change. However, the GNNs that have proven most effective for this task are memory intensive as they model higher-order interactions in the graphs such as those between triplets or quadruplets of atoms, making it challenging to scale these models. In this paper, we introduce Graph Parallelism, a method to distribute input graphs across multiple GPUs, enabling us to train very large GNNs with hundreds of millions or billions of parameters. We empirically evaluate our method by scaling up the recently proposed DimeNet++ and GemNet models by over an order of magnitude in the number of parameters. On the large-scale Open Catalyst 2020 (OC20) dataset, these graph-parallelized models lead to relative improvements of 1) 15% on the force MAE metric on the S2EF task and 2) 21% on the AFbT metric on the IS2RS task, establishing new state-of-the-art results.", "pdf": "https://openreview.net/pdf/d00345679f2290baeabb225428516fad14fea79e.pdf"} {"title": "Understanding and Leveraging Overparameterization in Recursive Value Estimation", "url": "https://openreview.net/forum?id=shbAgEsk3qM", "detail_url": "https://openreview.net/forum?id=shbAgEsk3qM", "authors": "Chenjun Xiao,Bo Dai,Jincheng Mei,Oscar A Ramirez,Ramki Gummadi,Chris Harris,Dale Schuurmans", "tags": "ICLR 2022,Poster", "abstract": "The theory of function approximation in reinforcement learning (RL) typically considers low capacity representations that incur a tradeoff between approximation error, stability and generalization. Current deep architectures, however, operate in an overparameterized regime where approximation error is not necessarily a bottleneck. To better understand the utility of deep models in RL we present an analysis of recursive value estimation using \\emph{overparameterized} linear representations that provides useful, transferable findings. First, we show that classical updates such as temporal difference (TD) learning or fitted-value-iteration (FVI) converge to \\emph{different} fixed points than residual minimization (RM) in the overparameterized linear case. We then develop a unified interpretation of overparameterized linear value estimation as minimizing the Euclidean norm of the weights subject to alternative constraints. A practical consequence is that RM can be modified by a simple alteration of the backup targets to obtain the same fixed points as FVI and TD (when they converge), while universally ensuring stability. Further, we provide an analysis of the generalization error of these methods, demonstrating per iterate bounds on the value prediction error of FVI, and fixed point bounds for TD and RM. \nGiven this understanding, we then develop new algorithmic tools for improving recursive value estimation with deep models. 
\nIn particular, we extract two regularizers that penalize out-of-span top-layer weights and co-linearity in top-layer features respectively. Empirically we find that these regularizers dramatically improve the stability of TD and FVI, while allowing RM to match and even sometimes surpass their generalization performance with assured stability. ", "pdf": "https://openreview.net/pdf/c5131ad5930c1a9f32ede673f284175158a75792.pdf"} {"title": "Optimization and Adaptive Generalization of Three layer Neural Networks", "url": "https://openreview.net/forum?id=dPyRNUlttBv", "detail_url": "https://openreview.net/forum?id=dPyRNUlttBv", "authors": "Khashayar Gatmiry,Stefanie Jegelka,Jonathan Kelner", "tags": "ICLR 2022,Poster", "abstract": "While there has been substantial recent work studying generalization of neural networks, \nthe ability of deep nets in automating the process of feature extraction still evades a thorough mathematical understanding. \nAs a step toward this goal, we analyze learning and generalization of a three-layer neural network with ReLU activations in a regime that goes beyond the linear approximation of the network, and is hence not captured by the common Neural Tangent Kernel. We show that despite nonconvexity of the empirical loss, a variant of SGD converges in polynomially many iterations to a good solution that generalizes. In particular, our generalization bounds are adaptive: they automatically optimize over a family of kernels that includes the Neural Tangent Kernel, to provide the tightest bound. ", "pdf": "https://openreview.net/pdf/086ce10c9607a92d59635b0ac0f1f0bd8c86ae5b.pdf"} {"title": "Non-Parallel Text Style Transfer with Self-Parallel Supervision", "url": "https://openreview.net/forum?id=-TSe5o7STVR", "detail_url": "https://openreview.net/forum?id=-TSe5o7STVR", "authors": "Ruibo Liu,Chongyang Gao,Chenyan Jia,Guangxuan Xu,Soroush Vosoughi", "tags": "ICLR 2022,Poster", "abstract": "The performance of existing text style transfer models is severely limited by the non-parallel datasets on which the models are trained. In non-parallel datasets, no direct mapping exists between sentences of the source and target style; the style transfer models thus only receive weak supervision of the target sentences during training, which often leads the model to discard too much style-independent information, or utterly fail to transfer the style.\n\nIn this work, we propose LaMer, a novel text style transfer framework based on large-scale language models. LaMer first mines the roughly parallel expressions in the non-parallel datasets with scene graphs, and then employs MLE training, followed by imitation learning refinement, to leverage the intrinsic parallelism within the data. On two benchmark tasks (sentiment & formality transfer) and a newly proposed challenging task (political stance transfer), our model achieves qualitative advances in transfer accuracy, content preservation, and fluency. 
Further empirical and human evaluations demonstrate that our model not only makes training more efficient, but also generates more readable and diverse expressions than previous models.", "pdf": "https://openreview.net/pdf/7858e341aa92c11991455a43e9a78c35ee4655a2.pdf"} {"title": "Can an Image Classifier Suffice For Action Recognition?", "url": "https://openreview.net/forum?id=qhkFX-HLuHV", "detail_url": "https://openreview.net/forum?id=qhkFX-HLuHV", "authors": "Quanfu Fan,Chun-Fu Chen,Rameswar Panda", "tags": "ICLR 2022,Poster", "abstract": "We explore a new perspective on video understanding by casting the video recognition problem as an image recognition task. Our approach rearranges input video frames into super images, which allow for training an image classifier directly to fulfill the task of action recognition, in exactly the same way as image classification. With such a simple idea, we show that transformer-based image classifiers alone can suffice for action recognition. In particular, our approach demonstrates strong and promising performance against SOTA methods on several public datasets including Kinetics400, Moments In Time, Something-Something V2 (SSV2), Jester and Diving48. We also experiment with the prevalent ResNet image classifiers in computer vision to further validate our idea. The results on both Kinetics400 and SSV2 are comparable to some of the best-performed CNN approaches based on spatio-temporal modeling. Our source codes and models are available at \\url{https://github.com/IBM/sifar-pytorch}.", "pdf": "https://openreview.net/pdf/30716aa30d9fbd5e0f9a95e4c0e1255607ab8bc4.pdf"} {"title": "Interacting Contour Stochastic Gradient Langevin Dynamics", "url": "https://openreview.net/forum?id=IK9ap6nxXr2", "detail_url": "https://openreview.net/forum?id=IK9ap6nxXr2", "authors": "Wei Deng,Siqi Liang,Botao Hao,Guang Lin,Faming Liang", "tags": "ICLR 2022,Poster", "abstract": "We propose an interacting contour stochastic gradient Langevin dynamics (ICSGLD) sampler, an embarrassingly parallel multiple-chain contour stochastic gradient Langevin dynamics (CSGLD) sampler with efficient interactions. We show that ICSGLD can be theoretically more efficient than a single-chain CSGLD with an equivalent computational budget. We also present a novel random-field function, which facilitates the estimation of self-adapting parameters in big data and obtains free mode explorations. Empirically, we compare the proposed algorithm with popular benchmark methods for posterior sampling. The numerical results show a great potential of ICSGLD for large-scale uncertainty estimation tasks.", "pdf": "https://openreview.net/pdf/bf454b672f7afe0c72e3a83029c7238309a1b4a0.pdf"} {"title": "NeuPL: Neural Population Learning", "url": "https://openreview.net/forum?id=MIX3fJkl_1", "detail_url": "https://openreview.net/forum?id=MIX3fJkl_1", "authors": "Siqi Liu,Luke Marris,Daniel Hennes,Josh Merel,Nicolas Heess,Thore Graepel", "tags": "ICLR 2022,Poster", "abstract": "Learning in strategy games (e.g. StarCraft, poker) requires the discovery of diverse policies. This is often achieved by iteratively training new policies against existing ones, growing a policy population that is robust to exploit. 
This iterative approach suffers from two issues in real-world games: a) under finite budget, approximate best-response operators at each iteration needs truncating, resulting in under-trained good-responses populating the population; b) repeated learning of basic skills at each iteration is wasteful and becomes intractable in the presence of increasingly strong opponents. In this work, we propose Neural Population Learning (NeuPL) as a solution to both issues. NeuPL offers convergence guarantees to a population of best-responses under mild assumptions. By representing a population of policies within a single conditional model, NeuPL enables transfer learning across policies. Empirically, we show the generality, improved performance and efficiency of NeuPL across several test domains. Most interestingly, we show that novel strategies become more accessible, not less, as the neural population expands.", "pdf": "https://openreview.net/pdf/eeeb391c4885267d9c80ba3a8ea3dfd9e9ea8832.pdf"} {"title": "DeSKO: Stability-Assured Robust Control with a Deep Stochastic Koopman Operator", "url": "https://openreview.net/forum?id=hniLRD_XCA", "detail_url": "https://openreview.net/forum?id=hniLRD_XCA", "authors": "Minghao Han,Jacob Euler-Rolle,Robert K. Katzschmann", "tags": "ICLR 2022,Poster", "abstract": "The Koopman operator theory linearly describes nonlinear dynamical systems in a high-dimensional functional space and it allows to apply linear control methods to highly nonlinear systems. However, the Koopman operator does not account for any uncertainty in dynamical systems, causing it to perform poorly in real-world applications.\nTherefore, we propose a deep stochastic Koopman operator (DeSKO) model in a robust learning control framework to guarantee stability of nonlinear stochastic systems. The DeSKO model captures a dynamical system's uncertainty by inferring a distribution of observables. We use the inferred distribution to design a robust, stabilizing closed-loop controller for a dynamical system. Modeling and control experiments on several advanced control benchmarks show that our framework is more robust and scalable than state-of-the-art deep Koopman operators and reinforcement learning methods. Tested control benchmarks include a soft robotic arm, a legged robot, and a biological gene regulatory network. We also demonstrate that this robust control method resists previously unseen uncertainties, such as external disturbances, with a magnitude of up to five times the maximum control input. Our approach opens up new possibilities in learning control for high-dimensional nonlinear systems while robustly managing internal or external uncertainty.", "pdf": "https://openreview.net/pdf/862602026e43c103de39be4295ff8f7288f3acf2.pdf"} {"title": "Neural Network Approximation based on Hausdorff distance of Tropical Zonotopes", "url": "https://openreview.net/forum?id=oiZJwC_fyS", "detail_url": "https://openreview.net/forum?id=oiZJwC_fyS", "authors": "Panagiotis Misiakos,Georgios Smyrnis,George Retsinas,Petros Maragos", "tags": "ICLR 2022,Poster", "abstract": "In this work we theoretically contribute to neural network approximation by providing a novel tropical geometrical viewpoint to structured neural network compression. In particular, we show that the approximation error between two neural networks with ReLU activations and one hidden layer depends on the Hausdorff distance of the tropical zonotopes of the networks. 
This theorem comes as a first step towards a purely geometrical interpretation of neural network approximation. Based on this theoretical contribution, we propose geometrical methods that employ the K-means algorithm to compress the fully connected parts of ReLU activated deep neural networks. We analyze the error bounds of our algorithms theoretically based on our approximation theorem and evaluate them empirically on neural network compression. Our experiments follow a proof-of-concept strategy and indicate that our geometrical tools achieve improved performance over relevant tropical geometry techniques and can be competitive against non-tropical methods. ", "pdf": "https://openreview.net/pdf/e09efd74b974abec052126ca4cbb787b04fd3265.pdf"} {"title": "Learning Towards The Largest Margins", "url": "https://openreview.net/forum?id=hqkhcFHOeKD", "detail_url": "https://openreview.net/forum?id=hqkhcFHOeKD", "authors": "Xiong Zhou,Xianming Liu,Deming Zhai,Junjun Jiang,Xin Gao,Xiangyang Ji", "tags": "ICLR 2022,Poster", "abstract": "One of the main challenges for feature representation in deep learning-based classification is the design of appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins in well-established losses in order to enforce extra intra-class compactness and inter-class separability, which, however, were developed through heuristic means, as opposed to rigorous mathematical principles. In this work, we attempt to address this limitation by formulating the principled optimization objective as learning towards the largest margins. Specifically, we firstly propose to employ the class margin as the measure of inter-class separability, and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions for the existing margin-based losses. Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it also provides new insights that can guide the design of new tools, including \\textit{sample margin regularization} and \\textit{largest margin softmax loss} for class balanced cases, and \\textit{zero centroid regularization} for class imbalanced cases. Experimental results demonstrate the effectiveness of our strategy for multiple tasks including visual classification, imbalanced classification, person re-identification, and face verification.", "pdf": "https://openreview.net/pdf/05f12453b1762c08d54507567f592f91d86425be.pdf"} {"title": "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?", "url": "https://openreview.net/forum?id=28ib9tf6zhr", "detail_url": "https://openreview.net/forum?id=28ib9tf6zhr", "authors": "Yonggan Fu,Shunyao Zhang,Shang Wu,Cheng Wan,Yingyan Lin", "tags": "ICLR 2022,Poster", "abstract": "Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasing attention. 
In particular, recent works show that ViTs are more robust against adversarial attacks as compared with convolutional neural networks (CNNs), and conjecture that this is because ViTs focus more on capturing global interactions among different input/feature patches, leading to their improved robustness to local perturbations imposed by adversarial attacks. In this work, we ask an intriguing question: \"Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?\" Driven by this question, we first conduct a comprehensive experiment regarding the robustness of both ViTs and CNNs under various existing adversarial attacks to understand the underlying reason favoring their robustness. Based on the drawn insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that fools the self-attention mechanism by attacking its basic component (i.e., a single patch) with a series of attention-aware optimization techniques. Interestingly, our Patch-Fool framework shows for the first time that ViTs are not necessarily more robust than CNNs against adversarial perturbations. In particular, we find that ViTs are more vulnerable learners compared with CNNs against our Patch-Fool attack which is consistent across extensive experiments, and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool, indicate an intriguing insight that the perturbation density and strength on each patch seem to be the key factors that influence the robustness ranking between ViTs and CNNs. It can be expected that our Patch-Fool framework will shed light on both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment. Our codes are available at https://github.com/RICE-EIC/Patch-Fool.", "pdf": "https://openreview.net/pdf/4c7b8d2f80c4ea1bfe11754da2e7c69fc5183754.pdf"} {"title": "AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation", "url": "https://openreview.net/forum?id=Q5uh1Nvv5dm", "detail_url": "https://openreview.net/forum?id=Q5uh1Nvv5dm", "authors": "David Berthelot,Rebecca Roelofs,Kihyuk Sohn,Nicholas Carlini,Alexey Kurakin", "tags": "ICLR 2022,Poster", "abstract": "We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a unified solution for unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA). In an extensive experimental study, we compare its behavior with respective state-of-the-art techniques from SSL, SSDA, and UDA and find that AdaMatch either matches or significantly exceeds the state-of-the-art in each case using the same hyper-parameters regardless of the dataset or task. For example, AdaMatch nearly doubles the accuracy compared to that of the prior state-of-the-art on the UDA task for DomainNet and even exceeds the accuracy of the prior state-of-the-art obtained with pre-training by 6.4% when AdaMatch is trained completely from scratch. 
Furthermore, by providing AdaMatch with just one labeled example per class from the target domain (i.e., the SSDA setting), we increase the target accuracy by an additional 6.1%, and with 5 labeled examples, by 13.6%.", "pdf": "https://openreview.net/pdf/8dd30c7eff2e4f152d2d24368c232baec4e5e974.pdf"} {"title": "Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound", "url": "https://openreview.net/forum?id=l_amHf1oaK", "detail_url": "https://openreview.net/forum?id=l_amHf1oaK", "authors": "Claudio Ferrari,Mark Niklas Mueller,Nikola Jovanovi\u0107,Martin Vechev", "tags": "ICLR 2022,Poster", "abstract": "State-of-the-art neural network verifiers are fundamentally based on one of two paradigms: either encoding the whole verification problem via tight multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure leveraging imprecise but fast bounding methods on a large number of easier subproblems. The former can capture complex multi-neuron dependencies but sacrifices completeness due to the inherent limitations of convex relaxations. The latter enables complete verification but becomes increasingly ineffective on larger and more challenging networks. In this work, we present a novel complete verifier which combines the strengths of both paradigms: it leverages multi-neuron relaxations to drastically reduce the number of subproblems generated during the BaB process and an efficient GPU-based dual optimizer to solve the remaining ones. An extensive evaluation demonstrates that our verifier achieves a new state-of-the-art on both established benchmarks as well as networks with significantly higher accuracy than previously considered. The latter result (up to 28% certification gains) indicates meaningful progress towards creating verifiers that can handle practically relevant networks.", "pdf": "https://openreview.net/pdf/fcc20218f5754386cf64f4156a1f41039038b5da.pdf"} {"title": "Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality", "url": "https://openreview.net/forum?id=VFBjuF8HEp", "detail_url": "https://openreview.net/forum?id=VFBjuF8HEp", "authors": "Daniel Watson,William Chan,Jonathan Ho,Mohammad Norouzi", "tags": "ICLR 2022,Poster", "abstract": "Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with strongest DDPM/DDIM baselines). 
Our method is compatible with any pre-trained diffusion model without fine-tuning or re-training required.", "pdf": "https://openreview.net/pdf/56f0145dd15f32bd53f6dba7efde74914a88f663.pdf"} {"title": "Distribution Compression in Near-Linear Time", "url": "https://openreview.net/forum?id=lzupY5zjaU9", "detail_url": "https://openreview.net/forum?id=lzupY5zjaU9", "authors": "Abhishek Shetty,Raaz Dwivedi,Lester Mackey", "tags": "ICLR 2022,Poster", "abstract": "In distribution compression, one aims to accurately summarize a probability distribution $\\mathbb{P}$ using a small number of representative points. Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\\sqrt{n}$ points with $\\widetilde{\\mathcal{O}}(1/\\sqrt{n})$ discrepancy to $\\mathbb{P}$. Unfortunately, these algorithms suffer from quadratic or super-quadratic runtime in the sample size $n$. To address this deficiency, we introduce Compress++, a simple meta-procedure for speeding up any thinning algorithm while suffering at most a factor of $4$ in error. When combined with the quadratic-time kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021), Compress++ delivers $\\sqrt{n}$ points with $\\mathcal{O}(\\sqrt{\\log n/n})$ integration error and better-than-Monte-Carlo maximum mean discrepancy in $\\mathcal{O}(n \\log^3 n)$ time and $\\mathcal{O}( \\sqrt{n} \\log^2 n )$ space. Moreover, Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor. In our benchmarks with high-dimensional Monte Carlo samples and Markov chains targeting challenging differential equation posteriors, Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time.", "pdf": "https://openreview.net/pdf/484f68f97f561be1f3272522336a9a0b1fa84bbc.pdf"} {"title": "Capturing Structural Locality in Non-parametric Language Models", "url": "https://openreview.net/forum?id=nnU3IUMJmN", "detail_url": "https://openreview.net/forum?id=nnU3IUMJmN", "authors": "Frank F. Xu,Junxian He,Graham Neubig,Vincent Josua Hellendoorn", "tags": "ICLR 2022,Poster", "abstract": "Structural locality is a ubiquitous feature of real-world datasets, wherein data points are organized into local hierarchies. Some examples include topical clusters in text or project hierarchies in source code repositories. In this paper, we explore utilizing this structural locality within non-parametric language models, which generate sequences that reference retrieved examples from an external source. We propose a simple yet effective approach for adding locality information into such models by adding learned parameters that improve the likelihood of retrieving examples from local neighborhoods. Experiments on two different domains, Java source code and Wikipedia text, demonstrate that locality features improve model efficacy over models without access to these features, with interesting differences. 
We also perform an analysis of how and where locality features contribute to improving performance and why the traditionally used contextual similarity metrics alone are not enough to grasp the locality structure.\n", "pdf": "https://openreview.net/pdf/05677eb0d7fca88dd7c4c6cbefa73f6ae430ad68.pdf"} {"title": "Audio Lottery: Speech Recognition Made Ultra-Lightweight, Noise-Robust, and Transferable", "url": "https://openreview.net/forum?id=9Nk6AJkVYB", "detail_url": "https://openreview.net/forum?id=9Nk6AJkVYB", "authors": "Shaojin Ding,Tianlong Chen,Zhangyang Wang", "tags": "ICLR 2022,Poster", "abstract": "Lightweight speech recognition models have seen explosive demands owing to a growing amount of speech-interactive features on mobile devices. Since designing such systems from scratch is non-trivial, practitioners typically choose to compress large (pre-trained) speech models. Recently, lottery ticket hypothesis reveals the existence of highly sparse subnetworks that can be trained in isolation without sacrificing the performance of the full models. In this paper, we investigate the tantalizing possibility of using lottery ticket hypothesis to discover lightweight speech recognition models, that are (1) robust to various noise existing in speech; (2) transferable to fit the open-world personalization; and 3) compatible with structured sparsity. We conducted extensive experiments on CNN-LSTM, RNN-Transducer, and Transformer models, and verified the existence of highly sparse winning tickets that can match the full model performance across those backbones. We obtained winning tickets that have less than 20% of full model weights on all backbones, while the most lightweight one only keeps 4.4% weights. Those winning tickets generalize to structured sparsity with no performance loss, and transfer exceptionally from large source datasets to various target datasets. Perhaps most surprisingly, when the training utterances have high background noises, the winning tickets even substantially outperform the full models, showing the extra bonus of noise robustness by inducing sparsity. Codes are available at https://github.com/VITA-Group/Audio-Lottery.", "pdf": "https://openreview.net/pdf/3d42ff881f8ec8954935d0f8bbcb2a21d71106ea.pdf"} {"title": "Learning to Map for Active Semantic Goal Navigation", "url": "https://openreview.net/forum?id=swrMQttr6wN", "detail_url": "https://openreview.net/forum?id=swrMQttr6wN", "authors": "Georgios Georgakis,Bernadette Bucher,Karl Schmeckpeper,Siddharth Singh,Kostas Daniilidis", "tags": "ICLR 2022,Poster", "abstract": "We consider the problem of object goal navigation in unseen environments. Solving this problem requires learning of contextual semantic priors, a challenging endeavour given the spatial and semantic variability of indoor environments. Current methods learn to implicitly encode these priors through goal-oriented navigation policy functions operating on spatial representations that are limited to the agent's observable areas. In this work, we propose a novel framework that actively learns to generate semantic maps outside the field of view of the agent and leverages the uncertainty over the semantic classes in the unobserved areas to decide on long term goals. We demonstrate that through this spatial prediction strategy, we are able to learn semantic priors in scenes that can be leveraged in unknown environments. Additionally, we show how different objectives can be defined by balancing exploration with exploitation during searching for semantic targets. 
Our method is validated in the visually realistic environments of the Matterport3D dataset and shows improved results on object goal navigation over competitive baselines.", "pdf": "https://openreview.net/pdf/8097afd8a3e6d7c824f59390ca5a9cee0530bbd1.pdf"} {"title": "Benchmarking the Spectrum of Agent Capabilities", "url": "https://openreview.net/forum?id=1W0z96MFEoH", "detail_url": "https://openreview.net/forum?id=1W0z96MFEoH", "authors": "Danijar Hafner", "tags": "ICLR 2022,Poster", "abstract": "Evaluating the general abilities of intelligent agents requires complex simulation environments. Existing benchmarks typically evaluate only one narrow task per environment, requiring researchers to perform expensive training runs on many different environments. We introduce Crafter, an open world survival game with visual inputs that evaluates a wide range of general abilities within a single environment. Agents either learn from the provided reward signal or through intrinsic objectives and are evaluated by semantically meaningful achievements that can be unlocked during each episode, such as discovering resources and crafting tools. Consistently unlocking all achievements requires strong generalization, deep exploration, and long-term reasoning. We experimentally verify that Crafter is of appropriate difficulty to drive future research and provide baseline scores of reward agents and unsupervised agents. Furthermore, we observe sophisticated behaviors emerging from maximizing the reward signal, such as building tunnel systems, bridges, houses, and plantations. We hope that Crafter will accelerate research progress by quickly evaluating a wide spectrum of abilities.", "pdf": "https://openreview.net/pdf/116a18888b3fb460e882ec2b844128223e3b17ca.pdf"} {"title": "Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks", "url": "https://openreview.net/forum?id=vqGi8Kp0wM", "detail_url": "https://openreview.net/forum?id=vqGi8Kp0wM", "authors": "Peihao Zhu,Rameen Abdal,John Femiani,Peter Wonka", "tags": "ICLR 2022,Poster", "abstract": "We present a new method for one shot domain adaptation. The input to our method is a trained GAN that can produce images in domain A and a single reference image I_B from domain B. The proposed algorithm can translate any output of the trained GAN from domain A to domain B. There are two main advantages of our method compared to the current state of the art: First, our solution achieves higher visual quality, e.g. by noticeably reducing overfitting. Second, our solution allows for more degrees of freedom to control the domain gap, i.e. what aspects of image I_B are used to define the domain B. Technically, we realize the new method by building on a pre-trained StyleGAN generator as the GAN and a pre-trained CLIP model for representing the domain gap. We propose several new regularizers for controlling the domain gap to optimize the weights of the pre-trained StyleGAN generator to output images in domain B instead of domain A. The regularizers prevent the optimization from taking on too many attributes of the single reference image. 
Our results show significant visual improvements over the state of the art as well as multiple applications that highlight improved control.", "pdf": "https://openreview.net/pdf/2f6e593f100fa850ecde50e059aa6b2e73a3f6fe.pdf"} {"title": "On Evaluation Metrics for Graph Generative Models", "url": "https://openreview.net/forum?id=EnwCZixjSh", "detail_url": "https://openreview.net/forum?id=EnwCZixjSh", "authors": "Rylee Thompson,Boris Knyazev,Elahe Ghalebi,Jungtaek Kim,Graham W. Taylor", "tags": "ICLR 2022,Poster", "abstract": "In image generation, generative models can be evaluated naturally by visually inspecting model outputs. However, this is not always the case for graph generative models (GGMs), making their evaluation challenging. Currently, the standard process for evaluating GGMs suffers from three critical limitations: i) it does not produce a single score which makes model selection challenging, ii) in many cases it fails to consider underlying edge and node features, and iii) it is prohibitively slow to perform. In this work, we mitigate these issues by searching for \\emph{scalar, domain-agnostic, and scalable metrics} for evaluating and ranking GGMs. To this end, we study existing GGM metrics and neural-network-based metrics emerging from generative models of images that use embeddings extracted from a task-specific network. Motivated by the power of Graph Neural Networks (GNNs) to extract meaningful graph representations \\emph{without any training}, we introduce several metrics based on the features extracted by an untrained random GNN. We design experiments to thoroughly test and objectively score metrics on their ability to measure the diversity and fidelity of generated graphs, as well as their sample and computational efficiency. Depending on the quantity of samples, we recommend one of two metrics from our collection of random-GNN-based metrics. We show these two metrics to be more expressive than pre-existing and alternative random-GNN-based metrics using our objective scoring. While we focus on applying these metrics to GGM evaluation, in practice this enables the ability to easily compute the dissimilarity between any two sets of graphs \\emph{regardless of domain}. Our code is released at: https://github.com/uoguelph-mlrg/GGM-metrics.", "pdf": "https://openreview.net/pdf/fcb94055fd54a7db263aab7d0f85b591c34e713e.pdf"} {"title": "Selective Ensembles for Consistent Predictions", "url": "https://openreview.net/forum?id=HfUyCRBeQc", "detail_url": "https://openreview.net/forum?id=HfUyCRBeQc", "authors": "Emily Black,Klas Leino,Matt Fredrikson", "tags": "ICLR 2022,Poster", "abstract": "Recent work has shown that models trained to the same objective, and which achieve similar measures of accuracy on consistent test data, may nonetheless behave very differently on individual predictions. This inconsistency is undesirable in high-stakes contexts, such as medical diagnosis and finance. We show that this duplicitous behavior extends beyond predictions to feature attributions, which may likewise have negative implications for the intelligibility of a model, and one's ability to find recourse for subjects. We then introduce selective ensembles to mitigate such inconsistencies by applying hypothesis testing to the predictions of a set of models trained using randomly-selected starting conditions; importantly, selective ensembles can abstain in cases where a consistent outcome cannot be achieved up to a specified confidence level. 
We prove that prediction disagreement between selective ensembles is bounded, and empirically demonstrate that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates. On several benchmark datasets, selective ensembles reach zero inconsistently predicted points, with abstention rates as low as 1.5%.", "pdf": "https://openreview.net/pdf/aef96c65d43466af59147df0d990f0b94efbef7a.pdf"} {"title": "Graph Condensation for Graph Neural Networks", "url": "https://openreview.net/forum?id=WLEx3Jo4QaB", "detail_url": "https://openreview.net/forum?id=WLEx3Jo4QaB", "authors": "Wei Jin,Lingxiao Zhao,Shichang Zhang,Yozen Liu,Jiliang Tang,Neil Shah", "tags": "ICLR 2022,Poster", "abstract": "Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns. To alleviate the concerns, we propose and study the problem of graph condensation for graph neural networks (GNNs). Specifically, we aim to condense the large, original graph into a small, synthetic and highly-informative graph, such that GNNs trained on the small graph and large graph have comparable performance. We approach the condensation problem by imitating the GNN training trajectory on the original graph through the optimization of a gradient matching loss and design a strategy to condense node features and structural information simultaneously. Extensive experiments have demonstrated the effectiveness of the proposed framework in condensing different graph datasets into informative smaller graphs. In particular, we are able to approximate the original test accuracy by 95.3\\% on Reddit, 99.8\\% on Flickr and 99.0\\% on Citeseer, while reducing their graph size by more than 99.9\\%, and the condensed graphs can be used to train various GNN architectures. ", "pdf": "https://openreview.net/pdf/fb904d1d840eb264e6ab2e160ff7322153a1fbb0.pdf"} {"title": "DIVA: Dataset Derivative of a Learning Task", "url": "https://openreview.net/forum?id=bVvMOtLMiw", "detail_url": "https://openreview.net/forum?id=bVvMOtLMiw", "authors": "Yonatan Dukler,Alessandro Achille,Giovanni Paolini,Avinash Ravichandran,Marzia Polito,Stefano Soatto", "tags": "ICLR 2022,Poster", "abstract": "We present a method to compute the derivative of a learning task with respect to a dataset. A learning task is a function from a training set to the validation error, which can be represented by a trained deep neural network (DNN). The ``dataset derivative'' is a linear operator, computed around the trained model, that informs how perturbations of the weight of each training sample affect the validation error, usually computed on a separate validation dataset. Our method, DIVA (Differentiable Validation), hinges on a closed-form differentiable expression of the leave-one-out cross-validation error around a pre-trained DNN. Such expression constitutes the dataset derivative. DIVA could be used for dataset auto-curation, for example removing samples with faulty annotations, augmenting a dataset with additional relevant samples, or rebalancing. More generally, DIVA can be used to optimize the dataset, along with the parameters of the model, as part of the training process without the need for a separate validation dataset, unlike bi-level optimization methods customary in AutoML. 
To illustrate the flexibility of DIVA, we report experiments on sample auto-curation tasks such as outlier rejection, dataset extension, and automatic aggregation of multi-modal data.", "pdf": "https://openreview.net/pdf/c20ae574c689fe5fbecb96f791b3e678973e0053.pdf"} {"title": "Towards General Function Approximation in Zero-Sum Markov Games", "url": "https://openreview.net/forum?id=sA4qIu3zv6v", "detail_url": "https://openreview.net/forum?id=sA4qIu3zv6v", "authors": "Baihe Huang,Jason D. Lee,Zhaoran Wang,Zhuoran Yang", "tags": "ICLR 2022,Poster", "abstract": "This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging settings where the value\nfunction or the model is parameterized by general function classes. Provably efficient\nalgorithms for both decoupled and coordinated settings are developed. In the decoupled setting where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm. The sample complexity is governed by the Minimax Eluder dimension\u2014a new dimension of the function class in Markov games. As a special case, this method improves the state-of-the-art algorithm\nby a $\\sqrt{d}$ factor in the regret when the reward function and transition kernel are parameterized with d-dimensional linear features. In the coordinated setting where both\nplayers are controlled by the agent, we propose a model-based algorithm and a model-free algorithm. In the model-based algorithm, we prove that sample complexity can\nbe bounded by a generalization of Witness rank to Markov games. The model-free\nalgorithm enjoys a $\\sqrt{K}$-regret upper bound where $K$ is the number of episodes. Our\nalgorithms are based on new techniques of alternate optimism", "pdf": "https://openreview.net/pdf/89164a5698b4ced1396254451108620fc52d5bc1.pdf"} {"title": "Exposing the Implicit Energy Networks behind Masked Language Models via Metropolis--Hastings", "url": "https://openreview.net/forum?id=6PvWo1kEvlT", "detail_url": "https://openreview.net/forum?id=6PvWo1kEvlT", "authors": "Kartik Goyal,Chris Dyer,Taylor Berg-Kirkpatrick", "tags": "ICLR 2022,Poster", "abstract": "While recent work has shown that scores from models trained by the ubiquitous masked language modeling (MLM) objective effectively discriminate probable from improbable sequences, it is still an open question if these MLMs specify a principled probability distribution over the space of possible sequences. In this paper, we interpret MLMs as energy-based sequence models and propose two energy parametrizations derivable from the trained MLMs. In order to draw samples correctly from these models, we develop a tractable sampling scheme based on the Metropolis--Hastings Monte Carlo algorithm. In our approach, samples are proposed from the same masked conditionals used for training the masked language models, and they are accepted or rejected based on their energy values according to the target distribution. We validate the effectiveness of the proposed parametrizations by exploring the quality of samples drawn from these energy-based models for both open-ended unconditional generation and a conditional generation task of machine translation. 
We theoretically and empirically justify our sampling algorithm by showing that the masked conditionals on their own do not yield a Markov chain whose stationary distribution is that of our target distribution, and our approach generates higher quality samples than other recently proposed undirected generation approaches (Wang et al., 2019, Ghazvininejad et al., 2019).", "pdf": "https://openreview.net/pdf/dfdc7212f0c035baaec71e0d9d64317aec15492b.pdf"} {"title": "ClimateGAN: Raising Climate Change Awareness by Generating Images of Floods", "url": "https://openreview.net/forum?id=EZNOb_uNpJk", "detail_url": "https://openreview.net/forum?id=EZNOb_uNpJk", "authors": "Victor Schmidt,Alexandra Luccioni,M\u00e9lisande Teng,Tianyu Zhang,Alexia Reynaud,Sunand Raghupathi,Gautier Cosne,Adrien Juraver,Vahe Vardanyan,Alex Hern\u00e1ndez-Garc\u00eda,Yoshua Bengio", "tags": "ICLR 2022,Poster", "abstract": "Climate change is a major threat to humanity and the actions required to prevent its catastrophic consequences include changes in both policy-making and individual behaviour. However, taking action requires understanding its seemingly abstract and distant consequences. Projecting the potential impacts of extreme climate events such as flooding in familiar places can help make the impacts of climate change more concrete and encourage action. As part of a larger initiative to build a website (https://thisclimatedoesnotexist.com) that projects extreme climate events onto user-chosen photos, we present our solution to simulate photo-realistic floods on authentic images. To address this complex task in the absence of suitable data, we propose ClimateGAN, a model that leverages both simulated and real data through unsupervised domain adaptation and conditional image generation. In this paper, we describe the details of our framework, thoroughly evaluate the main components of our architecture and demonstrate that our model is capable of robustly generating photo-realistic flooding on street images.", "pdf": "https://openreview.net/pdf/ca121d72177c0fb77244bde0b2958681a89d4b98.pdf"} {"title": "A Comparison of Hamming Errors of Representative Variable Selection Methods", "url": "https://openreview.net/forum?id=nhN-fqxmNGx", "detail_url": "https://openreview.net/forum?id=nhN-fqxmNGx", "authors": "Tracy Ke,Longlin Wang", "tags": "ICLR 2022,Poster", "abstract": "Lasso is a celebrated method for variable selection in linear models, but it faces challenges when the covariates are moderately or strongly correlated. This motivates alternative approaches such as using a non-convex penalty, adding a ridge regularization, or conducting a post-Lasso thresholding. In this paper, we compare Lasso with 5 other methods: Elastic net, SCAD, forward selection, thresholded Lasso, and forward backward selection. We measure their performances theoretically by the expected Hamming error, assuming that the regression coefficients are ${\\it iid}$ drawn from a two-point mixture and that the Gram matrix is block-wise diagonal. 
By deriving the rates of convergence of Hamming errors and the phase diagrams, we obtain useful conclusions about the pros and cons of different methods.", "pdf": "https://openreview.net/pdf/ae8e44624ed225194ef2c6ef294ae6d5067515b8.pdf"} {"title": "A Program to Build E(N)-Equivariant Steerable CNNs ", "url": "https://openreview.net/forum?id=WE4qe9xlnQw", "detail_url": "https://openreview.net/forum?id=WE4qe9xlnQw", "authors": "Gabriele Cesa,Leon Lang,Maurice Weiler", "tags": "ICLR 2022,Poster", "abstract": "Equivariance is becoming an increasingly popular design choice to build data efficient neural networks by exploiting prior knowledge about the symmetries of the problem at hand. Euclidean steerable CNNs are one of the most common classes of equivariant networks. While the constraints these architectures need to satisfy are understood, existing approaches are tailored to specific (classes of) groups. No generally applicable method that is practical for implementation has been described so far. In this work, we generalize the Wigner-Eckart theorem proposed in Lang & Weiler (2020), which characterizes general $G$-steerable kernel spaces for compact groups $G$ over their homogeneous spaces, to arbitrary $G$-spaces. This enables us to directly parameterize filters in terms of a band-limited basis on the whole space rather than on $G$'s orbits, but also to easily implement steerable CNNs equivariant to a large number of groups. To demonstrate its generality, we instantiate our method on a variety of isometry groups acting on the Euclidean space $\\mathbb{R}^3$. Our framework allows us to build $E(3)$ and $SE(3)$-steerable CNNs like previous works, but also CNNs with arbitrary $G\\leq O(3)$-steerable kernels. For example, we build 3D CNNs equivariant to the symmetries of platonic solids or choose $G=SO(2)$ when working with 3D data having only azimuthal symmetries. We compare these models on 3D shapes and molecular datasets, observing improved performance by matching the model's symmetries to the ones of the data.", "pdf": "https://openreview.net/pdf/6d634b6f1eabc70593f897e223c78025e3029b52.pdf"} {"title": "Minimax Optimization with Smooth Algorithmic Adversaries", "url": "https://openreview.net/forum?id=UdxJ2fJx7N0", "detail_url": "https://openreview.net/forum?id=UdxJ2fJx7N0", "authors": "Tanner Fiez,Chi Jin,Praneeth Netrapalli,Lillian J Ratliff", "tags": "ICLR 2022,Poster", "abstract": "This paper considers minimax optimization $\\min_x \\max_y f(x, y)$ in the challenging setting where $f$ can be both nonconvex in $x$ and nonconcave in $y$. Though such optimization problems arise in many machine learning paradigms including training generative adversarial networks (GANs) and adversarially robust models, from a theoretical point of view, two fundamental issues remain: (i) the absence of simple and efficiently computable optimality notions, and (ii) cyclic or diverging behavior of existing algorithms. This paper proposes a new theoretical framework for nonconvex-nonconcave minimax optimization that addresses both of the above issues. The starting point of this paper is the observation that, under a computational budget, the max-player can not fully maximize $f(x,\\cdot)$ since nonconcave maximization is NP-hard in general. So, we propose a new framework, and a corresponding algorithm, for the min-player to play against \\emph{smooth algorithms} deployed by the adversary (i.e., the max-player) instead of against full maximization. 
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles or diverging behavior), and to find an appropriate ``stationary point'' in a polynomial number of iterations. Our framework covers practically relevant settings where the smooth algorithms deployed by the adversary are multi-step stochastic gradient ascent and its accelerated version. We further present experimental results that confirm our theoretical findings and demonstrate the effectiveness of the proposed approach in practice on simple, conceptual settings.", "pdf": "https://openreview.net/pdf/6f978c34600cf6fcf440c6e1bf8d1f93e0afce3d.pdf"} {"title": "On Distributed Adaptive Optimization with Gradient Compression", "url": "https://openreview.net/forum?id=CI-xXX9dg9l", "detail_url": "https://openreview.net/forum?id=CI-xXX9dg9l", "authors": "Xiaoyun Li,Belhal Karimi,Ping Li", "tags": "ICLR 2022,Poster", "abstract": "We study COMP-AMS, a distributed optimization framework based on gradient averaging and the adaptive AMSGrad algorithm. Gradient compression with error feedback is applied to reduce the communication cost in the gradient transmission process. Our convergence analysis of COMP-AMS shows that such a compressed gradient averaging strategy yields the same convergence rate as standard AMSGrad, and also exhibits the linear speedup effect w.r.t. the number of local workers. Compared with recently proposed protocols on distributed adaptive methods, COMP-AMS is simple and convenient. Numerical experiments are conducted to justify the theoretical findings, and demonstrate that the proposed method can achieve the same test accuracy as the full-gradient AMSGrad with substantial communication savings. With its simplicity and efficiency, COMP-AMS can serve as a useful distributed training framework for adaptive methods.", "pdf": "https://openreview.net/pdf/84313c8e0bf7b65d71addc3b16aba48f161f4092.pdf"} {"title": "Leveraging unlabeled data to predict out-of-distribution performance", "url": "https://openreview.net/forum?id=o_HsiMPYh_x", "detail_url": "https://openreview.net/forum?id=o_HsiMPYh_x", "authors": "Saurabh Garg,Sivaraman Balakrishnan,Zachary Chase Lipton,Behnam Neyshabur,Hanie Sedghi", "tags": "ICLR 2022,Poster", "abstract": "Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions that may cause performance drops. In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a \emph{threshold} on the model's confidence, predicting accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold. ATC outperforms previous methods across several model architectures, types of distribution shifts (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (\textsc{Wilds}-FMoW, ImageNet, \breeds, CIFAR, and MNIST). In our experiments, ATC estimates target performance $2\text{--}4\times$ more accurately than prior methods. We also explore the theoretical foundations of the problem, proving that, in general, identifying the accuracy is just as hard as identifying the optimal predictor and thus, the efficacy of any method rests upon (perhaps unstated) assumptions on the nature of the shift. 
Finally, analyzing our method on some toy distributions, we provide insights concerning when it works.\n\n", "pdf": "https://openreview.net/pdf/f94008d1c0cfc4177d8617db211b62b1f85906ea.pdf"} {"title": "VC dimension of partially quantized neural networks in the overparametrized regime", "url": "https://openreview.net/forum?id=7udZAsEzd60", "detail_url": "https://openreview.net/forum?id=7udZAsEzd60", "authors": "Yutong Wang,Clayton Scott", "tags": "ICLR 2022,Poster", "abstract": "Vapnik-Chervonenkis (VC) theory has so far been unable to explain the small generalization error of overparametrized neural networks. Indeed, existing applications of VC theory to large networks obtain upper bounds on VC dimension that are proportional to the number of weights, and for a large class of networks, these upper bounds are known to be tight. In this work, we focus on a class of partially quantized networks that we refer to as hyperplane arrangement neural networks (HANNs). Using a sample compression analysis, we show that HANNs can have VC dimension significantly smaller than the number of weights, while being highly expressive. In particular, empirical risk minimization over HANNs in the overparametrized regime achieves the minimax rate for classification with Lipschitz posterior class probability. We further demonstrate the expressivity of HANNs empirically. On a panel of 121 UCI datasets, overparametrized HANNs are able to match the performance of state-of-the-art full-precision models.", "pdf": "https://openreview.net/pdf/9760187606b3496a5f4a0fe752a22416bb4a2e21.pdf"} {"title": "Optimal Representations for Covariate Shift", "url": "https://openreview.net/forum?id=Rf58LPCwJj0", "detail_url": "https://openreview.net/forum?id=Rf58LPCwJj0", "authors": "Yangjun Ruan,Yann Dubois,Chris J. Maddison", "tags": "ICLR 2022,Poster", "abstract": "Machine learning systems often experience a distribution shift between training and testing. In this paper, we introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust to any distribution shift that preserves the Bayes predictor, e.g., covariate shifts. Our objective has two components. First, a representation must remain discriminative for the task, i.e., some predictor must be able to simultaneously minimize the source and target risk. Second, the representation's marginal support needs to be the same across source and target. We make this practical by designing self-supervised objectives that only use unlabelled data and augmentations to train robust representations. \nOur objectives give insights into the robustness of CLIP, and further improve CLIP's representations to achieve SOTA results on DomainBed.", "pdf": "https://openreview.net/pdf/ddc6369b11aed2bc1a72bc2f493bb2ebd0f65be7.pdf"} {"title": "Fortuitous Forgetting in Connectionist Networks", "url": "https://openreview.net/forum?id=ei3SY1_zYsE", "detail_url": "https://openreview.net/forum?id=ei3SY1_zYsE", "authors": "Hattie Zhou,Ankit Vani,Hugo Larochelle,Aaron Courville", "tags": "ICLR 2022,Poster", "abstract": "Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce forget-and-relearn as a powerful paradigm for shaping the learning trajectories of artificial neural networks. 
In this process, the forgetting step selectively removes undesirable information from the model, and the relearning step reinforces features that are consistently useful under different conditions. The forget-and-relearn framework unifies many existing iterative training algorithms in the image classification and language emergence literature, and allows us to understand the success of these algorithms in terms of the disproportionate forgetting of undesirable information. We leverage this understanding to improve upon existing algorithms by designing more targeted forgetting operations. Insights from our analysis provide a coherent view on the dynamics of iterative training in neural networks and offer a clear path towards performance improvements.", "pdf": "https://openreview.net/pdf/ca4d5fd0fac40867b797ca356f4056c7cb11fc6a.pdf"} {"title": "EigenGame Unloaded: When playing games is better than optimizing", "url": "https://openreview.net/forum?id=So6YAqnqgMj", "detail_url": "https://openreview.net/forum?id=So6YAqnqgMj", "authors": "Ian Gemp,Brian McWilliams,Claire Vernade,Thore Graepel", "tags": "ICLR 2022,Poster", "abstract": "We build on the recently proposed EigenGame that views eigendecomposition as a competitive game. EigenGame's updates are biased if computed using minibatches of data, which hinders convergence and more sophisticated parallelism in the stochastic setting. In this work, we propose an unbiased stochastic update that is asymptotically equivalent to EigenGame, enjoys greater parallelism allowing computation on datasets of larger sample sizes, and outperforms EigenGame in experiments. We present applications to finding the principal components of massive datasets and performing spectral clustering of graphs. We analyze and discuss our proposed update in the context of EigenGame and the shift in perspective from optimization to games.", "pdf": "https://openreview.net/pdf/cedcb096f43d8f1b1e43c8969cf5b1dd7e83d5ae.pdf"} {"title": "Contextualized Scene Imagination for Generative Commonsense Reasoning", "url": "https://openreview.net/forum?id=Oh1r2wApbPv", "detail_url": "https://openreview.net/forum?id=Oh1r2wApbPv", "authors": "PeiFeng Wang,Jonathan Zamora,Junfeng Liu,Filip Ilievski,Muhao Chen,Xiang Ren", "tags": "ICLR 2022,Poster", "abstract": "Humans use natural language to compose common concepts from their environment into plausible, day-to-day scene descriptions. However, such generative commonsense reasoning (GCSR) skills are lacking in state-of-the-art text generation methods. Descriptive sentences about arbitrary concepts generated by neural text generation models (e.g., pre-trained text-to-text Transformers) are often grammatically fluent but may not correspond to human common sense, largely due to their lack of mechanisms to capture concept relations, to identify implicit concepts, and to perform generalizable reasoning about unseen concept compositions. In this paper, we propose an Imagine-and-Verbalize (I\\&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description. We collect and harmonize a set of knowledge resources from different domains and modalities, providing a rich auxiliary supervision signal for I\\&V. 
The experiments demonstrate the effectiveness of I\\&V in improving language models on both concept-to-sentence and concept-to-story generation tasks, while enabling the model to learn well from fewer task examples and generate SKGs that make common sense to human annotators.", "pdf": "https://openreview.net/pdf/a66e1b12b2211131a44463611c8c272c21decbfb.pdf"} {"title": "Scene Transformer: A unified architecture for predicting future trajectories of multiple agents", "url": "https://openreview.net/forum?id=Wm3EA5OlHsG", "detail_url": "https://openreview.net/forum?id=Wm3EA5OlHsG", "authors": "Jiquan Ngiam,Vijay Vasudevan,Benjamin Caine,Zhengdong Zhang,Hao-Tien Lewis Chiang,Jeffrey Ling,Rebecca Roelofs,Alex Bewley,Chenxi Liu,Ashish Venugopal,David J Weiss,Benjamin Sapp,Zhifeng Chen,Jonathon Shlens", "tags": "ICLR 2022,Poster", "abstract": "Predicting the motion of multiple agents is necessary for planning in dynamic environments. This task is challenging for autonomous driving since agents (e.g., vehicles and pedestrians) and their associated behaviors may be diverse and influence one another. Most prior work has focused on predicting independent futures for each agent based on all past motion, and planning against these independent predictions. However, planning against independent predictions can make it challenging to represent the future interaction possibilities between different agents, leading to sub-optimal planning. In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents. Inspired by recent language modeling approaches, we use a masking strategy as the query to our model, enabling one to invoke a single model to predict agent behavior in many ways, such as potentially conditioned on the goal or full future trajectory of the autonomous vehicle or the behavior of other agents in the environment. Our model architecture employs attention to combine features across road elements, agent interactions, and time steps. We evaluate our approach on autonomous driving datasets for both marginal and joint motion prediction, and achieve state-of-the-art performance across two popular datasets. Through combining a scene-centric approach, an agent permutation equivariant model, and a sequence masking strategy, we show that our model can unify a variety of motion prediction tasks from joint motion predictions to conditioned prediction.", "pdf": "https://openreview.net/pdf/92f191f2cdcf1389ed2d3dce901833dc5fc6deaf.pdf"} {"title": "DISSECT: Disentangled Simultaneous Explanations via Concept Traversals", "url": "https://openreview.net/forum?id=qY79G8jGsep", "detail_url": "https://openreview.net/forum?id=qY79G8jGsep", "authors": "Asma Ghandeharioun,Been Kim,Chun-Liang Li,Brendan Jou,Brian Eoff,Rosalind Picard", "tags": "ICLR 2022,Poster", "abstract": "Explaining deep learning model inferences is a promising venue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars. One of the principal benefits of counterfactual explanations is allowing users to explore \"what-if\" scenarios through what does not and cannot exist in the data, a quality that many other forms of explanation such as heatmaps and influence functions are inherently incapable of doing. However, most previous work on generative explainability cannot disentangle important concepts effectively, produces unrealistic examples, or fails to retain relevant information. 
We propose a novel approach, DISSECT, that jointly trains a generator, a discriminator, and a concept disentangler to overcome such challenges using little supervision. DISSECT generates Concept Traversals (CTs), defined as a sequence of generated examples with increasing degrees of concepts that influence a classifier's decision. By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent \"notion\" of distinct concepts automatically rather than rely on user-predefined concepts. We show that DISSECT produces CTs that (1) disentangle several concepts, (2) are influential to a classifier's decision and are coupled to its reasoning due to joint training, (3) are realistic, (4) preserve relevant information, and (5) are stable across similar inputs. We validate DISSECT on several challenging synthetic and realistic datasets where previous methods fall short of satisfying desirable criteria for interpretability and show that it performs consistently well. Finally, we present experiments showing applications of DISSECT for detecting potential biases of a classifier and identifying spurious artifacts that impact predictions.", "pdf": "https://openreview.net/pdf/8e8a8d5dafd24c9cba49d3671b2ee34d0decdecf.pdf"} {"title": "Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series", "url": "https://openreview.net/forum?id=Az7opqbQE-3", "detail_url": "https://openreview.net/forum?id=Az7opqbQE-3", "authors": "Satya Narayan Shukla,Benjamin Marlin", "tags": "ICLR 2022,Poster", "abstract": "Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in the output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recently proposed deep latent variable models that use homoscedastic output layers.", "pdf": "https://openreview.net/pdf/4a602866528e0ae9511889c65b61991ad9ddfd8b.pdf"} {"title": "A Neural Tangent Kernel Perspective of Infinite Tree Ensembles", "url": "https://openreview.net/forum?id=vUH85MOXO7h", "detail_url": "https://openreview.net/forum?id=vUH85MOXO7h", "authors": "Ryuichi Kanoh,Mahito Sugiyama", "tags": "ICLR 2022,Poster", "abstract": "In practical situations, the tree ensemble is one of the most popular models along with neural networks. A soft tree is a variant of a decision tree. Instead of using a greedy method for searching splitting rules, the soft tree is trained using a gradient method in which the entire splitting operation is formulated in a differentiable form. Although ensembles of such soft trees have been used increasingly in recent years, little theoretical work has been done to understand their behavior. By considering an ensemble of infinite soft trees, this paper introduces and studies the Tree Neural Tangent Kernel (TNTK), which provides new insights into the behavior of the infinite ensemble of soft trees. 
Using the TNTK, we theoretically identify several non-trivial properties, such as global convergence of the training, the equivalence of the oblivious tree structure, and the degeneracy of the TNTK induced by the deepening of the trees.", "pdf": "https://openreview.net/pdf/39b3d2b8700abc51932e7eea69ff8d0868dc2be8.pdf"} {"title": "AlphaZero-based Proof Cost Network to Aid Game Solving", "url": "https://openreview.net/forum?id=nKWjE4QF1hB", "detail_url": "https://openreview.net/forum?id=nKWjE4QF1hB", "authors": "Ti-Rong Wu,Chung-Chin Shih,Ting Han Wei,Meng-Yu Tsai,Wei-Yuan Hsu,I-Chen Wu", "tags": "ICLR 2022,Poster", "abstract": "The AlphaZero algorithm learns and plays games without hand-crafted expert knowledge. However, since its objective is to play well, we hypothesize that a better objective can be defined for the related but separate task of solving games. This paper proposes a novel approach to solving problems by modifying the training target of the AlphaZero algorithm, such that it prioritizes solving the game quickly, rather than winning. We train a Proof Cost Network (PCN), where proof cost is a heuristic that estimates the amount of work required to solve problems. This matches the general concept of the so-called proof number from proof number search, which has been shown to be well-suited for game solving. We propose two specific training targets. The first finds the shortest path to a solution, while the second estimates the proof cost. We conduct experiments on solving 15x15 Gomoku and 9x9 Killall-Go problems with both MCTS-based and FDFPN solvers. Comparisons between using AlphaZero networks and PCN as heuristics show that PCN can solve more problems.", "pdf": "https://openreview.net/pdf/b5c23474ea991857d67e3e750bb82c36a669b2e9.pdf"} {"title": "Bayesian Framework for Gradient Leakage", "url": "https://openreview.net/forum?id=f2lrIbGx3x7", "detail_url": "https://openreview.net/forum?id=f2lrIbGx3x7", "authors": "Mislav Balunovic,Dimitar Iliev Dimitrov,Robin Staab,Martin Vechev", "tags": "ICLR 2022,Poster", "abstract": "Federated learning is an established method for training machine learning models without sharing training data. However, recent work has shown that it cannot guarantee data privacy as shared gradients can still leak sensitive information. To formalize the problem of gradient leakage, we propose a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem. We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients. Our experiments confirm the effectiveness of the Bayes optimal adversary when it has knowledge of the underlying distribution. Further, our experimental evaluation shows that several existing heuristic defenses are not effective against stronger attacks, especially early in the training process. 
Thus, our findings indicate that the construction of more effective defenses and their evaluation remains an open problem.\n", "pdf": "https://openreview.net/pdf/4e51a98c83f488bc5362a078c71216dab544be00.pdf"} {"title": "Universalizing Weak Supervision", "url": "https://openreview.net/forum?id=YpPiNigTzMT", "detail_url": "https://openreview.net/forum?id=YpPiNigTzMT", "authors": "Changho Shin,Winfred Li,Harit Vishwakarma,Nicholas Carl Roberts,Frederic Sala", "tags": "ICLR 2022,Poster", "abstract": "Weak supervision (WS) frameworks are a popular way to bypass hand-labeling large datasets for training data-hungry models.\nThese approaches synthesize multiple noisy but cheaply-acquired estimates of labels into a set of high-quality pseudo-labels for downstream training. However, the synthesis technique is specific to a particular kind of label, such as binary labels or sequences, and each new label type requires manually designing a new synthesis algorithm. Instead, we propose a universal technique that enables weak supervision over any label type while still offering desirable properties, including practical flexibility, computational efficiency, and theoretical guarantees. We apply this technique to important problems previously not tackled by WS frameworks including learning to rank, regression, and learning in hyperbolic space. Theoretically, our synthesis approach produces consistent estimators for learning some challenging but important generalizations of the exponential family model. Experimentally, we validate our framework and show improvement over baselines in diverse settings including real-world learning-to-rank and regression problems along with learning on hyperbolic manifolds.", "pdf": "https://openreview.net/pdf/a2adc08eeb52dcddf2563c7bb42940946813b522.pdf"} {"title": "Maximum n-times Coverage for Vaccine Design", "url": "https://openreview.net/forum?id=ULfq0qR25dY", "detail_url": "https://openreview.net/forum?id=ULfq0qR25dY", "authors": "Ge Liu,Alexander Dimitrakakis,Brandon Carter,David Gifford", "tags": "ICLR 2022,Poster", "abstract": "We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times. We also define the min-cost $n$-times coverage problem where the objective is to select the minimum set of overlays such that the sum of the weights of elements that are covered at least $n$ times is at least $\\tau$. Maximum $n$-times coverage is a generalization of the multi-set multi-cover problem, is NP-complete, and is not submodular. We introduce two new practical solutions for $n$-times coverage based on integer linear programming and sequential greedy optimization. We show that maximum $n$-times coverage is a natural way to frame peptide vaccine design, and find that it produces a pan-strain COVID-19 vaccine design that is superior to 29 other published designs in predicted population coverage and the expected number of peptides displayed by each individual's HLA molecules.", "pdf": "https://openreview.net/pdf/9d61f13ecd3d02a7e3ed6243e5e82f05c5f456cf.pdf"} {"title": "KL Guided Domain Adaptation", "url": "https://openreview.net/forum?id=0JzqUlIVVDd", "detail_url": "https://openreview.net/forum?id=0JzqUlIVVDd", "authors": "A. Tuan Nguyen,Toan Tran,Yarin Gal,Philip Torr,Atilim Gunes Baydin", "tags": "ICLR 2022,Poster", "abstract": "Domain adaptation is an important problem and is often needed for real-world applications. In this problem, instead of i.i.d. 
training and testing datapoints, we assume that the source (training) data and the target (testing) data have different distributions. With that setting, the empirical risk minimization training procedure often does not perform well, since it does not account for the change in the distribution. A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain. However, these approaches often require additional networks and/or optimizing an adversarial (minimax) objective, which can be very expensive or unstable in practice. To improve upon these marginal alignment techniques, in this paper, we first derive a generalization bound for the target loss based on the training loss and the reverse Kullback-Leibler (KL) divergence between the source and the target representation distributions. Based on this bound, we derive an algorithm that minimizes the KL term to obtain a better generalization to the target domain. We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples without any additional network or a minimax objective. This leads to a theoretically sound alignment method which is also very efficient and stable in practice. Experimental results also suggest that our method outperforms other representation-alignment approaches.", "pdf": "https://openreview.net/pdf/943a05167d50e4a4de4e6c043f7c7e6374502f72.pdf"} {"title": "From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness", "url": "https://openreview.net/forum?id=Mspk_WYKoEH", "detail_url": "https://openreview.net/forum?id=Mspk_WYKoEH", "authors": "Lingxiao Zhao,Wei Jin,Leman Akoglu,Neil Shah", "tags": "ICLR 2022,Poster", "abstract": "Message Passing Neural Networks (MPNNs) are a common type of Graph Neural Network (GNN), in which each node\u2019s representation is computed recursively by aggregating representations (\u201cmessages\u201d) from its immediate neighbors akin to a star-shaped pattern. MPNNs are appealing for being efficient and scalable, however their expressiveness is upper-bounded by the 1st-order Weisfeiler-Lehman isomorphism test (1-WL). In response, prior works propose highly expressive models at the cost of scalability and sometimes generalization performance. Our work stands between these two regimes: we introduce a general framework to uplift any MPNN to be more expressive, with limited scalability overhead and greatly improved practical performance. We achieve this by extending local aggregation in MPNNs from star patterns to general subgraph patterns (e.g., k-egonets): in our framework, each node representation is computed as the encoding of a surrounding induced subgraph rather than encoding of immediate neighbors only (i.e. a star). We choose the subgraph encoder to be a GNN (mainly MPNNs, considering scalability) to design a general framework that serves as a wrapper to uplift any GNN. We call our proposed method GNN-AK (GNN As Kernel), as the framework resembles a convolutional neural network by replacing the kernel with\nGNNs. Theoretically, we show that our framework is strictly more powerful than 1&2-WL, and is not less powerful than 3-WL. We also design subgraph sampling strategies which greatly reduce memory footprint and improve speed while maintaining performance. 
Our method sets new state-of-the-art performance by large margins for several well-known graph ML tasks; specifically, 0.08 MAE on ZINC,\n74.79% and 86.887% accuracy on CIFAR10 and PATTERN respectively.", "pdf": "https://openreview.net/pdf/cc341ac588b917bee10fc4d5bb31b4a119b6108b.pdf"} {"title": "NETWORK INSENSITIVITY TO PARAMETER NOISE VIA PARAMETER ATTACK DURING TRAINING", "url": "https://openreview.net/forum?id=-8sBpe7rDiV", "detail_url": "https://openreview.net/forum?id=-8sBpe7rDiV", "authors": "Julian B\u00fcchel,Fynn Firouz Faber,Dylan Richard Muir", "tags": "ICLR 2022,Poster", "abstract": "Neuromorphic neural network processors, in the form of compute-in-memory crossbar arrays of memristors, or in the form of subthreshold analog and mixed-signal ASICs, promise enormous advantages in compute density and energy efficiency for NN-based ML tasks. However, these technologies are prone to computational non-idealities, due to process variation and intrinsic device physics. This degrades the task performance of networks deployed to the processor, by introducing parameter noise into the deployed model. While it is possible to calibrate each device, or train networks individually for each processor, these approaches are expensive and impractical for commercial deployment. Alternative methods are therefore needed to train networks that are inherently robust against parameter variation, as a consequence of network architecture and parameters. We present a new network training algorithm that attacks network parameters during training, and promotes robust performance during inference in the face of random parameter variation. Our approach introduces a loss regularization term that penalizes the susceptibility of a network to weight perturbation. We compare against previous approaches for producing parameter insensitivity such as dropout, weight smoothing and introducing parameter noise during training. We show that our approach produces models that are more robust to random mismatch-induced parameter variation as well as to targeted parameter variation. Our approach finds minima in flatter locations in the weight-loss landscape compared with other approaches, highlighting that the networks found by our technique are less sensitive to parameter perturbation. Our work provides an approach to deploy neural network architectures to inference devices that suffer from computational non-idealities, with minimal loss of performance. This method will enable deployment at scale to novel energy-efficient computational substrates, promoting cheaper and more prevalent edge inference.", "pdf": "https://openreview.net/pdf/b7b77ce8535702dba33084aa20eb08cae53193f4.pdf"} {"title": "Gradient Importance Learning for Incomplete Observations", "url": "https://openreview.net/forum?id=fXHl76nO2AZ", "detail_url": "https://openreview.net/forum?id=fXHl76nO2AZ", "authors": "Qitong Gao,Dong Wang,Joshua David Amason,Siyang Yuan,Chenyang Tao,Ricardo Henao,Majda Hadziahmetovic,Lawrence Carin,Miroslav Pajic", "tags": "ICLR 2022,Poster", "abstract": "Though recent works have developed methods that can generate estimates (or imputations) of the missing entries in a dataset to facilitate downstream analysis, most depend on assumptions that may not align with real-world applications and could suffer from poor performance in subsequent tasks such as classification. This is particularly true if the data have large missingness rates or a small sample size. 
More importantly, the imputation error could be propagated into the prediction step that follows, which may constrain the capabilities of the prediction model. In this work, we introduce the gradient importance learning (GIL) method to train multilayer perceptrons (MLPs) and long short-term memories (LSTMs) to directly perform inference from inputs containing missing values without imputation. Specifically, we employ reinforcement learning (RL) to adjust the gradients used to train these models via back-propagation. This allows the model to exploit the underlying information behind missingness patterns. We test the approach on real-world time-series (i.e., MIMIC-III), tabular data obtained from an eye clinic, and a standard dataset (i.e., MNIST), where our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.", "pdf": "https://openreview.net/pdf/77f82d36ef5cbde5647d6e9f7fb7dd38ce4e2a91.pdf"} {"title": "Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset", "url": "https://openreview.net/forum?id=v6s3HVjPerv", "detail_url": "https://openreview.net/forum?id=v6s3HVjPerv", "authors": "Leon Sixt,Martin Schuessler,Oana-Iuliana Popescu,Philipp Wei\u00df,Tim Landgraf", "tags": "ICLR 2022,Poster", "abstract": "A variety of methods exist to explain image classification models. However, whether they provide any benefit to users over simply comparing various inputs and the model\u2019s respective predictions remains unclear. We conducted a user study (N=240) to test how such a baseline explanation technique performs against concept-based and counterfactual explanations. To this end, we contribute a synthetic dataset generator capable of biasing individual attributes and quantifying their relevance to the model. In a study, we assess if participants can identify the relevant set of attributes compared to the ground-truth. Our results show that the baseline outperformed concept-based explanations. Counterfactual explanations from an invertible neural network performed similarly to the baseline. Still, they allowed users to identify some attributes more accurately. Our results highlight the importance of measuring how well users can reason about biases of a model, rather than solely relying on technical evaluations or proxy tasks. We open-source our study and dataset so it can serve as a blueprint for future studies.", "pdf": "https://openreview.net/pdf/49e3023b785924a7159ee756c546ac2ec523e8ea.pdf"} {"title": "Understanding the Variance Collapse of SVGD in High Dimensions", "url": "https://openreview.net/forum?id=Qycd9j5Qp9J", "detail_url": "https://openreview.net/forum?id=Qycd9j5Qp9J", "authors": "Jimmy Ba,Murat A Erdogdu,Marzyeh Ghassemi,Shengyang Sun,Taiji Suzuki,Denny Wu,Tianzong Zhang", "tags": "ICLR 2022,Poster", "abstract": "Stein variational gradient descent (SVGD) is a deterministic inference algorithm that evolves a set of particles to fit a target distribution. Despite its computational efficiency, SVGD often underestimates the variance of the target distribution in high dimensions. In this work we attempt to explain the variance collapse in SVGD. 
On the qualitative side, we compare the SVGD update with gradient descent on the maximum mean discrepancy (MMD) objective; we observe that the variance collapse phenomenon relates to the bias from deterministic updates present in the \"driving force\" of SVGD, and empirically verify that removal of such bias leads to more accurate variance estimation. On the quantitative side, we demonstrate that the variance collapse of SVGD can be accurately predicted in the proportional asymptotic limit, i.e., when the number of particles $n$ and dimensions $d$ diverge at the same rate. In particular, for learning high-dimensional isotropic Gaussians, we derive the exact equilibrium variance for both SVGD and MMD-descent under certain near-orthogonality assumption on the converged particles, and confirm that SVGD suffers from the \"curse of dimensionality\".", "pdf": "https://openreview.net/pdf/71e77dab5447ab6226d0f2e58132575f2217dc3b.pdf"} {"title": "Generalisation in Lifelong Reinforcement Learning through Logical Composition ", "url": "https://openreview.net/forum?id=ZOcX-eybqoL", "detail_url": "https://openreview.net/forum?id=ZOcX-eybqoL", "authors": "Geraud Nangue Tasse,Steven James,Benjamin Rosman", "tags": "ICLR 2022,Poster", "abstract": "We leverage logical composition in reinforcement learning to create a framework that enables an agent to autonomously determine whether a new task can be immediately solved using its existing abilities, or whether a task-specific skill should be learned. In the latter case, the proposed algorithm also enables the agent to learn the new task faster by generating an estimate of the optimal policy. Importantly, we provide two main theoretical results: we bound the performance of the transferred policy on a new task, and we give bounds on the necessary and sufficient number of tasks that need to be learned throughout an agent's lifetime to generalise over a distribution. We verify our approach in a series of experiments, where we perform transfer learning both after learning a set of base tasks, and after learning an arbitrary set of tasks. We also demonstrate that, as a side effect of our transfer learning approach, an agent can produce an interpretable Boolean expression of its understanding of the current task. Finally, we demonstrate our approach in the full lifelong setting where an agent receives tasks from an unknown distribution. Starting from scratch, an agent is able to quickly generalise over the task distribution after learning only a few tasks, which are sub-logarithmic in the size of the task space.", "pdf": "https://openreview.net/pdf/89cb79a9b9bb6a9a833a7a8ae73c8c5a87792970.pdf"} {"title": "PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions", "url": "https://openreview.net/forum?id=gSdSJoenupI", "detail_url": "https://openreview.net/forum?id=gSdSJoenupI", "authors": "Zhaoqi Leng,Mingxing Tan,Chenxi Liu,Ekin Dogus Cubuk,Jay Shi,Shuyang Cheng,Dragomir Anguelov", "tags": "ICLR 2022,Poster", "abstract": "Cross-entropy loss and focal loss are the most common choices when training deep neural networks for classification problems. Generally speaking, however, a good loss function can take on much more flexible forms, and should be tailored for different tasks and datasets. Motivated by how functions can be approximated via Taylor expansion, we propose a simple framework, named PolyLoss, to view and design loss functions as a linear combination of polynomial functions. 
Our PolyLoss allows the importance of different polynomial bases to be easily adjusted depending on the targeting tasks and datasets, while naturally subsuming the aforementioned cross-entropy loss and focal loss as special cases. Extensive experimental results show that the optimal choice within the PolyLoss is indeed dependent on the task and dataset. Simply by introducing one extra hyperparameter and adding one line of code, our Poly-1 formulation outperforms the cross-entropy loss and focal loss on 2D image classification, instance segmentation, object detection, and 3D object detection tasks, sometimes by a large margin.", "pdf": "https://openreview.net/pdf/d1430448cff98fb37273293f39735ba9c6a4313a.pdf"} {"title": "Improving Non-Autoregressive Translation Models Without Distillation", "url": "https://openreview.net/forum?id=I2Hw58KHp8O", "detail_url": "https://openreview.net/forum?id=I2Hw58KHp8O", "authors": "Xiao Shi Huang,Felipe Perez,Maksims Volkovs", "tags": "ICLR 2022,Poster", "abstract": "Transformer-based autoregressive (AR) machine translation models have achieved significant performance improvements, nearing human-level accuracy on some languages. The AR framework translates one token at a time which can be time consuming, especially for long sequences. To accelerate inference, recent work has been exploring non-autoregressive (NAR) approaches that translate blocks of tokens in parallel. Despite significant progress, leading NAR models still lag behind their AR counterparts, and only become competitive when trained with distillation. In this paper we investigate possible reasons behind this performance gap, namely, the indistinguishability of tokens, and mismatch between training and inference. We then propose the Conditional Masked Language Model with Correction (CMLMC) that addresses these problems. Empirically, we show that CMLMC achieves state-of-the-art NAR performance when trained on raw data without distillation and approaches AR performance on multiple datasets. Full code for this work will be released at the time of publication.", "pdf": "https://openreview.net/pdf/fe5e18c9939f10295c39693c81d77b03816cad63.pdf"} {"title": "A Theory of Tournament Representations", "url": "https://openreview.net/forum?id=zzk231Ms1Ih", "detail_url": "https://openreview.net/forum?id=zzk231Ms1Ih", "authors": "Arun Rajkumar,Vishnu Veerathu,Abdul Bakey Mir", "tags": "ICLR 2022,Poster", "abstract": "Real-world tournaments are almost always intransitive. Recent works have noted that parametric models which assume $d$ dimensional node representations can effectively model intransitive tournaments. However, nothing is known about the structure of the class of tournaments that arise out of any fixed $d$ dimensional representations. In this work, we develop a novel theory for understanding parametric tournament representations. Our first contribution is to structurally characterize the class of tournaments that arise out of $d$ dimensional representations. We do this by showing that these tournament classes have forbidden configurations that must necessarily be a union of flip classes, a novel way to partition the set of all tournaments. We further characterize rank $2$ tournaments completely by showing that the associated forbidden flip class contains just $2$ tournaments. Specifically, we show that the rank $2$ tournaments are equivalent to locally transitive tournaments. 
This insight allows us to show that the minimum feedback arc set problem on this tournament class can be solved using the standard Quicksort procedure. We also exhibit specific forbidden configurations for rank $4$ tournaments. For a general rank $d$ tournament class, we show that the flip class associated with a coned-doubly regular tournament of size $\\mathcal{O}(\\sqrt{d})$ must be a forbidden configuration. To answer a dual question, using a celebrated result of Forster, we show a lower bound of $\\Theta(\\sqrt{n})$ on the minimum dimension needed to represent all tournaments on $n$ nodes. For any given tournament, we show a novel upper bound on the smallest representation dimension that depends on the least size of the number of unique nodes in any feedback arc set of the flip class associated with a tournament. We show how our results also shed light on the upper bound of sign-rank of matrices. ", "pdf": "https://openreview.net/pdf/a7853d8c301f8a37bc858f4c428d73862dabff26.pdf"} {"title": "Convergent and Efficient Deep Q Learning Algorithm", "url": "https://openreview.net/forum?id=OJm3HZuj4r7", "detail_url": "https://openreview.net/forum?id=OJm3HZuj4r7", "authors": "Zhikang T. Wang,Masahito Ueda", "tags": "ICLR 2022,Poster", "abstract": "Despite the empirical success of the deep Q network (DQN) reinforcement learning algorithm and its variants, DQN is still not well understood and it does not guarantee convergence. In this work, we show that DQN can indeed diverge and cease to operate in realistic settings. Although there exist gradient-based convergent methods, we show that they actually have inherent problems in learning dynamics which cause them to fail even for simple tasks. To overcome these problems, we propose a convergent DQN algorithm (C-DQN) that is guaranteed to converge and can work with large discount factors (0.9998). It learns robustly in difficult settings and can learn several difficult games in the Atari 2600 benchmark that DQN fails to solve.", "pdf": "https://openreview.net/pdf/d999c3cb704da4722ea5330b5dd48600eb9c4ef4.pdf"} {"title": "Trigger Hunting with a Topological Prior for Trojan Detection", "url": "https://openreview.net/forum?id=TXsjU8BaibT", "detail_url": "https://openreview.net/forum?id=TXsjU8BaibT", "authors": "Xiaoling Hu,Xiao Lin,Michael Cogswell,Yi Yao,Susmit Jha,Chao Chen", "tags": "ICLR 2022,Poster", "abstract": "Despite their success and popularity, deep neural networks (DNNs) are vulnerable when facing backdoor attacks. This impedes their wider adoption, especially in mission critical applications. This paper tackles the problem of Trojan detection, namely, identifying Trojaned models \u2013 models trained with poisoned data. One popular approach is reverse engineering, i.e., recovering the triggers on a clean image by manipulating the model\u2019s prediction. One major challenge of the reverse engineering approach is the enormous search space of triggers. To this end, we propose innovative priors such as diversity and topological simplicity to not only increase the chances of finding the appropriate triggers but also improve the quality of the found triggers. Moreover, by encouraging a diverse set of trigger candidates, our method can perform effectively in cases with unknown target labels. 
We demonstrate that these priors can significantly improve the quality of the recovered triggers, resulting in substantially improved Trojan detection accuracy as validated on both synthetic and publicly available TrojAI benchmarks.", "pdf": "https://openreview.net/pdf/4db1d42d467c296c5ec7fa3f38e37dcb5c140e84.pdf"} {"title": "Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL", "url": "https://openreview.net/forum?id=JM2kFbJvvI", "detail_url": "https://openreview.net/forum?id=JM2kFbJvvI", "authors": "Yanchao Sun,Ruijie Zheng,Yongyuan Liang,Furong Huang", "tags": "ICLR 2022,Poster", "abstract": "Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations on state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, in terms of both whether we can find the optimal attack and how efficiently we can find it. Existing works on adversarial RL either use heuristics-based methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as a part of the environment, which can find the optimal adversary but may become intractable in a large state space. \nThis paper introduces a novel attacking method to find the optimal attacks through collaboration between a designed function named \"actor\" and an RL-based learner named \"director'\". The actor crafts state perturbations for a given policy perturbation direction, and the director learns to propose the best policy perturbation directions. Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces. Empirical results show that our proposed PA-AD universally outperforms state-of-the-art attacking methods in various Atari and MuJoCo environments. By applying PA-AD to adversarial training, we achieve state-of-the-art empirical robustness in multiple tasks under strong adversaries.", "pdf": "https://openreview.net/pdf/b11335ea1d1d4ca95531723261e11735e0550bc4.pdf"} {"title": "Chunked Autoregressive GAN for Conditional Waveform Synthesis", "url": "https://openreview.net/forum?id=v3aeIsY_vVX", "detail_url": "https://openreview.net/forum?id=v3aeIsY_vVX", "authors": "Max Morrison,Rithesh Kumar,Kundan Kumar,Prem Seetharaman,Aaron Courville,Yoshua Bengio", "tags": "ICLR 2022,Poster", "abstract": "Conditional waveform synthesis models learn a distribution of audio waveforms given conditioning such as text, mel-spectrograms, or MIDI. These systems employ deep generative models that model the waveform via either sequential (autoregressive) or parallel (non-autoregressive) sampling. Generative adversarial networks (GANs) have become a common choice for non-autoregressive waveform synthesis. However, state-of-the-art GAN-based models produce artifacts when performing mel-spectrogram inversion. In this paper, we demonstrate that these artifacts correspond with an inability for the generator to learn accurate pitch and periodicity. We show that simple pitch and periodicity conditioning is insufficient for reducing this error relative to using autoregression. We discuss the inductive bias that autoregression provides for learning the relationship between instantaneous frequency and phase, and show that this inductive bias holds even when autoregressively sampling large chunks of the waveform during each forward pass. 
Relative to prior state-of-the-art GAN-based models, our proposed model, Chunked Autoregressive GAN (CARGAN), reduces pitch error by 40-60%, reduces training time by 58%, maintains a fast inference speed suitable for real-time or interactive applications, and maintains or improves subjective quality.", "pdf": "https://openreview.net/pdf/070239829c83980ec499e2eff346d48eafe3ecb5.pdf"} {"title": "COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks", "url": "https://openreview.net/forum?id=psh0oeMSBiF", "detail_url": "https://openreview.net/forum?id=psh0oeMSBiF", "authors": "Fan Wu,Linyi Li,Huan Zhang,Bhavya Kailkhura,Krishnaram Kenthapadi,Ding Zhao,Bo Li", "tags": "ICLR 2022,Poster", "abstract": "As reinforcement learning (RL) has achieved near human-level performance in a variety of tasks, its robustness has raised great attention. While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defenses, its robustness against training-time (poisoning) attacks remains largely unanswered. In this work, we focus on certifying the robustness of offline RL in the presence of poisoning attacks, where a subset of training trajectories could be arbitrarily manipulated. We propose the first certification framework, COPA, to certify the number of poisoning trajectories that can be tolerated regarding different certification criteria. Given the complex structure of RL, we propose two certification criteria: per-state action stability and cumulative reward bound. To further improve the certification, we propose new partition and aggregation protocols to train robust policies. We further prove that some of the proposed certification methods are theoretically tight and some are NP-Complete problems. We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) The proposed robust aggregation protocols such as temporal aggregation can significantly improve the certifications; (2) Our certifications for both per-state action stability and cumulative reward bound are efficient and tight; (3) The certifications for different training algorithms and environments are different, implying their intrinsic robustness properties. All experimental results are available at https://copa-leaderboard.github.io.", "pdf": "https://openreview.net/pdf/0a24a116cb24a1e99cd715566dae243e36472472.pdf"} {"title": "ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning", "url": "https://openreview.net/forum?id=Vzh1BFUCiIX", "detail_url": "https://openreview.net/forum?id=Vzh1BFUCiIX", "authors": "Vamsi Aribandi,Yi Tay,Tal Schuster,Jinfeng Rao,Huaixiu Steven Zheng,Sanket Vaibhav Mehta,Honglei Zhuang,Vinh Q. Tran,Dara Bahri,Jianmo Ni,Jai Gupta,Kai Hui,Sebastian Ruder,Donald Metzler", "tags": "ICLR 2022,Poster", "abstract": "Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training. Towards this goal, this paper introduces ExMix (Extreme Mixture): a massive collection of 107 supervised NLP tasks across diverse domains and task-families. Using ExMix, we study the effect of multi-task pre-training at the largest scale to date, and analyze co-training transfer amongst common families of tasks. 
Through this analysis, we show that manually curating an ideal set of tasks for multi-task pre-training is not straightforward, and that multi-task scaling can vastly improve models on its own. Finally, we propose ExT5: a model pre-trained using a multi-task objective of self-supervised span denoising and supervised ExMix. Via extensive experiments, we show that ExT5 outperforms strong T5 baselines on SuperGLUE, GEM, Rainbow, Closed-Book QA tasks, and several tasks outside of ExMix. ExT5 also significantly improves sample efficiency while pre-training.", "pdf": "https://openreview.net/pdf/b64da5c159b90bf56d174fc67459b74928711232.pdf"} {"title": "Provable Adaptation across Multiway Domains via Representation Learning", "url": "https://openreview.net/forum?id=gRCCdgpVZf", "detail_url": "https://openreview.net/forum?id=gRCCdgpVZf", "authors": "Zhili Feng,Shaobo Han,Simon Shaolei Du", "tags": "ICLR 2022,Poster", "abstract": "This paper studies zero-shot domain adaptation where each domain is indexed on a multi-dimensional array, and we only have data from a small subset of domains. Our goal is to produce predictors that perform well on \\emph{unseen} domains. We propose a model which consists of a domain-invariant latent representation layer and a domain-specific linear prediction layer with a low-rank tensor structure. Theoretically, we present explicit sample complexity bounds to characterize the prediction error on unseen domains in terms of the number of domains with training data and the number of data per domain. To our knowledge, this is the first finite-sample guarantee for zero-shot domain adaptation. In addition, we provide experiments on two-way MNIST and four-way fiber sensing datasets to demonstrate the effectiveness of our proposed model.", "pdf": "https://openreview.net/pdf/097cce8a39240bc2a614483e1cb4e0314237f10a.pdf"} {"title": "Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators", "url": "https://openreview.net/forum?id=EXHG-A3jlM", "detail_url": "https://openreview.net/forum?id=EXHG-A3jlM", "authors": "John Guibas,Morteza Mardani,Zongyi Li,Andrew Tao,Anima Anandkumar,Bryan Catanzaro", "tags": "ICLR 2022,Poster", "abstract": "Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self attention. However, this scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution. This principle was previously used to design FNO, which solves global convolution efficiently in the Fourier domain and has shown promise in learning challenging PDEs. To handle challenges in visual representation learning such as discontinuities in images and high resolution inputs, we propose principled architectural modifications to FNO which results in memory and computational efficiency. This includes imposing a block-diagonal structure on the channel mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model is highly parallel with a quasi-linear complexity and has linear memory in the sequence size. 
AFNO outperforms self-attention mechanisms for few-shot segmentation in terms of both efficiency and accuracy. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.", "pdf": "https://openreview.net/pdf/bec7c123720932f2545dfb12e85bab8ac5cca6ff.pdf"} {"title": "Sample Selection with Uncertainty of Losses for Learning with Noisy Labels", "url": "https://openreview.net/forum?id=xENf4QUL4LW", "detail_url": "https://openreview.net/forum?id=xENf4QUL4LW", "authors": "Xiaobo Xia,Tongliang Liu,Bo Han,Mingming Gong,Jun Yu,Gang Niu,Masashi Sugiyama", "tags": "ICLR 2022,Poster", "abstract": "In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled data during training. However, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely but not certain to be incorrect. There are actually two possibilities of a large-loss data point: (a) it is mislabeled, and then its loss decreases slower than other data, since deep neural networks learn patterns first; (b) it belongs to an underrepresented group of data and has not been selected yet. In this paper, we incorporate the uncertainty of losses by adopting interval estimation instead of point estimation of losses, where lower bounds of the confidence intervals of losses derived from distribution-free concentration inequalities, but not losses themselves, are used for sample selection. In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try. As a result, we can better explore underrepresented data that are correctly labeled but seem to be mislabeled at first glance. Experiments demonstrate that the proposed method is superior to baselines and robust to a broad range of label noise types.", "pdf": "https://openreview.net/pdf/0ebab5bba4b36eec025abfd2e21f947e05d6e662.pdf"} {"title": "Data-Driven Offline Optimization for Architecting Hardware Accelerators", "url": "https://openreview.net/forum?id=GsH-K1VIyy", "detail_url": "https://openreview.net/forum?id=GsH-K1VIyy", "authors": "Aviral Kumar,Amir Yazdanbakhsh,Milad Hashemi,Kevin Swersky,Sergey Levine", "tags": "ICLR 2022,Poster", "abstract": "To attain higher efficiency, the industry has gradually reformed towards application-specific hardware accelerators. While such a paradigm shift is already starting to show promising results, designers need to spend considerable manual effort and perform a large number of time-consuming simulations to find accelerators that can accelerate multiple target applications while obeying design constraints. Moreover, such a simulation-driven approach must be re-run from scratch every time the set of target applications or design constraints change. An alternative paradigm is to use a data-driven, offline approach that utilizes logged simulation data, to architect hardware accelerators, without needing any form of simulations. Such an approach not only alleviates the need to run time-consuming simulation, but also enables data reuse and applies even when the set of target applications changes. In this paper, we develop such a data-driven offline optimization method for designing hardware accelerators, dubbed PRIME, that enjoys all of these properties. 
Our approach learns a conservative, robust estimate of the desired cost function, utilizes infeasible points and optimizes the design against this estimate without any additional simulator queries during optimization. PRIME architects accelerators---tailored towards both single- and multi-applications---improving performance upon state-of-the-art simulation-driven methods by about 1.54x and 1.20x, while considerably reducing the required total simulation time by 93% and 99%, respectively. In addition, PRIME also architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x.", "pdf": "https://openreview.net/pdf/62fa3ad6648729230b552447a872cf6777743905.pdf"} {"title": "Multi-Agent MDP Homomorphic Networks", "url": "https://openreview.net/forum?id=H7HDG--DJF0", "detail_url": "https://openreview.net/forum?id=H7HDG--DJF0", "authors": "Elise van der Pol,Herke van Hoof,Frans A Oliehoek,Max Welling", "tags": "ICLR 2022,Poster", "abstract": "This paper introduces Multi-Agent MDP Homomorphic Networks, a class of networks that allows distributed execution using only local information, yet is able to share experience between global symmetries in the joint state-action space of cooperative multi-agent systems. In cooperative multi-agent systems, complex symmetries arise between different configurations of the agents and their local observations. For example, consider a group of agents navigating: rotating the state globally results in a permutation of the optimal joint policy. Existing work on symmetries in single agent reinforcement learning can only be generalized to the fully centralized setting, because such approaches rely on the global symmetry in the full state-action spaces, and these can result in correspondences across agents. To encode such symmetries while still allowing distributed execution we propose a factorization that decomposes global symmetries into local transformations. Our proposed factorization allows for distributing the computation that enforces global symmetries over local agents and local interactions. We introduce a multi-agent equivariant policy network based on this factorization. We show empirically on symmetric multi-agent problems that globally symmetric distributable policies improve data efficiency compared to non-equivariant baselines.", "pdf": "https://openreview.net/pdf/3a8f28592a8f20859b54c37f57cb659f7b0664fa.pdf"} {"title": "Geometry-Consistent Neural Shape Representation with Implicit Displacement Fields", "url": "https://openreview.net/forum?id=yhCp5RcZD7", "detail_url": "https://openreview.net/forum?id=yhCp5RcZD7", "authors": "Wang Yifan,Lukas Rahmann,Olga Sorkine-hornung", "tags": "ICLR 2022,Poster", "abstract": "We present implicit displacement fields, a novel representation for detailed 3D geometry. Inspired by a classic surface deformation technique, displacement mapping, our method represents a complex surface as a smooth base surface plus a displacement along the base's normal directions, resulting in a frequency-based shape decomposition, where the high-frequency signal is constrained geometrically by the low-frequency signal. Importantly, this disentanglement is unsupervised thanks to a tailored architectural design that has an innate frequency hierarchy by construction. 
We explore implicit displacement field surface reconstruction and detail transfer\nand demonstrate superior representational power, training stability, and generalizability.", "pdf": "https://openreview.net/pdf/55c1560b8382311a7f02b90aaba2fa21e4475e9d.pdf"} {"title": "Modeling Label Space Interactions in Multi-label Classification using Box Embeddings", "url": "https://openreview.net/forum?id=tyTH9kOxcvh", "detail_url": "https://openreview.net/forum?id=tyTH9kOxcvh", "authors": "Dhruvesh Patel,Pavitra Dangati,Jay-Yoon Lee,Michael Boratko,Andrew McCallum", "tags": "ICLR 2022,Poster", "abstract": "Multi-label classification is a challenging structured prediction task in which a set of output class labels are predicted for each input. Real-world datasets often have natural or latent taxonomic relationships between labels, making it desirable for models to employ label representations capable of capturing such taxonomies. Most existing multi-label classification methods do not do so, resulting in label predictions that are inconsistent with the taxonomic constraints, thus failing to accurately represent the fundamentals of the problem setting. In this work, we introduce the multi-label box model (MBM), a multi-label classification method that combines the encoding power of neural networks with the inductive bias and probabilistic semantics of box embeddings (Vilnis et al., 2018). Box embeddings can be understood as trainable Venn-diagrams based on hyper-rectangles. Representing labels by boxes rather than vectors, MBM is able to capture taxonomic relations among labels. Furthermore, since box embeddings allow these relations to be learned by stochastic gradient descent from data, and to be read as calibrated conditional probabilities, our model is endowed with a high degree of interpretability. This interpretability also facilitates the injection of partial information about label-label relationships into model training, to further improve its consistency. We provide theoretical grounding for our method and show experimentally the model's ability to learn the true latent taxonomic structure from data. Through extensive empirical evaluations on both small and large-scale multi-label classification datasets, we show that MBM can significantly improve taxonomic consistency while preserving or surpassing the state-of-the-art predictive performance.", "pdf": "https://openreview.net/pdf/f5671d43125692a6533d9c7a1996335b8a1cd482.pdf"} {"title": "It Takes Two to Tango: Mixup for Deep Metric Learning", "url": "https://openreview.net/forum?id=ZKy2X3dgPA", "detail_url": "https://openreview.net/forum?id=ZKy2X3dgPA", "authors": "Shashanka Venkataramanan,Bill Psomas,Ewa Kijak,laurent amsaleg,Konstantinos Karantzalos,Yannis Avrithis", "tags": "ICLR 2022,Poster", "abstract": "Metric learning involves learning a discriminative representation such that embeddings of similar classes are encouraged to be close, while embeddings of dissimilar classes are pushed far apart. State-of-the-art methods focus mostly on sophisticated loss functions or mining strategies. On the one hand, metric learning losses consider two or more examples at a time. On the other hand, modern data augmentation methods for classification consider two or more examples at a time. The combination of the two ideas is under-studied.\n\nIn this work, we aim to bridge this gap and improve representations using mixup, which is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time. 
This task is challenging because, unlike classification, the loss functions used in metric learning are not additive over examples, so the idea of interpolating target labels is not straightforward. To the best of our knowledge, we are the first to investigate mixing both examples and target labels for deep metric learning. We develop a generalized formulation that encompasses existing metric learning loss functions and modify it to accommodate for mixup, introducing Metric Mix, or Metrix. We also introduce a new metric---utilization---to demonstrate that by mixing examples during training, we are exploring areas of the embedding space beyond the training classes, thereby improving representations. To validate the effect of improved representations, we show that mixing inputs, intermediate representations or embeddings along with target labels significantly outperforms state-of-the-art metric learning methods on four benchmark deep metric learning datasets.", "pdf": "https://openreview.net/pdf/1b4683c706bc39fb7b56b3982f8c10166b29773d.pdf"} {"title": "Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation", "url": "https://openreview.net/forum?id=G89-1yZLFHk", "detail_url": "https://openreview.net/forum?id=G89-1yZLFHk", "authors": "Bichen Wu,Ruizhe Cheng,Peizhao Zhang,Tianren Gao,Joseph E. Gonzalez,Peter Vajda", "tags": "ICLR 2022,Poster", "abstract": "Traditional computer vision models are trained to predict a fixed set of predefined categories. Recently, natural language has been shown to be a broader and richer source of supervision that provides finer descriptions to visual concepts than supervised \"gold\" labels. Previous works, such as CLIP, use InfoNCE loss to train a model to predict the pairing between images and text captions. CLIP, however, is data hungry and requires more than 400M image-text pairs for training. The inefficiency can be \\textit{partially} attributed to the fact that the image-text pairs are noisy. To address this, we propose OTTER (Optimal TransporT distillation for Efficient zero-shot Recognition), which uses online entropic optimal transport to find a soft image-text match as labels for contrastive learning. Based on pretrained image and text encoders, models trained with OTTER achieve strong performance with only 3M image text pairs. Compared with InfoNCE loss, label smoothing, and knowledge distillation, OTTER consistently outperforms these baselines in zero-shot evaluation on Google Open Images (19,958 classes) and multi-labeled ImageNet 10K (10032 classes) from Tencent ML-Images. Over 42 evaluations on 7 different dataset/architecture settings x 6 metrics, OTTER outperforms (32) or ties (2) all baselines in 34 of them. Our source code is open sourced at https://github.com/facebookresearch/OTTER.", "pdf": "https://openreview.net/pdf/4692c27fcf85afed7f22e02ea4a1c14104fce2a4.pdf"} {"title": "A Statistical Framework for Efficient Out of Distribution Detection in Deep Neural Networks", "url": "https://openreview.net/forum?id=Oy9WeuZD51", "detail_url": "https://openreview.net/forum?id=Oy9WeuZD51", "authors": "Matan Haroush,Tzviel Frostig,Ruth Heller,Daniel Soudry", "tags": "ICLR 2022,Poster", "abstract": "Background.\nCommonly, Deep Neural Networks (DNNs) generalize well on samples drawn from a distribution similar to that of the training set. 
However, DNNs' predictions are brittle and unreliable when the test samples are drawn from a dissimilar distribution.\nThis is a major concern for deployment in real-world applications, where such behavior may come at a considerable cost, such as industrial production lines, autonomous vehicles, or healthcare applications.\n\nContributions.\nWe frame Out Of Distribution (OOD) detection in DNNs as a statistical hypothesis testing problem. Tests generated within our proposed framework combine evidence from the entire network.\nUnlike previous OOD detection heuristics, this framework returns a $p$-value for each test sample. It is guaranteed to maintain the Type I Error (T1E - incorrectly predicting OOD for an actual in-distribution sample) for test data. Moreover, this allows to combine several detectors while maintaining the T1E.\n\nBuilding on this framework, we suggest a novel OOD procedure based on low-order statistics. Our method achieves comparable or better results than state-of-the-art methods on well-accepted OOD benchmarks, without retraining the network parameters or assuming prior knowledge on the test distribution --- and at a fraction of the computational cost.", "pdf": "https://openreview.net/pdf/8ab4fc0f10bb1b17497961ee8ff9912af8ed2cc3.pdf"} {"title": "FedBABU: Toward Enhanced Representation for Federated Image Classification", "url": "https://openreview.net/forum?id=HuaYQfggn5u", "detail_url": "https://openreview.net/forum?id=HuaYQfggn5u", "authors": "Jaehoon Oh,SangMook Kim,Se-Young Yun", "tags": "ICLR 2022,Poster", "abstract": "Federated learning has evolved to improve a single global model under data heterogeneity (as a curse) or to develop multiple personalized models using data heterogeneity (as a blessing). However, little research has considered both directions simultaneously. In this paper, we first investigate the relationship between them by analyzing Federated Averaging at the client level and determine that a better federated global model performance does not constantly improve personalization. To elucidate the cause of this personalization performance degradation problem, we decompose the entire network into the body (extractor), which is related to universality, and the head (classifier), which is related to personalization. We then point out that this problem stems from training the head. Based on this observation, we propose a novel federated learning algorithm, coined FedBABU, which only updates the body of the model during federated training (i.e., the head is randomly initialized and never updated), and the head is fine-tuned for personalization during the evaluation process. Extensive experiments show consistent performance improvements and an efficient personalization of FedBABU. The code is available at https://github.com/jhoon-oh/FedBABU.", "pdf": "https://openreview.net/pdf/09e0b377fa4e3200e80d267b3e1df94235e10a45.pdf"} {"title": "Should I Run Offline Reinforcement Learning or Behavioral Cloning?", "url": "https://openreview.net/forum?id=AP1MKT37rJ", "detail_url": "https://openreview.net/forum?id=AP1MKT37rJ", "authors": "Aviral Kumar,Joey Hong,Anikait Singh,Sergey Levine", "tags": "ICLR 2022,Poster", "abstract": "Offline reinforcement learning (RL) algorithms can acquire effective policies by utilizing only previously collected experience, without any online interaction. 
While it is widely understood that offline RL is able to extract good policies even from highly suboptimal data, in practice offline RL is often used with data that resembles demonstrations. In this case, one can also use behavioral cloning (BC) algorithms, which mimic a subset of the dataset via supervised learning. It seems natural to ask: When should we prefer offline RL over BC? In this paper, our goal is to characterize environments and dataset compositions where offline RL leads to better performance than BC. In particular, we characterize the properties of environments that allow offline RL methods to perform better than BC methods even when only provided with expert data. Additionally, we show that policies trained on suboptimal data that is sufficiently noisy can attain better performance than even BC algorithms with expert data, especially on long-horizon problems. We validate our theoretical results via extensive experiments on both diagnostic and high-dimensional domains including robot manipulation, maze navigation and Atari games, when learning from a variety of data sources. We observe that modern offline RL methods trained on suboptimal, noisy data in sparse reward domains outperform cloning the expert data in several practical problems.", "pdf": "https://openreview.net/pdf/ab91050974b19858a9a241236b4d69019903de0e.pdf"} {"title": "Learning State Representations via Retracing in Reinforcement Learning", "url": "https://openreview.net/forum?id=CLpxpXqqBV", "detail_url": "https://openreview.net/forum?id=CLpxpXqqBV", "authors": "Changmin Yu,Dong Li,Jianye HAO,Jun Wang,Neil Burgess", "tags": "ICLR 2022,Poster", "abstract": "We propose learning via retracing, a novel self-supervised approach for learning the state representation (and the associated dynamics model) for reinforcement learning tasks. In addition to the predictive (reconstruction) supervision in the forward direction, we propose to include \"retraced\" transitions for representation/model learning, by enforcing the cycle-consistency constraint between the original and retraced states, hence improve upon the sample efficiency of learning. Moreover, learning via retracing explicitly propagates information about future transitions backward for inferring previous states, thus facilitates stronger representation learning for the downstream reinforcement learning tasks. We introduce Cycle-Consistency World Model (CCWM), a concrete model-based instantiation of learning via retracing. Additionally we propose a novel adaptive \"truncation\" mechanism for counteracting the negative impacts brought by \"irreversible\" transitions such that learning via retracing can be maximally effective. Through extensive empirical studies on visual-based continuous control benchmarks, we demonstrate that CCWM achieves state-of-the-art performance in terms of sample efficiency and asymptotic performance, whilst exhibiting behaviours that are indicative of stronger representation learning. ", "pdf": "https://openreview.net/pdf/04d24e2870546f3dcff312162e1b4006ecd641b7.pdf"} {"title": "Open-World Semi-Supervised Learning", "url": "https://openreview.net/forum?id=O-r8LOR-CCA", "detail_url": "https://openreview.net/forum?id=O-r8LOR-CCA", "authors": "Kaidi Cao,Maria Brbic,Jure Leskovec", "tags": "ICLR 2022,Poster", "abstract": "A fundamental limitation of applying semi-supervised learning in real-world settings is the assumption that unlabeled test data contains only classes previously encountered in the labeled training data. 
However, this assumption rarely holds for data in-the-wild, where instances belonging to novel classes may appear at testing time. Here, we introduce a novel open-world semi-supervised learning setting that formalizes the notion that novel classes may appear in the unlabeled test data. In this novel setting, the goal is to solve the class distribution mismatch problem between labeled and unlabeled data, where at the test time every input instance either needs to be classified into one of the existing classes or a new unseen class needs to be initialized and the instance assigned to it. To tackle this challenging problem, we propose ORCA, an end-to-end approach that assigns instances to previously seen classes or forms novel classes by grouping similar instances without assuming any prior knowledge. The key idea in ORCA is to utilize uncertainty adaptive margin to circumvent the bias towards seen classes caused by learning seen classes faster than the novel classes. In this way, ORCA gradually increases the discriminability of the model during the training and reduces the gap between intra-class variance of seen with respect to novel classes. Extensive experiments on image classification datasets and a single-cell dataset demonstrate that ORCA consistently outperforms alternative baselines, achieving 25% improvement on seen and 96% improvement on novel classes of the ImageNet dataset. ", "pdf": "https://openreview.net/pdf/e5ffbb438b307d601bd7794c87fae3c23950a63f.pdf"} {"title": "Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent", "url": "https://openreview.net/forum?id=af1eUDdUVz", "detail_url": "https://openreview.net/forum?id=af1eUDdUVz", "authors": "Oliver Bryniarski,Nabeel Hingun,Pedro Pachuca,Vincent Wang,Nicholas Carlini", "tags": "ICLR 2022,Poster", "abstract": "Evading adversarial example detection defenses requires finding adversarial examples that must simultaneously (a) be misclassified by the model and (b) be detected as non-adversarial. We find that existing attacks that attempt to satisfy multiple simultaneous constraints often over-optimize against one constraint at the cost of satisfying another. We introduce Selective Projected Gradient Descent and Orthogonal Projected Gradient Descent, improved attack techniques to generate adversarial examples that avoid this problem by orthogonalizing the gradients when running standard gradient-based attacks. We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate.", "pdf": "https://openreview.net/pdf/3d2eb96b012475581aa80cda16373c217e28c087.pdf"} {"title": "Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off", "url": "https://openreview.net/forum?id=Azh9QBQ4tR7", "detail_url": "https://openreview.net/forum?id=Azh9QBQ4tR7", "authors": "Rahul Rade,Seyed-Mohsen Moosavi-Dezfooli", "tags": "ICLR 2022,Poster", "abstract": "While adversarial training has become the de facto approach for training robust classifiers, it leads to a drop in accuracy. This has led to prior works postulating that accuracy is inherently at odds with robustness. Yet, the phenomenon remains inexplicable. In this paper, we closely examine the changes induced in the decision boundary of a deep network during adversarial training. We find that adversarial training leads to unwarranted increase in the margin along certain adversarial directions, thereby hurting accuracy. 
Motivated by this observation, we present a novel algorithm, called Helper-based Adversarial Training (HAT), to reduce this effect by incorporating additional wrongly labelled examples during training. Our proposed method provides a notable improvement in accuracy without compromising robustness. It achieves a better trade-off between accuracy and robustness in comparison to existing defenses. Code is available at https://github.com/imrahulr/hat.", "pdf": "https://openreview.net/pdf/c2a72787c4e6f0d24586b17eab7ca97027346386.pdf"} {"title": "Expressivity of Emergent Languages is a Trade-off between Contextual Complexity and Unpredictability", "url": "https://openreview.net/forum?id=WxuE_JWxjkW", "detail_url": "https://openreview.net/forum?id=WxuE_JWxjkW", "authors": "Shangmin Guo,Yi Ren,Kory Wallace Mathewson,Simon Kirby,Stefano V Albrecht,Kenny Smith", "tags": "ICLR 2022,Poster", "abstract": "Researchers are using deep learning models to explore the emergence of language in various language games, where agents interact and develop an emergent language to solve tasks. We focus on the factors that determine the expressivity of emergent languages, which reflects the amount of information about input spaces those languages are capable of encoding. We measure the expressivity of emergent languages based on the generalisation performance across different games, and demonstrate that the expressivity of emergent languages is a trade-off between the complexity and unpredictability of the context those languages emerged from. Another contribution of this work is the discovery of message type collapse, i.e. the number of unique messages is lower than that of inputs. We also show that using the contrastive loss proposed by Chen et al. (2020) can alleviate this problem.", "pdf": "https://openreview.net/pdf/be46689741877d2b59dc56c09443500af7dd2941.pdf"} {"title": "Fast AdvProp", "url": "https://openreview.net/forum?id=hcoswsDHNAW", "detail_url": "https://openreview.net/forum?id=hcoswsDHNAW", "authors": "Jieru Mei,Yucheng Han,Yutong Bai,Yixiao Zhang,Yingwei Li,Xianhang Li,Alan Yuille,Cihang Xie", "tags": "ICLR 2022,Poster", "abstract": "Adversarial Propagation (AdvProp) is an effective way to improve recognition models, leveraging adversarial examples. Nonetheless, AdvProp suffers from the extremely slow training speed, mainly because: a) extra forward and backward passes are required for generating adversarial examples; b) both original samples and their adversarial counterparts are used for training (i.e., 2X data). In this paper, we introduce Fast AdvProp, which aggressively revamps AdvProp's costly training components, rendering the method nearly as cheap as the vanilla training. Specifically, our modifications in Fast AdvProp are guided by the hypothesis that disentangled learning with adversarial examples is the key for performance improvements, while other training recipes (e.g., paired clean and adversarial training samples, multi-step adversarial attackers) could be largely simplified. \n\nOur empirical results show that, compared to the vanilla training baseline, Fast AdvProp is able to further improve model performance on a spectrum of visual benchmarks, without incurring extra training cost. Additionally, our ablations find Fast AdvProp scales better if larger models are used, is compatible with existing data augmentation methods (i.e., Mixup and CutMix), and can be easily adapted to other recognition tasks like object detection. 
The code is available here: https://github.com/meijieru/fast_advprop.", "pdf": "https://openreview.net/pdf/12e365a996eeb801b2173df149f6f8bc69ec02fa.pdf"} {"title": "Triangle and Four Cycle Counting with Predictions in Graph Streams", "url": "https://openreview.net/forum?id=8in_5gN9I0", "detail_url": "https://openreview.net/forum?id=8in_5gN9I0", "authors": "Justin Y Chen,Talya Eden,Piotr Indyk,Honghao Lin,Shyam Narayanan,Ronitt Rubinfeld,Sandeep Silwal,Tal Wagner,David Woodruff,Michael Zhang", "tags": "ICLR 2022,Poster", "abstract": "We propose data-driven one-pass streaming algorithms for estimating the number of triangles and four cycles, two fundamental problems in graph analytics that are widely studied in the graph data stream literature. Recently, Hsu et al. (2019) and Jiang et al. (2020) applied machine learning techniques in other data stream problems, using a trained oracle that can predict certain properties of the stream elements to improve on prior \u201cclassical\u201d algorithms that did not use oracles. In this paper, we explore the power of a \u201cheavy edge\u201d oracle in multiple graph edge streaming models. In the adjacency list model, we present a one-pass triangle counting algorithm improving upon the previous space upper bounds without such an oracle. In the arbitrary order model, we present algorithms for both triangle and four cycle estimation with fewer passes and the same space complexity as in previous algorithms, and we show several of these bounds are optimal. We analyze our algorithms under several noise models, showing that the algorithms perform well even when the oracle errs. Our methodology expands upon prior work on \u201cclassical\u201d streaming algorithms, as previous multi-pass and random order streaming algorithms can be seen as special cases of our algorithms, where the first pass or random order was used to implement the heavy edge oracle. Lastly, our experiments demonstrate advantages of the proposed method compared to state-of-the-art streaming algorithms.", "pdf": "https://openreview.net/pdf/25b70c42018200ce5f79c1f1dfc16f4c95ff9304.pdf"} {"title": "Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning", "url": "https://openreview.net/forum?id=js62_xuLDDv", "detail_url": "https://openreview.net/forum?id=js62_xuLDDv", "authors": "Natalie Dullerud,Karsten Roth,Kimia Hamidieh,Nicolas Papernot,Marzyeh Ghassemi", "tags": "ICLR 2022,Poster", "abstract": "Deep metric learning (DML) enables learning with less supervision through its emphasis on the similarity structure of representations. There has been much work on improving generalization of DML in settings like zero-shot retrieval, but little is known about its implications for fairness. In this paper, we are the first to evaluate state-of-the-art DML methods trained on imbalanced data, and to show the negative impact these representations have on minority subgroup performance when used for downstream tasks. In this work, we first define fairness in DML through an analysis of three properties of the representation space -- inter-class alignment, intra-class alignment, and uniformity -- and propose \\textit{\\textbf{finDML}}, the \\textit{\\textbf{f}}airness \\textit{\\textbf{i}}n \\textit{\\textbf{n}}on-balanced \\textit{\\textbf{DML}} benchmark to characterize representation fairness. Utilizing \\textit{finDML}, we find bias in DML representations to propagate to common downstream classification tasks. 
Surprisingly, this bias is propagated even when training data in the downstream task is re-balanced. To address this problem, we present Partial Attribute De-correlation (\\textit{\\textbf{\\pad}}) to disentangle feature representations from sensitive attributes and reduce performance gaps between subgroups in both embedding space and downstream metrics.", "pdf": "https://openreview.net/pdf/f404cf882e197b2c86f3e62a769c3cbf9024a9b5.pdf"} {"title": "NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs", "url": "https://openreview.net/forum?id=xMJWUKJnFSw", "detail_url": "https://openreview.net/forum?id=xMJWUKJnFSw", "authors": "Mikhail Galkin,Etienne Denis,Jiapeng Wu,William L. Hamilton", "tags": "ICLR 2022,Poster", "abstract": "Conventional representation learning algorithms for knowledge graphs (KG) map each entity to a unique embedding vector. \nSuch a shallow lookup results in a linear growth of memory consumption for storing the embedding matrix and incurs high computational costs of working with real-world KGs.\nDrawing parallels with subword tokenization commonly used in NLP, we explore the landscape of more parameter-efficient node embedding strategies with possibly sublinear memory requirements. \nTo this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary. \nIn NodePiece, a vocabulary of subword/sub-entity units is constructed from anchor nodes in a graph with known relation types. Given such a fixed-size vocabulary, it is possible to bootstrap an encoding and embedding for any entity, including those unseen during training.\nExperiments show that NodePiece performs competitively in node classification, link prediction, and relation prediction tasks retaining less than 10% of explicit nodes in a graph as anchors and often having 10x fewer parameters. To this end, we show that a NodePiece-enabled model outperforms existing shallow models on a large OGB WikiKG 2 graph having 70x fewer parameters.\n", "pdf": "https://openreview.net/pdf/6eb641d163812ce838dbad1b8e7fddebb2c72c12.pdf"} {"title": "Pix2seq: A Language Modeling Framework for Object Detection", "url": "https://openreview.net/forum?id=e42KbIw6Wb", "detail_url": "https://openreview.net/forum?id=e42KbIw6Wb", "authors": "Ting Chen,Saurabh Saxena,Lala Li,David J. Fleet,Geoffrey Hinton", "tags": "ICLR 2022,Poster", "abstract": "We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. 
Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms.", "pdf": "https://openreview.net/pdf/1f7291d96e3b195bdf0664dfb0f5313b0eab7a04.pdf"} {"title": "Particle Stochastic Dual Coordinate Ascent: Exponential convergent algorithm for mean field neural network optimization", "url": "https://openreview.net/forum?id=PQQp7AJwz3", "detail_url": "https://openreview.net/forum?id=PQQp7AJwz3", "authors": "Kazusato Oko,Taiji Suzuki,Atsushi Nitanda,Denny Wu", "tags": "ICLR 2022,Poster", "abstract": "We introduce Particle-SDCA, a gradient-based optimization algorithm for two-layer neural networks in the mean field regime that achieves exponential convergence rate in regularized empirical risk minimization. The proposed algorithm can be regarded as an infinite dimensional extension of Stochastic Dual Coordinate Ascent (SDCA) in the probability space: we exploit the convexity of the dual problem, for which the coordinate-wise proximal gradient method can be applied. Our proposed method inherits advantages of the original SDCA, including (i) exponential convergence (with respect to the outer iteration steps), and (ii) better dependency on the sample size and condition number than the full-batch gradient method. One technical challenge in implementing the SDCA update is the intractable integral over the entire parameter space at every step. To overcome this limitation, we propose a tractable \\textit{particle method} that approximately solves the dual problem, and an importance re-weighted technique to reduce the computational cost. The convergence rate of our method is verified by numerical experiments.", "pdf": "https://openreview.net/pdf/b6a0af59072ab41c5553c6952e5a786b25d0adde.pdf"} {"title": "The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders ", "url": "https://openreview.net/forum?id=7_JR7WpwKV1", "detail_url": "https://openreview.net/forum?id=7_JR7WpwKV1", "authors": "Divyansh Pareek,Andrej Risteski", "tags": "ICLR 2022,Poster", "abstract": "Training and using modern neural-network based latent-variable generative models (like Variational Autoencoders) often require simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises: how complex does the inferential model need to be, in order to be able to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map impacting the required size of the encoder. We show that if the generative map is ``strongly invertible\" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps, for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity). Importantly, we do not require the generative model to be layerwise invertible, which a lot of the related literature assumes and isn't satisfied by many architectures used in practice (e.g. convolution and pooling based networks). 
Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold.", "pdf": "https://openreview.net/pdf/4116475bedc76111284bad627cb9a8fbaec2059b.pdf"} {"title": "Tracking the risk of a deployed model and detecting harmful distribution shifts", "url": "https://openreview.net/forum?id=Ro_zAjZppv", "detail_url": "https://openreview.net/forum?id=Ro_zAjZppv", "authors": "Aleksandr Podkopaev,Aaditya Ramdas", "tags": "ICLR 2022,Poster", "abstract": "When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain---but not all---distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shifts, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing if the difference between source (training) and target (test) distributions leads to a significant increase in a risk function of interest, like accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after the prediction is performed, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets.", "pdf": "https://openreview.net/pdf/f763a5271b61d98bca4127ab14ce483150d152c4.pdf"} {"title": "Towards Understanding the Robustness Against Evasion Attack on Categorical Data", "url": "https://openreview.net/forum?id=BmJV7kyAmg", "detail_url": "https://openreview.net/forum?id=BmJV7kyAmg", "authors": "Hongyan Bao,Yufei Han,Yujun Zhou,Yun Shen,Xiangliang Zhang", "tags": "ICLR 2022,Poster", "abstract": "Characterizing and assessing the adversarial vulnerability of classification models with categorical input has been a practically important, yet rarely explored, research problem. Our work echoes the challenge by first unveiling the impact factors of adversarial vulnerability of classification models with categorical data based on an information-theoretic adversarial risk analysis about the targeted classifier. Though certifying the robustness of such classification models is intrinsically an NP-hard combinatorial problem, our study shows that the robustness certification can be solved via an efficient greedy exploration of the discrete attack space for any measurable classifiers with a mild smoothness constraint. Our proposed robustness certification framework is instantiated with deep neural network models applied on real-world safety-critical data sources. 
Our empirical observations confirm the impact of the key adversarial risk factors with categorical input.", "pdf": "https://openreview.net/pdf/b599972b615dea56e3cd777bb3c09e18b73ba736.pdf"} {"title": "Learning Curves for SGD on Structured Features", "url": "https://openreview.net/forum?id=WPI2vbkAl3Q", "detail_url": "https://openreview.net/forum?id=WPI2vbkAl3Q", "authors": "Blake Bordelon,Cengiz Pehlevan", "tags": "ICLR 2022,Poster", "abstract": "The generalization performance of a machine learning algorithm such as a neural network depends in a non-trivial way on the structure of the data distribution. To analyze the influence of data structure on test loss dynamics, we study an exactly solveable model of stochastic gradient descent (SGD) on the square loss which predicts test error when training on features with arbitrary covariance structure. We solve the theory exactly for both Gaussian features and arbitrary features and we show that the simpler Gaussian model accurately predicts test loss of nonlinear random-feature models and neural networks in the kernel regime trained with SGD on real datasets such as MNIST and CIFAR-10. We show that the optimal batch size at a fixed compute budget is typically small and depends on the feature correlation structure, demonstrating the computational benefits of SGD with small batch sizes. Lastly, we extend our theory to the more usual setting of stochastic gradient descent on a fixed subsampled training set, showing that both training and test error can be accurately predicted in our framework on real data.", "pdf": "https://openreview.net/pdf/05e1bd43845bd2321a0ab8593b8960931a65e24e.pdf"} {"title": "NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training", "url": "https://openreview.net/forum?id=Qaw16njk6L", "detail_url": "https://openreview.net/forum?id=Qaw16njk6L", "authors": "Chengyue Gong,Dilin Wang,Meng Li,Xinlei Chen,Zhicheng Yan,Yuandong Tian,qiang liu,Vikas Chandra", "tags": "ICLR 2022,Poster", "abstract": "Designing accurate and efficient vision transformers (ViTs) is a highly important but challenging task. Supernet-based one-shot neural architecture search (NAS) enables fast architecture optimization and has achieved state-of-the-art (SOTA) results on convolutional neural networks (CNNs). However, directly applying the supernet-based NAS to optimize ViTs leads to poor performance - even worse compared to training single ViTs. In this work, we observe that the poor performance is due to a gradient conflict issue: the gradients of different sub-networks conflict with that of the supernet more severely in ViTs than CNNs, which leads to early saturation in training and inferior convergence. To alleviate this issue, we propose a series of techniques, including a gradient projection algorithm, a switchable layer scaling design, and a simplified data augmentation and regularization training recipe. The proposed techniques significantly improve the convergence and the performance of all sub-networks. Our discovered hybrid ViT model family, dubbed NASViT, achieves top-1 accuracy from 78.2% to 81.8% on ImageNet from 200M to 800M FLOPs, and outperforms all the prior art CNNs and ViTs, including AlphaNet and LeViT, etc. When transferred to semantic segmentation tasks, NASViTs also outperform previous backbones on both Cityscape and ADE20K datasets, achieving 73.2% and 37.9% mIoU with only 5G FLOPs, respectively. 
Code is available at\nhttps://github.com/facebookresearch/NASViT.\n", "pdf": "https://openreview.net/pdf/a6df48abb7e0bb493e7c343c46beb7b365cdc788.pdf"} {"title": "Graphon based Clustering and Testing of Networks: Algorithms and Theory", "url": "https://openreview.net/forum?id=sTNHCrIKDQc", "detail_url": "https://openreview.net/forum?id=sTNHCrIKDQc", "authors": "Mahalakshmi Sabanayagam,Leena Chennuru Vankadara,Debarghya Ghoshdastidar", "tags": "ICLR 2022,Poster", "abstract": "Network-valued data are encountered in a wide range of applications, and pose challenges in learning due to their complex structure and absence of vertex correspondence. Typical examples of such problems include classification or grouping of protein structures and social networks. Various methods, ranging from graph kernels to graph neural networks, have been proposed that achieve some success in graph classification problems. However, most methods have limited theoretical justification, and their applicability beyond classification remains unexplored. In this work, we propose methods for clustering multiple graphs, without vertex correspondence, that are inspired by the recent literature on estimating graphons---symmetric functions corresponding to the infinite vertex limit of graphs. We propose a novel graph distance based on sorting-and-smoothing graphon estimators. Using the proposed graph distance, we present two clustering algorithms and show that they achieve state-of-the-art results. We prove the statistical consistency of both algorithms under Lipschitz assumptions on the graph degrees. We further study the applicability of the proposed distance for graph two-sample testing problems.", "pdf": "https://openreview.net/pdf/bc3a82e090f7f3cfaa9a92ef69181887e0348ede.pdf"} {"title": "Network Augmentation for Tiny Deep Learning", "url": "https://openreview.net/forum?id=TYw3-OlrRm-", "detail_url": "https://openreview.net/forum?id=TYw3-OlrRm-", "authors": "Han Cai,Chuang Gan,Ji Lin,Song Han", "tags": "ICLR 2022,Poster", "abstract": "We introduce Network Augmentation (NetAug), a new training method for improving the performance of tiny neural networks. Existing regularization techniques (e.g., data augmentation, dropout) have shown much success on large neural networks by adding noise to overcome over-fitting. However, we found that these techniques hurt the performance of tiny neural networks. We argue that training tiny models is different from training large models: rather than augmenting the data, we should augment the model, since tiny models tend to suffer from under-fitting rather than over-fitting due to limited capacity. To alleviate this issue, NetAug augments the network (reverse dropout) instead of inserting noise into the dataset or the network. It puts the tiny model into larger models and encourages it to work as a sub-model of larger models to get extra supervision, in addition to functioning as an independent model. At test time, only the tiny model is used for inference, incurring zero inference overhead. We demonstrate the effectiveness of NetAug on image classification and object detection. NetAug consistently improves the performance of tiny models, achieving up to 2.2% accuracy improvement on ImageNet. 
On object detection, achieving the same level of performance, NetAug requires 41% fewer MACs on Pascal VOC and 38% fewer MACs on COCO than the baseline.", "pdf": "https://openreview.net/pdf/484496875b902e745fc4d6514abb817e7be477c2.pdf"} {"title": "Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations", "url": "https://openreview.net/forum?id=o-1v9hdSult", "detail_url": "https://openreview.net/forum?id=o-1v9hdSult", "authors": "Sarath Sreedharan,Utkarsh Soni,Mudit Verma,Siddharth Srivastava,Subbarao Kambhampati", "tags": "ICLR 2022,Poster", "abstract": "As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions. A significant hurdle to allowing for such explanatory dialogue could be the {\\em vocabulary mismatch} between the user and the AI system. This paper introduces methods for providing contrastive explanations in terms of user-specified concepts for sequential decision-making settings where the system's model of the task may be best represented as an inscrutable model. We do this by building partial symbolic models of a local approximation of the task that can be leveraged to answer the user queries. We test these methods on a popular Atari game (Montezuma's Revenge) and variants of Sokoban (a well-known planning benchmark) and report the results of user studies to evaluate whether people find explanations generated in this form useful.", "pdf": "https://openreview.net/pdf/2558c3735ba361f65aac84ecf8e9f4624e87dec8.pdf"} {"title": "Distributional Reinforcement Learning with Monotonic Splines", "url": "https://openreview.net/forum?id=C8Ltz08PtBp", "detail_url": "https://openreview.net/forum?id=C8Ltz08PtBp", "authors": "Yudong Luo,Guiliang Liu,Haonan Duan,Oliver Schulte,Pascal Poupart", "tags": "ICLR 2022,Poster", "abstract": "Distributional Reinforcement Learning (RL) differs from traditional RL by estimating the distribution over returns to capture the intrinsic uncertainty of MDPs. One key challenge in distributional RL lies in how to parameterize the quantile function when minimizing the Wasserstein metric of temporal differences. Existing algorithms use step functions or piecewise linear functions. In this paper, we propose to learn smooth continuous quantile functions represented by monotonic rational-quadratic splines, which also naturally solve the quantile crossing problem. Experiments in stochastic environments show that a dense estimation for quantile functions enhances distributional RL in terms of faster empirical convergence and higher rewards in most cases.", "pdf": "https://openreview.net/pdf/376a906de470631ee01098610befe6addc3d72de.pdf"} {"title": "Toward Faithful Case-based Reasoning through Learning Prototypes in a Nearest Neighbor-friendly Space.", "url": "https://openreview.net/forum?id=R79ZGjHhv6p", "detail_url": "https://openreview.net/forum?id=R79ZGjHhv6p", "authors": "Seyed Omid Davoudi,Majid Komeili", "tags": "ICLR 2022,Poster", "abstract": "Recent advances in machine learning have brought opportunities for the ever-increasing use of AI in the real world. This has created concerns about the black-box nature of many of the most recent machine learning approaches. In this work, we propose an interpretable neural network that leverages metric and prototype learning for classification tasks. 
It encodes its own explanations and provides an improved case-based reasoning through learning prototypes in an embedding space learned by a probabilistic nearest neighbor rule. Through experiments, we demonstrated the effectiveness of the proposed method in both performance and the accuracy of the explanations provided.", "pdf": "https://openreview.net/pdf/6d0714a184aa752df631ed2df558e8cfee0d4bb9.pdf"} {"title": "Augmented Sliced Wasserstein Distances", "url": "https://openreview.net/forum?id=iMqTLyfwnOO", "detail_url": "https://openreview.net/forum?id=iMqTLyfwnOO", "authors": "Xiongjie Chen,Yongxin Yang,Yunpeng Li", "tags": "ICLR 2022,Poster", "abstract": "While theoretically appealing, the application of the Wasserstein distance to large-scale machine learning problems has been hampered by its prohibitive computational cost. The sliced Wasserstein distance and its variants improve the computational efficiency through the random projection, yet they suffer from low accuracy if the number of projections is not sufficiently large, because the majority of projections result in trivially small values. In this work, we propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs), constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks. It is derived from a key observation that (random) linear projections of samples residing on these hypersurfaces would translate to much more flexible nonlinear projections in the original sample space, so they can capture complex structures of the data distribution. We show that the hypersurfaces can be optimized by gradient ascent efficiently. We provide the condition under which the ASWD is a valid metric and show that this can be obtained by an injective neural network architecture. Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.", "pdf": "https://openreview.net/pdf/d09a765a0ca6e8fe66e61db6af5518d089814c41.pdf"} {"title": "Relational Learning with Variational Bayes", "url": "https://openreview.net/forum?id=Az-7gJc6lpr", "detail_url": "https://openreview.net/forum?id=Az-7gJc6lpr", "authors": "Kuang-Hung Liu", "tags": "ICLR 2022,Poster", "abstract": "In psychology, relational learning refers to the ability to recognize and respond to relationship among objects irrespective of the nature of those objects. Relational learning has long been recognized as a hallmark of human cognition and a key question in artificial intelligence research. In this work, we propose an unsupervised learning method for addressing the relational learning problem where we learn the underlying relationship between a pair of data irrespective of the nature of those data. 
The central idea of the proposed method is to encapsulate the relational learning problem with a probabilistic graphical model in which we perform inference to learn about data relationship and other relational processing tasks.", "pdf": "https://openreview.net/pdf/9d3dfe42360aa203adb14bacece6acbb08064ac0.pdf"} {"title": "Provably Robust Adversarial Examples", "url": "https://openreview.net/forum?id=UMfhoMtIaP5", "detail_url": "https://openreview.net/forum?id=UMfhoMtIaP5", "authors": "Dimitar Iliev Dimitrov,Gagandeep Singh,Timon Gehr,Martin Vechev", "tags": "ICLR 2022,Poster", "abstract": "We introduce the concept of provably robust adversarial examples for deep neural networks \u2013 connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations). We present a novel method called PARADE for generating these regions in a scalable manner which works by iteratively refining the region initially obtained via sampling until a refined region is certified to be adversarial with existing state-of-the-art verifiers. At each step, a novel optimization procedure is applied to maximize the region's volume under the constraint that the convex relaxation of the network behavior with respect to the region implies a chosen bound on the certification objective. Our experimental evaluation shows the effectiveness of PARADE: it successfully finds large provably robust regions including ones containing $\\approx 10^{573}$ adversarial examples for pixel intensity and $\\approx 10^{599}$ for geometric perturbations. The provability enables our robust examples to be significantly more effective against state-of-the-art defenses based on randomized smoothing than the individual attacks used to construct the regions.", "pdf": "https://openreview.net/pdf/3b8eb27fbc166f48033673d3fadc49a86ef0b79f.pdf"} {"title": "Joint Shapley values: a measure of joint feature importance", "url": "https://openreview.net/forum?id=vcUmUvQCloe", "detail_url": "https://openreview.net/forum?id=vcUmUvQCloe", "authors": "Chris Harris,Richard Pymar,Colin Rowat", "tags": "ICLR 2022,Poster", "abstract": "The Shapley value is one of the most widely used measures of feature importance partly as it measures a feature's average effect on a model's prediction. We introduce joint Shapley values, which directly extend Shapley's axioms and intuitions: joint Shapley values measure a set of features' average effect on a model's prediction. We prove the uniqueness of joint Shapley values, for any order of explanation. Results for games show that joint Shapley values present different insights from existing interaction indices, which assess the effect of a feature within a set of features. The joint Shapley values seem to provide sensible results in ML attribution problems. 
With binary features, we present a presence-adjusted global value that is more consistent with local intuitions than the usual approach.", "pdf": "https://openreview.net/pdf/7d8a95bb048b3b204b4a1c9a95e93486a12439a1.pdf"} {"title": "Low-Budget Active Learning via Wasserstein Distance: An Integer Programming Approach", "url": "https://openreview.net/forum?id=v8OlxjGn23S", "detail_url": "https://openreview.net/forum?id=v8OlxjGn23S", "authors": "Rafid Mahmood,Sanja Fidler,Marc T Law", "tags": "ICLR 2022,Poster", "abstract": "Active learning is the process of training a model with limited labeled data by selecting a core subset of an unlabeled data pool to label. The large scale of data sets used in deep learning forces most sample selection strategies to employ efficient heuristics. This paper introduces an integer optimization problem for selecting a core set that minimizes the discrete Wasserstein distance from the unlabeled pool. We demonstrate that this problem can be tractably solved with a Generalized Benders Decomposition algorithm. Our strategy uses high-quality latent features that can be obtained by unsupervised learning on the unlabeled pool. Numerical results on several data sets show that our optimization approach is competitive with baselines and particularly outperforms them in the low budget regime where less than one percent of the data set is labeled. ", "pdf": "https://openreview.net/pdf/9dac127c30d4567d8dde179f21749b9ca5494686.pdf"} {"title": "Efficient Self-supervised Vision Transformers for Representation Learning", "url": "https://openreview.net/forum?id=fVu3o-YUGQK", "detail_url": "https://openreview.net/forum?id=fVu3o-YUGQK", "authors": "Chunyuan Li,Jianwei Yang,Pengchuan Zhang,Mei Gao,Bin Xiao,Xiyang Dai,Lu Yuan,Jianfeng Gao", "tags": "ICLR 2022,Poster", "abstract": "This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attentions can significantly reduce modeling complexity, but at the cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task, non-contrastive region-matching, which allows the model to capture fine-grained region dependencies and as a result significantly improves the quality of the learned vision representations. Our results show that, by combining the two techniques, EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation, outperforming prior art with around an order of magnitude higher throughput. When transferring to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. 
The code and pre-trained models are released at: https://github.com/microsoft/esvit", "pdf": "https://openreview.net/pdf/e7b63dccef8ad598db1c36a2386c8d8a63058e8e.pdf"} {"title": "Visual Representation Learning Does Not Generalize Strongly Within the Same Domain", "url": "https://openreview.net/forum?id=9RUHPlladgh", "detail_url": "https://openreview.net/forum?id=9RUHPlladgh", "authors": "Lukas Schott,Julius Von K\u00fcgelgen,Frederik Tr\u00e4uble,Peter Vincent Gehler,Chris Russell,Matthias Bethge,Bernhard Sch\u00f6lkopf,Francesco Locatello,Wieland Brendel", "tags": "ICLR 2022,Poster", "abstract": "An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.\nIn this paper, we test whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets (dSprites, Shapes3D, MPI3D) from controlled environments, and on our contributed CelebGlow dataset. \nIn contrast to prior robustness work that introduces novel factors of variation during test time, such as blur or other (un)structured noise, we here recompose, interpolate, or extrapolate only existing factors of variation from the training data set (e.g., small and medium-sized objects during training and large objects during testing). Models that learn the correct mechanism should be able to generalize to this benchmark.\nIn total, we train and test 2000+ models and observe that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias. Moreover, the generalization capabilities of all tested models drop significantly as we move from artificial datasets towards more realistic real-world datasets.\nDespite their inability to identify the correct mechanism, the models are quite modular as their ability to infer other in-distribution factors remains fairly stable, providing only a single factor is out-of-distribution. These results point to an important yet understudied problem of learning mechanistic models of observations that can facilitate generalization.", "pdf": "https://openreview.net/pdf/775e024ab2e9ce40e6b2f7608d5b1eb2c1136e75.pdf"} {"title": "Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions", "url": "https://openreview.net/forum?id=e2Lle5cij9D", "detail_url": "https://openreview.net/forum?id=e2Lle5cij9D", "authors": "Arda Sahiner,Tolga Ergen,Batu Ozturkler,Burak Bartan,John M. Pauly,Morteza Mardani,Mert Pilanci", "tags": "ICLR 2022,Poster", "abstract": "Generative Adversarial Networks (GANs) are commonly used for modeling complex distributions of data. Both the generators and discriminators of GANs are often modeled by neural networks, posing a non-transparent optimization problem which is non-convex and non-concave over the generator and discriminator, respectively. Such networks are often heuristically optimized with gradient descent-ascent (GDA), but it is unclear whether the optimization problem contains any saddle points, or whether heuristic methods can find them in practice. In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality, and for a variety of generators expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches, or can be represented as convex-concave games. 
Using this convex duality interpretation, we further demonstrate the impact of different activation functions of the discriminator. Our observations are verified with numerical results demonstrating the power of the convex interpretation, with an application in progressive training of convex architectures corresponding to linear generators and quadratic-activation discriminators for CelebA image generation. The code for our experiments is available at https://github.com/ardasahiner/ProCoGAN.", "pdf": "https://openreview.net/pdf/733796fc142ddb063afc1a0818ecba208aef1465.pdf"} {"title": "Memory Augmented Optimizers for Deep Learning", "url": "https://openreview.net/forum?id=NRX9QZ6yqt", "detail_url": "https://openreview.net/forum?id=NRX9QZ6yqt", "authors": "Paul-Aymeric Martin McRae,Prasanna Parthasarathi,Mido Assran,Sarath Chandar", "tags": "ICLR 2022,Poster", "abstract": "Popular approaches for minimizing loss in data-driven learning often involve an abstraction or an explicit retention of the history of gradients for efficient parameter updates. \nThe aggregated history of gradients nudges the parameter updates in the right direction even when the gradients at any given step are not informative. \nAlthough the history of gradients summarized in meta-parameters or explicitly stored in memory has been shown effective in theory and practice, the question of whether $all$ or only a subset of the gradients in the history are sufficient in deciding the parameter updates remains unanswered. \nIn this paper, we propose a framework of memory-augmented gradient descent optimizers that retain a limited view of their gradient history in their internal memory. \nSuch optimizers scale well to large real-life datasets, and our experiments show that the memory augmented extensions of standard optimizers enjoy accelerated convergence and improved performance on a majority of computer vision and language tasks that we considered.\nAdditionally, we prove that the proposed class of optimizers with fixed-size memory converge under assumptions of strong convexity, regardless of which gradients are selected or how they are linearly combined to form the update step.", "pdf": "https://openreview.net/pdf/874e2c95385be68f564d4d96107e652253f10706.pdf"} {"title": "Orchestrated Value Mapping for Reinforcement Learning", "url": "https://openreview.net/forum?id=c87d0TS4yX", "detail_url": "https://openreview.net/forum?id=c87d0TS4yX", "authors": "Mehdi Fatemi,Arash Tavakoli", "tags": "ICLR 2022,Poster", "abstract": "We present a general convergent class of reinforcement learning algorithms that is founded on two distinct principles: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. The first principle enables incorporating specific properties into the value estimator that can enhance learning. The second principle, on the other hand, allows for the value function to be represented as a composition of multiple utility functions. This can be leveraged for various purposes, e.g. dealing with highly varying reward scales, incorporating a priori knowledge about the sources of reward, and ensemble learning. Combining the two principles yields a general blueprint for instantiating convergent algorithms by orchestrating diverse mapping functions over multiple reward channels. This blueprint generalizes and subsumes algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. 
In addition, our convergence proof for this general class relaxes certain required assumptions in some of these algorithms. Based on our theory, we discuss several interesting configurations as special cases. Finally, to illustrate the potential of the design space that our theory opens up, we instantiate a particular algorithm and evaluate its performance on the Atari suite.", "pdf": "https://openreview.net/pdf/9ef3cef089b9f45f5bdb93fddb0ed8ccfa9e3268.pdf"} {"title": "Learning to Generalize across Domains on Single Test Samples", "url": "https://openreview.net/forum?id=CIaQKbTBwtU", "detail_url": "https://openreview.net/forum?id=CIaQKbTBwtU", "authors": "Zehao Xiao,Xiantong Zhen,Ling Shao,Cees G. M. Snoek", "tags": "ICLR 2022,Poster", "abstract": "We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is the unavailability of any target domain data during training, resulting in the learned model not being explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to learn our model to acquire the ability of adaptation with single samples at training time so as to further adapt itself to each single test sample at test time. We formulate the adaptation to the single test sample as a variational Bayesian inference problem, which incorporates the test sample as a conditional into the generation of model parameters. The adaptation to each test sample requires only one feed-forward computation at test time without any fine-tuning or self-supervised training on additional data from the unseen domains. Extensive ablation studies demonstrate that our model learns the ability to adapt models to each single sample by mimicking domain shifts during training. Further, our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.", "pdf": "https://openreview.net/pdf/4fcc67594340f12c1beb7e4f1ce64c7be6f70c0a.pdf"} {"title": "Prototype memory and attention mechanisms for few shot image generation", "url": "https://openreview.net/forum?id=lY0-7bj0Vfz", "detail_url": "https://openreview.net/forum?id=lY0-7bj0Vfz", "authors": "Tianqin Li,Zijie Li,Andrew Luo,Harold Rockwell,Amir Barati Farimani,Tai Sing Lee", "tags": "ICLR 2022,Poster", "abstract": "Recent discoveries indicate that the neural codes in the primary visual cortex (V1) of macaque monkeys are complex, diverse and sparse. This leads us to ponder the computational advantages and functional role of these \u201cgrandmother cells.\" Here, we propose that such cells can serve as prototype memory priors that bias and shape the distributed feature processing within the image generation process in the brain. These memory prototypes are learned by momentum online clustering and are utilized via a memory-based attention operation, which we define as Memory Concept Attention (MoCA). To test our proposal, we show in a few-shot image generation task, that having a prototype memory during attention can improve image synthesis quality, learn interpretable visual concept clusters, as well as improve the robustness of the model. Interestingly, we also find that our attentional memory mechanism can implicitly modify the horizontal connections by updating the transformation into the prototype embedding space for self-attention. 
Insofar as GANs can be seen as plausible models for reasoning about the top-down synthesis in the analysis-by-synthesis loop of the hierarchical visual cortex, our findings demonstrate a plausible computational role for these \u201cprototype concept\" neurons in visual processing in the brain.", "pdf": "https://openreview.net/pdf/c2a4a72f1bd5890c4beeb93de11cac4746eae2c1.pdf"} {"title": "TPU-GAN: Learning temporal coherence from dynamic point cloud sequences", "url": "https://openreview.net/forum?id=FEBFJ98FKx", "detail_url": "https://openreview.net/forum?id=FEBFJ98FKx", "authors": "Zijie Li,Tianqin Li,Amir Barati Farimani", "tags": "ICLR 2022,Poster", "abstract": "Point cloud sequence is an important data representation that provides flexible shape and motion information. Prior work demonstrates that incorporating scene flow information into loss can make model learn temporally coherent feature spaces. However, it is prohibitively expensive to acquire point correspondence information across frames in real-world environments. In this work, we propose a super-resolution generative adversarial network (GAN) for upsampling dynamic point cloud sequences, which does not require point correspondence annotation. Our model, Temporal Point cloud Upsampling GAN (TPU-GAN), can implicitly learn the underlying temporal coherence from point cloud sequence, which in turn guides the generator to produce temporally coherent output. In addition, we propose a learnable masking module to adapt upsampling ratio according to the point distribution. We conduct extensive experiments on point cloud sequences from two different domains: particles in the fluid dynamical system and human action scanned data. The quantitative and qualitative evaluation demonstrates the effectiveness of our method on upsampling tasks as well as learning temporal coherence from irregular point cloud sequences.", "pdf": "https://openreview.net/pdf/52569840ae5698d2203efde4f8f06d012fa7868a.pdf"} {"title": "A First-Occupancy Representation for Reinforcement Learning", "url": "https://openreview.net/forum?id=JBAZe2yN6Ub", "detail_url": "https://openreview.net/forum?id=JBAZe2yN6Ub", "authors": "Ted Moskovitz,Spencer R Wilson,Maneesh Sahani", "tags": "ICLR 2022,Poster", "abstract": "Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and which enable them to efficiently traverse their environments to reach rewarding states. The successor representation (SR), which measures the expected cumulative, discounted state occupancy under a fixed policy, enables efficient transfer to different reward structures in an otherwise constant Markovian environment and has been hypothesized to underlie aspects of biological behavior and neural activity. However, in the real world, rewards may only be available for consumption once, may shift location, or agents may simply aim to reach goal states as rapidly as possible without the constraint of artificially imposed task horizons. In such cases, the most behaviorally-relevant representation would carry information about when the agent was likely to first reach states of interest, rather than how often it should expect to visit them over a potentially infinite time span. To reflect such demands, we introduce the first-occupancy representation (FR), which measures the expected temporal discount to the first time a state is accessed. 
We demonstrate that the FR facilitates exploration, the selection of efficient paths to desired states, allows the agent, under certain conditions, to plan provably optimal trajectories defined by a sequence of subgoals, and induces similar behavior to animals avoiding threatening stimuli.", "pdf": "https://openreview.net/pdf/46abdff2d131f44012d855cdd93c0fa7034d601a.pdf"} {"title": "Deep ReLU Networks Preserve Expected Length", "url": "https://openreview.net/forum?id=ci7LBzDn2Q", "detail_url": "https://openreview.net/forum?id=ci7LBzDn2Q", "authors": "Boris Hanin,Ryan Jeong,David Rolnick", "tags": "ICLR 2022,Poster", "abstract": "Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize. One natural measure of complexity is how the network distorts length - if the network takes a unit-length curve as input, what is the length of the resulting curve of outputs? It has been widely believed that this length grows exponentially in network depth. We prove that in fact this is not the case: the expected length distortion does not grow with depth, and indeed shrinks slightly, for ReLU networks with standard random initialization. We also generalize this result by proving upper bounds both for higher moments of the length distortion and for the distortion of higher-dimensional volumes. These theoretical results are corroborated by our experiments.", "pdf": "https://openreview.net/pdf/726f7b1d7efcb38a8f1685099dbfc32c938b1267.pdf"} {"title": "Phenomenology of Double Descent in Finite-Width Neural Networks", "url": "https://openreview.net/forum?id=lTqGXfn9Tv", "detail_url": "https://openreview.net/forum?id=lTqGXfn9Tv", "authors": "Sidak Pal Singh,Aurelien Lucchi,Thomas Hofmann,Bernhard Sch\u00f6lkopf", "tags": "ICLR 2022,Poster", "abstract": "`Double descent' delineates the generalization behaviour of models depending on the regime they belong to: under- or over-parameterized. The current theoretical understanding behind the occurrence of this phenomenon is primarily based on linear and kernel regression models --- with informal parallels to neural networks via the Neural Tangent Kernel. Therefore such analyses do not adequately capture the mechanisms behind double descent in finite-width neural networks, as well as, disregard crucial components --- such as the choice of the loss function. We address these shortcomings by leveraging influence functions in order to derive suitable expressions of the population loss and its lower bound, while imposing minimal assumptions on the form of the parametric model. Our derived bounds bear an intimate connection with the spectrum of the Hessian at the optimum, and importantly, exhibit a double descent behaviour at the interpolation threshold. Building on our analysis, we further investigate how the loss function affects double descent --- and thus uncover interesting properties of neural networks and their Hessian spectra near the interpolation threshold.", "pdf": "https://openreview.net/pdf/692a8cdd6b0dd0b5c63c485191d55432b21ad442.pdf"} {"title": "How Attentive are Graph Attention Networks? ", "url": "https://openreview.net/forum?id=F72ximsx7C1", "detail_url": "https://openreview.net/forum?id=F72ximsx7C1", "authors": "Shaked Brody,Uri Alon,Eran Yahav", "tags": "ICLR 2022,Poster", "abstract": "Graph Attention Networks (GATs) are one of the most popular GNN architectures and are considered as the state-of-the-art architecture for representation learning with graphs. 
In GAT, every node attends to its neighbors given its own representation as the query.\nHowever, in this paper we show that GAT computes a very limited kind of attention: the ranking of the attention scores is unconditioned on the query node. We formally define this restricted kind of attention as static attention and distinguish it from a strictly more expressive dynamic attention.\nBecause GATs use a static attention mechanism, there are simple graph problems that GAT cannot express: in a controlled problem, we show that static attention hinders GAT from even fitting the training data. \nTo remove this limitation, we introduce a simple fix by modifying the order of operations and propose GATv2: a dynamic graph attention variant that is strictly more expressive than GAT. We perform an extensive evaluation and show that GATv2 outperforms GAT across 12 OGB and other benchmarks while we match their parametric costs. \nOur code is available at https://github.com/tech-srl/how_attentive_are_gats . GATv2 is available as part of the PyTorch Geometric library, the Deep Graph Library, and the TensorFlow GNN library.", "pdf": "https://openreview.net/pdf/10878ac1155ddeeada5fd384fbe0cf15747d06bf.pdf"} {"title": "Learning Transferable Reward for Query Object Localization with Policy Adaptation", "url": "https://openreview.net/forum?id=92tYQiil17", "detail_url": "https://openreview.net/forum?id=92tYQiil17", "authors": "Tingfeng Li,Shaobo Han,Martin Renqiang Min,Dimitris N. Metaxas", "tags": "ICLR 2022,Poster", "abstract": "We propose a reinforcement learning based approach to query object localization, for which an agent is trained to localize objects of interest specified by a small exemplary set. We learn a transferable reward signal formulated using the exemplary set by ordinal metric learning. Our proposed method enables test-time policy adaptation to new environments where the reward signals are not readily available, and outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing the trained agent from one specific class to another class. Experiments on corrupted MNIST, CU-Birds, and COCO datasets demonstrate the effectiveness of our approach.", "pdf": "https://openreview.net/pdf/5b72a5cbe8d019baa19ccef469da73414589de18.pdf"} {"title": "CKConv: Continuous Kernel Convolution For Sequential Data", "url": "https://openreview.net/forum?id=8FhxBtXSl0", "detail_url": "https://openreview.net/forum?id=8FhxBtXSl0", "authors": "David W. Romero,Anna Kuzina,Erik J Bekkers,Jakub Mikolaj Tomczak,Mark Hoogendoorn", "tags": "ICLR 2022,Poster", "abstract": "Conventional neural architectures for sequential data present important limitations. Recurrent neural networks suffer from exploding and vanishing gradients, small effective memory horizons, and must be trained sequentially. Convolutional neural networks cannot handle sequences of unknown size and their memory horizon must be defined a priori. In this work, we show that these problems can be solved by formulating the convolutional kernels of CNNs as continuous functions. The resulting Continuous Kernel Convolution (CKConv) handles arbitrarily long sequences in a parallel manner, within a single operation, and without relying on any form of recurrence. 
We show that Continuous Kernel Convolutional Networks (CKCNNs) obtain state-of-the-art results in multiple datasets, e.g., permuted MNIST, and, thanks to their continuous nature, are able to handle non-uniformly sampled datasets and irregularly-sampled data natively. CKCNNs match or perform better than neural ODEs designed for these purposes in a faster and simpler manner.", "pdf": "https://openreview.net/pdf/eb7ec6afc6fd671f4c62e8ae61ac22465ac362ab.pdf"} {"title": "Towards Empirical Sandwich Bounds on the Rate-Distortion Function", "url": "https://openreview.net/forum?id=H4PmOqSZDY", "detail_url": "https://openreview.net/forum?id=H4PmOqSZDY", "authors": "Yibo Yang,Stephan Mandt", "tags": "ICLR 2022,Poster", "abstract": "Rate-distortion (R-D) function, a key quantity in information theory, characterizes the fundamental limit of how much a data source can be compressed subject to a fidelity criterion, by any compression algorithm. As researchers push for ever-improving compression performance, establishing the R-D function of a given data source is not only of scientific interest, but also reveals the possible room for improvement in existing compression algorithms. Previous work on this problem relied on distributional assumptions on the data source (Gibson, 2017) or only applied to discrete data (Blahut, 1972; Arimoto, 1972). By contrast, this paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source requiring only i.i.d. data samples. We estimate R-D sandwich bounds for a variety of artificial and real-world data sources, in settings far beyond the feasibility of any known method, and shed light on the optimality of neural data compression (Ball\u00e9 et al., 2021; Yang et al., 2022). Our R-D upper bound on natural images indicates theoretical room for improving state-of-the-art image compression methods by at least one dB in PSNR at various bitrates. Our data and code can be found at https://github.com/mandt-lab/RD-sandwich.", "pdf": "https://openreview.net/pdf/68b5094de29bac048c6f775fd6ae524a866caf89.pdf"} {"title": "Pareto Policy Adaptation", "url": "https://openreview.net/forum?id=wfZGut6e09", "detail_url": "https://openreview.net/forum?id=wfZGut6e09", "authors": "Panagiotis Kyriakis,Jyotirmoy Deshmukh,Paul Bogdan", "tags": "ICLR 2022,Poster", "abstract": "We present a policy gradient method for Multi-Objective Reinforcement Learning under unknown, linear preferences. By enforcing Pareto stationarity, a first-order condition for Pareto optimality, we are able to design a simple policy gradient algorithm that approximates the Pareto front and infers the unknown preferences. Our method relies on a projected gradient descent solver that identifies common ascent directions for all objectives. Leveraging the solution of that solver, we introduce Pareto Policy Adaptation (PPA), a loss function that adapts the policy to be optimal with respect to any distribution over preferences. PPA uses implicit differentiation to back-propagate the loss gradient bypassing the operations of the projected gradient descent solver. Our approach is straightforward, easy to implement and can be used with all existing policy gradient and actor-critic methods. 
We evaluate our method in a series of reinforcement learning tasks", "pdf": "https://openreview.net/pdf/20c222c6c34c93ec1b453f5c07d51bd1827ec7bd.pdf"} {"title": "Fair Normalizing Flows", "url": "https://openreview.net/forum?id=BrFIKuxrZE", "detail_url": "https://openreview.net/forum?id=BrFIKuxrZE", "authors": "Mislav Balunovic,Anian Ruoss,Martin Vechev", "tags": "ICLR 2022,Poster", "abstract": "Fair representation learning is an attractive approach that promises fairness of downstream predictors by encoding sensitive data. Unfortunately, recent work has shown that strong adversarial predictors can still exhibit unfairness by recovering sensitive attributes from these representations. In this work, we present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations. Specifically, we consider a practical setting where we can estimate the probability density for sensitive groups. The key idea is to model the encoder as a normalizing flow trained to minimize the statistical distance between the latent representations of different groups. The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor. We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning, on a variety of challenging real-world datasets.", "pdf": "https://openreview.net/pdf/609e4c482621a5208ae4ebb3b311369c3b04689f.pdf"} {"title": "The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program", "url": "https://openreview.net/forum?id=5QhUE1qiVC6", "detail_url": "https://openreview.net/forum?id=5QhUE1qiVC6", "authors": "Yifei Wang,Mert Pilanci", "tags": "ICLR 2022,Poster", "abstract": "We study non-convex subgradient flows for training two-layer ReLU neural networks from a convex geometry and duality perspective. We characterize the implicit bias of unregularized non-convex gradient flow as convex regularization of an equivalent convex model. We then show that the limit points of non-convex subgradient flows can be identified via primal-dual correspondence in this convex optimization problem. Moreover, we derive a sufficient condition on the dual variables which ensures that the stationary points of the non-convex objective are the KKT points of the convex objective, thus proving convergence of non-convex gradient flows to the global optimum. For a class of regular training data distributions such as orthogonal separable data, we show that this sufficient condition holds. Therefore, non-convex gradient flows in fact converge to optimal solutions of a convex optimization problem. We present numerical results verifying the predictions of our theory for non-convex subgradient descent.", "pdf": "https://openreview.net/pdf/4d29755fe3cd56f6093aa8e79892bc79392b8c0d.pdf"} {"title": "Adaptive Wavelet Transformer Network for 3D Shape Representation Learning", "url": "https://openreview.net/forum?id=5MLb3cLCJY", "detail_url": "https://openreview.net/forum?id=5MLb3cLCJY", "authors": "Hao Huang,Yi Fang", "tags": "ICLR 2022,Poster", "abstract": "We present a novel method for 3D shape representation learning using multi-scale wavelet decomposition. Previous works often decompose 3D shapes into complementary components in spatial domain at a single scale. 
In this work, we study decomposing 3D shapes into sub-band components in the frequency domain at multiple scales, resulting in a hierarchical decomposition tree constructed in a principled manner rooted in multi-resolution wavelet analysis. Specifically, we propose the Adaptive Wavelet Transformer Network (AWT-Net), which first generates approximation or detail wavelet coefficients per point, classifying each point into high or low sub-band components, using a lifting scheme at multiple scales recursively and hierarchically. Then, AWT-Net exploits a Transformer to enhance the original shape features by querying and fusing features from different but integrated sub-bands. The wavelet coefficients can be learned without direct supervision on coefficients, and AWT-Net is fully differentiable and can be learned in an end-to-end fashion. Extensive experiments demonstrate that AWT-Net achieves competitive performance on 3D shape classification and segmentation benchmarks.", "pdf": "https://openreview.net/pdf/b460e9efd8a892dfa306a2d12f830a63074ab5dd.pdf"} {"title": "On the Convergence of mSGD and AdaGrad for Stochastic Optimization", "url": "https://openreview.net/forum?id=g5tANwND04i", "detail_url": "https://openreview.net/forum?id=g5tANwND04i", "authors": "ruinan Jin,Yu Xing,Xingkang He", "tags": "ICLR 2022,Poster", "abstract": "As one of the most fundamental stochastic optimization algorithms, stochastic gradient descent (SGD) has been intensively developed and extensively applied in machine learning in the past decade. There have been some modified SGD-type algorithms, which outperform SGD in many competitions and applications in terms of convergence rate and accuracy, such as momentum-based SGD (mSGD) and adaptive gradient algorithm (AdaGrad). Despite these empirical successes, the theoretical properties of these algorithms have not been well established due to technical difficulties. With this motivation, we focus on convergence analysis of mSGD and AdaGrad for any smooth (possibly non-convex) loss functions in stochastic optimization. First, we prove that the iterates of mSGD are asymptotically convergent to a connected set of stationary points with probability one, which is more general than existing works on subsequence convergence or convergence of time averages. Moreover, we prove that the loss function of mSGD decays at a certain rate faster than that of SGD. In addition, we prove that the iterates of AdaGrad are asymptotically convergent to a connected set of stationary points with probability one. Also, this result extends the results from the literature on subsequence convergence and the convergence of time averages. Despite the generality of the above convergence results, we have relaxed some assumptions on the gradient noise, the convexity of the loss functions, and the boundedness of the iterates.", "pdf": "https://openreview.net/pdf/b4ec8da613b96b5a0a6fa0fdf588173e801abfc8.pdf"} {"title": "Likelihood Training of Schr\u00f6dinger Bridge using Forward-Backward SDEs Theory", "url": "https://openreview.net/forum?id=nioAdKCEdXB", "detail_url": "https://openreview.net/forum?id=nioAdKCEdXB", "authors": "Tianrong Chen,Guan-Horng Liu,Evangelos Theodorou", "tags": "ICLR 2022,Poster", "abstract": "Schr\u00f6dinger Bridge (SB) is an entropy-regularized optimal transport problem that has received increasing attention in deep generative modeling for its mathematical flexibility compared to the Score-based Generative Model (SGM). 
However, it remains unclear whether the optimization principle of SB relates to the modern training of deep generative models, which often rely on constructing log-likelihood objectives. This raises questions about the suitability of SB models as a principled alternative for generative applications. In this work, we present a novel computational framework for likelihood training of SB models grounded in Forward-Backward Stochastic Differential Equations Theory \u2013 a mathematical methodology from stochastic optimal control that transforms the optimality condition of SB into a set of SDEs. Crucially, these SDEs can be used to construct the likelihood objectives for SB that, surprisingly, generalize the ones for SGM as special cases. This leads to a new optimization principle that inherits the same SB optimality while remaining compatible with modern generative training techniques, and we show that the resulting training algorithm achieves comparable results on generating realistic images on MNIST, CelebA, and CIFAR10. Our code is available at https://github.com/ghliu/SB-FBSDE.", "pdf": "https://openreview.net/pdf/88f4662fe55ad470a87f305792547280c33c6e1b.pdf"} {"title": "Imitation Learning from Observations under Transition Model Disparity", "url": "https://openreview.net/forum?id=twv2QlJhXzo", "detail_url": "https://openreview.net/forum?id=twv2QlJhXzo", "authors": "Tanmay Gangwani,Yuan Zhou,Jian Peng", "tags": "ICLR 2022,Poster", "abstract": "Learning to perform tasks by leveraging a dataset of expert observations, also known as imitation learning from observations (ILO), is an important paradigm for learning skills without access to the expert reward function or the expert actions. We consider ILO in the setting where the expert and the learner agents operate in different environments, with the source of the discrepancy being the transition dynamics model. Recent methods for scalable ILO utilize adversarial learning to match the state-transition distributions of the expert and the learner, an approach that becomes challenging when the dynamics are dissimilar. In this work, we propose an algorithm that trains an intermediary policy in the learner environment and uses it as a surrogate expert for the learner. The intermediary policy is learned such that the state transitions generated by it are close to the state transitions in the expert dataset. To derive a practical and scalable algorithm, we employ concepts from prior work on estimating the support of a probability distribution. Experiments using MuJoCo locomotion tasks highlight that our method compares favorably to the baselines for ILO with transition dynamics mismatch.", "pdf": "https://openreview.net/pdf/7fd85a0997ef411d5948afe26129e044734df91a.pdf"} {"title": "MCMC Should Mix: Learning Energy-Based Model with Neural Transport Latent Space MCMC", "url": "https://openreview.net/forum?id=4C93Qvn-tz", "detail_url": "https://openreview.net/forum?id=4C93Qvn-tz", "authors": "Erik Nijkamp,Ruiqi Gao,Pavel Sountsov,Srinivas Vasudevan,Bo Pang,Song-Chun Zhu,Ying Nian Wu", "tags": "ICLR 2022,Poster", "abstract": "Learning an energy-based model (EBM) requires MCMC sampling of the learned model as an inner loop of the learning algorithm. However, MCMC sampling of EBMs in a high-dimensional data space generally does not mix, because the energy function, which is usually parametrized by a deep network, is highly multi-modal in the data space. This is a serious handicap for both theory and practice of EBMs. 
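The Schrödinger Bridge entry above builds on forward-backward stochastic differential equations. The sketch below illustrates only the underlying forward/reverse-time SDE machinery, using an Ornstein-Uhlenbeck process whose score is available in closed form for a Gaussian-mixture toy data distribution; it is a generic illustration in the spirit of score-based/FB-SDE methods, not the paper's likelihood-training algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n = 3.0, 600, 5000
dt = T / n_steps
mus, sig2 = np.array([-2.0, 2.0]), 0.1 ** 2   # toy data: mixture of two Gaussians

def score(x, t):
    # Exact score of the OU marginal p_t for the Gaussian-mixture data.
    m = mus * np.exp(-t)
    v = sig2 * np.exp(-2 * t) + (1 - np.exp(-2 * t))
    w = np.exp(-0.5 * (x[:, None] - m) ** 2 / v)
    w = w / w.sum(axis=1, keepdims=True)
    return (w * (m - x[:, None]) / v).sum(axis=1)

# Forward SDE dX = -X dt + sqrt(2) dW pushes the data toward N(0, 1).
x = mus[rng.integers(0, 2, n)] + np.sqrt(sig2) * rng.standard_normal(n)
for k in range(n_steps):
    x = x - x * dt + np.sqrt(2 * dt) * rng.standard_normal(n)

# Reverse-time SDE dX = [-X - 2 * score(X, t)] dt + sqrt(2) dW_bar, integrated
# from t = T back to t = 0, recovers the data distribution from the prior.
y = rng.standard_normal(n)             # start from the (near-)stationary prior
for k in range(n_steps, 0, -1):
    t = k * dt
    y = y + (y + 2 * score(y, t)) * dt + np.sqrt(2 * dt) * rng.standard_normal(n)

print("forward terminal mean/std:", round(float(x.mean()), 2), round(float(x.std()), 2))
print("reverse terminal means (neg/pos side):",
      round(float(y[y < 0].mean()), 2), round(float(y[y > 0].mean()), 2))
```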
In this paper, we propose to learn an EBM with a flow-based model (or, more generally, a latent variable model) serving as a backbone, so that the EBM is a correction or an exponential tilting of the flow-based model. We show that the model has a particularly simple form in the space of the latent variables of the generative model, and MCMC sampling of the EBM in the latent space mixes well and traverses modes in the data space. This enables proper sampling and learning of EBMs.", "pdf": "https://openreview.net/pdf/e59fd3452037dfc60c95270a1328f0a3077b10f9.pdf"} {"title": "Autonomous Learning of Object-Centric Abstractions for High-Level Planning", "url": "https://openreview.net/forum?id=rrWeE9ZDw_", "detail_url": "https://openreview.net/forum?id=rrWeE9ZDw_", "authors": "Steven James,Benjamin Rosman,George Konidaris", "tags": "ICLR 2022,Poster", "abstract": "We propose a method for autonomously learning an object-centric representation of a continuous and high-dimensional environment that is suitable for planning. Such representations can immediately be transferred between tasks that share the same types of objects, resulting in agents that require fewer samples to learn a model of a new task. We first demonstrate our approach on a 2D crafting domain consisting of numerous objects where the agent learns a compact, lifted representation that generalises across objects. We then apply it to a series of Minecraft tasks to learn object-centric representations and object types - directly from pixel data - that can be leveraged to solve new tasks quickly. The resulting learned representations enable the use of a task-level planner, resulting in an agent capable of transferring learned representations to form complex, long-term plans.", "pdf": "https://openreview.net/pdf/a9a9310c7615055c5fd767dd5c8bde623331a28b.pdf"} {"title": "A fast and accurate splitting method for optimal transport: analysis and implementation", "url": "https://openreview.net/forum?id=fCSq8yrDkc", "detail_url": "https://openreview.net/forum?id=fCSq8yrDkc", "authors": "Vien V. Mai,Jacob Lindb\u00e4ck,Mikael Johansson", "tags": "ICLR 2022,Poster", "abstract": "We develop a fast and reliable method for solving large-scale optimal transport (OT) problems at an unprecedented combination of speed and accuracy. Built on the celebrated Douglas-Rachford splitting technique, our method tackles the original OT problem directly instead of solving an approximate regularized problem, as many state-of-the-art techniques do. This allows us to provide sparse transport plans and avoid numerical issues of methods that use entropic regularization. The algorithm has the same cost per iteration as the popular Sinkhorn method, and each iteration can be executed efficiently, in parallel. The proposed method enjoys an iteration complexity $O(1/\\epsilon)$ compared to the best-known $O(1/\\epsilon^2)$ of the Sinkhorn method. In addition, we establish a linear convergence rate for our formulation of the OT problem. We detail an efficient GPU implementation of the proposed method that maintains a primal-dual stopping criterion at no extra cost. 
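The "MCMC Should Mix" entry above defines the EBM as an exponential tilting of a flow or generator backbone and runs MCMC in the backbone's latent space. Below is a minimal sketch of latent-space Langevin sampling for such a tilted model; the two tiny untrained networks standing in for the generator g and the correction f_theta are placeholders, and the step size and chain length are arbitrary.

```python
import torch

# Toy "backbone" generator g: latent z in R^2 -> data x in R^2, and a small
# correction network f_theta defining the tilted model
#   p_theta(z) proportional to N(z; 0, I) * exp(f_theta(g(z))),  x = g(z).
# Both networks are illustrative placeholders, not the paper's architectures.
g = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))
f = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def log_unnormalized_density(z):
    # log N(z; 0, I) + f_theta(g(z)), up to an additive constant
    return -0.5 * (z ** 2).sum(dim=1) + f(g(z)).squeeze(-1)

def langevin_in_latent_space(n_chains=128, n_steps=100, step=0.05):
    z = torch.randn(n_chains, 2)
    for _ in range(n_steps):
        z = z.detach().requires_grad_(True)
        grad = torch.autograd.grad(log_unnormalized_density(z).sum(), z)[0]
        # Unadjusted Langevin update: z <- z + (step/2) * grad_log_p + sqrt(step) * noise
        z = z + 0.5 * step * grad + (step ** 0.5) * torch.randn_like(z)
    return z.detach()

z_samples = langevin_in_latent_space()
x_samples = g(z_samples)              # push latent samples through the backbone
print(x_samples.shape)                # torch.Size([128, 2])
```

Because the tilted density is close to a standard Gaussian in latent space, short Langevin chains like this one mix far more easily than chains run directly in a multi-modal data space.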
Substantial experiments demonstrate the effectiveness of our method, both in terms of computation times and robustness.", "pdf": "https://openreview.net/pdf/c466eee883ed6f6c6b6dad8f7dc4d5f36092bb25.pdf"} {"title": "Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks", "url": "https://openreview.net/forum?id=VLgmhQDVBV", "detail_url": "https://openreview.net/forum?id=VLgmhQDVBV", "authors": "Benjamin Bowman,Guido Montufar", "tags": "ICLR 2022,Poster", "abstract": "We study the dynamics of a neural network in function space when optimizing the mean squared error via gradient flow. We show that in the underparameterized regime the network learns eigenfunctions of an integral operator $T_K$ determined by the Neural Tangent Kernel at rates corresponding to their eigenvalues. For example, for uniformly distributed data on the sphere $S^{d - 1}$ and rotation invariant weight distributions, the eigenfunctions of $T_K$ are the spherical harmonics. Our results can be understood as describing a spectral bias in the underparameterized regime. The proofs use the concept of ``Damped Deviations'' where deviations of the NTK matter less for eigendirections with large eigenvalues. Aside from the underparameterized regime, the damped deviations point-of-view allows us to extend certain results in the literature in the overparameterized setting. ", "pdf": "https://openreview.net/pdf/ccf082acfc5365ae3951682ff517392f46042ab1.pdf"} {"title": "Discovering Latent Concepts Learned in BERT", "url": "https://openreview.net/forum?id=POTMtpYI1xH", "detail_url": "https://openreview.net/forum?id=POTMtpYI1xH", "authors": "Fahim Dalvi,Abdul Rafae Khan,Firoj Alam,Nadir Durrani,Jia Xu,Hassan Sajjad", "tags": "ICLR 2022,Poster", "abstract": "A large number of studies that analyze deep neural network models and their ability to encode various linguistic and non-linguistic concepts provide an interpretation of the inner mechanics of these models. The scope of the analyses is limited to pre-defined concepts that reinforce the traditional linguistic knowledge and do not reflect on how novel concepts are learned by the model. We address this limitation by discovering and analyzing latent concepts learned in neural network models in an unsupervised fashion and provide interpretations from the model's perspective. In this work, we study: i) what latent concepts exist in the pre-trained BERT model, ii) how the discovered latent concepts align or diverge from classical linguistic hierarchy and iii) how the latent concepts evolve across layers. \nOur findings show: i) a model learns novel concepts (e.g. animal categories and demographic groups), which do not strictly adhere to any pre-defined categorization (e.g. POS, semantic tags), ii) several latent concepts are based on multiple properties which may include semantics, syntax, and morphology, iii) the lower layers in the model dominate in learning shallow lexical concepts while the higher layers learn semantic relations and iv) the discovered latent concepts highlight potential biases learned in the model. 
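The optimal-transport entry above is built on Douglas-Rachford splitting applied directly to the linear-programming form of OT. The sketch below implements a generic Douglas-Rachford iteration for that LP, alternating the proximal step of the linear cost plus nonnegativity with projection onto the marginal constraints; it follows the textbook splitting rather than the paper's specific formulation and GPU implementation, and the step size and iteration count are illustrative.

```python
import numpy as np

def prox_cost_nonneg(V, C, t):
    # prox of t * (<C, X> + indicator{X >= 0}):  max(V - t*C, 0)
    return np.maximum(V - t * C, 0.0)

def project_marginals(V, a, b):
    # Euclidean projection onto the affine set {X : X @ 1 = a, X.T @ 1 = b}
    m, n = V.shape
    row_res = a - V.sum(axis=1)
    col_res = b - V.sum(axis=0)
    total = row_res.sum()
    return (V
            + np.outer(row_res, np.ones(n)) / n
            + np.outer(np.ones(m), col_res) / m
            - total / (m * n))

def dr_ot(C, a, b, t=0.1, iters=5000):
    """Douglas-Rachford splitting for min <C, X> s.t. X >= 0, row sums a, col sums b."""
    Z = np.zeros_like(C)
    for _ in range(iters):
        X = prox_cost_nonneg(Z, C, t)
        Y = project_marginals(2 * X - Z, a, b)
        Z = Z + Y - X
    return prox_cost_nonneg(Z, C, t)

# Tiny example: transport between two uniform 4-point histograms.
rng = np.random.default_rng(0)
C = rng.random((4, 4))
a = np.ones(4) / 4
b = np.ones(4) / 4
X = dr_ot(C, a, b)
print(np.round(X, 3))                 # near-sparse plan (no entropic blurring)
print("cost:", (C * X).sum().round(4))
```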
We also release a novel BERT ConceptNet dataset consisting of 174 concept labels and 1M annotated instances.", "pdf": "https://openreview.net/pdf/f96833144308f5ae7999c4c5e5f0dc8c6208fe67.pdf"} {"title": "The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks", "url": "https://openreview.net/forum?id=dNigytemkL", "detail_url": "https://openreview.net/forum?id=dNigytemkL", "authors": "Rahim Entezari,Hanie Sedghi,Olga Saukh,Behnam Neyshabur", "tags": "ICLR 2022,Poster", "abstract": "In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier in the linear interpolation between them. Although it is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide a preliminary theoretical result to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods. The source code is available at \\url{https://github.com/rahimentezari/PermutationInvariance}.", "pdf": "https://openreview.net/pdf/a575dfe6923ea65c17895fd63d13fc299132536f.pdf"} {"title": "Data Poisoning Won\u2019t Save You From Facial Recognition", "url": "https://openreview.net/forum?id=B5XahNLmna", "detail_url": "https://openreview.net/forum?id=B5XahNLmna", "authors": "Evani Radiya-Dixit,Sanghyun Hong,Nicholas Carlini,Florian Tramer", "tags": "ICLR 2022,Poster", "abstract": "Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures. Users can perturb images they post online, so that models will misclassify future (unperturbed) pictures.\n \n We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users' pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models---including models trained adaptively against the users' past attacks, or models that use new technologies discovered after the attack.\n \nWe evaluate two systems for poisoning attacks against large-scale facial recognition, Fawkes (500,000+ downloads) and LowKey. We demonstrate how an \"oblivious\" model trainer can simply wait for future developments in computer vision to nullify the protection of pictures collected in the past. We further show that an adversary with black-box access to the attack can (i) train a robust model that resists the perturbations of collected pictures and (ii) detect poisoned pictures uploaded online.\n \nWe caution that facial recognition poisoning will not admit an \"arms race\" between attackers and defenders. Once perturbed pictures are scraped, the attack cannot be changed so any future successful defense irrevocably undermines users' privacy.", "pdf": "https://openreview.net/pdf/664ff8f2d58700ae00821a00773fdedbf383c737.pdf"} {"title": "MetaMorph: Learning Universal Controllers with Transformers", "url": "https://openreview.net/forum?id=Opmqtk_GvYL", "detail_url": "https://openreview.net/forum?id=Opmqtk_GvYL", "authors": "Agrim Gupta,Linxi Fan,Surya Ganguli,Li Fei-Fei", "tags": "ICLR 2022,Poster", "abstract": "Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large scale pre-training followed by task specific fine tuning. In contrast, in robotics we primarily train a single robot for a single task. 
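The linear mode connectivity conjecture in the permutation-invariance entry above is probed by measuring the loss barrier along the straight line between two SGD solutions. The sketch below computes one common variant of that barrier for two PyTorch models; the toy MLPs and random data are placeholders for actual independently trained networks, and the exact barrier definition used in the paper may differ in details.

```python
import copy
import torch

def loss_barrier(model_a, model_b, loss_fn, data, n_points=11):
    """Barrier of the loss along the linear path (1-t)*theta_a + t*theta_b:
    max_t [ L(interpolated) - ((1-t)*L(theta_a) + t*L(theta_b)) ].
    A value near zero indicates linear mode connectivity."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    x, y = data
    probe = copy.deepcopy(model_a)
    with torch.no_grad():
        loss_a = loss_fn(model_a(x), y).item()
        loss_b = loss_fn(model_b(x), y).item()
        barrier = 0.0
        for t in torch.linspace(0, 1, n_points):
            probe.load_state_dict({k: (1 - t) * sd_a[k] + t * sd_b[k] for k in sd_a})
            baseline = (1 - t.item()) * loss_a + t.item() * loss_b
            barrier = max(barrier, loss_fn(probe(x), y).item() - baseline)
    return barrier

# Toy usage with two small MLPs (randomly initialized here; in practice these
# would be two independently trained SGD solutions, optionally permuted to align units).
torch.manual_seed(0)
make = lambda: torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
model_a, model_b = make(), make()
x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
print("barrier:", loss_barrier(model_a, model_b, torch.nn.CrossEntropyLoss(), (x, y)))
```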
However, modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies. Given the exponentially large number of possible robot morphologies, however, training a controller for each new design is impractical. In this work, we propose MetaMorph, a Transformer-based approach to learn a universal controller over a modular robot design space. MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. Through extensive experiments we demonstrate that large-scale pre-training on a variety of robot morphologies results in policies with combinatorial generalization capabilities, including zero-shot generalization to unseen robot morphologies. We further demonstrate that our pre-trained policy can be used for sample-efficient transfer to completely new robot morphologies and tasks.", "pdf": "https://openreview.net/pdf/7ef00fd81bdb696532e182f6073e6e6d9cb15e98.pdf"} {"title": "HTLM: Hyper-Text Pre-Training and Prompting of Language Models", "url": "https://openreview.net/forum?id=P-pPW1nxf1r", "detail_url": "https://openreview.net/forum?id=P-pPW1nxf1r", "authors": "Armen Aghajanyan,Dmytro Okhonko,Mike Lewis,Mandar Joshi,Hu Xu,Gargi Ghosh,Luke Zettlemoyer", "tags": "ICLR 2022,Poster", "abstract": "We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. 'class' and 'id' attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling '