title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Learning Towards The Largest Margins | https://openreview.net/forum?id=hqkhcFHOeKD | https://openreview.net/forum?id=hqkhcFHOeKD | Xiong Zhou,Xianming Liu,Deming Zhai,Junjun Jiang,Xin Gao,Xiangyang Ji | ICLR 2022,Poster | One of the main challenges for feature representation in deep learning-based classification is the design of appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins into well-established losses in order to enforce extra intra-class compactness and inter-class separability; these margins, however, were developed through heuristic means rather than rigorous mathematical principles. In this work, we attempt to address this limitation by formulating the principled optimization objective as learning towards the largest margins. Specifically, we first propose to employ the class margin as the measure of inter-class separability, and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions for the existing margin-based losses. Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it also provides new insights that can guide the design of new tools, including \textit{sample margin regularization} and \textit{largest margin softmax loss} for class-balanced cases, and \textit{zero centroid regularization} for class-imbalanced cases. Experimental results demonstrate the effectiveness of our strategy for multiple tasks including visual classification, imbalanced classification, person re-identification, and face verification. | https://openreview.net/pdf/05f12453b1762c08d54507567f592f91d86425be.pdf |
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations? | https://openreview.net/forum?id=28ib9tf6zhr | https://openreview.net/forum?id=28ib9tf6zhr | Yonggan Fu,Shunyao Zhang,Shang Wu,Cheng Wan,Yingyan Lin | ICLR 2022,Poster | Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasing attention. In particular, recent works show that ViTs are more robust against adversarial attacks as compared with convolutional neural networks (CNNs), and conjecture that this is because ViTs focus more on capturing global interactions among different input/feature patches, leading to their improved robustness to local perturbations imposed by adversarial attacks. In this work, we ask an intriguing question: "Under what kinds of perturbations do ViTs become more vulnerable learners compared to CNNs?" Driven by this question, we first conduct a comprehensive experiment regarding the robustness of both ViTs and CNNs under various existing adversarial attacks to understand the underlying reason favoring their robustness. Based on the drawn insights, we then propose a dedicated attack framework, dubbed Patch-Fool, that fools the self-attention mechanism by attacking its basic component (i.e., a single patch) with a series of attention-aware optimization techniques. Interestingly, our Patch-Fool framework shows for the first time that ViTs are not necessarily more robust than CNNs against adversarial perturbations. In particular, we find that ViTs are more vulnerable than CNNs to our Patch-Fool attack, a finding that is consistent across extensive experiments, and the observations from Sparse/Mild Patch-Fool, two variants of Patch-Fool, indicate an intriguing insight that the perturbation density and strength on each patch seem to be the key factors that influence the robustness ranking between ViTs and CNNs. It can be expected that our Patch-Fool framework will shed light on both future architecture designs and training schemes for robustifying ViTs towards their real-world deployment. Our codes are available at https://github.com/RICE-EIC/Patch-Fool. | https://openreview.net/pdf/4c7b8d2f80c4ea1bfe11754da2e7c69fc5183754.pdf |
AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation | https://openreview.net/forum?id=Q5uh1Nvv5dm | https://openreview.net/forum?id=Q5uh1Nvv5dm | David Berthelot,Rebecca Roelofs,Kihyuk Sohn,Nicholas Carlini,Alexey Kurakin | ICLR 2022,Poster | We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a unified solution for unsupervised domain adaptation (UDA), semi-supervised learning (SSL), and semi-supervised domain adaptation (SSDA). In an extensive experimental study, we compare its behavior with respective state-of-the-art techniques from SSL, SSDA, and UDA and find that AdaMatch either matches or significantly exceeds the state-of-the-art in each case using the same hyper-parameters regardless of the dataset or task. For example, AdaMatch nearly doubles the accuracy compared to that of the prior state-of-the-art on the UDA task for DomainNet and even exceeds the accuracy of the prior state-of-the-art obtained with pre-training by 6.4% when AdaMatch is trained completely from scratch. Furthermore, by providing AdaMatch with just one labeled example per class from the target domain (i.e., the SSDA setting), we increase the target accuracy by an additional 6.1%, and with 5 labeled examples, by 13.6%. | https://openreview.net/pdf/8dd30c7eff2e4f152d2d24368c232baec4e5e974.pdf |
Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound | https://openreview.net/forum?id=l_amHf1oaK | https://openreview.net/forum?id=l_amHf1oaK | Claudio Ferrari,Mark Niklas Mueller,Nikola Jovanović,Martin Vechev | ICLR 2022,Poster | State-of-the-art neural network verifiers are fundamentally based on one of two paradigms: either encoding the whole verification problem via tight multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure leveraging imprecise but fast bounding methods on a large number of easier subproblems. The former can capture complex multi-neuron dependencies but sacrifices completeness due to the inherent limitations of convex relaxations. The latter enables complete verification but becomes increasingly ineffective on larger and more challenging networks. In this work, we present a novel complete verifier which combines the strengths of both paradigms: it leverages multi-neuron relaxations to drastically reduce the number of subproblems generated during the BaB process and an efficient GPU-based dual optimizer to solve the remaining ones. An extensive evaluation demonstrates that our verifier achieves a new state-of-the-art on both established benchmarks as well as networks with significantly higher accuracy than previously considered. The latter result (up to 28% certification gains) indicates meaningful progress towards creating verifiers that can handle practically relevant networks. | https://openreview.net/pdf/fcc20218f5754386cf64f4156a1f41039038b5da.pdf |
Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality | https://openreview.net/forum?id=VFBjuF8HEp | https://openreview.net/forum?id=VFBjuF8HEp | Daniel Watson,William Chan,Jonathan Ho,Mohammad Norouzi | ICLR 2022,Poster | Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We also present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with strongest DDPM/DDIM baselines). Our method is compatible with any pre-trained diffusion model without fine-tuning or re-training required. | https://openreview.net/pdf/56f0145dd15f32bd53f6dba7efde74914a88f663.pdf |
Distribution Compression in Near-Linear Time | https://openreview.net/forum?id=lzupY5zjaU9 | https://openreview.net/forum?id=lzupY5zjaU9 | Abhishek Shetty,Raaz Dwivedi,Lester Mackey | ICLR 2022,Poster | In distribution compression, one aims to accurately summarize a probability distribution $\mathbb{P}$ using a small number of representative points. Near-optimal thinning procedures achieve this goal by sampling $n$ points from a Markov chain and identifying $\sqrt{n}$ points with $\widetilde{\mathcal{O}}(1/\sqrt{n})$ discrepancy to $\mathbb{P}$. Unfortunately, these algorithms suffer from quadratic or super-quadratic runtime in the sample size $n$. To address this deficiency, we introduce Compress++, a simple meta-procedure for speeding up any thinning algorithm while suffering at most a factor of $4$ in error. When combined with the quadratic-time kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021), Compress++ delivers $\sqrt{n}$ points with $\mathcal{O}(\sqrt{\log n/n})$ integration error and better-than-Monte-Carlo maximum mean discrepancy in $\mathcal{O}(n \log^3 n)$ time and $\mathcal{O}( \sqrt{n} \log^2 n )$ space. Moreover, Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor. In our benchmarks with high-dimensional Monte Carlo samples and Markov chains targeting challenging differential equation posteriors, Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time. | https://openreview.net/pdf/484f68f97f561be1f3272522336a9a0b1fa84bbc.pdf |
Capturing Structural Locality in Non-parametric Language Models | https://openreview.net/forum?id=nnU3IUMJmN | https://openreview.net/forum?id=nnU3IUMJmN | Frank F. Xu,Junxian He,Graham Neubig,Vincent Josua Hellendoorn | ICLR 2022,Poster | Structural locality is a ubiquitous feature of real-world datasets, wherein data points are organized into local hierarchies. Some examples include topical clusters in text or project hierarchies in source code repositories. In this paper, we explore utilizing this structural locality within non-parametric language models, which generate sequences that reference retrieved examples from an external source. We propose a simple yet effective approach for adding locality information into such models by adding learned parameters that improve the likelihood of retrieving examples from local neighborhoods. Experiments on two different domains, Java source code and Wikipedia text, demonstrate that locality features improve model efficacy over models without access to these features, with interesting differences. We also perform an analysis of how and where locality features contribute to improving performance and why the traditionally used contextual similarity metrics alone are not enough to grasp the locality structure. | https://openreview.net/pdf/05677eb0d7fca88dd7c4c6cbefa73f6ae430ad68.pdf |
Audio Lottery: Speech Recognition Made Ultra-Lightweight, Noise-Robust, and Transferable | https://openreview.net/forum?id=9Nk6AJkVYB | https://openreview.net/forum?id=9Nk6AJkVYB | Shaojin Ding,Tianlong Chen,Zhangyang Wang | ICLR 2022,Poster | Lightweight speech recognition models have seen explosive demands owing to a growing amount of speech-interactive features on mobile devices. Since designing such systems from scratch is non-trivial, practitioners typically choose to compress large (pre-trained) speech models. Recently, the lottery ticket hypothesis revealed the existence of highly sparse subnetworks that can be trained in isolation without sacrificing the performance of the full models. In this paper, we investigate the tantalizing possibility of using the lottery ticket hypothesis to discover lightweight speech recognition models that are (1) robust to various types of noise in speech; (2) transferable to fit open-world personalization; and (3) compatible with structured sparsity. We conducted extensive experiments on CNN-LSTM, RNN-Transducer, and Transformer models, and verified the existence of highly sparse winning tickets that can match the full model performance across those backbones. We obtained winning tickets that have less than 20% of full model weights on all backbones, while the most lightweight one keeps only 4.4% of the weights. Those winning tickets generalize to structured sparsity with no performance loss, and transfer exceptionally from large source datasets to various target datasets. Perhaps most surprisingly, when the training utterances have high background noise, the winning tickets even substantially outperform the full models, showing the extra bonus of noise robustness from inducing sparsity. Codes are available at https://github.com/VITA-Group/Audio-Lottery. | https://openreview.net/pdf/3d42ff881f8ec8954935d0f8bbcb2a21d71106ea.pdf |
Learning to Map for Active Semantic Goal Navigation | https://openreview.net/forum?id=swrMQttr6wN | https://openreview.net/forum?id=swrMQttr6wN | Georgios Georgakis,Bernadette Bucher,Karl Schmeckpeper,Siddharth Singh,Kostas Daniilidis | ICLR 2022,Poster | We consider the problem of object goal navigation in unseen environments. Solving this problem requires learning of contextual semantic priors, a challenging endeavour given the spatial and semantic variability of indoor environments. Current methods learn to implicitly encode these priors through goal-oriented navigation policy functions operating on spatial representations that are limited to the agent's observable areas. In this work, we propose a novel framework that actively learns to generate semantic maps outside the field of view of the agent and leverages the uncertainty over the semantic classes in the unobserved areas to decide on long term goals. We demonstrate that through this spatial prediction strategy, we are able to learn semantic priors in scenes that can be leveraged in unknown environments. Additionally, we show how different objectives can be defined by balancing exploration with exploitation during searching for semantic targets. Our method is validated in the visually realistic environments of the Matterport3D dataset and shows improved results on object goal navigation over competitive baselines. | https://openreview.net/pdf/8097afd8a3e6d7c824f59390ca5a9cee0530bbd1.pdf |
Benchmarking the Spectrum of Agent Capabilities | https://openreview.net/forum?id=1W0z96MFEoH | https://openreview.net/forum?id=1W0z96MFEoH | Danijar Hafner | ICLR 2022,Poster | Evaluating the general abilities of intelligent agents requires complex simulation environments. Existing benchmarks typically evaluate only one narrow task per environment, requiring researchers to perform expensive training runs on many different environments. We introduce Crafter, an open world survival game with visual inputs that evaluates a wide range of general abilities within a single environment. Agents either learn from the provided reward signal or through intrinsic objectives and are evaluated by semantically meaningful achievements that can be unlocked during each episode, such as discovering resources and crafting tools. Consistently unlocking all achievements requires strong generalization, deep exploration, and long-term reasoning. We experimentally verify that Crafter is of appropriate difficulty to drive future research and provide baseline scores for reward agents and unsupervised agents. Furthermore, we observe sophisticated behaviors emerging from maximizing the reward signal, such as building tunnel systems, bridges, houses, and plantations. We hope that Crafter will accelerate research progress by quickly evaluating a wide spectrum of abilities. | https://openreview.net/pdf/116a18888b3fb460e882ec2b844128223e3b17ca.pdf |
Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks | https://openreview.net/forum?id=vqGi8Kp0wM | https://openreview.net/forum?id=vqGi8Kp0wM | Peihao Zhu,Rameen Abdal,John Femiani,Peter Wonka | ICLR 2022,Poster | We present a new method for one shot domain adaptation. The input to our method is a trained GAN that can produce images in domain A and a single reference image I_B from domain B. The proposed algorithm can translate any output of the trained GAN from domain A to domain B. There are two main advantages of our method compared to the current state of the art: First, our solution achieves higher visual quality, e.g. by noticeably reducing overfitting. Second, our solution allows for more degrees of freedom to control the domain gap, i.e. what aspects of image I_B are used to define the domain B. Technically, we realize the new method by building on a pre-trained StyleGAN generator as the GAN and a pre-trained CLIP model for representing the domain gap. We propose several new regularizers for controlling the domain gap to optimize the weights of the pre-trained StyleGAN generator to output images in domain B instead of domain A. The regularizers prevent the optimization from taking on too many attributes of the single reference image. Our results show significant visual improvements over the state of the art as well as multiple applications that highlight improved control. | https://openreview.net/pdf/2f6e593f100fa850ecde50e059aa6b2e73a3f6fe.pdf |
On Evaluation Metrics for Graph Generative Models | https://openreview.net/forum?id=EnwCZixjSh | https://openreview.net/forum?id=EnwCZixjSh | Rylee Thompson,Boris Knyazev,Elahe Ghalebi,Jungtaek Kim,Graham W. Taylor | ICLR 2022,Poster | In image generation, generative models can be evaluated naturally by visually inspecting model outputs. However, this is not always the case for graph generative models (GGMs), making their evaluation challenging. Currently, the standard process for evaluating GGMs suffers from three critical limitations: i) it does not produce a single score which makes model selection challenging, ii) in many cases it fails to consider underlying edge and node features, and iii) it is prohibitively slow to perform. In this work, we mitigate these issues by searching for \emph{scalar, domain-agnostic, and scalable metrics} for evaluating and ranking GGMs. To this end, we study existing GGM metrics and neural-network-based metrics emerging from generative models of images that use embeddings extracted from a task-specific network. Motivated by the power of Graph Neural Networks (GNNs) to extract meaningful graph representations \emph{without any training}, we introduce several metrics based on the features extracted by an untrained random GNN. We design experiments to thoroughly test and objectively score metrics on their ability to measure the diversity and fidelity of generated graphs, as well as their sample and computational efficiency. Depending on the quantity of samples, we recommend one of two metrics from our collection of random-GNN-based metrics. We show these two metrics to be more expressive than pre-existing and alternative random-GNN-based metrics using our objective scoring. While we focus on applying these metrics to GGM evaluation, in practice this enables the ability to easily compute the dissimilarity between any two sets of graphs \emph{regardless of domain}. Our code is released at: https://github.com/uoguelph-mlrg/GGM-metrics. | https://openreview.net/pdf/fcb94055fd54a7db263aab7d0f85b591c34e713e.pdf |
Selective Ensembles for Consistent Predictions | https://openreview.net/forum?id=HfUyCRBeQc | https://openreview.net/forum?id=HfUyCRBeQc | Emily Black,Klas Leino,Matt Fredrikson | ICLR 2022,Poster | Recent work has shown that models trained to the same objective, and which achieve similar measures of accuracy on consistent test data, may nonetheless behave very differently on individual predictions. This inconsistency is undesirable in high-stakes contexts, such as medical diagnosis and finance. We show that this duplicitous behavior extends beyond predictions to feature attributions, which may likewise have negative implications for the intelligibility of a model, and one's ability to find recourse for subjects. We then introduce selective ensembles to mitigate such inconsistencies by applying hypothesis testing to the predictions of a set of models trained using randomly-selected starting conditions; importantly, selective ensembles can abstain in cases where a consistent outcome cannot be achieved up to a specified confidence level. We prove that prediction disagreement between selective ensembles is bounded, and empirically demonstrate that selective ensembles achieve consistent predictions and feature attributions while maintaining low abstention rates. On several benchmark datasets, selective ensembles reach zero inconsistently predicted points, with abstention rates as low as 1.5%. | https://openreview.net/pdf/aef96c65d43466af59147df0d990f0b94efbef7a.pdf |
Graph Condensation for Graph Neural Networks | https://openreview.net/forum?id=WLEx3Jo4QaB | https://openreview.net/forum?id=WLEx3Jo4QaB | Wei Jin,Lingxiao Zhao,Shichang Zhang,Yozen Liu,Jiliang Tang,Neil Shah | ICLR 2022,Poster | Given the prevalence of large-scale graphs in real-world applications, the storage and time for training neural models have raised increasing concerns. To alleviate the concerns, we propose and study the problem of graph condensation for graph neural networks (GNNs). Specifically, we aim to condense the large, original graph into a small, synthetic and highly-informative graph, such that GNNs trained on the small graph and large graph have comparable performance. We approach the condensation problem by imitating the GNN training trajectory on the original graph through the optimization of a gradient matching loss and design a strategy to condense node features and structural information simultaneously. Extensive experiments have demonstrated the effectiveness of the proposed framework in condensing different graph datasets into informative smaller graphs. In particular, we are able to recover 95.3\% of the original test accuracy on Reddit, 99.8\% on Flickr and 99.0\% on Citeseer, while reducing their graph size by more than 99.9\%, and the condensed graphs can be used to train various GNN architectures. | https://openreview.net/pdf/fb904d1d840eb264e6ab2e160ff7322153a1fbb0.pdf |
DIVA: Dataset Derivative of a Learning Task | https://openreview.net/forum?id=bVvMOtLMiw | https://openreview.net/forum?id=bVvMOtLMiw | Yonatan Dukler,Alessandro Achille,Giovanni Paolini,Avinash Ravichandran,Marzia Polito,Stefano Soatto | ICLR 2022,Poster | We present a method to compute the derivative of a learning task with respect to a dataset. A learning task is a function from a training set to the validation error, which can be represented by a trained deep neural network (DNN). The ``dataset derivative'' is a linear operator, computed around the trained model, that informs how perturbations of the weight of each training sample affect the validation error, usually computed on a separate validation dataset. Our method, DIVA (Differentiable Validation) hinges on a closed-form differentiable expression of the leave-one-out cross-validation error around a pre-trained DNN. Such expression constitutes the dataset derivative. DIVA could be used for dataset auto-curation, for example removing samples with faulty annotations, augmenting a dataset with additional relevant samples, or rebalancing. More generally, DIVA can be used to optimize the dataset, along with the parameters of the model, as part of the training process without the need for a separate validation dataset, unlike bi-level optimization methods customary in AutoML. To illustrate the flexibility of DIVA, we report experiments on sample auto-curation tasks such as outlier rejection, dataset extension, and automatic aggregation of multi-modal data. | https://openreview.net/pdf/c20ae574c689fe5fbecb96f791b3e678973e0053.pdf |
Towards General Function Approximation in Zero-Sum Markov Games | https://openreview.net/forum?id=sA4qIu3zv6v | https://openreview.net/forum?id=sA4qIu3zv6v | Baihe Huang,Jason D. Lee,Zhaoran Wang,Zhuoran Yang | ICLR 2022,Poster | This paper considers two-player zero-sum finite-horizon Markov games with simultaneous moves. The study focuses on the challenging settings where the value function or the model is parameterized by general function classes. Provably efficient algorithms for both decoupled and coordinated settings are developed. In the decoupled setting where the agent controls a single player and plays against an arbitrary opponent, we propose a new model-free algorithm. The sample complexity is governed by the Minimax Eluder dimension, a new dimension of the function class in Markov games. As a special case, this method improves the state-of-the-art algorithm by a $\sqrt{d}$ factor in the regret when the reward function and transition kernel are parameterized with $d$-dimensional linear features. In the coordinated setting where both players are controlled by the agent, we propose a model-based algorithm and a model-free algorithm. For the model-based algorithm, we prove that the sample complexity can be bounded by a generalization of the Witness rank to Markov games. The model-free algorithm enjoys a $\sqrt{K}$-regret upper bound, where $K$ is the number of episodes. Our algorithms are based on new techniques of alternate optimism. | https://openreview.net/pdf/89164a5698b4ced1396254451108620fc52d5bc1.pdf |
Exposing the Implicit Energy Networks behind Masked Language Models via Metropolis--Hastings | https://openreview.net/forum?id=6PvWo1kEvlT | https://openreview.net/forum?id=6PvWo1kEvlT | Kartik Goyal,Chris Dyer,Taylor Berg-Kirkpatrick | ICLR 2022,Poster | While recent work has shown that scores from models trained by the ubiquitous masked language modeling (MLM) objective effectively discriminate probable from improbable sequences, it is still an open question if these MLMs specify a principled probability distribution over the space of possible sequences. In this paper, we interpret MLMs as energy-based sequence models and propose two energy parametrizations derivable from the trained MLMs. In order to draw samples correctly from these models, we develop a tractable sampling scheme based on the Metropolis--Hastings Monte Carlo algorithm. In our approach, samples are proposed from the same masked conditionals used for training the masked language models, and they are accepted or rejected based on their energy values according to the target distribution. We validate the effectiveness of the proposed parametrizations by exploring the quality of samples drawn from these energy-based models for both open-ended unconditional generation and a conditional generation task of machine translation. We theoretically and empirically justify our sampling algorithm by showing that the masked conditionals on their own do not yield a Markov chain whose stationary distribution is that of our target distribution, and our approach generates higher quality samples than other recently proposed undirected generation approaches (Wang et al., 2019, Ghazvininejad et al., 2019). | https://openreview.net/pdf/dfdc7212f0c035baaec71e0d9d64317aec15492b.pdf |
ClimateGAN: Raising Climate Change Awareness by Generating Images of Floods | https://openreview.net/forum?id=EZNOb_uNpJk | https://openreview.net/forum?id=EZNOb_uNpJk | Victor Schmidt,Alexandra Luccioni,Mélisande Teng,Tianyu Zhang,Alexia Reynaud,Sunand Raghupathi,Gautier Cosne,Adrien Juraver,Vahe Vardanyan,Alex Hernández-García,Yoshua Bengio | ICLR 2022,Poster | Climate change is a major threat to humanity and the actions required to prevent its catastrophic consequences include changes in both policy-making and individual behaviour. However, taking action requires understanding its seemingly abstract and distant consequences. Projecting the potential impacts of extreme climate events such as flooding in familiar places can help make the impacts of climate change more concrete and encourage action. As part of a larger initiative to build a website (https://thisclimatedoesnotexist.com) that projects extreme climate events onto user-chosen photos, we present our solution to simulate photo-realistic floods on authentic images. To address this complex task in the absence of suitable data, we propose ClimateGAN, a model that leverages both simulated and real data through unsupervised domain adaptation and conditional image generation. In this paper, we describe the details of our framework, thoroughly evaluate the main components of our architecture and demonstrate that our model is capable of robustly generating photo-realistic flooding on street images. | https://openreview.net/pdf/ca121d72177c0fb77244bde0b2958681a89d4b98.pdf |
A Comparison of Hamming Errors of Representative Variable Selection Methods | https://openreview.net/forum?id=nhN-fqxmNGx | https://openreview.net/forum?id=nhN-fqxmNGx | Tracy Ke,Longlin Wang | ICLR 2022,Poster | Lasso is a celebrated method for variable selection in linear models, but it faces challenges when the covariates are moderately or strongly correlated. This motivates alternative approaches such as using a non-convex penalty, adding a ridge regularization, or conducting a post-Lasso thresholding. In this paper, we compare Lasso with 5 other methods: Elastic net, SCAD, forward selection, thresholded Lasso, and forward backward selection. We measure their performances theoretically by the expected Hamming error, assuming that the regression coefficients are ${\it iid}$ drawn from a two-point mixture and that the Gram matrix is block-wise diagonal. By deriving the rates of convergence of Hamming errors and the phase diagrams, we obtain useful conclusions about the pros and cons of different methods. | https://openreview.net/pdf/ae8e44624ed225194ef2c6ef294ae6d5067515b8.pdf |
A Program to Build E(N)-Equivariant Steerable CNNs | https://openreview.net/forum?id=WE4qe9xlnQw | https://openreview.net/forum?id=WE4qe9xlnQw | Gabriele Cesa,Leon Lang,Maurice Weiler | ICLR 2022,Poster | Equivariance is becoming an increasingly popular design choice to build data efficient neural networks by exploiting prior knowledge about the symmetries of the problem at hand. Euclidean steerable CNNs are one of the most common classes of equivariant networks. While the constraints these architectures need to satisfy are understood, existing approaches are tailored to specific (classes of) groups. No generally applicable method that is practical for implementation has been described so far. In this work, we generalize the Wigner-Eckart theorem proposed in Lang & Weiler (2020), which characterizes general $G$-steerable kernel spaces for compact groups $G$ over their homogeneous spaces, to arbitrary $G$-spaces. This enables us to directly parameterize filters in terms of a band-limited basis on the whole space rather than on $G$'s orbits, but also to easily implement steerable CNNs equivariant to a large number of groups. To demonstrate its generality, we instantiate our method on a variety of isometry groups acting on the Euclidean space $\mathbb{R}^3$. Our framework allows us to build $E(3)$ and $SE(3)$-steerable CNNs like previous works, but also CNNs with arbitrary $G\leq O(3)$-steerable kernels. For example, we build 3D CNNs equivariant to the symmetries of platonic solids or choose $G=SO(2)$ when working with 3D data having only azimuthal symmetries. We compare these models on 3D shapes and molecular datasets, observing improved performance by matching the model's symmetries to the ones of the data. | https://openreview.net/pdf/6d634b6f1eabc70593f897e223c78025e3029b52.pdf |
Minimax Optimization with Smooth Algorithmic Adversaries | https://openreview.net/forum?id=UdxJ2fJx7N0 | https://openreview.net/forum?id=UdxJ2fJx7N0 | Tanner Fiez,Chi Jin,Praneeth Netrapalli,Lillian J Ratliff | ICLR 2022,Poster | This paper considers minimax optimization $\min_x \max_y f(x, y)$ in the challenging setting where $f$ can be both nonconvex in $x$ and nonconcave in $y$. Though such optimization problems arise in many machine learning paradigms including training generative adversarial networks (GANs) and adversarially robust models, from a theoretical point of view, two fundamental issues remain: (i) the absence of simple and efficiently computable optimality notions, and (ii) cyclic or diverging behavior of existing algorithms. This paper proposes a new theoretical framework for nonconvex-nonconcave minimax optimization that addresses both of the above issues. The starting point of this paper is the observation that, under a computational budget, the max-player can not fully maximize $f(x,\cdot)$ since nonconcave maximization is NP-hard in general. So, we propose a new framework, and a corresponding algorithm, for the min-player to play against \emph{smooth algorithms} deployed by the adversary (i.e., the max-player) instead of against full maximization. Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles or diverging behavior), and to find an appropriate ``stationary point'' in a polynomial number of iterations. Our framework covers practically relevant settings where the smooth algorithms deployed by the adversary are multi-step stochastic gradient ascent, and its accelerated version. We further present experimental results that confirm our theoretical findings and demonstrate the effectiveness of the proposed approach in practice on simple, conceptual settings. | https://openreview.net/pdf/6f978c34600cf6fcf440c6e1bf8d1f93e0afce3d.pdf |
On Distributed Adaptive Optimization with Gradient Compression | https://openreview.net/forum?id=CI-xXX9dg9l | https://openreview.net/forum?id=CI-xXX9dg9l | Xiaoyun Li,Belhal Karimi,Ping Li | ICLR 2022,Poster | We study COMP-AMS, a distributed optimization framework based on gradient averaging and adaptive AMSGrad algorithm. Gradient compression with error feedback is applied to reduce the communication cost in the gradient transmission process. Our convergence analysis of COMP-AMS shows that such compressed gradient averaging strategy yields same convergence rate as standard AMSGrad, and also exhibits the linear speedup effect w.r.t. the number of local workers. Compared with recently proposed protocols on distributed adaptive methods, COMP-AMS is simple and convenient. Numerical experiments are conducted to justify the theoretical findings, and demonstrate that the proposed method can achieve same test accuracy as the full-gradient AMSGrad with substantial communication savings. With its simplicity and efficiency, COMP-AMS can serve as a useful distributed training framework for adaptive methods. | https://openreview.net/pdf/84313c8e0bf7b65d71addc3b16aba48f161f4092.pdf |
Leveraging unlabeled data to predict out-of-distribution performance | https://openreview.net/forum?id=o_HsiMPYh_x | https://openreview.net/forum?id=o_HsiMPYh_x | Saurabh Garg,Sivaraman Balakrishnan,Zachary Chase Lipton,Behnam Neyshabur,Hanie Sedghi | ICLR 2022,Poster | Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions that may cause performance drops. In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data. We propose Average Thresholded Confidence (ATC), a practical method that learns a \emph{threshold} on the model's confidence, predicting accuracy as the fraction of unlabeled examples for which model confidence exceeds that threshold. ATC outperforms previous methods across several model architectures, types of distribution shifts (e.g., due to synthetic corruptions, dataset reproduction, or novel subpopulations), and datasets (\textsc{Wilds}-FMoW, ImageNet, \textsc{Breeds}, CIFAR, and MNIST). In our experiments, ATC estimates target performance $2\text{--}4\times$ more accurately than prior methods. We also explore the theoretical foundations of the problem, proving that, in general, identifying the accuracy is just as hard as identifying the optimal predictor and thus, the efficacy of any method rests upon (perhaps unstated) assumptions on the nature of the shift. Finally, analyzing our method on some toy distributions, we provide insights concerning when it works. | https://openreview.net/pdf/f94008d1c0cfc4177d8617db211b62b1f85906ea.pdf |
VC dimension of partially quantized neural networks in the overparametrized regime | https://openreview.net/forum?id=7udZAsEzd60 | https://openreview.net/forum?id=7udZAsEzd60 | Yutong Wang,Clayton Scott | ICLR 2022,Poster | Vapnik-Chervonenkis (VC) theory has so far been unable to explain the small generalization error of overparametrized neural networks. Indeed, existing applications of VC theory to large networks obtain upper bounds on VC dimension that are proportional to the number of weights, and for a large class of networks, these upper bounds are known to be tight. In this work, we focus on a class of partially quantized networks that we refer to as hyperplane arrangement neural networks (HANNs). Using a sample compression analysis, we show that HANNs can have VC dimension significantly smaller than the number of weights, while being highly expressive. In particular, empirical risk minimization over HANNs in the overparametrized regime achieves the minimax rate for classification with Lipschitz posterior class probability. We further demonstrate the expressivity of HANNs empirically. On a panel of 121 UCI datasets, overparametrized HANNs are able to match the performance of state-of-the-art full-precision models. | https://openreview.net/pdf/9760187606b3496a5f4a0fe752a22416bb4a2e21.pdf |
Optimal Representations for Covariate Shift | https://openreview.net/forum?id=Rf58LPCwJj0 | https://openreview.net/forum?id=Rf58LPCwJj0 | Yangjun Ruan,Yann Dubois,Chris J. Maddison | ICLR 2022,Poster | Machine learning systems often experience a distribution shift between training and testing. In this paper, we introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust to any distribution shift that preserves the Bayes predictor, e.g., covariate shifts. Our objective has two components. First, a representation must remain discriminative for the task, i.e., some predictor must be able to simultaneously minimize the source and target risk. Second, the representation's marginal support needs to be the same across source and target. We make this practical by designing self-supervised objectives that only use unlabelled data and augmentations to train robust representations. Our objectives give insights into the robustness of CLIP, and further improve CLIP's representations to achieve SOTA results on DomainBed. | https://openreview.net/pdf/ddc6369b11aed2bc1a72bc2f493bb2ebd0f65be7.pdf |
Fortuitous Forgetting in Connectionist Networks | https://openreview.net/forum?id=ei3SY1_zYsE | https://openreview.net/forum?id=ei3SY1_zYsE | Hattie Zhou,Ankit Vani,Hugo Larochelle,Aaron Courville | ICLR 2022,Poster | Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce forget-and-relearn as a powerful paradigm for shaping the learning trajectories of artificial neural networks. In this process, the forgetting step selectively removes undesirable information from the model, and the relearning step reinforces features that are consistently useful under different conditions. The forget-and-relearn framework unifies many existing iterative training algorithms in the image classification and language emergence literature, and allows us to understand the success of these algorithms in terms of the disproportionate forgetting of undesirable information. We leverage this understanding to improve upon existing algorithms by designing more targeted forgetting operations. Insights from our analysis provide a coherent view on the dynamics of iterative training in neural networks and offer a clear path towards performance improvements. | https://openreview.net/pdf/ca4d5fd0fac40867b797ca356f4056c7cb11fc6a.pdf |
EigenGame Unloaded: When playing games is better than optimizing | https://openreview.net/forum?id=So6YAqnqgMj | https://openreview.net/forum?id=So6YAqnqgMj | Ian Gemp,Brian McWilliams,Claire Vernade,Thore Graepel | ICLR 2022,Poster | We build on the recently proposed EigenGame that views eigendecomposition as a competitive game. EigenGame's updates are biased if computed using minibatches of data, which hinders convergence and more sophisticated parallelism in the stochastic setting. In this work, we propose an unbiased stochastic update that is asymptotically equivalent to EigenGame, enjoys greater parallelism allowing computation on datasets of larger sample sizes, and outperforms EigenGame in experiments. We present applications to finding the principal components of massive datasets and performing spectral clustering of graphs. We analyze and discuss our proposed update in the context of EigenGame and the shift in perspective from optimization to games. | https://openreview.net/pdf/cedcb096f43d8f1b1e43c8969cf5b1dd7e83d5ae.pdf |
Contextualized Scene Imagination for Generative Commonsense Reasoning | https://openreview.net/forum?id=Oh1r2wApbPv | https://openreview.net/forum?id=Oh1r2wApbPv | PeiFeng Wang,Jonathan Zamora,Junfeng Liu,Filip Ilievski,Muhao Chen,Xiang Ren | ICLR 2022,Poster | Humans use natural language to compose common concepts from their environment into plausible, day-to-day scene descriptions. However, such generative commonsense reasoning (GCSR) skills are lacking in state-of-the-art text generation methods. Descriptive sentences about arbitrary concepts generated by neural text generation models (e.g., pre-trained text-to-text Transformers) are often grammatically fluent but may not correspond to human common sense, largely due to their lack of mechanisms to capture concept relations, to identify implicit concepts, and to perform generalizable reasoning about unseen concept compositions. In this paper, we propose an Imagine-and-Verbalize (I\&V) method, which learns to imagine a relational scene knowledge graph (SKG) with relations between the input concepts, and leverage the SKG as a constraint when generating a plausible scene description. We collect and harmonize a set of knowledge resources from different domains and modalities, providing a rich auxiliary supervision signal for I\&V. The experiments demonstrate the effectiveness of I\&V in improving language models on both concept-to-sentence and concept-to-story generation tasks, while enabling the model to learn well from fewer task examples and generate SKGs that make common sense to human annotators. | https://openreview.net/pdf/a66e1b12b2211131a44463611c8c272c21decbfb.pdf |
Scene Transformer: A unified architecture for predicting future trajectories of multiple agents | https://openreview.net/forum?id=Wm3EA5OlHsG | https://openreview.net/forum?id=Wm3EA5OlHsG | Jiquan Ngiam,Vijay Vasudevan,Benjamin Caine,Zhengdong Zhang,Hao-Tien Lewis Chiang,Jeffrey Ling,Rebecca Roelofs,Alex Bewley,Chenxi Liu,Ashish Venugopal,David J Weiss,Benjamin Sapp,Zhifeng Chen,Jonathon Shlens | ICLR 2022,Poster | Predicting the motion of multiple agents is necessary for planning in dynamic environments. This task is challenging for autonomous driving since agents (e.g., vehicles and pedestrians) and their associated behaviors may be diverse and influence one another. Most prior work has focused on predicting independent futures for each agent based on all past motion, and planning against these independent predictions. However, planning against independent predictions can make it challenging to represent the future interaction possibilities between different agents, leading to sub-optimal planning. In this work, we formulate a model for predicting the behavior of all agents jointly, producing consistent futures that account for interactions between agents. Inspired by recent language modeling approaches, we use a masking strategy as the query to our model, enabling one to invoke a single model to predict agent behavior in many ways, such as potentially conditioned on the goal or full future trajectory of the autonomous vehicle or the behavior of other agents in the environment. Our model architecture employs attention to combine features across road elements, agent interactions, and time steps. We evaluate our approach on autonomous driving datasets for both marginal and joint motion prediction, and achieve state-of-the-art performance across two popular datasets. Through combining a scene-centric approach, agent permutation equivariant model, and a sequence masking strategy, we show that our model can unify a variety of motion prediction tasks from joint motion predictions to conditioned prediction. | https://openreview.net/pdf/92f191f2cdcf1389ed2d3dce901833dc5fc6deaf.pdf |
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals | https://openreview.net/forum?id=qY79G8jGsep | https://openreview.net/forum?id=qY79G8jGsep | Asma Ghandeharioun,Been Kim,Chun-Liang Li,Brendan Jou,Brian Eoff,Rosalind Picard | ICLR 2022,Poster | Explaining deep learning model inferences is a promising venue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars. One of the principal benefits of counterfactual explanations is allowing users to explore "what-if" scenarios through what does not and cannot exist in the data, a quality that many other forms of explanation such as heatmaps and influence functions are inherently incapable of doing. However, most previous work on generative explainability cannot disentangle important concepts effectively, produces unrealistic examples, or fails to retain relevant information. We propose a novel approach, DISSECT, that jointly trains a generator, a discriminator, and a concept disentangler to overcome such challenges using little supervision. DISSECT generates Concept Traversals (CTs), defined as a sequence of generated examples with increasing degrees of concepts that influence a classifier's decision. By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent "notion" of distinct concepts automatically rather than rely on user-predefined concepts. We show that DISSECT produces CTs that (1) disentangle several concepts, (2) are influential to a classifier's decision and are coupled to its reasoning due to joint training (3), are realistic, (4) preserve relevant information, and (5) are stable across similar inputs. We validate DISSECT on several challenging synthetic and realistic datasets where previous methods fall short of satisfying desirable criteria for interpretability and show that it performs consistently well. Finally, we present experiments showing applications of DISSECT for detecting potential biases of a classifier and identifying spurious artifacts that impact predictions. | https://openreview.net/pdf/8e8a8d5dafd24c9cba49d3671b2ee34d0decdecf.pdf |
Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series | https://openreview.net/forum?id=Az7opqbQE-3 | https://openreview.net/forum?id=Az7opqbQE-3 | Satya Narayan Shukla,Benjamin Marlin | ICLR 2022,Poster | Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in the output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recently proposed deep latent variable models that use homoscedastic output layers. | https://openreview.net/pdf/4a602866528e0ae9511889c65b61991ad9ddfd8b.pdf |
A Neural Tangent Kernel Perspective of Infinite Tree Ensembles | https://openreview.net/forum?id=vUH85MOXO7h | https://openreview.net/forum?id=vUH85MOXO7h | Ryuichi Kanoh,Mahito Sugiyama | ICLR 2022,Poster | In practical situations, the tree ensemble is one of the most popular models along with neural networks. A soft tree is a variant of a decision tree. Instead of using a greedy method for searching splitting rules, the soft tree is trained using a gradient method in which the entire splitting operation is formulated in a differentiable form. Although ensembles of such soft trees have been used increasingly in recent years, little theoretical work has been done to understand their behavior. By considering an ensemble of infinite soft trees, this paper introduces and studies the Tree Neural Tangent Kernel (TNTK), which provides new insights into the behavior of the infinite ensemble of soft trees. Using the TNTK, we theoretically identify several non-trivial properties, such as global convergence of the training, the equivalence of the oblivious tree structure, and the degeneracy of the TNTK induced by the deepening of the trees. | https://openreview.net/pdf/39b3d2b8700abc51932e7eea69ff8d0868dc2be8.pdf |
AlphaZero-based Proof Cost Network to Aid Game Solving | https://openreview.net/forum?id=nKWjE4QF1hB | https://openreview.net/forum?id=nKWjE4QF1hB | Ti-Rong Wu,Chung-Chin Shih,Ting Han Wei,Meng-Yu Tsai,Wei-Yuan Hsu,I-Chen Wu | ICLR 2022,Poster | The AlphaZero algorithm learns and plays games without hand-crafted expert knowledge. However, since its objective is to play well, we hypothesize that a better objective can be defined for the related but separate task of solving games. This paper proposes a novel approach to solving problems by modifying the training target of the AlphaZero algorithm, such that it prioritizes solving the game quickly, rather than winning. We train a Proof Cost Network (PCN), where proof cost is a heuristic that estimates the amount of work required to solve problems. This matches the general concept of the so-called proof number from proof number search, which has been shown to be well-suited for game solving. We propose two specific training targets. The first finds the shortest path to a solution, while the second estimates the proof cost. We conduct experiments on solving 15x15 Gomoku and 9x9 Killall-Go problems with both MCTS-based and FDFPN solvers. Comparisons between using AlphaZero networks and PCN as heuristics show that PCN can solve more problems. | https://openreview.net/pdf/b5c23474ea991857d67e3e750bb82c36a669b2e9.pdf |
Bayesian Framework for Gradient Leakage | https://openreview.net/forum?id=f2lrIbGx3x7 | https://openreview.net/forum?id=f2lrIbGx3x7 | Mislav Balunovic,Dimitar Iliev Dimitrov,Robin Staab,Martin Vechev | ICLR 2022,Poster | Federated learning is an established method for training machine learning models without sharing training data. However, recent work has shown that it cannot guarantee data privacy as shared gradients can still leak sensitive information. To formalize the problem of gradient leakage, we propose a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem. We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients. Our experiments confirm the effectiveness of the Bayes optimal adversary when it has knowledge of the underlying distribution. Further, our experimental evaluation shows that several existing heuristic defenses are not effective against stronger attacks, especially early in the training process. Thus, our findings indicate that the construction of more effective defenses and their evaluation remains an open problem. | https://openreview.net/pdf/4e51a98c83f488bc5362a078c71216dab544be00.pdf |
Universalizing Weak Supervision | https://openreview.net/forum?id=YpPiNigTzMT | https://openreview.net/forum?id=YpPiNigTzMT | Changho Shin,Winfred Li,Harit Vishwakarma,Nicholas Carl Roberts,Frederic Sala | ICLR 2022,Poster | Weak supervision (WS) frameworks are a popular way to bypass hand-labeling large datasets for training data-hungry models. These approaches synthesize multiple noisy but cheaply-acquired estimates of labels into a set of high-quality pseudo-labels for downstream training. However, the synthesis technique is specific to a particular kind of label, such as binary labels or sequences, and each new label type requires manually designing a new synthesis algorithm. Instead, we propose a universal technique that enables weak supervision over any label type while still offering desirable properties, including practical flexibility, computational efficiency, and theoretical guarantees. We apply this technique to important problems previously not tackled by WS frameworks including learning to rank, regression, and learning in hyperbolic space. Theoretically, our synthesis approach produces consistent estimators for learning some challenging but important generalizations of the exponential family model. Experimentally, we validate our framework and show improvement over baselines in diverse settings including real-world learning-to-rank and regression problems along with learning on hyperbolic manifolds. | https://openreview.net/pdf/a2adc08eeb52dcddf2563c7bb42940946813b522.pdf |
Maximum n-times Coverage for Vaccine Design | https://openreview.net/forum?id=ULfq0qR25dY | https://openreview.net/forum?id=ULfq0qR25dY | Ge Liu,Alexander Dimitrakakis,Brandon Carter,David Gifford | ICLR 2022,Poster | We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times. We also define the min-cost $n$-times coverage problem where the objective is to select the minimum set of overlays such that the sum of the weights of elements that are covered at least $n$ times is at least $\tau$. Maximum $n$-times coverage is a generalization of the multi-set multi-cover problem, is NP-complete, and is not submodular. We introduce two new practical solutions for $n$-times coverage based on integer linear programming and sequential greedy optimization. We show that maximum $n$-times coverage is a natural way to frame peptide vaccine design, and find that it produces a pan-strain COVID-19 vaccine design that is superior to 29 other published designs in predicted population coverage and the expected number of peptides displayed by each individual's HLA molecules. | https://openreview.net/pdf/9d61f13ecd3d02a7e3ed6243e5e82f05c5f456cf.pdf |
KL Guided Domain Adaptation | https://openreview.net/forum?id=0JzqUlIVVDd | https://openreview.net/forum?id=0JzqUlIVVDd | A. Tuan Nguyen,Toan Tran,Yarin Gal,Philip Torr,Atilim Gunes Baydin | ICLR 2022,Poster | Domain adaptation is an important problem and often needed for real-world applications. In this problem, instead of i.i.d. training and testing datapoints, we assume that the source (training) data and the target (testing) data have different distributions. With that setting, the empirical risk minimization training procedure often does not perform well, since it does not account for the change in the distribution. A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domain. However, these approaches often require additional networks and/or optimizing an adversarial (minimax) objective, which can be very expensive or unstable in practice. To improve upon these marginal alignment techniques, in this paper, we first derive a generalization bound for the target loss based on the training loss and the reverse Kullback-Leibler (KL) divergence between the source and the target representation distributions. Based on this bound, we derive an algorithm that minimizes the KL term to obtain a better generalization to the target domain. We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples without any additional network or a minimax objective. This leads to a theoretically sound alignment method which is also very efficient and stable in practice. Experimental results also suggest that our method outperforms other representation-alignment approaches. | https://openreview.net/pdf/943a05167d50e4a4de4e6c043f7c7e6374502f72.pdf |
From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness | https://openreview.net/forum?id=Mspk_WYKoEH | https://openreview.net/forum?id=Mspk_WYKoEH | Lingxiao Zhao,Wei Jin,Leman Akoglu,Neil Shah | ICLR 2022,Poster | Message Passing Neural Networks (MPNNs) are a common type of Graph Neural Network (GNN), in which each node’s representation is computed recursively by aggregating representations (“messages”) from its immediate neighbors akin to a star-shaped pattern. MPNNs are appealing for being efficient and scalable, however their expressiveness is upper-bounded by the 1st-order Weisfeiler-Lehman isomorphism test (1-WL). In response, prior works propose highly expressive models at the cost of scalability and sometimes generalization performance. Our work stands between these two regimes: we introduce a general framework to uplift any MPNN to be more expressive, with limited scalability overhead and greatly improved practical performance. We achieve this by extending local aggregation in MPNNs from star patterns to general subgraph patterns (e.g., k-egonets): in our framework, each node representation is computed as the encoding of a surrounding induced subgraph rather than encoding of immediate neighbors only (i.e. a star). We choose the subgraph encoder to be a GNN (mainly MPNNs, considering scalability) to design a general framework that serves as a wrapper to uplift any GNN. We call our proposed method GNN-AK (GNN As Kernel), as the framework resembles a convolutional neural network by replacing the kernel with GNNs. Theoretically, we show that our framework is strictly more powerful than 1&2-WL, and is not less powerful than 3-WL. We also design subgraph sampling strategies which greatly reduce memory footprint and improve speed while maintaining performance. Our method sets new state-of-the-art performance by large margins for several well-known graph ML tasks; specifically, 0.08 MAE on ZINC, 74.79% and 86.887% accuracy on CIFAR10 and PATTERN respectively. | https://openreview.net/pdf/cc341ac588b917bee10fc4d5bb31b4a119b6108b.pdf |
NETWORK INSENSITIVITY TO PARAMETER NOISE VIA PARAMETER ATTACK DURING TRAINING | https://openreview.net/forum?id=-8sBpe7rDiV | https://openreview.net/forum?id=-8sBpe7rDiV | Julian Büchel,Fynn Firouz Faber,Dylan Richard Muir | ICLR 2022,Poster | Neuromorphic neural network processors, in the form of compute-in-memory crossbar arrays of memristors, or in the form of subthreshold analog and mixed-signal ASICs, promise enormous advantages in compute density and energy efficiency for NN-based ML tasks. However, these technologies are prone to computational non-idealities, due to process variation and intrinsic device physics. This degrades the task performance of networks deployed to the processor, by introducing parameter noise into the deployed model. While it is possible to calibrate each device, or train networks individually for each processor, these approaches are expensive and impractical for commercial deployment. Alternative methods are therefore needed to train networks that are inherently robust against parameter variation, as a consequence of network architecture and parameters. We present a new network training algorithm that attacks network parameters during training, and promotes robust performance during inference in the face of random parameter variation. Our approach introduces a loss regularization term that penalizes the susceptibility of a network to weight perturbation. We compare against previous approaches for producing parameter insensitivity such as dropout, weight smoothing and introducing parameter noise during training. We show that our approach produces models that are more robust to random mismatch-induced parameter variation as well as to targeted parameter variation. Our approach finds minima in flatter locations in the weight-loss landscape compared with other approaches, highlighting that the networks found by our technique are less sensitive to parameter perturbation. Our work provides an approach to deploy neural network architectures to inference devices that suffer from computational non-idealities, with minimal loss of performance. This method will enable deployment at scale to novel energy-efficient computational substrates, promoting cheaper and more prevalent edge inference. | https://openreview.net/pdf/b7b77ce8535702dba33084aa20eb08cae53193f4.pdf |
Gradient Importance Learning for Incomplete Observations | https://openreview.net/forum?id=fXHl76nO2AZ | https://openreview.net/forum?id=fXHl76nO2AZ | Qitong Gao,Dong Wang,Joshua David Amason,Siyang Yuan,Chenyang Tao,Ricardo Henao,Majda Hadziahmetovic,Lawrence Carin,Miroslav Pajic | ICLR 2022,Poster | Though recent works have developed methods that can generate estimates (or imputations) of the missing entries in a dataset to facilitate downstream analysis, most depend on assumptions that may not align with real-world applications and could suffer from poor performance in subsequent tasks such as classification. This is particularly true if the data have large missingness rates or a small sample size. More importantly, the imputation error could be propagated into the prediction step that follows, which may constrain the capabilities of the prediction model. In this work, we introduce the gradient importance learning (GIL) method to train multilayer perceptrons (MLPs) and long short-term memories (LSTMs) to directly perform inference from inputs containing missing values without imputation. Specifically, we employ reinforcement learning (RL) to adjust the gradients used to train these models via back-propagation. This allows the model to exploit the underlying information behind missingness patterns. We test the approach on real-world time-series (i.e., MIMIC-III), tabular data obtained from an eye clinic, and a standard dataset (i.e., MNIST), where our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods. | https://openreview.net/pdf/77f82d36ef5cbde5647d6e9f7fb7dd38ce4e2a91.pdf |
Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset | https://openreview.net/forum?id=v6s3HVjPerv | https://openreview.net/forum?id=v6s3HVjPerv | Leon Sixt,Martin Schuessler,Oana-Iuliana Popescu,Philipp Weiß,Tim Landgraf | ICLR 2022,Poster | A variety of methods exist to explain image classification models. However, whether they provide any benefit to users over simply comparing various inputs and the model’s respective predictions remains unclear. We conducted a user study (N=240) to test how such a baseline explanation technique performs against concept-based and counterfactual explanations. To this end, we contribute a synthetic dataset generator capable of biasing individual attributes and quantifying their relevance to the model. In a study, we assess if participants can identify the relevant set of attributes compared to the ground-truth. Our results show that the baseline outperformed concept-based explanations. Counterfactual explanations from an invertible neural network performed similarly as the baseline. Still, they allowed users to identify some attributes more accurately. Our results highlight the importance of measuring how well users can reason about biases of a model, rather than solely relying on technical evaluations or proxy tasks. We open-source our study and dataset so they can serve as a blueprint for future studies. | https://openreview.net/pdf/49e3023b785924a7159ee756c546ac2ec523e8ea.pdf |
Understanding the Variance Collapse of SVGD in High Dimensions | https://openreview.net/forum?id=Qycd9j5Qp9J | https://openreview.net/forum?id=Qycd9j5Qp9J | Jimmy Ba,Murat A Erdogdu,Marzyeh Ghassemi,Shengyang Sun,Taiji Suzuki,Denny Wu,Tianzong Zhang | ICLR 2022,Poster | Stein variational gradient descent (SVGD) is a deterministic inference algorithm that evolves a set of particles to fit a target distribution. Despite its computational efficiency, SVGD often underestimates the variance of the target distribution in high dimensions. In this work we attempt to explain the variance collapse in SVGD. On the qualitative side, we compare the SVGD update with gradient descent on the maximum mean discrepancy (MMD) objective; we observe that the variance collapse phenomenon relates to the bias from deterministic updates present in the "driving force" of SVGD, and empirically verify that removal of such bias leads to more accurate variance estimation. On the quantitative side, we demonstrate that the variance collapse of SVGD can be accurately predicted in the proportional asymptotic limit, i.e., when the number of particles $n$ and dimensions $d$ diverge at the same rate. In particular, for learning high-dimensional isotropic Gaussians, we derive the exact equilibrium variance for both SVGD and MMD-descent under certain near-orthogonality assumption on the converged particles, and confirm that SVGD suffers from the "curse of dimensionality". | https://openreview.net/pdf/71e77dab5447ab6226d0f2e58132575f2217dc3b.pdf |
Generalisation in Lifelong Reinforcement Learning through Logical Composition | https://openreview.net/forum?id=ZOcX-eybqoL | https://openreview.net/forum?id=ZOcX-eybqoL | Geraud Nangue Tasse,Steven James,Benjamin Rosman | ICLR 2022,Poster | We leverage logical composition in reinforcement learning to create a framework that enables an agent to autonomously determine whether a new task can be immediately solved using its existing abilities, or whether a task-specific skill should be learned. In the latter case, the proposed algorithm also enables the agent to learn the new task faster by generating an estimate of the optimal policy. Importantly, we provide two main theoretical results: we bound the performance of the transferred policy on a new task, and we give bounds on the necessary and sufficient number of tasks that need to be learned throughout an agent's lifetime to generalise over a distribution. We verify our approach in a series of experiments, where we perform transfer learning both after learning a set of base tasks, and after learning an arbitrary set of tasks. We also demonstrate that, as a side effect of our transfer learning approach, an agent can produce an interpretable Boolean expression of its understanding of the current task. Finally, we demonstrate our approach in the full lifelong setting where an agent receives tasks from an unknown distribution. Starting from scratch, an agent is able to quickly generalise over the task distribution after learning only a few tasks, which are sub-logarithmic in the size of the task space. | https://openreview.net/pdf/89cb79a9b9bb6a9a833a7a8ae73c8c5a87792970.pdf |
PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions | https://openreview.net/forum?id=gSdSJoenupI | https://openreview.net/forum?id=gSdSJoenupI | Zhaoqi Leng,Mingxing Tan,Chenxi Liu,Ekin Dogus Cubuk,Jay Shi,Shuyang Cheng,Dragomir Anguelov | ICLR 2022,Poster | Cross-entropy loss and focal loss are the most common choices when training deep neural networks for classification problems. Generally speaking, however, a good loss function can take on much more flexible forms, and should be tailored for different tasks and datasets. Motivated by how functions can be approximated via Taylor expansion, we propose a simple framework, named PolyLoss, to view and design loss functions as a linear combination of polynomial functions. Our PolyLoss allows the importance of different polynomial bases to be easily adjusted depending on the target tasks and datasets, while naturally subsuming the aforementioned cross-entropy loss and focal loss as special cases. Extensive experimental results show that the optimal choice within the PolyLoss is indeed dependent on the task and dataset. Simply by introducing one extra hyperparameter and adding one line of code, our Poly-1 formulation outperforms the cross-entropy loss and focal loss on 2D image classification, instance segmentation, object detection, and 3D object detection tasks, sometimes by a large margin. | https://openreview.net/pdf/d1430448cff98fb37273293f39735ba9c6a4313a.pdf
Improving Non-Autoregressive Translation Models Without Distillation | https://openreview.net/forum?id=I2Hw58KHp8O | https://openreview.net/forum?id=I2Hw58KHp8O | Xiao Shi Huang,Felipe Perez,Maksims Volkovs | ICLR 2022,Poster | Transformer-based autoregressive (AR) machine translation models have achieved significant performance improvements, nearing human-level accuracy on some languages. The AR framework translates one token at a time which can be time consuming, especially for long sequences. To accelerate inference, recent work has been exploring non-autoregressive (NAR) approaches that translate blocks of tokens in parallel. Despite significant progress, leading NAR models still lag behind their AR counterparts, and only become competitive when trained with distillation. In this paper we investigate possible reasons behind this performance gap, namely, the indistinguishability of tokens, and mismatch between training and inference. We then propose the Conditional Masked Language Model with Correction (CMLMC) that addresses these problems. Empirically, we show that CMLMC achieves state-of-the-art NAR performance when trained on raw data without distillation and approaches AR performance on multiple datasets. Full code for this work will be released at the time of publication. | https://openreview.net/pdf/fe5e18c9939f10295c39693c81d77b03816cad63.pdf |
A Theory of Tournament Representations | https://openreview.net/forum?id=zzk231Ms1Ih | https://openreview.net/forum?id=zzk231Ms1Ih | Arun Rajkumar,Vishnu Veerathu,Abdul Bakey Mir | ICLR 2022,Poster | Real-world tournaments are almost always intransitive. Recent works have noted that parametric models which assume $d$ dimensional node representations can effectively model intransitive tournaments. However, nothing is known about the structure of the class of tournaments that arise out of any fixed $d$ dimensional representations. In this work, we develop a novel theory for understanding parametric tournament representations. Our first contribution is to structurally characterize the class of tournaments that arise out of $d$ dimensional representations. We do this by showing that these tournament classes have forbidden configurations that must necessarily be a union of flip classes, a novel way to partition the set of all tournaments. We further characterize rank $2$ tournaments completely by showing that the associated forbidden flip class contains just $2$ tournaments. Specifically, we show that the rank $2$ tournaments are equivalent to locally transitive tournaments. This insight allows us to show that the minimum feedback arc set problem on this tournament class can be solved using the standard Quicksort procedure. We also exhibit specific forbidden configurations for rank $4$ tournaments. For a general rank $d$ tournament class, we show that the flip class associated with a coned-doubly regular tournament of size $\mathcal{O}(\sqrt{d})$ must be a forbidden configuration. To answer a dual question, using a celebrated result of Forster, we show a lower bound of $\Theta(\sqrt{n})$ on the minimum dimension needed to represent all tournaments on $n$ nodes. For any given tournament, we show a novel upper bound on the smallest representation dimension that depends on the least number of unique nodes in any feedback arc set of the flip class associated with the tournament. We show how our results also shed light on the upper bound of sign-rank of matrices. | https://openreview.net/pdf/a7853d8c301f8a37bc858f4c428d73862dabff26.pdf
Convergent and Efficient Deep Q Learning Algorithm | https://openreview.net/forum?id=OJm3HZuj4r7 | https://openreview.net/forum?id=OJm3HZuj4r7 | Zhikang T. Wang,Masahito Ueda | ICLR 2022,Poster | Despite the empirical success of the deep Q network (DQN) reinforcement learning algorithm and its variants, DQN is still not well understood and it does not guarantee convergence. In this work, we show that DQN can indeed diverge and cease to operate in realistic settings. Although there exist gradient-based convergent methods, we show that they actually have inherent problems in learning dynamics which cause them to fail even for simple tasks. To overcome these problems, we propose a convergent DQN algorithm (C-DQN) that is guaranteed to converge and can work with large discount factors (0.9998). It learns robustly in difficult settings and can learn several difficult games in the Atari 2600 benchmark that DQN fails to solve. | https://openreview.net/pdf/d999c3cb704da4722ea5330b5dd48600eb9c4ef4.pdf |
Trigger Hunting with a Topological Prior for Trojan Detection | https://openreview.net/forum?id=TXsjU8BaibT | https://openreview.net/forum?id=TXsjU8BaibT | Xiaoling Hu,Xiao Lin,Michael Cogswell,Yi Yao,Susmit Jha,Chao Chen | ICLR 2022,Poster | Despite their success and popularity, deep neural networks (DNNs) are vulnerable when facing backdoor attacks. This impedes their wider adoption, especially in mission critical applications. This paper tackles the problem of Trojan detection, namely, identifying Trojaned models – models trained with poisoned data. One popular approach is reverse engineering, i.e., recovering the triggers on a clean image by manipulating the model’s prediction. One major challenge of reverse engineering approach is the enormous search space of triggers. To this end, we propose innovative priors such as diversity and topological simplicity to not only increase the chances of finding the appropriate triggers but also improve the quality of the found triggers. Moreover, by encouraging a diverse set of trigger candidates, our method can perform effectively in cases with unknown target labels. We demonstrate that these priors can significantly improve the quality of the recovered triggers, resulting in substantially improved Trojan detection accuracy as validated on both synthetic and publicly available TrojAI benchmarks. | https://openreview.net/pdf/4db1d42d467c296c5ec7fa3f38e37dcb5c140e84.pdf |
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL | https://openreview.net/forum?id=JM2kFbJvvI | https://openreview.net/forum?id=JM2kFbJvvI | Yanchao Sun,Ruijie Zheng,Yongyuan Liang,Furong Huang | ICLR 2022,Poster | Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations on state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, in terms of both whether we can find the optimal attack and how efficiently we can find it. Existing works on adversarial RL either use heuristics-based methods that may not find the strongest adversary, or directly train an RL-based adversary by treating the agent as a part of the environment, which can find the optimal adversary but may become intractable in a large state space. This paper introduces a novel attacking method to find the optimal attacks through collaboration between a designed function named "actor" and an RL-based learner named "director". The actor crafts state perturbations for a given policy perturbation direction, and the director learns to propose the best policy perturbation directions. Our proposed algorithm, PA-AD, is theoretically optimal and significantly more efficient than prior RL-based works in environments with large state spaces. Empirical results show that our proposed PA-AD universally outperforms state-of-the-art attacking methods in various Atari and MuJoCo environments. By applying PA-AD to adversarial training, we achieve state-of-the-art empirical robustness in multiple tasks under strong adversaries. | https://openreview.net/pdf/b11335ea1d1d4ca95531723261e11735e0550bc4.pdf
Chunked Autoregressive GAN for Conditional Waveform Synthesis | https://openreview.net/forum?id=v3aeIsY_vVX | https://openreview.net/forum?id=v3aeIsY_vVX | Max Morrison,Rithesh Kumar,Kundan Kumar,Prem Seetharaman,Aaron Courville,Yoshua Bengio | ICLR 2022,Poster | Conditional waveform synthesis models learn a distribution of audio waveforms given conditioning such as text, mel-spectrograms, or MIDI. These systems employ deep generative models that model the waveform via either sequential (autoregressive) or parallel (non-autoregressive) sampling. Generative adversarial networks (GANs) have become a common choice for non-autoregressive waveform synthesis. However, state-of-the-art GAN-based models produce artifacts when performing mel-spectrogram inversion. In this paper, we demonstrate that these artifacts correspond with an inability for the generator to learn accurate pitch and periodicity. We show that simple pitch and periodicity conditioning is insufficient for reducing this error relative to using autoregression. We discuss the inductive bias that autoregression provides for learning the relationship between instantaneous frequency and phase, and show that this inductive bias holds even when autoregressively sampling large chunks of the waveform during each forward pass. Relative to prior state-of-the-art GAN-based models, our proposed model, Chunked Autoregressive GAN (CARGAN) reduces pitch error by 40-60%, reduces training time by 58%, maintains a fast inference speed suitable for real-time or interactive applications, and maintains or improves subjective quality. | https://openreview.net/pdf/070239829c83980ec499e2eff346d48eafe3ecb5.pdf |
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks | https://openreview.net/forum?id=psh0oeMSBiF | https://openreview.net/forum?id=psh0oeMSBiF | Fan Wu,Linyi Li,Huan Zhang,Bhavya Kailkhura,Krishnaram Kenthapadi,Ding Zhao,Bo Li | ICLR 2022,Poster | As reinforcement learning (RL) has achieved near human-level performance in a variety of tasks, its robustness has attracted great attention. While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defenses, its robustness against training-time (poisoning) attacks remains largely unanswered. In this work, we focus on certifying the robustness of offline RL in the presence of poisoning attacks, where a subset of training trajectories could be arbitrarily manipulated. We propose the first certification framework, COPA, to certify the number of poisoning trajectories that can be tolerated regarding different certification criteria. Given the complex structure of RL, we propose two certification criteria: per-state action stability and cumulative reward bound. To further improve the certification, we propose new partition and aggregation protocols to train robust policies. We further prove that some of the proposed certification methods are theoretically tight and some are NP-Complete problems. We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) The proposed robust aggregation protocols such as temporal aggregation can significantly improve the certifications; (2) Our certifications for both per-state action stability and cumulative reward bound are efficient and tight; (3) The certifications for different training algorithms and environments differ, implying their intrinsic robustness properties. All experimental results are available at https://copa-leaderboard.github.io. | https://openreview.net/pdf/0a24a116cb24a1e99cd715566dae243e36472472.pdf
ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning | https://openreview.net/forum?id=Vzh1BFUCiIX | https://openreview.net/forum?id=Vzh1BFUCiIX | Vamsi Aribandi,Yi Tay,Tal Schuster,Jinfeng Rao,Huaixiu Steven Zheng,Sanket Vaibhav Mehta,Honglei Zhuang,Vinh Q. Tran,Dara Bahri,Jianmo Ni,Jai Gupta,Kai Hui,Sebastian Ruder,Donald Metzler | ICLR 2022,Poster | Despite the recent success of multi-task learning and transfer learning for natural language processing (NLP), few works have systematically studied the effect of scaling up the number of tasks during pre-training. Towards this goal, this paper introduces ExMix (Extreme Mixture): a massive collection of 107 supervised NLP tasks across diverse domains and task-families. Using ExMix, we study the effect of multi-task pre-training at the largest scale to date, and analyze co-training transfer amongst common families of tasks. Through this analysis, we show that manually curating an ideal set of tasks for multi-task pre-training is not straightforward, and that multi-task scaling can vastly improve models on its own. Finally, we propose ExT5: a model pre-trained using a multi-task objective of self-supervised span denoising and supervised ExMix. Via extensive experiments, we show that ExT5 outperforms strong T5 baselines on SuperGLUE, GEM, Rainbow, Closed-Book QA tasks, and several tasks outside of ExMix. ExT5 also significantly improves sample efficiency while pre-training. | https://openreview.net/pdf/b64da5c159b90bf56d174fc67459b74928711232.pdf |
Provable Adaptation across Multiway Domains via Representation Learning | https://openreview.net/forum?id=gRCCdgpVZf | https://openreview.net/forum?id=gRCCdgpVZf | Zhili Feng,Shaobo Han,Simon Shaolei Du | ICLR 2022,Poster | This paper studies zero-shot domain adaptation where each domain is indexed on a multi-dimensional array, and we only have data from a small subset of domains. Our goal is to produce predictors that perform well on \emph{unseen} domains. We propose a model which consists of a domain-invariant latent representation layer and a domain-specific linear prediction layer with a low-rank tensor structure. Theoretically, we present explicit sample complexity bounds to characterize the prediction error on unseen domains in terms of the number of domains with training data and the number of data per domain. To our knowledge, this is the first finite-sample guarantee for zero-shot domain adaptation. In addition, we provide experiments on two-way MNIST and four-way fiber sensing datasets to demonstrate the effectiveness of our proposed model. | https://openreview.net/pdf/097cce8a39240bc2a614483e1cb4e0314237f10a.pdf |
Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators | https://openreview.net/forum?id=EXHG-A3jlM | https://openreview.net/forum?id=EXHG-A3jlM | John Guibas,Morteza Mardani,Zongyi Li,Andrew Tao,Anima Anandkumar,Bryan Catanzaro | ICLR 2022,Poster | Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self attention. However, this scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose the Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. AFNO is based on a principled foundation of operator learning which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution. This principle was previously used to design FNO, which solves global convolution efficiently in the Fourier domain and has shown promise in learning challenging PDEs. To handle challenges in visual representation learning such as discontinuities in images and high-resolution inputs, we propose principled architectural modifications to FNO which result in memory and computational efficiency. This includes imposing a block-diagonal structure on the channel mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model is highly parallel with a quasi-linear complexity and has linear memory in the sequence size. AFNO outperforms self-attention mechanisms for few-shot segmentation in terms of both efficiency and accuracy. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms. | https://openreview.net/pdf/bec7c123720932f2545dfb12e85bab8ac5cca6ff.pdf
Sample Selection with Uncertainty of Losses for Learning with Noisy Labels | https://openreview.net/forum?id=xENf4QUL4LW | https://openreview.net/forum?id=xENf4QUL4LW | Xiaobo Xia,Tongliang Liu,Bo Han,Mingming Gong,Jun Yu,Gang Niu,Masashi Sugiyama | ICLR 2022,Poster | In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled data during training. However, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely but not certain to be incorrect. There are actually two possibilities of a large-loss data point: (a) it is mislabeled, and then its loss decreases slower than other data, since deep neural networks learn patterns first; (b) it belongs to an underrepresented group of data and has not been selected yet. In this paper, we incorporate the uncertainty of losses by adopting interval estimation instead of point estimation of losses, where lower bounds of the confidence intervals of losses derived from distribution-free concentration inequalities, but not losses themselves, are used for sample selection. In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try. As a result, we can better explore underrepresented data that are correctly labeled but seem to be mislabeled at first glance. Experiments demonstrate that the proposed method is superior to baselines and robust to a broad range of label noise types. | https://openreview.net/pdf/0ebab5bba4b36eec025abfd2e21f947e05d6e662.pdf |
Data-Driven Offline Optimization for Architecting Hardware Accelerators | https://openreview.net/forum?id=GsH-K1VIyy | https://openreview.net/forum?id=GsH-K1VIyy | Aviral Kumar,Amir Yazdanbakhsh,Milad Hashemi,Kevin Swersky,Sergey Levine | ICLR 2022,Poster | To attain higher efficiency, the industry has gradually reformed towards application-specific hardware accelerators. While such a paradigm shift is already starting to show promising results, designers need to spend considerable manual effort and perform a large number of time-consuming simulations to find accelerators that can accelerate multiple target applications while obeying design constraints. Moreover, such a simulation-driven approach must be re-run from scratch every time the set of target applications or design constraints change. An alternative paradigm is to use a data-driven, offline approach that utilizes logged simulation data to architect hardware accelerators, without needing any form of simulations. Such an approach not only alleviates the need to run time-consuming simulations, but also enables data reuse and applies even when the set of target applications changes. In this paper, we develop such a data-driven offline optimization method for designing hardware accelerators, dubbed PRIME, that enjoys all of these properties. Our approach learns a conservative, robust estimate of the desired cost function, utilizes infeasible points, and optimizes the design against this estimate without any additional simulator queries during optimization. PRIME architects accelerators---tailored towards both single- and multi-applications---improving performance upon state-of-the-art simulation-driven methods by about 1.54x and 1.20x, while considerably reducing the required total simulation time by 93% and 99%, respectively. In addition, PRIME also architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x. | https://openreview.net/pdf/62fa3ad6648729230b552447a872cf6777743905.pdf
Multi-Agent MDP Homomorphic Networks | https://openreview.net/forum?id=H7HDG--DJF0 | https://openreview.net/forum?id=H7HDG--DJF0 | Elise van der Pol,Herke van Hoof,Frans A Oliehoek,Max Welling | ICLR 2022,Poster | This paper introduces Multi-Agent MDP Homomorphic Networks, a class of networks that allows distributed execution using only local information, yet is able to share experience between global symmetries in the joint state-action space of cooperative multi-agent systems. In cooperative multi-agent systems, complex symmetries arise between different configurations of the agents and their local observations. For example, consider a group of agents navigating: rotating the state globally results in a permutation of the optimal joint policy. Existing work on symmetries in single agent reinforcement learning can only be generalized to the fully centralized setting, because such approaches rely on the global symmetry in the full state-action spaces, and these can result in correspondences across agents. To encode such symmetries while still allowing distributed execution we propose a factorization that decomposes global symmetries into local transformations. Our proposed factorization allows for distributing the computation that enforces global symmetries over local agents and local interactions. We introduce a multi-agent equivariant policy network based on this factorization. We show empirically on symmetric multi-agent problems that globally symmetric distributable policies improve data efficiency compared to non-equivariant baselines. | https://openreview.net/pdf/3a8f28592a8f20859b54c37f57cb659f7b0664fa.pdf |
Geometry-Consistent Neural Shape Representation with Implicit Displacement Fields | https://openreview.net/forum?id=yhCp5RcZD7 | https://openreview.net/forum?id=yhCp5RcZD7 | Wang Yifan,Lukas Rahmann,Olga Sorkine-hornung | ICLR 2022,Poster | We present implicit displacement fields, a novel representation for detailed 3D geometry. Inspired by a classic surface deformation technique, displacement mapping, our method represents a complex surface as a smooth base surface plus a displacement along the base's normal directions, resulting in a frequency-based shape decomposition, where the high-frequency signal is constrained geometrically by the low-frequency signal. Importantly, this disentanglement is unsupervised thanks to a tailored architectural design that has an innate frequency hierarchy by construction. We explore implicit displacement field surface reconstruction and detail transfer and demonstrate superior representational power, training stability, and generalizability. | https://openreview.net/pdf/55c1560b8382311a7f02b90aaba2fa21e4475e9d.pdf
Modeling Label Space Interactions in Multi-label Classification using Box Embeddings | https://openreview.net/forum?id=tyTH9kOxcvh | https://openreview.net/forum?id=tyTH9kOxcvh | Dhruvesh Patel,Pavitra Dangati,Jay-Yoon Lee,Michael Boratko,Andrew McCallum | ICLR 2022,Poster | Multi-label classification is a challenging structured prediction task in which a set of output class labels are predicted for each input. Real-world datasets often have natural or latent taxonomic relationships between labels, making it desirable for models to employ label representations capable of capturing such taxonomies. Most existing multi-label classification methods do not do so, resulting in label predictions that are inconsistent with the taxonomic constraints, thus failing to accurately represent the fundamentals of the problem setting. In this work, we introduce the multi-label box model (MBM), a multi-label classification method that combines the encoding power of neural networks with the inductive bias and probabilistic semantics of box embeddings (Vilnis et al., 2018). Box embeddings can be understood as trainable Venn-diagrams based on hyper-rectangles. Representing labels by boxes rather than vectors, MBM is able to capture taxonomic relations among labels. Furthermore, since box embeddings allow these relations to be learned by stochastic gradient descent from data, and to be read as calibrated conditional probabilities, our model is endowed with a high degree of interpretability. This interpretability also facilitates the injection of partial information about label-label relationships into model training, to further improve its consistency. We provide theoretical grounding for our method and show experimentally the model's ability to learn the true latent taxonomic structure from data. Through extensive empirical evaluations on both small and large-scale multi-label classification datasets, we show that MBM can significantly improve taxonomic consistency while preserving or surpassing the state-of-the-art predictive performance. | https://openreview.net/pdf/f5671d43125692a6533d9c7a1996335b8a1cd482.pdf
It Takes Two to Tango: Mixup for Deep Metric Learning | https://openreview.net/forum?id=ZKy2X3dgPA | https://openreview.net/forum?id=ZKy2X3dgPA | Shashanka Venkataramanan,Bill Psomas,Ewa Kijak,laurent amsaleg,Konstantinos Karantzalos,Yannis Avrithis | ICLR 2022,Poster | Metric learning involves learning a discriminative representation such that embeddings of similar classes are encouraged to be close, while embeddings of dissimilar classes are pushed far apart. State-of-the-art methods focus mostly on sophisticated loss functions or mining strategies. On the one hand, metric learning losses consider two or more examples at a time. On the other hand, modern data augmentation methods for classification consider two or more examples at a time. The combination of the two ideas is under-studied. In this work, we aim to bridge this gap and improve representations using mixup, which is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time. This task is challenging because, unlike classification, the loss functions used in metric learning are not additive over examples, so the idea of interpolating target labels is not straightforward. To the best of our knowledge, we are the first to investigate mixing both examples and target labels for deep metric learning. We develop a generalized formulation that encompasses existing metric learning loss functions and modify it to accommodate for mixup, introducing Metric Mix, or Metrix. We also introduce a new metric---utilization---to demonstrate that by mixing examples during training, we are exploring areas of the embedding space beyond the training classes, thereby improving representations. To validate the effect of improved representations, we show that mixing inputs, intermediate representations or embeddings along with target labels significantly outperforms state-of-the-art metric learning methods on four benchmark deep metric learning datasets. | https://openreview.net/pdf/1b4683c706bc39fb7b56b3982f8c10166b29773d.pdf
Data Efficient Language-Supervised Zero-Shot Recognition with Optimal Transport Distillation | https://openreview.net/forum?id=G89-1yZLFHk | https://openreview.net/forum?id=G89-1yZLFHk | Bichen Wu,Ruizhe Cheng,Peizhao Zhang,Tianren Gao,Joseph E. Gonzalez,Peter Vajda | ICLR 2022,Poster | Traditional computer vision models are trained to predict a fixed set of predefined categories. Recently, natural language has been shown to be a broader and richer source of supervision that provides finer descriptions to visual concepts than supervised "gold" labels. Previous works, such as CLIP, use InfoNCE loss to train a model to predict the pairing between images and text captions. CLIP, however, is data hungry and requires more than 400M image-text pairs for training. The inefficiency can be \textit{partially} attributed to the fact that the image-text pairs are noisy. To address this, we propose OTTER (Optimal TransporT distillation for Efficient zero-shot Recognition), which uses online entropic optimal transport to find a soft image-text match as labels for contrastive learning. Based on pretrained image and text encoders, models trained with OTTER achieve strong performance with only 3M image text pairs. Compared with InfoNCE loss, label smoothing, and knowledge distillation, OTTER consistently outperforms these baselines in zero-shot evaluation on Google Open Images (19,958 classes) and multi-labeled ImageNet 10K (10032 classes) from Tencent ML-Images. Over 42 evaluations on 7 different dataset/architecture settings x 6 metrics, OTTER outperforms (32) or ties (2) all baselines in 34 of them. Our source code is open sourced at https://github.com/facebookresearch/OTTER. | https://openreview.net/pdf/4692c27fcf85afed7f22e02ea4a1c14104fce2a4.pdf |
A Statistical Framework for Efficient Out of Distribution Detection in Deep Neural Networks | https://openreview.net/forum?id=Oy9WeuZD51 | https://openreview.net/forum?id=Oy9WeuZD51 | Matan Haroush,Tzviel Frostig,Ruth Heller,Daniel Soudry | ICLR 2022,Poster | Background.
Commonly, Deep Neural Networks (DNNs) generalize well on samples drawn from a distribution similar to that of the training set. However, DNNs' predictions are brittle and unreliable when the test samples are drawn from a dissimilar distribution.
This is a major concern for deployment in real-world applications, where such behavior may come at a considerable cost, such as industrial production lines, autonomous vehicles, or healthcare applications.
Contributions.
We frame Out Of Distribution (OOD) detection in DNNs as a statistical hypothesis testing problem. Tests generated within our proposed framework combine evidence from the entire network.
Unlike previous OOD detection heuristics, this framework returns a $p$-value for each test sample. It is guaranteed to maintain the Type I Error (T1E - incorrectly predicting OOD for an actual in-distribution sample) for test data. Moreover, this allows combining several detectors while maintaining the T1E.
Building on this framework, we suggest a novel OOD detection procedure based on low-order statistics. Our method achieves comparable or better results than state-of-the-art methods on well-accepted OOD benchmarks, without retraining the network parameters or assuming prior knowledge of the test distribution --- and at a fraction of the computational cost. | https://openreview.net/pdf/8ab4fc0f10bb1b17497961ee8ff9912af8ed2cc3.pdf
FedBABU: Toward Enhanced Representation for Federated Image Classification | https://openreview.net/forum?id=HuaYQfggn5u | https://openreview.net/forum?id=HuaYQfggn5u | Jaehoon Oh,SangMook Kim,Se-Young Yun | ICLR 2022,Poster | Federated learning has evolved to improve a single global model under data heterogeneity (as a curse) or to develop multiple personalized models using data heterogeneity (as a blessing). However, little research has considered both directions simultaneously. In this paper, we first investigate the relationship between them by analyzing Federated Averaging at the client level and determine that better federated global model performance does not consistently improve personalization. To elucidate the cause of this personalization performance degradation problem, we decompose the entire network into the body (extractor), which is related to universality, and the head (classifier), which is related to personalization. We then point out that this problem stems from training the head. Based on this observation, we propose a novel federated learning algorithm, coined FedBABU, which only updates the body of the model during federated training (i.e., the head is randomly initialized and never updated), and the head is fine-tuned for personalization during the evaluation process. Extensive experiments show consistent performance improvements and efficient personalization with FedBABU. The code is available at https://github.com/jhoon-oh/FedBABU. | https://openreview.net/pdf/09e0b377fa4e3200e80d267b3e1df94235e10a45.pdf
Should I Run Offline Reinforcement Learning or Behavioral Cloning? | https://openreview.net/forum?id=AP1MKT37rJ | https://openreview.net/forum?id=AP1MKT37rJ | Aviral Kumar,Joey Hong,Anikait Singh,Sergey Levine | ICLR 2022,Poster | Offline reinforcement learning (RL) algorithms can acquire effective policies by utilizing only previously collected experience, without any online interaction. While it is widely understood that offline RL is able to extract good policies even from highly suboptimal data, in practice offline RL is often used with data that resembles demonstrations. In this case, one can also use behavioral cloning (BC) algorithms, which mimic a subset of the dataset via supervised learning. It seems natural to ask: When should we prefer offline RL over BC? In this paper, our goal is to characterize environments and dataset compositions where offline RL leads to better performance than BC. In particular, we characterize the properties of environments that allow offline RL methods to perform better than BC methods even when only provided with expert data. Additionally, we show that policies trained on suboptimal data that is sufficiently noisy can attain better performance than even BC algorithms with expert data, especially on long-horizon problems. We validate our theoretical results via extensive experiments on both diagnostic and high-dimensional domains including robot manipulation, maze navigation and Atari games, when learning from a variety of data sources. We observe that modern offline RL methods trained on suboptimal, noisy data in sparse reward domains outperform cloning the expert data in several practical problems. | https://openreview.net/pdf/ab91050974b19858a9a241236b4d69019903de0e.pdf |
Learning State Representations via Retracing in Reinforcement Learning | https://openreview.net/forum?id=CLpxpXqqBV | https://openreview.net/forum?id=CLpxpXqqBV | Changmin Yu,Dong Li,Jianye HAO,Jun Wang,Neil Burgess | ICLR 2022,Poster | We propose learning via retracing, a novel self-supervised approach for learning the state representation (and the associated dynamics model) for reinforcement learning tasks. In addition to the predictive (reconstruction) supervision in the forward direction, we propose to include "retraced" transitions for representation/model learning, by enforcing the cycle-consistency constraint between the original and retraced states, hence improving the sample efficiency of learning. Moreover, learning via retracing explicitly propagates information about future transitions backward for inferring previous states, thus facilitating stronger representation learning for the downstream reinforcement learning tasks. We introduce Cycle-Consistency World Model (CCWM), a concrete model-based instantiation of learning via retracing. Additionally, we propose a novel adaptive "truncation" mechanism for counteracting the negative impacts brought by "irreversible" transitions such that learning via retracing can be maximally effective. Through extensive empirical studies on visual-based continuous control benchmarks, we demonstrate that CCWM achieves state-of-the-art performance in terms of sample efficiency and asymptotic performance, whilst exhibiting behaviours that are indicative of stronger representation learning. | https://openreview.net/pdf/04d24e2870546f3dcff312162e1b4006ecd641b7.pdf
Open-World Semi-Supervised Learning | https://openreview.net/forum?id=O-r8LOR-CCA | https://openreview.net/forum?id=O-r8LOR-CCA | Kaidi Cao,Maria Brbic,Jure Leskovec | ICLR 2022,Poster | A fundamental limitation of applying semi-supervised learning in real-world settings is the assumption that unlabeled test data contains only classes previously encountered in the labeled training data. However, this assumption rarely holds for data in-the-wild, where instances belonging to novel classes may appear at testing time. Here, we introduce a novel open-world semi-supervised learning setting that formalizes the notion that novel classes may appear in the unlabeled test data. In this novel setting, the goal is to solve the class distribution mismatch problem between labeled and unlabeled data, where at test time every input instance either needs to be classified into one of the existing classes or a new unseen class needs to be initialized and the instance assigned to it. To tackle this challenging problem, we propose ORCA, an end-to-end approach that assigns instances to previously seen classes or forms novel classes by grouping similar instances without assuming any prior knowledge. The key idea in ORCA is to utilize an uncertainty-adaptive margin to circumvent the bias towards seen classes caused by learning seen classes faster than the novel classes. In this way, ORCA gradually increases the discriminability of the model during training and reduces the gap between the intra-class variance of seen and novel classes. Extensive experiments on image classification datasets and a single-cell dataset demonstrate that ORCA consistently outperforms alternative baselines, achieving 25% improvement on seen and 96% improvement on novel classes of the ImageNet dataset. | https://openreview.net/pdf/e5ffbb438b307d601bd7794c87fae3c23950a63f.pdf
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent | https://openreview.net/forum?id=af1eUDdUVz | https://openreview.net/forum?id=af1eUDdUVz | Oliver Bryniarski,Nabeel Hingun,Pedro Pachuca,Vincent Wang,Nicholas Carlini | ICLR 2022,Poster | Evading adversarial example detection defenses requires finding adversarial examples that must simultaneously (a) be misclassified by the model and (b) be detected as non-adversarial. We find that existing attacks that attempt to satisfy multiple simultaneous constraints often over-optimize against one constraint at the cost of satisfying another. We introduce Selective Projected Gradient Descent and Orthogonal Projected Gradient Descent, improved attack techniques to generate adversarial examples that avoid this problem by orthogonalizing the gradients when running standard gradient-based attacks. We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate. | https://openreview.net/pdf/3d2eb96b012475581aa80cda16373c217e28c087.pdf |
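The core step of Orthogonal Projected Gradient Descent described above can be sketched in a few lines. This is a hedged illustration of the orthogonalization idea, not the authors' code, and the toy gradients are made up: given the gradient g of the misclassification objective and the gradient d of the detector's score, step along the component of g orthogonal to d, so that, to first order, progress on one constraint does not undo the other.

```python
import numpy as np

def orthogonal_component(g, d, eps=1e-12):
    """Remove from g its projection onto d.

    Stepping along the result changes the classifier loss while, to
    first order, leaving the detector's score unchanged.
    """
    d_sq = float(np.dot(d, d))
    if d_sq < eps:          # detector gradient vanished; nothing to remove
        return g
    return g - (np.dot(g, d) / d_sq) * d

g = np.array([3.0, 1.0])    # toy classifier-loss gradient
d = np.array([1.0, 0.0])    # toy detector-score gradient
g_perp = orthogonal_component(g, d)   # -> array([0., 1.])
```

The returned direction is exactly orthogonal to `d`, which is what lets a standard gradient-based attack satisfy both constraints simultaneously instead of over-optimizing one.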
Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | https://openreview.net/forum?id=Azh9QBQ4tR7 | https://openreview.net/forum?id=Azh9QBQ4tR7 | Rahul Rade,Seyed-Mohsen Moosavi-Dezfooli | ICLR 2022,Poster | While adversarial training has become the de facto approach for training robust classifiers, it leads to a drop in accuracy. This has led to prior works postulating that accuracy is inherently at odds with robustness. Yet, the phenomenon remains inexplicable. In this paper, we closely examine the changes induced in the decision boundary of a deep network during adversarial training. We find that adversarial training leads to an unwarranted increase in the margin along certain adversarial directions, thereby hurting accuracy. Motivated by this observation, we present a novel algorithm, called Helper-based Adversarial Training (HAT), to reduce this effect by incorporating additional wrongly labelled examples during training. Our proposed method provides a notable improvement in accuracy without compromising robustness. It achieves a better trade-off between accuracy and robustness in comparison to existing defenses. Code is available at https://github.com/imrahulr/hat. | https://openreview.net/pdf/c2a72787c4e6f0d24586b17eab7ca97027346386.pdf
Expressivity of Emergent Languages is a Trade-off between Contextual Complexity and Unpredictability | https://openreview.net/forum?id=WxuE_JWxjkW | https://openreview.net/forum?id=WxuE_JWxjkW | Shangmin Guo,Yi Ren,Kory Wallace Mathewson,Simon Kirby,Stefano V Albrecht,Kenny Smith | ICLR 2022,Poster | Researchers are using deep learning models to explore the emergence of language in various language games, where agents interact and develop an emergent language to solve tasks. We focus on the factors that determine the expressivity of emergent languages, which reflects the amount of information about input spaces those languages are capable of encoding. We measure the expressivity of emergent languages based on the generalisation performance across different games, and demonstrate that the expressivity of emergent languages is a trade-off between the complexity and unpredictability of the context those languages emerged from. Another contribution of this work is the discovery of message type collapse, i.e. the number of unique messages is lower than that of inputs. We also show that using the contrastive loss proposed by Chen et al. (2020) can alleviate this problem. | https://openreview.net/pdf/be46689741877d2b59dc56c09443500af7dd2941.pdf |
Fast AdvProp | https://openreview.net/forum?id=hcoswsDHNAW | https://openreview.net/forum?id=hcoswsDHNAW | Jieru Mei,Yucheng Han,Yutong Bai,Yixiao Zhang,Yingwei Li,Xianhang Li,Alan Yuille,Cihang Xie | ICLR 2022,Poster | Adversarial Propagation (AdvProp) is an effective way to improve recognition models, leveraging adversarial examples. Nonetheless, AdvProp suffers from extremely slow training, mainly because: a) extra forward and backward passes are required for generating adversarial examples; b) both original samples and their adversarial counterparts are used for training (i.e., 2X data). In this paper, we introduce Fast AdvProp, which aggressively revamps AdvProp's costly training components, rendering the method nearly as cheap as the vanilla training. Specifically, our modifications in Fast AdvProp are guided by the hypothesis that disentangled learning with adversarial examples is the key to performance improvements, while other training recipes (e.g., paired clean and adversarial training samples, multi-step adversarial attackers) could be largely simplified.
Our empirical results show that, compared to the vanilla training baseline, Fast AdvProp is able to further improve model performance on a spectrum of visual benchmarks, without incurring extra training cost. Additionally, our ablations find Fast AdvProp scales better when larger models are used, is compatible with existing data augmentation methods (i.e., Mixup and CutMix), and can be easily adapted to other recognition tasks like object detection. The code is available here: https://github.com/meijieru/fast_advprop. | https://openreview.net/pdf/12e365a996eeb801b2173df149f6f8bc69ec02fa.pdf
Triangle and Four Cycle Counting with Predictions in Graph Streams | https://openreview.net/forum?id=8in_5gN9I0 | https://openreview.net/forum?id=8in_5gN9I0 | Justin Y Chen,Talya Eden,Piotr Indyk,Honghao Lin,Shyam Narayanan,Ronitt Rubinfeld,Sandeep Silwal,Tal Wagner,David Woodruff,Michael Zhang | ICLR 2022,Poster | We propose data-driven one-pass streaming algorithms for estimating the number of triangles and four cycles, two fundamental problems in graph analytics that are widely studied in the graph data stream literature. Recently, Hsu et al. (2019) and Jiang et al. (2020) applied machine learning techniques in other data stream problems, using a trained oracle that can predict certain properties of the stream elements to improve on prior “classical” algorithms that did not use oracles. In this paper, we explore the power of a “heavy edge” oracle in multiple graph edge streaming models. In the adjacency list model, we present a one-pass triangle counting algorithm improving upon the previous space upper bounds without such an oracle. In the arbitrary order model, we present algorithms for both triangle and four cycle estimation with fewer passes and the same space complexity as in previous algorithms, and we show several of these bounds are optimal. We analyze our algorithms under several noise models, showing that the algorithms perform well even when the oracle errs. Our methodology expands upon prior work on “classical” streaming algorithms, as previous multi-pass and random order streaming algorithms can be seen as special cases of our algorithms, where the first pass or random order was used to implement the heavy edge oracle. Lastly, our experiments demonstrate advantages of the proposed method compared to state-of-the-art streaming algorithms. | https://openreview.net/pdf/25b70c42018200ce5f79c1f1dfc16f4c95ff9304.pdf |
Is Fairness Only Metric Deep? Evaluating and Addressing Subgroup Gaps in Deep Metric Learning | https://openreview.net/forum?id=js62_xuLDDv | https://openreview.net/forum?id=js62_xuLDDv | Natalie Dullerud,Karsten Roth,Kimia Hamidieh,Nicolas Papernot,Marzyeh Ghassemi | ICLR 2022,Poster | Deep metric learning (DML) enables learning with less supervision through its emphasis on the similarity structure of representations. There has been much work on improving generalization of DML in settings like zero-shot retrieval, but little is known about its implications for fairness. In this paper, we are the first to evaluate state-of-the-art DML methods trained on imbalanced data, and to show the negative impact these representations have on minority subgroup performance when used for downstream tasks. We first define fairness in DML through an analysis of three properties of the representation space -- inter-class alignment, intra-class alignment, and uniformity -- and propose \textit{\textbf{finDML}}, the \textit{\textbf{f}}airness \textit{\textbf{i}}n \textit{\textbf{n}}on-balanced \textit{\textbf{DML}} benchmark to characterize representation fairness. Utilizing \textit{finDML}, we find bias in DML representations to propagate to common downstream classification tasks. Surprisingly, this bias is propagated even when training data in the downstream task is re-balanced. To address this problem, we present Partial Attribute De-correlation (\textit{\textbf{PAD}}) to disentangle feature representations from sensitive attributes and reduce performance gaps between subgroups in both embedding space and downstream metrics. | https://openreview.net/pdf/f404cf882e197b2c86f3e62a769c3cbf9024a9b5.pdf
NodePiece: Compositional and Parameter-Efficient Representations of Large Knowledge Graphs | https://openreview.net/forum?id=xMJWUKJnFSw | https://openreview.net/forum?id=xMJWUKJnFSw | Mikhail Galkin,Etienne Denis,Jiapeng Wu,William L. Hamilton | ICLR 2022,Poster | Conventional representation learning algorithms for knowledge graphs (KG) map each entity to a unique embedding vector.
Such a shallow lookup results in a linear growth of memory consumption for storing the embedding matrix and incurs high computational costs of working with real-world KGs.
Drawing parallels with subword tokenization commonly used in NLP, we explore the landscape of more parameter-efficient node embedding strategies with possibly sublinear memory requirements.
To this end, we propose NodePiece, an anchor-based approach to learn a fixed-size entity vocabulary.
In NodePiece, a vocabulary of subword/sub-entity units is constructed from anchor nodes in a graph with known relation types. Given such a fixed-size vocabulary, it is possible to bootstrap an encoding and embedding for any entity, including those unseen during training.
Experiments show that NodePiece performs competitively in node classification, link prediction, and relation prediction tasks while retaining less than 10% of explicit nodes in a graph as anchors and often having 10x fewer parameters. Notably, a NodePiece-enabled model outperforms existing shallow models on the large OGB WikiKG 2 graph while having 70x fewer parameters.
| https://openreview.net/pdf/6eb641d163812ce838dbad1b8e7fddebb2c72c12.pdf |
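The anchor-based tokenization at the heart of NodePiece can be illustrated with a toy sketch. This is an assumption-laden simplification, not the released implementation (real NodePiece also hashes relational context and learns embeddings for the resulting tokens): each node is encoded by its k nearest anchor nodes together with the distances to them, giving every node, including unseen ones, a fixed-size code.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source (plain BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def anchor_tokens(adj, anchors, node, k=2):
    """Encode a node by its k nearest anchors as (distance, anchor) pairs."""
    per_anchor = [bfs_distances(adj, a) for a in anchors]
    reachable = [(d[node], a) for a, d in zip(anchors, per_anchor) if node in d]
    return sorted(reachable)[:k]

# path graph 0-1-2-3-4 with nodes 0 and 4 chosen as anchors
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
tokens = anchor_tokens(adj, anchors=[0, 4], node=1)   # -> [(1, 0), (3, 4)]
```

Because the code is built from a fixed anchor vocabulary rather than a per-node embedding row, the memory cost grows with the number of anchors, not with the number of entities.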
Pix2seq: A Language Modeling Framework for Object Detection | https://openreview.net/forum?id=e42KbIw6Wb | https://openreview.net/forum?id=e42KbIw6Wb | Ting Chen,Saurabh Saxena,Lala Li,David J. Fleet,Geoffrey Hinton | ICLR 2022,Poster | We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are expressed as sequences of discrete tokens, and we train a neural network to perceive the image and generate the desired sequence. Our approach is based mainly on the intuition that if a neural network knows about where and what the objects are, we just need to teach it how to read them out. Beyond the use of task-specific data augmentations, our approach makes minimal assumptions about the task, yet it achieves competitive results on the challenging COCO dataset, compared to highly specialized and well optimized detection algorithms. | https://openreview.net/pdf/1f7291d96e3b195bdf0664dfb0f5313b0eab7a04.pdf |
Particle Stochastic Dual Coordinate Ascent: Exponential convergent algorithm for mean field neural network optimization | https://openreview.net/forum?id=PQQp7AJwz3 | https://openreview.net/forum?id=PQQp7AJwz3 | Kazusato Oko,Taiji Suzuki,Atsushi Nitanda,Denny Wu | ICLR 2022,Poster | We introduce Particle-SDCA, a gradient-based optimization algorithm for two-layer neural networks in the mean field regime that achieves an exponential convergence rate in regularized empirical risk minimization. The proposed algorithm can be regarded as an infinite dimensional extension of Stochastic Dual Coordinate Ascent (SDCA) in the probability space: we exploit the convexity of the dual problem, for which the coordinate-wise proximal gradient method can be applied. Our proposed method inherits advantages of the original SDCA, including (i) exponential convergence (with respect to the outer iteration steps), and (ii) better dependency on the sample size and condition number than the full-batch gradient method. One technical challenge in implementing the SDCA update is the intractable integral over the entire parameter space at every step. To overcome this limitation, we propose a tractable \textit{particle method} that approximately solves the dual problem, and an importance re-weighting technique to reduce the computational cost. The convergence rate of our method is verified by numerical experiments. | https://openreview.net/pdf/b6a0af59072ab41c5553c6952e5a786b25d0adde.pdf
The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders | https://openreview.net/forum?id=7_JR7WpwKV1 | https://openreview.net/forum?id=7_JR7WpwKV1 | Divyansh Pareek,Andrej Risteski | ICLR 2022,Poster | Training and using modern neural-network based latent-variable generative models (like Variational Autoencoders) often require simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises: how complex does the inferential model need to be, in order to be able to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map impacting the required size of the encoder. We show that if the generative map is ``strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps, for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity). Importantly, we do not require the generative model to be layerwise invertible, which a lot of the related literature assumes and isn't satisfied by many architectures used in practice (e.g. convolution and pooling based networks). Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when data lies on a low-dimensional manifold. | https://openreview.net/pdf/4116475bedc76111284bad627cb9a8fbaec2059b.pdf |
Tracking the risk of a deployed model and detecting harmful distribution shifts | https://openreview.net/forum?id=Ro_zAjZppv | https://openreview.net/forum?id=Ro_zAjZppv | Aleksandr Podkopaev,Aaditya Ramdas | ICLR 2022,Poster | When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain---but not all---distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shifts, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing if the difference between source (training) and target (test) distributions leads to a significant increase in a risk function of interest, like accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after the prediction is performed, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets. | https://openreview.net/pdf/f763a5271b61d98bca4127ab14ce483150d152c4.pdf |
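A minimal sketch of the sequential monitoring described above (not the authors' confidence-sequence construction, which is tighter; here a simple Hoeffding bound with a union bound over time stands in for it, and the stream, risk levels, and tolerance are made up): a warning fires only when the lower confidence bound on the target error rate exceeds the source risk by more than a tolerated amount, so benign shifts are ignored while the false alarm rate stays controlled over continuous monitoring.

```python
import math

def radius(t, alpha=0.05):
    """Time-uniform Hoeffding radius: spend alpha_t = alpha / (t * (t + 1))
    at step t; since the alpha_t sum to alpha over all t, the bound holds
    simultaneously at every step."""
    alpha_t = alpha / (t * (t + 1))
    return math.sqrt(math.log(2.0 / alpha_t) / (2.0 * t))

def monitor(errors, source_risk, alpha=0.05, tol=0.05):
    """Return the first step where the lower confidence bound on the
    target error rate exceeds source_risk + tol, else None."""
    total = 0
    for t, err in enumerate(errors, start=1):
        total += err
        lower = total / t - radius(t, alpha)
        if lower > source_risk + tol:
            return t
    return None

# a harmfully shifted stream (80% error rate) vs. a 10% source risk
alarm_step = monitor([1, 1, 1, 0, 1] * 40, source_risk=0.10)
```

On a benign stream (errors near the source risk) the same monitor stays silent indefinitely, which mirrors the paper's requirement of detecting harmful shifts while ignoring benign ones.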
Towards Understanding the Robustness Against Evasion Attack on Categorical Data | https://openreview.net/forum?id=BmJV7kyAmg | https://openreview.net/forum?id=BmJV7kyAmg | Hongyan Bao,Yufei Han,Yujun Zhou,Yun Shen,Xiangliang Zhang | ICLR 2022,Poster | Characterizing and assessing the adversarial vulnerability of classification models with categorical input has been a practically important yet rarely explored research problem. Our work addresses this challenge by first unveiling the factors that drive the adversarial vulnerability of classification models with categorical data, based on an information-theoretic adversarial risk analysis of the targeted classifier. Though certifying the robustness of such classification models is intrinsically an NP-hard combinatorial problem, our study shows that the robustness certification can be solved via an efficient greedy exploration of the discrete attack space for any measurable classifier with a mild smoothness constraint. Our proposed robustness certification framework is instantiated with deep neural network models applied to real-world safety-critical data sources. Our empirical observations confirm the impact of the key adversarial risk factors with categorical input. | https://openreview.net/pdf/b599972b615dea56e3cd777bb3c09e18b73ba736.pdf
Learning Curves for SGD on Structured Features | https://openreview.net/forum?id=WPI2vbkAl3Q | https://openreview.net/forum?id=WPI2vbkAl3Q | Blake Bordelon,Cengiz Pehlevan | ICLR 2022,Poster | The generalization performance of a machine learning algorithm such as a neural network depends in a non-trivial way on the structure of the data distribution. To analyze the influence of data structure on test loss dynamics, we study an exactly solvable model of stochastic gradient descent (SGD) on the square loss, which predicts test error when training on features with arbitrary covariance structure. We solve the theory exactly for both Gaussian and arbitrary features, and we show that the simpler Gaussian model accurately predicts the test loss of nonlinear random-feature models and neural networks in the kernel regime trained with SGD on real datasets such as MNIST and CIFAR-10. We show that the optimal batch size at a fixed compute budget is typically small and depends on the feature correlation structure, demonstrating the computational benefits of SGD with small batch sizes. Lastly, we extend our theory to the more usual setting of stochastic gradient descent on a fixed subsampled training set, showing that both training and test error can be accurately predicted in our framework on real data. | https://openreview.net/pdf/05e1bd43845bd2321a0ab8593b8960931a65e24e.pdf
NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training | https://openreview.net/forum?id=Qaw16njk6L | https://openreview.net/forum?id=Qaw16njk6L | Chengyue Gong,Dilin Wang,Meng Li,Xinlei Chen,Zhicheng Yan,Yuandong Tian,qiang liu,Vikas Chandra | ICLR 2022,Poster | Designing accurate and efficient vision transformers (ViTs) is a highly important but challenging task. Supernet-based one-shot neural architecture search (NAS) enables fast architecture optimization and has achieved state-of-the-art (SOTA) results on convolutional neural networks (CNNs). However, directly applying supernet-based NAS to optimize ViTs leads to poor performance - even worse than training single ViTs. In this work, we observe that the poor performance is due to a gradient conflict issue: the gradients of different sub-networks conflict with that of the supernet more severely in ViTs than in CNNs, which leads to early saturation in training and inferior convergence. To alleviate this issue, we propose a series of techniques, including a gradient projection algorithm, a switchable layer scaling design, and a simplified data augmentation and regularization training recipe. The proposed techniques significantly improve the convergence and the performance of all sub-networks. Our discovered hybrid ViT model family, dubbed NASViT, achieves top-1 accuracy from 78.2% to 81.8% on ImageNet at 200M to 800M FLOPs, and outperforms all prior-art CNNs and ViTs, including AlphaNet and LeViT. When transferred to semantic segmentation tasks, NASViTs also outperform previous backbones on both the Cityscapes and ADE20K datasets, achieving 73.2% and 37.9% mIoU with only 5G FLOPs, respectively. Code is available at
https://github.com/facebookresearch/NASViT.
| https://openreview.net/pdf/a6df48abb7e0bb493e7c343c46beb7b365cdc788.pdf |
Graphon based Clustering and Testing of Networks: Algorithms and Theory | https://openreview.net/forum?id=sTNHCrIKDQc | https://openreview.net/forum?id=sTNHCrIKDQc | Mahalakshmi Sabanayagam,Leena Chennuru Vankadara,Debarghya Ghoshdastidar | ICLR 2022,Poster | Network-valued data are encountered in a wide range of applications, and pose challenges in learning due to their complex structure and absence of vertex correspondence. Typical examples of such problems include classification or grouping of protein structures and social networks. Various methods, ranging from graph kernels to graph neural networks, have been proposed that achieve some success in graph classification problems. However, most methods have limited theoretical justification, and their applicability beyond classification remains unexplored. In this work, we propose methods for clustering multiple graphs, without vertex correspondence, that are inspired by the recent literature on estimating graphons---symmetric functions corresponding to infinite vertex limit of graphs. We propose a novel graph distance based on sorting-and-smoothing graphon estimators. Using the proposed graph distance, we present two clustering algorithms and show that they achieve state-of-the-art results. We prove the statistical consistency of both algorithms under Lipschitz assumptions on the graph degrees. We further study the applicability of the proposed distance for graph two-sample testing problems. | https://openreview.net/pdf/bc3a82e090f7f3cfaa9a92ef69181887e0348ede.pdf |
Network Augmentation for Tiny Deep Learning | https://openreview.net/forum?id=TYw3-OlrRm- | https://openreview.net/forum?id=TYw3-OlrRm- | Han Cai,Chuang Gan,Ji Lin,Song Han | ICLR 2022,Poster | We introduce Network Augmentation (NetAug), a new training method for improving the performance of tiny neural networks. Existing regularization techniques (e.g., data augmentation, dropout) have shown much success on large neural networks by adding noise to overcome over-fitting. However, we found these techniques hurt the performance of tiny neural networks. We argue that training tiny models is different from training large models: rather than augmenting the data, we should augment the model, since tiny models tend to suffer from under-fitting rather than over-fitting due to limited capacity. To alleviate this issue, NetAug augments the network (reverse dropout) instead of inserting noise into the dataset or the network. It puts the tiny model into larger models and encourages it to work as a sub-model of larger models to get extra supervision, in addition to functioning as an independent model. At test time, only the tiny model is used for inference, incurring zero inference overhead. We demonstrate the effectiveness of NetAug on image classification and object detection. NetAug consistently improves the performance of tiny models, achieving up to 2.2% accuracy improvement on ImageNet. On object detection, to achieve the same level of performance, NetAug requires 41% fewer MACs on Pascal VOC and 38% fewer MACs on COCO than the baseline. | https://openreview.net/pdf/484496875b902e745fc4d6514abb817e7be477c2.pdf
Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations | https://openreview.net/forum?id=o-1v9hdSult | https://openreview.net/forum?id=o-1v9hdSult | Sarath Sreedharan,Utkarsh Soni,Mudit Verma,Siddharth Srivastava,Subbarao Kambhampati | ICLR 2022,Poster | As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions. A significant hurdle to allowing for such explanatory dialogue could be the {\em vocabulary mismatch} between the user and the AI system. This paper introduces methods for providing contrastive explanations in terms of user-specified concepts for sequential decision-making settings where the system's model of the task may be best represented as an inscrutable model. We do this by building partial symbolic models of a local approximation of the task that can be leveraged to answer the user queries. We test these methods on a popular Atari game (Montezuma's Revenge) and variants of Sokoban (a well-known planning benchmark) and report the results of user studies to evaluate whether people find explanations generated in this form useful. | https://openreview.net/pdf/2558c3735ba361f65aac84ecf8e9f4624e87dec8.pdf |
Distributional Reinforcement Learning with Monotonic Splines | https://openreview.net/forum?id=C8Ltz08PtBp | https://openreview.net/forum?id=C8Ltz08PtBp | Yudong Luo,Guiliang Liu,Haonan Duan,Oliver Schulte,Pascal Poupart | ICLR 2022,Poster | Distributional Reinforcement Learning (RL) differs from traditional RL by estimating the distribution over returns to capture the intrinsic uncertainty of MDPs. One key challenge in distributional RL lies in how to parameterize the quantile function when minimizing the Wasserstein metric of temporal differences. Existing algorithms use step functions or piecewise linear functions. In this paper, we propose to learn smooth continuous quantile functions represented by monotonic rational-quadratic splines, which also naturally solve the quantile crossing problem. Experiments in stochastic environments show that a dense estimation for quantile functions enhances distributional RL in terms of faster empirical convergence and higher rewards in most cases. | https://openreview.net/pdf/376a906de470631ee01098610befe6addc3d72de.pdf |
Toward Faithful Case-based Reasoning through Learning Prototypes in a Nearest Neighbor-friendly Space. | https://openreview.net/forum?id=R79ZGjHhv6p | https://openreview.net/forum?id=R79ZGjHhv6p | Seyed Omid Davoudi,Majid Komeili | ICLR 2022,Poster | Recent advances in machine learning have brought opportunities for the ever-increasing use of AI in the real world. This has created concerns about the black-box nature of many of the most recent machine learning approaches. In this work, we propose an interpretable neural network that leverages metric and prototype learning for classification tasks. It encodes its own explanations and provides an improved case-based reasoning through learning prototypes in an embedding space learned by a probabilistic nearest neighbor rule. Through experiments, we demonstrated the effectiveness of the proposed method in both performance and the accuracy of the explanations provided. | https://openreview.net/pdf/6d0714a184aa752df631ed2df558e8cfee0d4bb9.pdf |
Augmented Sliced Wasserstein Distances | https://openreview.net/forum?id=iMqTLyfwnOO | https://openreview.net/forum?id=iMqTLyfwnOO | Xiongjie Chen,Yongxin Yang,Yunpeng Li | ICLR 2022,Poster | While theoretically appealing, the application of the Wasserstein distance to large-scale machine learning problems has been hampered by its prohibitive computational cost. The sliced Wasserstein distance and its variants improve the computational efficiency through the random projection, yet they suffer from low accuracy if the number of projections is not sufficiently large, because the majority of projections result in trivially small values. In this work, we propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs), constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks. It is derived from a key observation that (random) linear projections of samples residing on these hypersurfaces would translate to much more flexible nonlinear projections in the original sample space, so they can capture complex structures of the data distribution. We show that the hypersurfaces can be optimized by gradient ascent efficiently. We provide the condition under which the ASWD is a valid metric and show that this can be obtained by an injective neural network architecture. Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems. | https://openreview.net/pdf/d09a765a0ca6e8fe66e61db6af5518d089814c41.pdf |
Relational Learning with Variational Bayes | https://openreview.net/forum?id=Az-7gJc6lpr | https://openreview.net/forum?id=Az-7gJc6lpr | Kuang-Hung Liu | ICLR 2022,Poster | In psychology, relational learning refers to the ability to recognize and respond to relationship among objects irrespective of the nature of those objects. Relational learning has long been recognized as a hallmark of human cognition and a key question in artificial intelligence research. In this work, we propose an unsupervised learning method for addressing the relational learning problem where we learn the underlying relationship between a pair of data irrespective of the nature of those data. The central idea of the proposed method is to encapsulate the relational learning problem with a probabilistic graphical model in which we perform inference to learn about data relationship and other relational processing tasks. | https://openreview.net/pdf/9d3dfe42360aa203adb14bacece6acbb08064ac0.pdf |
Provably Robust Adversarial Examples | https://openreview.net/forum?id=UMfhoMtIaP5 | https://openreview.net/forum?id=UMfhoMtIaP5 | Dimitar Iliev Dimitrov,Gagandeep Singh,Timon Gehr,Martin Vechev | ICLR 2022,Poster | We introduce the concept of provably robust adversarial examples for deep neural networks – connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations). We present a novel method called PARADE for generating these regions in a scalable manner which works by iteratively refining the region initially obtained via sampling until a refined region is certified to be adversarial with existing state-of-the-art verifiers. At each step, a novel optimization procedure is applied to maximize the region's volume under the constraint that the convex relaxation of the network behavior with respect to the region implies a chosen bound on the certification objective. Our experimental evaluation shows the effectiveness of PARADE: it successfully finds large provably robust regions including ones containing $\approx 10^{573}$ adversarial examples for pixel intensity and $\approx 10^{599}$ for geometric perturbations. The provability enables our robust examples to be significantly more effective against state-of-the-art defenses based on randomized smoothing than the individual attacks used to construct the regions. | https://openreview.net/pdf/3b8eb27fbc166f48033673d3fadc49a86ef0b79f.pdf |
Joint Shapley values: a measure of joint feature importance | https://openreview.net/forum?id=vcUmUvQCloe | https://openreview.net/forum?id=vcUmUvQCloe | Chris Harris,Richard Pymar,Colin Rowat | ICLR 2022,Poster | The Shapley value is one of the most widely used measures of feature importance partly as it measures a feature's average effect on a model's prediction. We introduce joint Shapley values, which directly extend Shapley's axioms and intuitions: joint Shapley values measure a set of features' average effect on a model's prediction. We prove the uniqueness of joint Shapley values, for any order of explanation. Results for games show that joint Shapley values present different insights from existing interaction indices, which assess the effect of a feature within a set of features. The joint Shapley values seem to provide sensible results in ML attribution problems. With binary features, we present a presence-adjusted global value that is more consistent with local intuitions than the usual approach. | https://openreview.net/pdf/7d8a95bb048b3b204b4a1c9a95e93486a12439a1.pdf |
Low-Budget Active Learning via Wasserstein Distance: An Integer Programming Approach | https://openreview.net/forum?id=v8OlxjGn23S | https://openreview.net/forum?id=v8OlxjGn23S | Rafid Mahmood,Sanja Fidler,Marc T Law | ICLR 2022,Poster | Active learning is the process of training a model with limited labeled data by selecting a core subset of an unlabeled data pool to label. The large scale of data sets used in deep learning forces most sample selection strategies to employ efficient heuristics. This paper introduces an integer optimization problem for selecting a core set that minimizes the discrete Wasserstein distance from the unlabeled pool. We demonstrate that this problem can be tractably solved with a Generalized Benders Decomposition algorithm. Our strategy uses high-quality latent features that can be obtained by unsupervised learning on the unlabeled pool. Numerical results on several data sets show that our optimization approach is competitive with baselines and particularly outperforms them in the low budget regime where less than one percent of the data set is labeled. | https://openreview.net/pdf/9dac127c30d4567d8dde179f21749b9ca5494686.pdf |
Efficient Self-supervised Vision Transformers for Representation Learning | https://openreview.net/forum?id=fVu3o-YUGQK | https://openreview.net/forum?id=fVu3o-YUGQK | Chunyuan Li,Jianwei Yang,Pengchuan Zhang,Mei Gao,Bin Xiao,Xiyang Dai,Lu Yuan,Jianfeng Gao | ICLR 2022,Poster | This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attentions can significantly reduce modeling complexity but with a cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task, non-contrastive region-matching, which allows the model to capture fine-grained region dependencies and as a result significantly improves the quality of the learned vision representations. Our results show that combining the two techniques, EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation, outperforming prior arts with around an order magnitude of higher throughput. When transferring to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and pre-trained models are released at: https://github.com/microsoft/esvit | https://openreview.net/pdf/e7b63dccef8ad598db1c36a2386c8d8a63058e8e.pdf |
Visual Representation Learning Does Not Generalize Strongly Within the Same Domain | https://openreview.net/forum?id=9RUHPlladgh | https://openreview.net/forum?id=9RUHPlladgh | Lukas Schott,Julius Von Kügelgen,Frederik Träuble,Peter Vincent Gehler,Chris Russell,Matthias Bethge,Bernhard Schölkopf,Francesco Locatello,Wieland Brendel | ICLR 2022,Poster | An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world. In this paper, we test whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets (dSprites, Shapes3D, MPI3D) from controlled environments, and on our contributed CelebGlow dataset. In contrast to prior robustness work that introduces novel factors of variation during test time, such as blur or other (un)structured noise, we here recompose, interpolate, or extrapolate only existing factors of variation from the training data set (e.g., small and medium-sized objects during training and large objects during testing). Models that learn the correct mechanism should be able to generalize to this benchmark. In total, we train and test 2000+ models and observe that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias. Moreover, the generalization capabilities of all tested models drop significantly as we move from artificial datasets towards more realistic real-world datasets. Despite their inability to identify the correct mechanism, the models are quite modular as their ability to infer other in-distribution factors remains fairly stable, providing only a single factor is out-of-distribution. These results point to an important yet understudied problem of learning mechanistic models of observations that can facilitate generalization. | https://openreview.net/pdf/775e024ab2e9ce40e6b2f7608d5b1eb2c1136e75.pdf
Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions | https://openreview.net/forum?id=e2Lle5cij9D | https://openreview.net/forum?id=e2Lle5cij9D | Arda Sahiner,Tolga Ergen,Batu Ozturkler,Burak Bartan,John M. Pauly,Morteza Mardani,Mert Pilanci | ICLR 2022,Poster | Generative Adversarial Networks (GANs) are commonly used for modeling complex distributions of data. Both the generators and discriminators of GANs are often modeled by neural networks, posing a non-transparent optimization problem which is non-convex and non-concave over the generator and discriminator, respectively. Such networks are often heuristically optimized with gradient descent-ascent (GDA), but it is unclear whether the optimization problem contains any saddle points, or whether heuristic methods can find them in practice. In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality, and for a variety of generators expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches, or can be represented as convex-concave games. Using this convex duality interpretation, we further demonstrate the impact of different activation functions of the discriminator. Our observations are verified with numerical results demonstrating the power of the convex interpretation, with an application in progressive training of convex architectures corresponding to linear generators and quadratic-activation discriminators for CelebA image generation. The code for our experiments is available at https://github.com/ardasahiner/ProCoGAN. | https://openreview.net/pdf/733796fc142ddb063afc1a0818ecba208aef1465.pdf |
Memory Augmented Optimizers for Deep Learning | https://openreview.net/forum?id=NRX9QZ6yqt | https://openreview.net/forum?id=NRX9QZ6yqt | Paul-Aymeric Martin McRae,Prasanna Parthasarathi,Mido Assran,Sarath Chandar | ICLR 2022,Poster | Popular approaches for minimizing loss in data-driven learning often involve an abstraction or an explicit retention of the history of gradients for efficient parameter updates. The aggregated history of gradients nudges the parameter updates in the right direction even when the gradients at any given step are not informative. Although the history of gradients summarized in meta-parameters or explicitly stored in memory has been shown effective in theory and practice, the question of whether $all$ or only a subset of the gradients in the history are sufficient in deciding the parameter updates remains unanswered. In this paper, we propose a framework of memory-augmented gradient descent optimizers that retain a limited view of their gradient history in their internal memory. Such optimizers scale well to large real-life datasets, and our experiments show that the memory augmented extensions of standard optimizers enjoy accelerated convergence and improved performance on a majority of computer vision and language tasks that we considered. Additionally, we prove that the proposed class of optimizers with fixed-size memory converge under assumptions of strong convexity, regardless of which gradients are selected or how they are linearly combined to form the update step. | https://openreview.net/pdf/874e2c95385be68f564d4d96107e652253f10706.pdf
Orchestrated Value Mapping for Reinforcement Learning | https://openreview.net/forum?id=c87d0TS4yX | https://openreview.net/forum?id=c87d0TS4yX | Mehdi Fatemi,Arash Tavakoli | ICLR 2022,Poster | We present a general convergent class of reinforcement learning algorithms that is founded on two distinct principles: (1) mapping value estimates to a different space using arbitrary functions from a broad class, and (2) linearly decomposing the reward signal into multiple channels. The first principle enables incorporating specific properties into the value estimator that can enhance learning. The second principle, on the other hand, allows for the value function to be represented as a composition of multiple utility functions. This can be leveraged for various purposes, e.g. dealing with highly varying reward scales, incorporating a priori knowledge about the sources of reward, and ensemble learning. Combining the two principles yields a general blueprint for instantiating convergent algorithms by orchestrating diverse mapping functions over multiple reward channels. This blueprint generalizes and subsumes algorithms such as Q-Learning, Log Q-Learning, and Q-Decomposition. In addition, our convergence proof for this general class relaxes certain required assumptions in some of these algorithms. Based on our theory, we discuss several interesting configurations as special cases. Finally, to illustrate the potential of the design space that our theory opens up, we instantiate a particular algorithm and evaluate its performance on the Atari suite. | https://openreview.net/pdf/9ef3cef089b9f45f5bdb93fddb0ed8ccfa9e3268.pdf |
Learning to Generalize across Domains on Single Test Samples | https://openreview.net/forum?id=CIaQKbTBwtU | https://openreview.net/forum?id=CIaQKbTBwtU | Zehao Xiao,Xiantong Zhen,Ling Shao,Cees G. M. Snoek | ICLR 2022,Poster | We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is the unavailability of any target domain data during training, resulting in the learned model not being explicitly adapted to the unseen target domains. We propose learning to generalize across domains on single test samples. We leverage a meta-learning paradigm to learn our model to acquire the ability of adaptation with single samples at training time so as to further adapt itself to each single test sample at test time. We formulate the adaptation to the single test sample as a variational Bayesian inference problem, which incorporates the test sample as a conditional into the generation of model parameters. The adaptation to each test sample requires only one feed-forward computation at test time without any fine-tuning or self-supervised training on additional data from the unseen domains. Extensive ablation studies demonstrate that our model learns the ability to adapt models to each single sample by mimicking domain shifts during training. Further, our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization. | https://openreview.net/pdf/4fcc67594340f12c1beb7e4f1ce64c7be6f70c0a.pdf |
Prototype memory and attention mechanisms for few shot image generation | https://openreview.net/forum?id=lY0-7bj0Vfz | https://openreview.net/forum?id=lY0-7bj0Vfz | Tianqin Li,Zijie Li,Andrew Luo,Harold Rockwell,Amir Barati Farimani,Tai Sing Lee | ICLR 2022,Poster | Recent discoveries indicate that the neural codes in the primary visual cortex (V1) of macaque monkeys are complex, diverse and sparse. This leads us to ponder the computational advantages and functional role of these “grandmother cells." Here, we propose that such cells can serve as prototype memory priors that bias and shape the distributed feature processing within the image generation process in the brain. These memory prototypes are learned by momentum online clustering and are utilized via a memory-based attention operation, which we define as Memory Concept Attention (MoCA). To test our proposal, we show in a few-shot image generation task, that having a prototype memory during attention can improve image synthesis quality, learn interpretable visual concept clusters, as well as improve the robustness of the model. Interestingly, we also find that our attentional memory mechanism can implicitly modify the horizontal connections by updating the transformation into the prototype embedding space for self-attention. Insofar as GANs can be seen as plausible models for reasoning about the top-down synthesis in the analysis-by-synthesis loop of the hierarchical visual cortex, our findings demonstrate a plausible computational role for these “prototype concept" neurons in visual processing in the brain. | https://openreview.net/pdf/c2a4a72f1bd5890c4beeb93de11cac4746eae2c1.pdf |
TPU-GAN: Learning temporal coherence from dynamic point cloud sequences | https://openreview.net/forum?id=FEBFJ98FKx | https://openreview.net/forum?id=FEBFJ98FKx | Zijie Li,Tianqin Li,Amir Barati Farimani | ICLR 2022,Poster | Point cloud sequence is an important data representation that provides flexible shape and motion information. Prior work demonstrates that incorporating scene flow information into loss can make model learn temporally coherent feature spaces. However, it is prohibitively expensive to acquire point correspondence information across frames in real-world environments. In this work, we propose a super-resolution generative adversarial network (GAN) for upsampling dynamic point cloud sequences, which does not require point correspondence annotation. Our model, Temporal Point cloud Upsampling GAN (TPU-GAN), can implicitly learn the underlying temporal coherence from point cloud sequence, which in turn guides the generator to produce temporally coherent output. In addition, we propose a learnable masking module to adapt upsampling ratio according to the point distribution. We conduct extensive experiments on point cloud sequences from two different domains: particles in the fluid dynamical system and human action scanned data. The quantitative and qualitative evaluation demonstrates the effectiveness of our method on upsampling tasks as well as learning temporal coherence from irregular point cloud sequences. | https://openreview.net/pdf/52569840ae5698d2203efde4f8f06d012fa7868a.pdf |
A First-Occupancy Representation for Reinforcement Learning | https://openreview.net/forum?id=JBAZe2yN6Ub | https://openreview.net/forum?id=JBAZe2yN6Ub | Ted Moskovitz,Spencer R Wilson,Maneesh Sahani | ICLR 2022,Poster | Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and which enable them to efficiently traverse their environments to reach rewarding states. The successor representation (SR), which measures the expected cumulative, discounted state occupancy under a fixed policy, enables efficient transfer to different reward structures in an otherwise constant Markovian environment and has been hypothesized to underlie aspects of biological behavior and neural activity. However, in the real world, rewards may only be available for consumption once, may shift location, or agents may simply aim to reach goal states as rapidly as possible without the constraint of artificially imposed task horizons. In such cases, the most behaviorally-relevant representation would carry information about when the agent was likely to first reach states of interest, rather than how often it should expect to visit them over a potentially infinite time span. To reflect such demands, we introduce the first-occupancy representation (FR), which measures the expected temporal discount to the first time a state is accessed. We demonstrate that the FR facilitates exploration, the selection of efficient paths to desired states, allows the agent, under certain conditions, to plan provably optimal trajectories defined by a sequence of subgoals, and induces similar behavior to animals avoiding threatening stimuli. | https://openreview.net/pdf/46abdff2d131f44012d855cdd93c0fa7034d601a.pdf |
Deep ReLU Networks Preserve Expected Length | https://openreview.net/forum?id=ci7LBzDn2Q | https://openreview.net/forum?id=ci7LBzDn2Q | Boris Hanin,Ryan Jeong,David Rolnick | ICLR 2022,Poster | Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize. One natural measure of complexity is how the network distorts length - if the network takes a unit-length curve as input, what is the length of the resulting curve of outputs? It has been widely believed that this length grows exponentially in network depth. We prove that in fact this is not the case: the expected length distortion does not grow with depth, and indeed shrinks slightly, for ReLU networks with standard random initialization. We also generalize this result by proving upper bounds both for higher moments of the length distortion and for the distortion of higher-dimensional volumes. These theoretical results are corroborated by our experiments. | https://openreview.net/pdf/726f7b1d7efcb38a8f1685099dbfc32c938b1267.pdf |