title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
LORD: Lower-Dimensional Embedding of Log-Signature in Neural Rough Differential Equations | https://openreview.net/forum?id=fCG75wd39ze | https://openreview.net/forum?id=fCG75wd39ze | JAEHOON LEE,Jinsung Jeon,Sheo yon Jhin,Jihyeon Hyeong,Jayoung Kim,Minju Jo,Kook Seungji,Noseong Park | ICLR 2022,Poster | Processing very long time-series data (e.g., sequences with a length of more than 10,000) is a long-standing research problem in machine learning. Recently, one breakthrough, called neural rough differential equations (NRDEs), has been proposed and shown to be able to process such data. Their main concept is to use the log-signature transform, which is known to be more efficient than the Fourier transform for irregular long time-series, to convert a very long time-series sample into a relatively shorter series of feature vectors. However, the log-signature transform causes non-trivial spatial overheads. To address this, we present the method of LOweR-Dimensional embedding of log-signature (LORD), where we define an NRDE-based autoencoder to implant the higher-depth log-signature knowledge into the lower-depth log-signature. We show that the encoder successfully combines the higher-depth and the lower-depth log-signature knowledge, which greatly stabilizes the training process and increases the model accuracy. In our experiments with benchmark datasets, the improvement ratio by our method is up to 75\% in terms of various classification and forecasting evaluation metrics. | https://openreview.net/pdf/178b571a9442283c345d20ba2bdda24dab7e0aea.pdf |
Generalized Natural Gradient Flows in Hidden Convex-Concave Games and GANs | https://openreview.net/forum?id=bsycpMi00R1 | https://openreview.net/forum?id=bsycpMi00R1 | Andjela Mladenovic,Iosif Sakos,Gauthier Gidel,Georgios Piliouras | ICLR 2022,Poster | Game-theoretic formulations in machine learning have recently risen in prominence, whereby entire modeling paradigms are best captured as zero-sum games. Despite their popularity, however, their dynamics are still poorly understood. This lack of theory is often substantiated with painful empirical observations of volatile training dynamics and even divergence. Such results highlight the need to develop an appropriate theory with convergence guarantees that are powerful enough to inform practice. This paper studies the generalized Gradient Descent-Ascent (GDA) flow in a large class of non-convex non-concave Zero-Sum games dubbed Hidden Convex-Concave games, a class of games that includes GANs. We focus on two specific geometries: a novel geometry induced by the hidden convex-concave structure that we call the hidden mapping geometry and the Fisher information geometry. For the hidden mapping geometry, we prove global convergence under mild assumptions. In the case of Fisher information geometry, we provide a complete picture of the dynamics in an interesting special setting of team competition via invariant function analysis. | https://openreview.net/pdf/0630aee415ca1b8fb2ede23f4f1a8a70876fb603.pdf |
Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization | https://openreview.net/forum?id=sPIFuucA3F | https://openreview.net/forum?id=sPIFuucA3F | Thanh Nguyen-Tang,Sunil Gupta,A. Tuan Nguyen,Svetha Venkatesh | ICLR 2022,Poster | Offline policy learning (OPL) leverages existing data collected a priori for policy optimization without any active exploration. Despite the prevalence and recent interest in this problem, its theoretical and algorithmic foundations in function approximation settings remain under-developed. In this paper, we consider this problem on the axes of distributional shift, optimization, and generalization in offline contextual bandits with neural networks. In particular, we propose a provably efficient offline contextual bandit with neural network function approximation that does not require any functional assumption on the reward. We show that our method provably generalizes over unseen contexts under a milder condition for distributional shift than the existing OPL works. Notably, unlike any other OPL method, our method learns from the offline data in an online manner using stochastic gradient descent, allowing us to leverage the benefits of online learning into an offline setting. Moreover, we show that our method is more computationally efficient and has a better dependence on the effective dimension of the neural network than an online counterpart. Finally, we demonstrate the empirical effectiveness of our method in a range of synthetic and real-world OPL problems. | https://openreview.net/pdf/adc89029ee15c473bef28493efafef96544ac523.pdf |
THOMAS: Trajectory Heatmap Output with learned Multi-Agent Sampling | https://openreview.net/forum?id=QDdJhACYrlX | https://openreview.net/forum?id=QDdJhACYrlX | Thomas Gilles,Stefano Sabatini,Dzmitry Tsishkou,Bogdan Stanciulescu,Fabien Moutarde | ICLR 2022,Poster | In this paper, we propose THOMAS, a joint multi-agent trajectory prediction framework allowing for an efficient and consistent prediction of multi-agent multi-modal trajectories. We present a unified model architecture for simultaneous agent future heatmap estimation, in which we leverage hierarchical and sparse image generation for fast and memory-efficient inference. We propose a learnable trajectory recombination model that takes as input a set of predicted trajectories for each agent and outputs its consistent reordered recombination. This recombination module is able to realign the initially independent modalities so that they do not collide and are coherent with each other. We report our results on the Interaction multi-agent prediction challenge and rank $1^{st}$ on the online test leaderboard. | https://openreview.net/pdf/a8ce9facf1e0dfc642c02f9849f5b7910589efad.pdf |
CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability | https://openreview.net/forum?id=rHMaBYbkkRJ | https://openreview.net/forum?id=rHMaBYbkkRJ | Martin Mundt,Steven Lang,Quentin Delfosse,Kristian Kersting | ICLR 2022,Poster | What is the state of the art in continual machine learning? Although a natural question for predominant static benchmarks, the notion of training systems in a lifelong manner entails a plethora of additional challenges with respect to set-up and evaluation. The latter have recently sparked a growing number of critiques on prominent algorithm-centric perspectives and evaluation protocols being too narrow, resulting in several attempts at constructing guidelines in favor of specific desiderata or arguing against the validity of prevalent assumptions. In this work, we depart from this mindset and argue that the goal of a precise formulation of desiderata is an ill-posed one, as diverse applications may always warrant distinct scenarios. Instead, we introduce the Continual Learning EValuation Assessment Compass: the CLEVA-Compass. The compass provides the visual means to both identify how approaches are practically reported and how works can simultaneously be contextualized in the broader literature landscape. In addition to promoting compact specification in the spirit of recent replication trends, it thus provides an intuitive chart to understand the priorities of individual systems, where they resemble each other, and what elements are missing towards a fair comparison. | https://openreview.net/pdf/966f7548b61575e6823e6bf65299692e5dc4bc71.pdf |
Neural Stochastic Dual Dynamic Programming | https://openreview.net/forum?id=aisKPsMM3fg | https://openreview.net/forum?id=aisKPsMM3fg | Hanjun Dai,Yuan Xue,Zia Syed,Dale Schuurmans,Bo Dai | ICLR 2022,Poster | Stochastic dual dynamic programming (SDDP) is a state-of-the-art method for solving multi-stage stochastic optimization, widely used for modeling real-world process optimization tasks. Unfortunately, SDDP has a worst-case complexity that scales exponentially in the number of decision variables, which severely limits applicability to only low dimensional problems. To overcome this limitation, we extend SDDP by introducing a trainable neural model that learns to map problem instances to a piece-wise linear value function within an intrinsic low-dimensional space, which is architected specifically to interact with a base SDDP solver, so that it can accelerate optimization performance on new instances. The proposed Neural Stochastic Dual Dynamic Programming ($\nu$-SDDP) continually self-improves by solving successive problems. An empirical investigation demonstrates that $\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality over competitors such as SDDP and reinforcement learning algorithms, across a range of synthetic and real-world process optimization problems. | https://openreview.net/pdf/ae3ff3d1303130f7aae694a6d73bb6bef6e9970e.pdf |
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations | https://openreview.net/forum?id=BrPdX1bDZkQ | https://openreview.net/forum?id=BrPdX1bDZkQ | Geon-Hyeong Kim,Seokin Seo,Jongmin Lee,Wonseok Jeon,HyeongJoo Hwang,Hongseok Yang,Kee-Eung Kim | ICLR 2022,Poster | We consider offline imitation learning (IL), which aims to mimic the expert's behavior from its demonstration without further interaction with the environment. One of the main challenges in offline IL is to deal with the narrow support of the data distribution exhibited by the expert demonstrations that cover only a small fraction of the state and the action spaces. As a result, offline IL algorithms that rely only on expert demonstrations are very unstable since encountered situations easily deviate from those in the expert demonstrations. In this paper, we assume additional demonstration data of unknown degrees of optimality, which we call imperfect demonstrations. Under this setting, we propose DemoDICE, which effectively utilizes imperfect demonstrations by matching the stationary distribution of a policy with the experts' distribution while penalizing its deviation from the overall demonstrations. Compared with the recent IL algorithms that adopt adversarial minimax training objectives, we substantially stabilize the overall learning process by reducing minimax optimization to a direct convex optimization in a principled manner. Across an extensive set of tasks, we show that DemoDICE achieves promising results in offline IL from expert and imperfect demonstrations. | https://openreview.net/pdf/e5325598f049024b1d7f5b5d86157b8d521d2547.pdf |
Learning to Extend Molecular Scaffolds with Structural Motifs | https://openreview.net/forum?id=ZTsoE8G3GG | https://openreview.net/forum?id=ZTsoE8G3GG | Krzysztof Maziarz,Henry Richard Jackson-Flux,Pashmina Cameron,Finton Sirockin,Nadine Schneider,Nikolaus Stiefl,Marwin Segler,Marc Brockschmidt | ICLR 2022,Poster | Recent advancements in deep learning-based modeling of molecules promise to accelerate in silico drug discovery. A plethora of generative models is available, building molecules either atom-by-atom and bond-by-bond or fragment-by-fragment. However, many drug discovery projects require a fixed scaffold to be present in the generated molecule, and incorporating that constraint has only recently been explored. Here, we propose MoLeR, a graph-based model that naturally supports scaffolds as initial seed of the generative procedure, which is possible because it is not conditioned on the generation history. Our experiments show that MoLeR performs comparably to state-of-the-art methods on unconstrained molecular optimization tasks, and outperforms them on scaffold-based tasks, while being an order of magnitude faster to train and sample from than existing approaches. Furthermore, we show the influence of a number of seemingly minor design choices on the overall performance. | https://openreview.net/pdf/329f388ad7404ff01bdbb88c90c981af357646e0.pdf |
Discrepancy-Based Active Learning for Domain Adaptation | https://openreview.net/forum?id=p98WJxUC3Ca | https://openreview.net/forum?id=p98WJxUC3Ca | Antoine de Mathelin,François Deheeger,Mathilde MOUGEOT,Nicolas Vayatis | ICLR 2022,Poster | The goal of the paper is to design active learning strategies which lead to domain adaptation under an assumption of Lipschitz functions. Building on previous work by Mansour et al. (2009) we adapt the concept of discrepancy distance between source and target distributions to restrict the maximization over the hypothesis class to a localized class of functions which perform accurate labeling on the source domain. We derive generalization error bounds for such active learning strategies in terms of Rademacher average and localized discrepancy for general loss functions which satisfy a regularity condition. A practical K-medoids algorithm that can address the case of large data sets is inferred from the theoretical bounds. Our numerical experiments show that the proposed algorithm is competitive against other state-of-the-art active learning techniques in the context of domain adaptation, in particular on large data sets of around one hundred thousand images. | https://openreview.net/pdf/c414fd73c946f1f7e7d5cd305b3daff711d9c75b.pdf |
Gradient Matching for Domain Generalization | https://openreview.net/forum?id=vDwBW49HmO | https://openreview.net/forum?id=vDwBW49HmO | Yuge Shi,Jeffrey Seely,Philip Torr,Siddharth N,Awni Hannun,Nicolas Usunier,Gabriel Synnaeve | ICLR 2022,Poster | Machine learning systems typically assume that the distributions of training and test sets match closely. However, a critical requirement of such systems in the real world is their ability to generalize to unseen domains. Here, we propose an _inter-domain gradient matching_ objective that targets domain generalization by maximizing the inner product between gradients from different domains. Since direct optimization of the gradient inner product can be computationally prohibitive --- it requires computation of second-order derivatives --- we derive a simpler first-order algorithm named Fish that approximates its optimization. We perform experiments on the Wilds benchmark, which captures distribution shift in the real world, as well as the DomainBed benchmark that focuses more on synthetic-to-real transfer. Our method produces competitive results on both benchmarks, demonstrating its effectiveness across a wide range of domain generalization tasks. | https://openreview.net/pdf/8a8aa9b1acdc5b55622687f272cb96ad87fa97b8.pdf |
Objects in Semantic Topology | https://openreview.net/forum?id=d5SCUJ5t1k | https://openreview.net/forum?id=d5SCUJ5t1k | Shuo Yang,Peize Sun,Yi Jiang,Xiaobo Xia,Ruiheng Zhang,Zehuan Yuan,Changhu Wang,Ping Luo,Min Xu | ICLR 2022,Poster | A more realistic object detection paradigm, Open-World Object Detection, has attracted increasing research interest in the community recently. A qualified open-world object detector can not only identify objects of known categories, but also discover unknown objects, and incrementally learn to categorize them when their annotations progressively arrive. Previous works rely on independent modules to recognize unknown categories and perform incremental learning, respectively. In this paper, we provide a unified perspective: Semantic Topology. During the life-long learning of an open-world object detector, all object instances from the same category are assigned to their corresponding pre-defined node in the semantic topology, including the `unknown' category. This constraint builds up discriminative feature representations and consistent relationships among objects, thus enabling the detector to distinguish unknown objects from the known categories, as well as keeping learned features of known objects undistorted when learning new categories incrementally. Extensive experiments demonstrate that semantic topology, either randomly-generated or derived from a well-trained language model, could outperform the current state-of-the-art open-world object detectors by a large margin, e.g., the absolute open-set error (the number of unknown instances that are wrongly labeled as known) is reduced from 7832 to 2546, exhibiting the inherent superiority of semantic topology on open-world object detection. | https://openreview.net/pdf/ff70332fa4b027995f092ed696137154488aa5fc.pdf |
Hidden Parameter Recurrent State Space Models For Changing Dynamics Scenarios | https://openreview.net/forum?id=ds8yZOUsea | https://openreview.net/forum?id=ds8yZOUsea | Vaisakh Shaj,Dieter Büchler,Rohit Sonker,Philipp Becker,Gerhard Neumann | ICLR 2022,Poster | Recurrent State-space models (RSSMs) are highly expressive models for learning patterns in time series data and for system identification. However, these models are often based on the assumption that the dynamics are fixed and unchanging, which is rarely the case in real-world scenarios. Many control applications exhibit tasks with similar, but not identical, dynamics that can be modelled as having a common latent structure. We introduce the Hidden Parameter Recurrent State Space Models (HiP-RSSMs), a framework that parametrizes a family of related state-space models with a low-dimensional set of latent factors. We present a simple and effective way of performing learning and inference over this Gaussian graphical model that avoids approximations like variational inference. We show that HiP-RSSMs outperform RSSMs and competing multi-task models on several challenging robotic benchmarks both on real systems and simulations. | https://openreview.net/pdf/677a627df12a4c559d9876846d7c116f34b2f4cd.pdf |
Graph Neural Network Guided Local Search for the Traveling Salesperson Problem | https://openreview.net/forum?id=ar92oEosBIg | https://openreview.net/forum?id=ar92oEosBIg | Benjamin Hudson,Qingbiao Li,Matthew Malencia,Amanda Prorok | ICLR 2022,Poster | Solutions to the Traveling Salesperson Problem (TSP) have practical applications to processes in transportation, logistics, and automation, yet must be computed with minimal delay to satisfy the real-time nature of the underlying tasks. However, solving large TSP instances quickly without sacrificing solution quality remains challenging for current approximate algorithms. To close this gap, we present a hybrid data-driven approach for solving the TSP based on Graph Neural Networks (GNNs) and Guided Local Search (GLS). Our model predicts the regret of including each edge of the problem graph in the solution; GLS uses these predictions in conjunction with the original problem graph to find solutions. Our experiments demonstrate that this approach converges to optimal solutions at a faster rate than three recent learning based approaches for the TSP. Notably, we reduce the mean optimality gap on the 100-node problem set from 1.534% to 0.705%, a 2x improvement. When generalizing from 20-node instances to the 100-node problem set, we reduce the optimality gap from 18.845% to 2.622%, a 7x improvement. | https://openreview.net/pdf/353e2dea7badc5c7bf552499ab129def8f532705.pdf |
On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks | https://openreview.net/forum?id=aPOpXlnV1T | https://openreview.net/forum?id=aPOpXlnV1T | Maximilian Seitzer,Arash Tavakoli,Dimitrije Antic,Georg Martius | ICLR 2022,Poster | Capturing aleatoric uncertainty is a critical part of many machine learning systems. In deep learning, a common approach to this end is to train a neural network to estimate the parameters of a heteroscedastic Gaussian distribution by maximizing the logarithm of the likelihood function under the observed data. In this work, we examine this approach and identify potential hazards associated with the use of log-likelihood in conjunction with gradient-based optimizers. First, we present a synthetic example illustrating how this approach can lead to very poor but stable parameter estimates. Second, we identify the culprit to be the log-likelihood loss, along with certain conditions that exacerbate the issue. Third, we present an alternative formulation, termed $\beta$-NLL, in which each data point's contribution to the loss is weighted by the $\beta$-exponentiated variance estimate. We show that using an appropriate $\beta$ largely mitigates the issue in our illustrative example. Fourth, we evaluate this approach on a range of domains and tasks and show that it achieves considerable improvements and performs more robustly concerning hyperparameters, both in predictive RMSE and log-likelihood criteria. | https://openreview.net/pdf/542fc7335389cbc9933fb0ec11722efc30b958e8.pdf |
Label-Efficient Semantic Segmentation with Diffusion Models | https://openreview.net/forum?id=SlxSY2UZQT | https://openreview.net/forum?id=SlxSY2UZQT | Dmitry Baranchuk,Andrey Voynov,Ivan Rubachev,Valentin Khrulkov,Artem Babenko | ICLR 2022,Poster | Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance. The superior performance of diffusion models has made them an appealing tool in several applications, including inpainting, super-resolution, and semantic editing. In this paper, we demonstrate that diffusion models can also serve as an instrument for semantic segmentation, especially in the setup when labeled data is scarce. In particular, for several pretrained diffusion models, we investigate the intermediate activations from the networks that perform the Markov step of the reverse diffusion process. We show that these activations effectively capture the semantic information from an input image and appear to be excellent pixel-level representations for the segmentation problem. Based on these observations, we describe a simple segmentation method, which can work even if only a few training images are provided. Our approach significantly outperforms the existing alternatives on several datasets for the same amount of human supervision. | https://openreview.net/pdf/7f702e218df7a81da790cff07136c4f77297f473.pdf |
Language model compression with weighted low-rank factorization | https://openreview.net/forum?id=uPv9Y3gmAI5 | https://openreview.net/forum?id=uPv9Y3gmAI5 | Yen-Chang Hsu,Ting Hua,Sungen Chang,Qian Lou,Yilin Shen,Hongxia Jin | ICLR 2022,Poster | Factorizing a large matrix into small matrices is a popular strategy for model compression. Singular value decomposition (SVD) plays a vital role in this compression strategy, approximating a learned matrix with fewer parameters. However, SVD minimizes the squared error toward reconstructing the original matrix without gauging the importance of the parameters, potentially giving a larger reconstruction error for parameters that affect the task accuracy more. In other words, the optimization objective of SVD is not aligned with the trained model's task accuracy. We analyze this previously unexplored problem, make observations, and address it by introducing Fisher information to weigh the importance of parameters affecting the model prediction. This idea leads to our method: Fisher-Weighted SVD (FWSVD). Although the factorized matrices from our approach do not result in smaller reconstruction errors, we find that our resulting task accuracy is much closer to the original model's performance. We perform analysis with transformer-based language models, showing our weighted SVD largely alleviates the mismatched optimization objectives and can maintain model performance with a higher compression rate. Our method can directly compress a task-specific model while achieving better performance than other compact model strategies requiring expensive model pre-training. Moreover, the evaluation of compressing an already compact model shows our method can further reduce parameters by 9% to 30% with an insignificant impact on task accuracy. | https://openreview.net/pdf/a5edead703a518eda031d7e25734d372b8287883.pdf |
Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization | https://openreview.net/forum?id=QuObT9BTWo | https://openreview.net/forum?id=QuObT9BTWo | Xi Lin,Zhiyuan Yang,Qingfu Zhang | ICLR 2022,Poster | Multiobjective combinatorial optimization (MOCO) problems can be found in many real-world applications. However, exactly solving these problems would be very challenging, particularly when they are NP-hard. Many handcrafted heuristic methods have been proposed to tackle different MOCO problems over the past decades. In this work, we generalize the idea of neural combinatorial optimization, and develop a learning-based approach to approximate the whole Pareto set for a given MOCO problem without further search procedure. We propose a single preference-conditioned model to directly generate approximate Pareto solutions for any trade-off preference, and design an efficient multiobjective reinforcement learning algorithm to train this model. Our proposed method can be treated as a learning-based extension for the widely-used decomposition-based multiobjective evolutionary algorithm (MOEA/D). It uses a single model to accommodate all the possible preferences, whereas other methods use a finite number of solutions to approximate the Pareto set. Experimental results show that our proposed method significantly outperforms some other methods on the multiobjective traveling salesman problem, multiobjective vehicle routing problem, and multiobjective knapsack problem in terms of solution quality, speed, and model efficiency. | https://openreview.net/pdf/975b8b82804eaa05309f02856d161ef85810c9ca.pdf |
Prototypical Contrastive Predictive Coding | https://openreview.net/forum?id=8la28hZOwug | https://openreview.net/forum?id=8la28hZOwug | Kyungmin Lee | ICLR 2022,Poster | Transferring representational knowledge of a model to another is a wide-ranging topic in machine learning. Those applications include the distillation of a large supervised or self-supervised teacher model to a smaller student model or self-supervised learning via self-distillation. Knowledge distillation is an original method to solve these problems, which minimizes a cross-entropy loss between the prototypical probabilistic outputs of teacher and student networks. On the other hand, contrastive learning has shown its competency in transferring representations as they allow students to capture the information of teacher representations. In this paper, we amalgamate the advantages of knowledge distillation and contrastive learning by modeling the critic of a contrastive objective by the prototypical probabilistic discrepancy between two features. We refer to it as prototypical contrastive predictive coding and present efficient implementation using the proposed objective for three distillation tasks: supervised model compression, self-supervised model compression, and self-supervised learning via self-distillation. Through extensive experiments, we validate the effectiveness of our method and show that our method achieves state-of-the-art performance in supervised / self-supervised model compression. | https://openreview.net/pdf/9d170cd1c9aae6c853bd762d8238dfed410f721c.pdf |
Adversarial Robustness Through the Lens of Causality | https://openreview.net/forum?id=cZAi1yWpiXQ | https://openreview.net/forum?id=cZAi1yWpiXQ | Yonggang Zhang,Mingming Gong,Tongliang Liu,Gang Niu,Xinmei Tian,Bo Han,Bernhard Schölkopf,Kun Zhang | ICLR 2022,Poster | The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning. As causal reasoning has an instinct for modeling distribution change, it is essential to incorporate causality into analyzing this specific type of distribution change induced by adversarial attacks. However, causal formulations of the intuition of adversarial attacks and the development of robust DNNs are still lacking in the literature. To bridge this gap, we construct a causal graph to model the generation process of adversarial examples and define the adversarial distribution to formalize the intuition of adversarial attacks. From the causal perspective, we study the distinction between the natural and adversarial distribution and conclude that the origin of adversarial vulnerability is the focus of models on spurious correlations. Inspired by the causal understanding, we propose the \emph{Causal}-inspired \emph{Adv}ersarial distribution alignment method, CausalAdv, to eliminate the difference between natural and adversarial distributions by considering spurious correlations. Extensive experiments demonstrate the efficacy of the proposed method. Our work is the first attempt towards using causality to understand and mitigate the adversarial vulnerability. | https://openreview.net/pdf/409af234b081e8d93ddd0a2b3e2d79d3f3a24b19.pdf |
Distributionally Robust Fair Principal Components via Geodesic Descents | https://openreview.net/forum?id=9NVd-DMtThY | https://openreview.net/forum?id=9NVd-DMtThY | Hieu Vu,Toan Tran,Man-Chung Yue,Viet Anh Nguyen | ICLR 2022,Poster | Principal component analysis is a simple yet useful dimensionality reduction technique in modern machine learning pipelines. In consequential domains such as college admission, healthcare and credit approval, it is imperative to take into account emerging criteria such as the fairness and the robustness of the learned projection. In this paper, we propose a distributionally robust optimization problem for principal component analysis which internalizes a fairness criterion in the objective function. The learned projection thus balances the trade-off between the total reconstruction error and the reconstruction error gap between subgroups, taken in the min-max sense over all distributions in a moment-based ambiguity set. The resulting optimization problem over the Stiefel manifold can be efficiently solved by a Riemannian subgradient descent algorithm with a sub-linear convergence rate. Our experimental results on real-world datasets show the merits of our proposed method over state-of-the-art baselines. | https://openreview.net/pdf/a8fe9d7b929beb8c14e69b5eaf7902ec099f3aa4.pdf |
Understanding and Improving Graph Injection Attack by Promoting Unnoticeability | https://openreview.net/forum?id=wkMG8cdvh7- | https://openreview.net/forum?id=wkMG8cdvh7- | Yongqiang Chen,Han Yang,Yonggang Zhang,MA KAILI,Tongliang Liu,Bo Han,James Cheng | ICLR 2022,Poster | Recently Graph Injection Attack (GIA) emerges as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary can merely inject few malicious nodes instead of modifying existing nodes or edges, i.e., Graph Modification Attack (GMA). Although GIA has achieved promising results, little is known about why it is successful and whether there is any pitfall behind the success. To understand the power of GIA, we compare it with GMA and find that GIA can be provably more harmful than GMA due to its relatively high flexibility. However, the high flexibility will also lead to great damage to the homophily distribution of the original graph, i.e., similarity among neighbors. Consequently, the threats of GIA can be easily alleviated or even prevented by homophily-based defenses designed to recover the original homophily. To mitigate the issue, we introduce a novel constraint – homophily unnoticeability that enforces GIA to preserve the homophily, and propose Harmonious Adversarial Objective (HAO) to instantiate it. Extensive experiments verify that GIA with HAO can break homophily-based defenses and outperform previous GIA attacks by a significant margin. We believe our methods can serve for a more reliable evaluation of the robustness of GNNs. | https://openreview.net/pdf/fe3533162beb8ac7e98d14852e9e6ec3ba4f5fd7.pdf |
Learning to Guide and to be Guided in the Architect-Builder Problem | https://openreview.net/forum?id=swiyAeGzFhQ | https://openreview.net/forum?id=swiyAeGzFhQ | Paul Barde,Tristan Karch,Derek Nowrouzezahrai,Clément Moulin-Frier,Christopher Pal,Pierre-Yves Oudeyer | ICLR 2022,Poster | We are interested in interactive agents that learn to coordinate, namely, a $builder$ -- which performs actions but ignores the goal of the task, i.e. has no access to rewards -- and an $architect$ which guides the builder towards the goal of the task. We define and explore a formal setting where artificial agents are equipped with mechanisms that allow them to learn a task while simultaneously evolving a shared communication protocol. Ideally, such learning should only rely on high-level communication priors and be able to handle a large variety of tasks and meanings while deriving communication protocols that can be reused across tasks. The field of Experimental Semiotics has shown the extent of human proficiency at learning the meanings of a priori unknown instructions. Therefore, we take inspiration from it and present the Architect-Builder Problem (ABP): an asymmetrical setting in which an architect must learn to guide a builder towards constructing a specific structure. The architect knows the target structure but cannot act in the environment and can only send arbitrary messages to the builder. The builder, on the other hand, can act in the environment, but receives no rewards nor has any knowledge about the task, and must learn to solve it relying only on the messages sent by the architect. Crucially, the meaning of messages is initially not defined nor shared between the agents but must be negotiated throughout learning. Under these constraints, we propose Architect-Builder Iterated Guiding (ABIG), a solution to the Architect-Builder Problem where the architect leverages a learned model of the builder to guide it while the builder uses self-imitation learning to reinforce its guided behavior. To mitigate the non-stationarity induced by the two agents concurrently learning, ABIG structures the sequence of interactions between the agents into interaction frames. We analyze the key learning mechanisms of ABIG and test it in a 2-dimensional instantiation of the ABP where tasks involve grasping cubes, placing them at a given location, or building various shapes. In this environment, ABIG results in a low-level, high-frequency guiding communication protocol that not only enables an architect-builder pair to solve the task at hand, but that can also generalize to unseen tasks. | https://openreview.net/pdf/87250d43c6be74fc8c9ea00693ffaa2364df1b2f.pdf |
Phase Collapse in Neural Networks | https://openreview.net/forum?id=iPHLcmtietq | https://openreview.net/forum?id=iPHLcmtietq | Florentin Guth,John Zarka,Stéphane Mallat | ICLR 2022,Poster | Deep convolutional classifiers linearly separate image classes and improve accuracy as depth increases. They progressively reduce the spatial dimension whereas the number of channels grows with depth. Spatial variability is therefore transformed into variability along channels. A fundamental challenge is to understand the role of non-linearities together with convolutional filters in this transformation. ReLUs with biases are often interpreted as thresholding operators that improve discrimination through sparsity. This paper demonstrates that it is a different mechanism called \emph{phase collapse} which eliminates spatial variability while linearly separating classes. We show that collapsing the phases of complex wavelet coefficients is sufficient to reach the classification accuracy of ResNets of similar depths. However, replacing the phase collapses with thresholding operators that enforce sparsity considerably degrades the performance. We explain these numerical results by showing that the iteration of phase collapses progressively improves separation of classes, as opposed to thresholding non-linearities. | https://openreview.net/pdf/e7edb0ba8bb5814255b8fcf9d0c3100a71b6718d.pdf |
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training | https://openreview.net/forum?id=TBpg4PnXhYH | https://openreview.net/forum?id=TBpg4PnXhYH | Wenyong Huang,Zhenhe Zhang,Yu Ting Yeung,Xin Jiang,Qun Liu | ICLR 2022,Poster | We introduce a new approach for speech pre-training named SPIRAL which works by learning a denoising representation of perturbed data in a teacher-student framework. Specifically, given a speech utterance, we first feed the utterance to a teacher network to obtain the corresponding representation. Then the same utterance is perturbed and fed to a student network. The student network is trained to output a representation resembling that of the teacher. At the same time, the teacher network is updated as a moving average of the student's weights over training steps. In order to prevent representation collapse, we apply an in-utterance contrastive loss as the pre-training objective and impose position randomization on the input to the teacher. SPIRAL achieves competitive or better results compared to the state-of-the-art speech pre-training method wav2vec 2.0, with a significant reduction in training cost (80% for the BASE model, 65% for the LARGE model). Furthermore, we address the problem of noise-robustness that is critical to real-world speech applications. We propose multi-condition pre-training by perturbing the student's input with various types of additive noise. We demonstrate that multi-condition pre-trained SPIRAL models are more robust to noisy speech (9.0% - 13.3% relative word error rate reduction on real noisy test data), compared to applying multi-condition training solely in the fine-tuning stage. Source code is available at https://github.com/huawei-noah/Speech-Backbones/tree/main/SPIRAL. | https://openreview.net/pdf/237e7f0b9a5b83acde4d28436da1c2d60b89393c.pdf |
Improving the Accuracy of Learning Example Weights for Imbalance Classification | https://openreview.net/forum?id=J_PHjw4gvXJ | https://openreview.net/forum?id=J_PHjw4gvXJ | Yuqi Liu,Bin Cao,Jing Fan | ICLR 2022,Poster | To solve imbalanced classification, methods of weighting examples have been proposed. Recent work has studied assigning adaptive weights to training examples through learning mechanisms; that is, the weights, like classification models, are regarded as parameters that need to be learned. However, the algorithms in recent work use local information to approximately optimize the weights, which may lead to inaccurate learning of the weights. In this work, we first propose a novel mechanism of learning with a constraint, which can accurately train the weights and model. Then, we propose a method that combines our learning mechanism with the work by Hu et al., so that the two promote each other to perform better. Our proposed method can be applied to any type of deep network model. Experiments show that, compared with state-of-the-art algorithms, our method achieves significant improvements in a variety of settings, including text and image classification over different imbalance ratios, and both binary and multi-class classification. | https://openreview.net/pdf/43e63bb40f4809118c1924577a2cac09588d2c23.pdf |
Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks | https://openreview.net/forum?id=Czsdv-S4-w9 | https://openreview.net/forum?id=Czsdv-S4-w9 | Sihyun Yu,Jihoon Tack,Sangwoo Mo,Hyunsu Kim,Junho Kim,Jung-Woo Ha,Jinwoo Shin | ICLR 2022,Poster | In the deep learning era, long video generation of high-quality still remains challenging due to the spatio-temporal complexity and continuity of videos. Existing prior works have attempted to model video distribution by representing videos as 3D grids of RGB values, which impedes the scale of generated videos and neglects continuous dynamics. In this paper, we found that the recent emerging paradigm of implicit neural representations (INRs) that encodes a continuous signal into a parameterized neural network effectively mitigates the issue. By utilizing INRs of video, we propose dynamics-aware implicit generative adversarial network (DIGAN), a novel generative adversarial network for video generation. Specifically, we introduce (a) an INR-based video generator that improves the motion dynamics by manipulating the space and time coordinates differently and (b) a motion discriminator that efficiently identifies the unnatural motions without observing the entire long frame sequences. We demonstrate the superiority of DIGAN under various datasets, along with multiple intriguing properties, e.g., long video synthesis, video extrapolation, and non-autoregressive video generation. For example, DIGAN improves the previous state-of-the-art FVD score on UCF-101 by 30.7% and can be trained on 128 frame videos of 128x128 resolution, 80 frames longer than the 48 frames of the previous state-of-the-art method. | https://openreview.net/pdf/e4d5da34d0754b0239cec0a03c6473b915bfd9a8.pdf |
Efficient Learning of Safe Driving Policy via Human-AI Copilot Optimization | https://openreview.net/forum?id=0cgU-BZp2ky | https://openreview.net/forum?id=0cgU-BZp2ky | Quanyi Li,Zhenghao Peng,Bolei Zhou | ICLR 2022,Poster | Human intervention is an effective way to inject human knowledge into the training loop of reinforcement learning, which can bring fast learning and ensured training safety. Given the very limited budget of human intervention, it remains challenging to design when and how the human expert interacts with the learning agent in the training. In this work, we develop a novel human-in-the-loop learning method called Human-AI Copilot Optimization (HACO). To allow sufficient exploration by the agent in risky environments while ensuring training safety, the human expert can take over the control and demonstrate how to avoid potentially dangerous situations or trivial behaviors. The proposed HACO then effectively utilizes the data from both the trial-and-error exploration and the human's partial demonstration to train a high-performing agent. HACO extracts proxy state-action values from the partial human demonstration and optimizes the agent to improve the proxy values while reducing human interventions. The experiments show that HACO achieves a substantially high sample efficiency in the safe driving benchmark. HACO can train agents to drive in unseen traffic scenarios with a small human intervention budget and achieve high safety and generalizability, outperforming both reinforcement learning and imitation learning baselines by a large margin. Code and demo video are included in the supplementary materials. | https://openreview.net/pdf/c0b165aabfc0cf4dea07b0341e17033a3bc5722b.pdf |
Enhancing Cross-lingual Transfer by Manifold Mixup | https://openreview.net/forum?id=OjPmfr9GkVv | https://openreview.net/forum?id=OjPmfr9GkVv | Huiyun Yang,Huadong Chen,Hao Zhou,Lei Li | ICLR 2022,Poster | Based on large-scale pre-trained multilingual representations, recent cross-lingual transfer methods have achieved impressive transfer performances. However, the performance of target languages still lags far behind the source language. In this paper, our analyses indicate such a performance gap is strongly associated with the cross-lingual representation discrepancy. To achieve better cross-lingual transfer performance, we propose the cross-lingual manifold mixup (X-Mixup) method, which adaptively calibrates the representation discrepancy and gives a compromised representation for target languages. Experiments on the XTREME benchmark show X-Mixup achieves 1.8% performance gains on multiple text understanding tasks, compared with strong baselines, and significantly reduces the cross-lingual representation discrepancy. | https://openreview.net/pdf/dbe81cd4937fe7d696c1a2beb6c1a81c871a7a56.pdf |
Evolutionary Diversity Optimization with Clustering-based Selection for Reinforcement Learning | https://openreview.net/forum?id=74x5BXs4bWD | https://openreview.net/forum?id=74x5BXs4bWD | Yutong Wang,Ke Xue,Chao Qian | ICLR 2022,Poster | Reinforcement Learning (RL), which aims to obtain a single policy maximizing the expected cumulative rewards for a given task, has achieved significant successes. However, in many real-world scenarios, e.g., navigating in complex environments and controlling robots, one may need to find a set of policies having both high rewards and diverse behaviors, which can bring better exploration and robust few-shot adaptation. Recently, some methods have been developed by using evolutionary techniques, including iterative reproduction and selection of policies. However, due to the inefficient selection mechanisms, these methods cannot fully guarantee both high quality and diversity. In this paper, we propose EDO-CS, a new Evolutionary Diversity Optimization algorithm with Clustering-based Selection. In each iteration, the policies are divided into several clusters based on their behaviors, and a high-quality policy is selected from each cluster for reproduction. EDO-CS also adaptively balances the importance between quality and diversity in the reproduction process. Experiments on various (i.e., deceptive and multi-modal) continuous control tasks show the superior performance of EDO-CS over previous methods, i.e., EDO-CS can achieve a set of policies with both high quality and diversity efficiently while previous methods cannot. | https://openreview.net/pdf/b887e71bcdd86242a6fbbc501be12552141c01ed.pdf |
CURVATURE-GUIDED DYNAMIC SCALE NETWORKS FOR MULTI-VIEW STEREO | https://openreview.net/forum?id=_Wzj0J2xs2D | https://openreview.net/forum?id=_Wzj0J2xs2D | Khang Truong Giang,Soohwan Song,Sungho Jo | ICLR 2022,Poster | Multi-view stereo (MVS) is a crucial task for precise 3D reconstruction. Most recent studies tried to improve the performance of the matching cost volume in MVS by introducing elaborate designs for cost formulation or cost regularization. In this paper, we focus on learning robust feature extraction to enhance the performance of matching costs, without the need for heavy computation in the other steps. In particular, we present a dynamic scale feature extraction network, namely, CDSFNet. It is composed of multiple novel convolution layers, each of which can select a proper patch scale for each pixel guided by the normal curvature of the image surface. As a result, CDSFNet can estimate the optimal patch scales to learn discriminative features for accurate matching computation between reference and source images. By combining the extracted robust features with an appropriate cost formulation strategy, our final MVS architecture can estimate depth maps more precisely. Extensive experiments showed that the proposed method outperforms other state-of-the-art methods on complex outdoor scenes. It significantly improves the completeness of reconstructed models. Moreover, the method can process high-resolution inputs with faster run-time and lower memory than other MVS methods. | https://openreview.net/pdf/d9996d1650b1f7ea346a668f1a2daf658ec29136.pdf |
Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism | https://openreview.net/forum?id=KLaDXLAzzFT | https://openreview.net/forum?id=KLaDXLAzzFT | Ming Yin,Yaqi Duan,Mengdi Wang,Yu-Xiang Wang | ICLR 2022,Poster | Offline reinforcement learning, which seeks to utilize offline/historical data to optimize sequential decision-making strategies, has gained surging prominence in recent studies. Since appropriate function approximators can help mitigate the sample complexity burden in modern reinforcement learning problems, existing endeavors usually employ powerful function representation models (e.g., neural networks) to learn the optimal policies. However, a precise understanding of the statistical limits with function representations remains elusive, even when such a representation is linear. Towards this goal, we study the statistical limits of offline reinforcement learning with linear model representations. To derive the tight offline learning bound, we design the variance-aware pessimistic value iteration (VAPVI), which adopts the conditional variance information of the value function for time-inhomogeneous episodic linear Markov decision processes (MDPs). VAPVI leverages estimated variances of the value functions to reweight the Bellman residuals in the least-square pessimistic value iteration and provides improved offline learning bounds over the best-known existing results (where the Bellman residuals are equally weighted by design). More importantly, our learning bounds are expressed in terms of system quantities, which provide natural instance-dependent characterizations that previous results lack. We hope our results draw a clearer picture of what offline learning should look like when linear representations are provided. | https://openreview.net/pdf/4fcfeb93b8187d6d055e04332a5c8b1c37b10970.pdf |
Exploring extreme parameter compression for pre-trained language models | https://openreview.net/forum?id=RftryyYyjiG | https://openreview.net/forum?id=RftryyYyjiG | Benyou Wang,Yuxin Ren,Lifeng Shang,Xin Jiang,Qun Liu | ICLR 2022,Poster | Recent work explored the potential of large-scale Transformer-based pre-trained models, especially Pre-trained Language Models (PLMs), in natural language processing. This raises many concerns from various perspectives, e.g., financial costs and carbon emissions. Compressing PLMs like BERT with negligible performance loss for faster inference and cheaper deployment has attracted much attention. In this work, we aim to explore larger compression ratios for PLMs; among candidate compression techniques, tensor decomposition is a potential but under-investigated one. By comparing existing decomposition methods, Tucker decomposition is found to be parameter-efficient for compression. Two decomposition and reconstruction protocols are further proposed to improve the effectiveness and efficiency of Tucker decomposition in parameter compression. Our compressed BERT with ${1}/{7}$ of the parameters in Transformer layers performs on par with, and sometimes slightly better than, the original BERT on the GLUE benchmark. A tiny version achieves 96.7\% of the performance of BERT-base with ${1}/{48}$ of the encoder parameters (i.e., less than 2M parameters excluding the embedding layer) and is $2.7 \times$ faster at inference. To show that the proposed method is orthogonal to existing compression methods like knowledge distillation, we also explore the benefit of the proposed method on a distilled BERT. | https://openreview.net/pdf/dab5dd8e405bdd89ffafedef9f081622e45d0c61.pdf |
Local Feature Swapping for Generalization in Reinforcement Learning | https://openreview.net/forum?id=Sq0-tgDyHe4 | https://openreview.net/forum?id=Sq0-tgDyHe4 | David Bertoin,Emmanuel Rachelson | ICLR 2022,Poster | Over the past few years, the acceleration of computing resources and research in Deep Learning has led to significant practical successes in a range of tasks, particularly in computer vision. Building on these advances, reinforcement learning has also seen a leap forward with the emergence of agents capable of making decisions directly from visual observations. Despite these successes, the over-parametrization of neural architectures leads to memorization of the data used during training and thus to a lack of generalization. Reinforcement learning agents based on visual inputs also suffer from this phenomenon by erroneously correlating rewards with unrelated visual features such as background elements. To alleviate this problem, we introduce a new regularization layer consisting of channel-consistent local permutations (CLOP) of the feature maps. The proposed permutations induce robustness to spatial correlations and help prevent overfitting behaviors in RL. We demonstrate, on the OpenAI Procgen Benchmark, that RL agents trained with the CLOP layer exhibit robustness to visual changes and better generalization properties than agents trained using other state-of-the-art regularization techniques. | https://openreview.net/pdf/679f3a9bf9ae7a8121b4cb0bb53f30887f029b89.pdf |
Open-vocabulary Object Detection via Vision and Language Knowledge Distillation | https://openreview.net/forum?id=lL3lnMbR4WU | https://openreview.net/forum?id=lL3lnMbR4WU | Xiuye Gu,Tsung-Yi Lin,Weicheng Kuo,Yin Cui | ICLR 2022,Poster | We aim at advancing open-vocabulary object detection, which detects objects described by arbitrary text inputs. The fundamental challenge is the availability of training data. It is costly to further scale up the number of classes contained in existing object detection datasets. To overcome this challenge, we propose ViLD, a training method via Vision and Language knowledge Distillation. Our method distills the knowledge from a pretrained open-vocabulary image classification model (teacher) into a two-stage detector (student). Specifically, we use the teacher model to encode category texts and image regions of object proposals. Then we train a student detector, whose region embeddings of detected boxes are aligned with the text and image embeddings inferred by the teacher. We benchmark on LVIS by holding out all rare categories as novel categories that are not seen during training. ViLD obtains 16.1 mask APr with a ResNet-50 backbone, even outperforming the supervised counterpart by 3.8. When trained with a stronger teacher model ALIGN, ViLD achieves 26.3 APr. The model can directly transfer to other datasets without finetuning, achieving 72.2 AP50 on PASCAL VOC, 36.6 AP on COCO and 11.8 AP on Objects365. On COCO, ViLD outperforms the previous state-of-the-art (Zareian et al., 2021) by 4.8 on novel AP and 11.4 on overall AP. Code and demo are open-sourced at https://github.com/tensorflow/tpu/tree/master/models/official/detection/projects/vild. | https://openreview.net/pdf/25cfe8bb2fa27d8a1c86a575dbf3b997754148be.pdf |
Model-Based Offline Meta-Reinforcement Learning with Regularization | https://openreview.net/forum?id=EBn0uInJZWh | https://openreview.net/forum?id=EBn0uInJZWh | Sen Lin,Jialin Wan,Tengyu Xu,Yingbin Liang,Junshan Zhang | ICLR 2022,Poster | Existing offline reinforcement learning (RL) methods face a few major challenges, particularly the distributional shift between the learned policy and the behavior policy. Offline Meta-RL is emerging as a promising approach to address these challenges, aiming to learn an informative meta-policy from a collection of tasks. Nevertheless, as shown in our empirical studies, offline Meta-RL could be outperformed by offline single-task RL methods on tasks with good quality of datasets, indicating that a right balance has to be delicately calibrated between "exploring" the out-of-distribution state-actions by following the meta-policy and "exploiting" the offline dataset by staying close to the behavior policy. Motivated by such empirical analysis, we propose model-based offline $\text{\bf Me}$ta-RL with $\text{\bf r}$egularized $\text{\bf P}$olicy $\text{\bf O}$ptimization (MerPO), which learns a meta-model for efficient task structure inference and an informative meta-policy for safe exploration of out-of-distribution state-actions. In particular, we devise a new meta-Regularized model-based Actor-Critic (RAC) method for within-task policy optimization, as a key building block of MerPO, using both conservative policy evaluation and regularized policy improvement; and the intrinsic tradeoff therein is achieved via striking the right balance between two regularizers, one based on the behavior policy and the other on the meta-policy. We theoretically show that the learnt policy offers guaranteed improvement over both the behavior policy and the meta-policy, thus ensuring the performance improvement on new tasks via offline Meta-RL. Our experiments corroborate the superior performance of MerPO over existing offline Meta-RL methods. | https://openreview.net/pdf/af10847f3163c50846528554895671f12dd3f6bd.pdf |
Scale Mixtures of Neural Network Gaussian Processes | https://openreview.net/forum?id=YVPBh4k78iZ | https://openreview.net/forum?id=YVPBh4k78iZ | Hyungi Lee,Eunggu Yun,Hongseok Yang,Juho Lee | ICLR 2022,Poster | Recent works have revealed that infinitely-wide feed-forward or recurrent neural networks of any architecture correspond to Gaussian processes referred to as NNGP. While these works have extended the class of neural networks converging to Gaussian processes significantly, however, there has been little focus on broadening the class of stochastic processes that such neural networks converge to. In this work, inspired by the scale mixture of Gaussian random variables, we propose the scale mixture of NNGP for which we introduce a prior distribution on the scale of the last-layer parameters. We show that simply introducing a scale prior on the last-layer parameters can turn infinitely-wide neural networks of any architecture into a richer class of stochastic processes. With certain scale priors, we obtain heavy-tailed stochastic processes, and in the case of inverse gamma priors, we recover Student’s $t$ processes. We further analyze the distributions of the neural networks initialized with our prior setting and trained with gradient descents and obtain similar results as for NNGP. We present a practical posterior-inference algorithm for the scale mixture of NNGP and empirically demonstrate its usefulness on regression and classification tasks. In particular, we show that in both tasks, the heavy-tailed stochastic processes obtained from our framework are robust to out-of-distribution data. | https://openreview.net/pdf/2b9ab3a3ccae899d74d3b0dcba493cd95bedad20.pdf |
A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs | https://openreview.net/forum?id=YX0lrvdPQc | https://openreview.net/forum?id=YX0lrvdPQc | Ido Nachum,Jan Hazla,Michael Gastpar,Anatoly Khina | ICLR 2022,Poster | How does the geometric representation of a dataset change after the application of each randomly initialized layer of a neural network? The celebrated Johnson-Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved. For FNNs with the ReLU activation, the angle between two inputs contracts according to a known mapping. The question for non-linear convolutional neural networks (CNNs) becomes much more intricate. To answer this question, we introduce a geometric framework. For linear CNNs, we show that the Johnson-Lindenstrauss lemma continues to hold, namely, that the angle between two inputs is preserved. For CNNs with ReLU activation, on the other hand, the behavior is richer: The angle between the outputs contracts, where the level of contraction depends on the nature of the inputs. In particular, after one layer, the geometry of natural images is essentially preserved, whereas for Gaussian correlated inputs, CNNs exhibit the same contracting behavior as FNNs with ReLU activation. | https://openreview.net/pdf/e5dd1cf8cfc1ee79c13925ef1a7839e92785b7ab.pdf |
Hindsight: Posterior-guided training of retrievers for improved open-ended generation | https://openreview.net/forum?id=Vr_BTpw3wz | https://openreview.net/forum?id=Vr_BTpw3wz | Ashwin Paranjape,Omar Khattab,Christopher Potts,Matei Zaharia,Christopher D Manning | ICLR 2022,Poster | Many text generation systems benefit from retrieving passages from a textual knowledge corpus (e.g., Wikipedia) and using them to generate the output. For open-ended generation tasks, like generating informative utterances in conversations, many varied passages $z$ are relevant to the context $x$ but few are relevant to the observed next utterance $y$ (label). For such tasks, existing methods (that jointly train the retriever and generator) underperform: during training the top-k context-relevant retrieved passages might not contain the label-relevant passage and the generator may hence not learn a preference to ground its generated output in them. We propose using an additional guide-retriever that also conditions on the observed label $y$ and “in hindsight” retrieves label-relevant passages during training. We maximize the evidence lower bound (ELBo) to jointly train the guide-retriever $Q(z|x,y)$ with the standard retriever $P_\eta(z|x)$ and the generator $P_\theta(y|x,z)$ and find that ELBo has better inductive biases than prior work. For informative conversations from the Wizard of Wikipedia dataset, with our posterior-guided training, the retriever finds passages with higher relevance in the top-10 (23% relative improvement), the generator’s responses are more grounded in the retrieved passage (19% relative improvement) and the end-to-end system produces better overall output (6.4% relative improvement). | https://openreview.net/pdf/0801402a22ce82661bf20317c66aeb13527df311.pdf |
Self-Supervised Graph Neural Networks for Improved Electroencephalographic Seizure Analysis | https://openreview.net/forum?id=k9bx1EfHI_- | https://openreview.net/forum?id=k9bx1EfHI_- | Siyi Tang,Jared Dunnmon,Khaled Kamal Saab,Xuan Zhang,Qianying Huang,Florian Dubost,Daniel Rubin,Christopher Lee-Messer | ICLR 2022,Poster | Automated seizure detection and classification from electroencephalography (EEG) can greatly improve seizure diagnosis and treatment. However, several modeling challenges remain unaddressed in prior automated seizure detection and classification studies: (1) representing non-Euclidean data structure in EEGs, (2) accurately classifying rare seizure types, and (3) the lack of a quantitative interpretability approach to measure a model's ability to localize seizures. In this study, we address these challenges by (1) representing the spatiotemporal dependencies in EEGs using a graph neural network (GNN) and proposing two EEG graph structures that capture the electrode geometry or dynamic brain connectivity, (2) proposing a self-supervised pre-training method that predicts preprocessed signals for the next time period to further improve model performance, particularly on rare seizure types, and (3) proposing a quantitative model interpretability approach to assess a model’s ability to localize seizures within EEGs. When evaluating our approach on seizure detection and classification on a large public dataset (5,499 EEGs), we find that our GNN with self-supervised pre-training achieves 0.875 Area Under the Receiver Operating Characteristic Curve on seizure detection and 0.749 weighted F1-score on seizure classification, outperforming previous methods for both seizure detection and classification. Moreover, our self-supervised pre-training strategy significantly improves classification of rare seizure types (e.g., a 47-point increase in combined tonic seizure accuracy over baselines). Furthermore, quantitative interpretability analysis shows that our GNN with self-supervised pre-training precisely localizes 25.4% of focal seizures, a 21.9-point improvement over existing CNNs. Finally, by superimposing the identified seizure locations on both raw EEG signals and EEG graphs, our approach could provide clinicians with an intuitive visualization of localized seizure regions. | https://openreview.net/pdf/17a7d200331982e9e2906bc6831d4cdc744a6f5c.pdf |
Group-based Interleaved Pipeline Parallelism for Large-scale DNN Training | https://openreview.net/forum?id=cw-EmNq5zfD | https://openreview.net/forum?id=cw-EmNq5zfD | PengCheng Yang,Xiaoming Zhang,Wenpeng Zhang,Ming Yang,Hong Wei | ICLR 2022,Poster | The recent trend of using large-scale deep neural networks (DNNs) to boost performance has propelled the development of the parallel pipelining technique for efficient DNN training, resulting in several prominent pipelines such as GPipe, PipeDream, and PipeDream-2BW. However, the current leading pipeline PipeDream-2BW still suffers from two major drawbacks, i.e., the excessive memory redundancy and the delayed weight updates across all stages. In this work, we propose a novel pipeline named WPipe, which achieves better memory efficiency and fresher weight updates. WPipe uses a novel pipelining scheme that divides model partitions into two groups. It moves the forward pass of the next period of weight updates to the front of the backward pass of the current period of weight updates in the first group, retains the order in the second group, and updates each group alternately. This scheme can eliminate half of the delayed gradients and memory redundancy compared to PipeDream-2BW. The experiments, which train large BERT language models, show that compared to PipeDream-2BW, WPipe achieves $1.4\times$ acceleration and reduces the memory footprint by 36%, with almost no loss in final model accuracy. | https://openreview.net/pdf/2322466e5de76b982eaeca16cda0dc1dfd2f5563.pdf |
Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs | https://openreview.net/forum?id=nc0ETaieux | https://openreview.net/forum?id=nc0ETaieux | Sitan Chen,Jerry Li,Yuanzhi Li,Raghu Meka | ICLR 2022,Poster | Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand when GANs can actually learn the underlying distribution. Theoretical and empirical evidence (see e.g. Arora-Risteski-Zhang '18) suggests that local optimality of the empirical training objective is insufficient, yet it does not rule out the possibility that achieving a true population minimax optimal solution might imply distribution learning. In this paper, we show that standard cryptographic assumptions imply that this stronger condition is still insufficient. Namely, we show that if local pseudorandom generators (PRGs) exist, then for a large family of natural target distributions, there are ReLU network generators of constant depth and poly size which take Gaussian random seeds so that (i) the output is far in Wasserstein distance from the target distribution, but (ii) no polynomially large Lipschitz discriminator ReLU network can detect this. This implies that even achieving a population minimax optimal solution to the Wasserstein GAN objective is likely insufficient for distribution learning. Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs. | https://openreview.net/pdf/6ace3e50695ef1af76dc61bfcfb736da932857ce.pdf |
Offline Reinforcement Learning with Value-based Episodic Memory | https://openreview.net/forum?id=RCZqv9NXlZ | https://openreview.net/forum?id=RCZqv9NXlZ | Xiaoteng Ma,Yiqin Yang,Hao Hu,Jun Yang,Chongjie Zhang,Qianchuan Zhao,Bin Liang,Qihan Liu | ICLR 2022,Poster | Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V-function instead of the Q-function to naturally keep the learning procedure within the support of an offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V-Learning (EVL), which smoothly interpolates between the optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V-values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks. | https://openreview.net/pdf/02371afec918d5a82c74580cda7e3efbf18411d5.pdf |
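A minimal sketch of the expectile regression loss that underlies Expectile V-Learning as summarized above, assuming the usual asymmetric-squared-error form; the tensors are placeholders, and the full VEM pipeline (implicit planning along trajectories, episodic memory) is not shown.

```python
import torch

def expectile_loss(v_pred, v_target, tau=0.8):
    """Asymmetric squared error |tau - 1(u<0)| * u^2 with u = target - prediction.
    tau = 0.5 recovers plain regression toward the dataset's expected returns
    (behavior-value fitting); tau -> 1 pushes V toward an optimistic,
    max-like value estimate."""
    diff = v_target - v_pred
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

# Toy usage: fit V(s) to noisy returns with an optimistic expectile.
v_pred = torch.randn(128, requires_grad=True)
v_target = torch.randn(128) + 1.0
loss = expectile_loss(v_pred, v_target, tau=0.9)
loss.backward()
print(loss.item())
```

The interpolation between behavior cloning and optimal value learning mentioned in the abstract corresponds, in this sketch, to sweeping `tau` between 0.5 and 1.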
MonoDistill: Learning Spatial Features for Monocular 3D Object Detection | https://openreview.net/forum?id=C54V-xTWfi | https://openreview.net/forum?id=C54V-xTWfi | Zhiyu Chong,Xinzhu Ma,Hong Zhang,Yuxin Yue,Haojie Li,Zhihui Wang,Wanli Ouyang | ICLR 2022,Poster | 3D object detection is a fundamental and challenging task for 3D scene understanding, and the monocular-based methods can serve as an economical alternative to the stereo-based or LiDAR-based methods. However, accurately locating objects in the 3D space from a single image is extremely difficult due to the lack of spatial cues. To mitigate this issue, we propose a simple and effective scheme to introduce the spatial information from LiDAR signals to the monocular 3D detectors, without introducing any extra cost in the inference phase. In particular, we first project the LiDAR signals into the image plane and align them with the RGB images. After that, we use the resulting data to train a 3D detector (LiDAR Net) using the same architecture as the baseline model. Finally, this LiDAR Net can serve as the teacher to transfer the learned knowledge to the baseline model. Experimental results show that the proposed method can significantly boost the performance of the baseline model and ranks $1^{st}$ among all monocular-based methods on the KITTI benchmark. Besides, extensive ablation studies are conducted, which further prove the effectiveness of each part of our designs and illustrate what the baseline model has learned from the LiDAR Net. | https://openreview.net/pdf/48f3e34df8a32755defa6cc07e7521b77cb01afc.pdf |
EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression | https://openreview.net/forum?id=vkaMaq95_rX | https://openreview.net/forum?id=vkaMaq95_rX | Zirui Liu,Kaixiong Zhou,Fan Yang,Li Li,Rui Chen,Xia Hu | ICLR 2022,Poster | Training Graph Neural Networks (GNNs) on large graphs is a fundamental challenge due to the high memory usage, which is mainly occupied by activations (e.g., node embeddings). Previous works usually focus on reducing the number of nodes retained in memory.
In parallel, unlike what has been developed for other types of neural networks, training with compressed activation maps is less explored for GNNs. This extension is notoriously difficult to implement due to the lack of necessary tools in common graph learning packages. To unleash the potential of this direction, we provide an optimized GPU implementation which supports training GNNs with compressed activations. Based on the implementation, we propose a memory-efficient framework called ``EXACT'', which for the first time demonstrates the potential and evaluates the feasibility of training GNNs with compressed activations. We systematically analyze the trade-off among memory savings, time overhead, and accuracy drop. In practice, EXACT can reduce the memory footprint of activations by up to $32\times$ with $0.2$-$0.5\%$ accuracy drop and $10$-$25\%$ time overhead across different models and datasets. We implement EXACT as an extension for Pytorch Geometric and Pytorch. In practice, for Pytorch Geometric, EXACT can trim down the hardware requirement of training a three-layer full-batch GraphSAGE on \textit{ogbn-products} from a 48GB GPU to a 12GB GPU. | https://openreview.net/pdf/c4401e62dd352ef175de3558fd2ccb66fd2107e0.pdf |
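A minimal sketch of training with compressed activations in the spirit of the EXACT entry above: a custom PyTorch autograd function that stores an 8-bit quantized copy of the layer input for the backward pass. This is an illustrative stand-in, not the released implementation, which combines quantization with random projections and targets message-passing layers.

```python
import torch

class CompressedLinear(torch.autograd.Function):
    """Linear layer that saves its input activation in uint8 instead of fp32."""

    @staticmethod
    def forward(ctx, x, weight):
        out = x @ weight.t()
        # Per-tensor affine quantization of the activation kept for backward.
        scale = (x.max() - x.min()).clamp(min=1e-8) / 255.0
        zero = x.min()
        x_q = ((x - zero) / scale).round().clamp(0, 255).to(torch.uint8)
        ctx.save_for_backward(x_q, weight)
        ctx.scale, ctx.zero = scale, zero
        return out

    @staticmethod
    def backward(ctx, grad_out):
        x_q, weight = ctx.saved_tensors
        x_hat = x_q.to(grad_out.dtype) * ctx.scale + ctx.zero   # dequantize
        grad_x = grad_out @ weight
        grad_w = grad_out.t() @ x_hat                           # uses the lossy copy
        return grad_x, grad_w

# Toy usage: the weight gradient is computed from the compressed activation.
x = torch.randn(1024, 64)
w = torch.randn(32, 64, requires_grad=True)
y = CompressedLinear.apply(x, w)
y.sum().backward()
print(w.grad.shape)
```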
Provably convergent quasistatic dynamics for mean-field two-player zero-sum games | https://openreview.net/forum?id=MP904TiHqJ- | https://openreview.net/forum?id=MP904TiHqJ- | Chao Ma,Lexing Ying | ICLR 2022,Poster | In this paper, we study the problem of finding a mixed Nash equilibrium for mean-field two-player zero-sum games. Solving this problem requires optimizing over two probability distributions. We consider a quasistatic Wasserstein gradient flow dynamics in which one probability distribution follows the Wasserstein gradient flow, while the other one is always at the equilibrium. Theoretical analysis is conducted on this dynamics, showing its convergence to the mixed Nash equilibrium under mild conditions. Inspired by the continuous dynamics of probability distributions, we derive a quasistatic Langevin gradient descent method with inner-outer iterations, and test the method on different problems, including training mixtures of GANs. | https://openreview.net/pdf/acb4ac9a6784cfd88f2b6afa489db7cb2af8de79.pdf |
W-CTC: a Connectionist Temporal Classification Loss with Wild Cards | https://openreview.net/forum?id=0RqDp8FCW5Z | https://openreview.net/forum?id=0RqDp8FCW5Z | Xingyu Cai,Jiahong Yuan,Yuchen Bian,Guangxu Xun,Jiaji Huang,Kenneth Church | ICLR 2022,Poster | Connectionist Temporal Classification (CTC) loss is commonly used in sequence learning applications. For example, in the Automatic Speech Recognition (ASR) task, the training data consists of pairs of audio (input sequence) and text (output label), without temporal alignment information. Standard CTC computes a loss by aggregating over all possible alignment paths that map the entire sequence to the entire label (full alignment). However, in practice, there are often cases where the label is incomplete. Specifically, we solve the partial alignment problem where the label only matches a middle part of the sequence. This paper proposes the wild-card CTC (W-CTC) to address this issue, by padding wild-cards at both ends of the labels. Consequently, the proposed W-CTC improves the standard CTC via aggregating over even more alignment paths. Evaluations on a number of tasks in speech and vision domains show that the proposed W-CTC consistently outperforms the standard CTC by a large margin when the label is incomplete. The effectiveness of the proposed method is further confirmed in an ablation study. | https://openreview.net/pdf/037209609dda60fc4dd420af54cfcbfa8a63388e.pdf |
Bandit Learning with Joint Effect of Incentivized Sampling, Delayed Sampling Feedback, and Self-Reinforcing User Preferences | https://openreview.net/forum?id=Q83vFlie_Pr | https://openreview.net/forum?id=Q83vFlie_Pr | Tianchen Zhou,Jia Liu,Chaosheng Dong,Yi Sun | ICLR 2022,Poster | In this paper, we consider a new multi-armed bandit (MAB) framework motivated by three common complications in online recommender systems in practice: (i) the platform (learning agent) cannot sample an intended product directly and has to incentivize customers to select this product (e.g., promotions and coupons); (ii) customer feedback is often received later than the corresponding selection times; and (iii) customer preferences among products are influenced and reinforced by historical feedback. From the platform's perspective, the goal of the MAB framework is to maximize total reward without incurring excessive incentive costs. A major challenge of this MAB framework is that the loss of information caused by feedback delay complicates both user preference evolution and arm incentivizing decisions, both of which are already highly non-trivial even by themselves. Toward this end, we first propose a ``UCB-Filtering-with-Delayed-Feedback'' (UCB-FDF) policy for this new MAB framework. In our analysis, we consider delayed feedback that can have either arm-independent or arm-dependent distributions. In both cases, we allow unbounded support for the random delays, i.e., the random delay can be infinite. We show that the delay impacts in both cases can still be upper bounded by an additive penalty on both the regret and total incentive costs. This further implies that logarithmic regret and incentive cost growth rates are achievable under this new MAB framework. Experimental results corroborate our theoretical analysis on both regret and incentive costs.
| https://openreview.net/pdf/d654e6eeca4a1faa806d427c0b60d31e7648ad5f.pdf |
AdaAug: Learning Class- and Instance-adaptive Data Augmentation Policies | https://openreview.net/forum?id=rWXfFogxRJN | https://openreview.net/forum?id=rWXfFogxRJN | Tsz-Him Cheung,Dit-Yan Yeung | ICLR 2022,Poster | Data augmentation is an effective way to improve the generalization capability of modern deep learning models. However, the underlying augmentation methods mostly rely on handcrafted operations. Moreover, an augmentation policy useful to one dataset may not transfer well to other datasets. Therefore, Automated Data Augmentation (AutoDA) methods, like \textit{AutoAugment} and \textit{Population-based Augmentation}, have been proposed recently to automate the process of searching for optimal augmentation policies. However, the augmentation policies found are not adaptive to the dataset used, hindering the effectiveness of these AutoDA methods. In this paper, we propose a novel AutoDA method called \texttt{AdaAug} to efficiently learn adaptive augmentation policies in a class-dependent and potentially instance-dependent manner. Our experiments show that the adaptive augmentation policies learned by our method transfer well to unseen datasets such as the Oxford Flowers, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars datasets when compared with other AutoDA baselines. In addition, our method also achieves state-of-the-art performance on the CIFAR-10, CIFAR-100, and SVHN datasets. | https://openreview.net/pdf/178ea7fe9306b26e1d623abc89a015f272a95bab.pdf |
Unsupervised Semantic Segmentation by Distilling Feature Correspondences | https://openreview.net/forum?id=SaKO6z6Hl0c | https://openreview.net/forum?id=SaKO6z6Hl0c | Mark Hamilton,Zhoutong Zhang,Bharath Hariharan,Noah Snavely,William T. Freeman | ICLR 2022,Poster | Unsupervised semantic segmentation aims to discover and localize semantically meaningful categories within image corpora without any form of annotation. To solve this task, algorithms must produce features for every pixel that are both semantically meaningful and compact enough to form distinct clusters. Unlike previous works which achieve this with a single end-to-end framework, we propose to separate feature learning from cluster compactification. Empirically, we show that current unsupervised feature learning frameworks already generate dense features whose correlations are semantically consistent. This observation motivates us to design STEGO ($\textbf{S}$elf-supervised $\textbf{T}$ransformer with $\textbf{E}$nergy-based $\textbf{G}$raph $\textbf{O}$ptimization), a novel framework that distills unsupervised features into high-quality discrete semantic labels. At the core of STEGO is a novel contrastive loss function that encourages features to form compact clusters while preserving their association pattern. STEGO yields a significant improvement over the prior state of the art, on both the CocoStuff ($\textbf{+14 mIoU}$) and Cityscapes ($\textbf{+9 mIoU}$) semantic segmentation challenges. | https://openreview.net/pdf/585b6a94cde1c9886c51fbaa17688846d5729b69.pdf |
Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning | https://openreview.net/forum?id=TqNsv1TuCX9 | https://openreview.net/forum?id=TqNsv1TuCX9 | Mark Hamilton,Scott Lundberg,Stephanie Fu,Lei Zhang,William T. Freeman | ICLR 2022,Poster | Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can use to explain a search engine's behavior. We show that the theory of fair credit assignment provides a unique axiomatic solution that generalizes several existing recommendation- and metric-explainability techniques in the literature. Using this formalism, we show when existing approaches violate "fairness" and derive methods that sidestep these shortcomings and naturally handle counterfactual information. More specifically, we show existing approaches implicitly approximate second-order Shapley-Taylor indices and extend CAM, GradCAM, LIME, SHAP, SBSM, and other methods to search engines. These extensions can extract pairwise correspondences between images from trained opaque-box models. We also introduce a fast kernel-based method for estimating Shapley-Taylor indices that require orders of magnitude fewer function evaluations to converge. Finally, we show that these game-theoretic measures yield more consistent explanations for image similarity architectures. | https://openreview.net/pdf/75f834484ec638f9880a1bd687ced6d577076921.pdf |
Graph-Relational Domain Adaptation | https://openreview.net/forum?id=kcwyXtt7yDJ | https://openreview.net/forum?id=kcwyXtt7yDJ | Zihao Xu,Hao He,Guang-He Lee,Bernie Wang,Hao Wang | ICLR 2022,Poster | Existing domain adaptation methods tend to treat every domain equally and align them all perfectly. Such uniform alignment ignores topological structures among different domains; therefore it may be beneficial for nearby domains, but not necessarily for distant domains. In this work, we relax such uniform alignment by using a domain graph to encode domain adjacency, e.g., a graph of states in the US with each state as a domain and each edge indicating adjacency, thereby allowing domains to align flexibly based on the graph structure. We generalize the existing adversarial learning framework with a novel graph discriminator using encoding-conditioned graph embeddings. Theoretical analysis shows that at equilibrium, our method recovers classic domain adaptation when the graph is a clique, and achieves non-trivial alignment for other types of graphs. Empirical results show that our approach successfully generalizes uniform alignment, naturally incorporates domain information represented by graphs, and improves upon existing domain adaptation methods on both synthetic and real-world datasets. | https://openreview.net/pdf/2b48e401e5c5e2e1cba961a5ef890a969861c810.pdf |
Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions | https://openreview.net/forum?id=LdEhiMG9WLO | https://openreview.net/forum?id=LdEhiMG9WLO | Shaochen Zhong,Guanqun Zhang,Ningjia Huang,Shuai Xu | ICLR 2022,Poster | Structured pruning methods which are capable of delivering a densely pruned network are among the most popular techniques in the realm of neural network pruning, where most methods prune the original network at a filter or layer level. Although such methods may provide immediate compression and acceleration benefits, we argue that the blanket removal of an entire filter or layer may result in undesired accuracy loss. In this paper, we revisit the idea of kernel pruning (to only prune one or several $k \times k$ kernels out of a 3D-filter), a heavily overlooked approach in the context of structured pruning. This is because kernel pruning will naturally introduce sparsity to filters within the same convolutional layer, thus making the remaining network no longer dense. We address this problem by proposing a versatile grouped pruning framework where we first cluster filters from each convolutional layer into equal-sized groups, prune the grouped kernels we deem unimportant from each filter group, then permute the remaining filters to form a densely grouped convolutional architecture (which also enables the parallel computing capability) for fine-tuning. Specifically, we consult empirical findings from a series of literature regarding the $\textit{Lottery Ticket Hypothesis}$ to determine the optimal clustering scheme per layer, and develop a simple yet cost-efficient greedy approximation algorithm to determine which grouped kernels to keep within each filter group. Extensive experiments also demonstrate that our method often outperforms comparable SOTA methods while requiring less data augmentation, a smaller fine-tuning budget, and sometimes an even simpler procedure (e.g., one-shot vs. iterative). Please refer to our GitHub repository (https://github.com/choH/lottery_regulated_grouped_kernel_pruning) for code. | https://openreview.net/pdf/5b6a1f9771a20353393d13f0eaa86b05e16c80b2.pdf |
Bi-linear Value Networks for Multi-goal Reinforcement Learning | https://openreview.net/forum?id=LedObtLmCjS | https://openreview.net/forum?id=LedObtLmCjS | Zhang-Wei Hong,Ge Yang,Pulkit Agrawal | ICLR 2022,Poster | Universal value functions are a core component of off-policy multi-goal reinforcement learning.
The de-facto paradigm is to approximate Q(s, a, g) using monolithic neural networks which lack inductive biases to produce complex interactions between the state s and the goal g. In this work, we propose a bilinear decomposition that represents the Q-value via a low-rank approximation in the form of a dot product between two vector fields. The first vector field, f(s, a), captures the environment's local dynamics at the state s; whereas the second component, ϕ(s, g), captures the global relationship between the current state and the goal.
We show that our bilinear decomposition scheme improves sample efficiency over the original monolithic value approximators and transfers better to unseen goals. We demonstrate significant learning speed-up over a variety of tasks on a simulated robot arm, and on the challenging task of dexterous manipulation with a Shadow hand. | https://openreview.net/pdf/278077713254a379481bd2b7e25393ca3e4758b6.pdf |
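A minimal PyTorch sketch of the bilinear decomposition described in this entry: the Q-value is the dot product of a local-dynamics embedding f(s, a) and a goal-relational embedding ϕ(s, g). The layer widths and embedding dimension are arbitrary placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BilinearQ(nn.Module):
    def __init__(self, state_dim, action_dim, goal_dim, embed_dim=64):
        super().__init__()
        # f(s, a): local dynamics around the current state.
        self.f = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                               nn.Linear(256, embed_dim))
        # phi(s, g): global relation between the current state and the goal.
        self.phi = nn.Sequential(nn.Linear(state_dim + goal_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))

    def forward(self, s, a, g):
        fa = self.f(torch.cat([s, a], dim=-1))
        pg = self.phi(torch.cat([s, g], dim=-1))
        return (fa * pg).sum(dim=-1)          # low-rank Q(s, a, g)

# Toy usage.
q_net = BilinearQ(state_dim=10, action_dim=4, goal_dim=3)
s, a, g = torch.randn(32, 10), torch.randn(32, 4), torch.randn(32, 3)
print(q_net(s, a, g).shape)   # torch.Size([32])
```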
No One Representation to Rule Them All: Overlapping Features of Training Methods | https://openreview.net/forum?id=BK-4qbGgIE3 | https://openreview.net/forum?id=BK-4qbGgIE3 | Raphael Gontijo-Lopes,Yann Dauphin,Ekin Dogus Cubuk | ICLR 2022,Poster | Despite being able to capture a range of features of the data, high accuracy models trained with supervision tend to make similar predictions. This seemingly implies that high-performing models share similar biases regardless of training methodology, which would limit ensembling benefits and render low-accuracy models as having little practical use. Against this backdrop, recent work has developed quite different training techniques, such as large-scale contrastive learning, yielding competitively high accuracy on generalization and robustness benchmarks. This motivates us to revisit the assumption that models necessarily learn similar functions. We conduct a large-scale empirical study of models across hyper-parameters, architectures, frameworks, and datasets. We find that model pairs that diverge more in training methodology display categorically different generalization behavior, producing increasingly uncorrelated errors. We show these models specialize in subdomains of the data, leading to higher ensemble performance: with just 2 models (each with ImageNet accuracy $\sim$76.5\%), we can create ensembles with 83.4\% (+7\% boost). Surprisingly, we find that even significantly low-accuracy models can be used to improve high-accuracy models. Finally, we show that diverging training methodologies yield representations that capture overlapping (but not supersetting) feature sets which, when combined, lead to increased downstream performance. | https://openreview.net/pdf/87ad344628d5996211382d2b966667df428d01f8.pdf |
Generalized Kernel Thinning | https://openreview.net/forum?id=IfNu7Dr-3fQ | https://openreview.net/forum?id=IfNu7Dr-3fQ | Raaz Dwivedi,Lester Mackey | ICLR 2022,Poster | The kernel thinning (KT) algorithm of Dwivedi and Mackey (2021) compresses a probability distribution more effectively than independent sampling by targeting a reproducing kernel Hilbert space (RKHS) and leveraging a less smooth square-root kernel. Here we provide four improvements. First, we show that KT applied directly to the target RKHS yields tighter, dimension-free guarantees for any kernel, any distribution, and any fixed function in the RKHS. Second, we show that, for analytic kernels like Gaussian, inverse multiquadric, and sinc, target KT admits maximum mean discrepancy (MMD) guarantees comparable to or better than those of square-root KT without making explicit use of a square-root kernel. Third, we prove that KT with a fractional power kernel yields better-than-Monte-Carlo MMD guarantees for non-smooth kernels, like Laplace and Matern, that do not have square-roots. Fourth, we establish that KT applied to a sum of the target and power kernels (a procedure we call KT+) simultaneously inherits the improved MMD guarantees of power KT and the tighter individual function guarantees of target KT. In our experiments with target KT and KT+, we witness significant improvements in integration error even in 100 dimensions and when compressing challenging differential equation posteriors. | https://openreview.net/pdf/65b13d5e7474f351e4b858aeaad4bb355151f0d6.pdf |
How Much Can CLIP Benefit Vision-and-Language Tasks? | https://openreview.net/forum?id=zf_Ll3HZWgy | https://openreview.net/forum?id=zf_Ll3HZWgy | Sheng Shen,Liunian Harold Li,Hao Tan,Mohit Bansal,Anna Rohrbach,Kai-Wei Chang,Zhewei Yao,Kurt Keutzer | ICLR 2022,Poster | Most existing Vision-and-Language (V&L) models rely on pre-trained visual encoders, using a relatively small set of manually-annotated data (as compared to web-crawled data), to perceive the visual world. However, it has been observed that large-scale pretraining usually can result in better generalization performance, e.g., CLIP (Contrastive Language-Image Pre-training), trained on a massive amount of image-caption pairs, has shown a strong zero-shot capability on various vision tasks. To further study the advantage brought by CLIP, we propose to use CLIP as the visual encoder in various V&L models in two typical scenarios: 1) plugging CLIP into task-specific fine-tuning; 2) combining CLIP with V&L pre-training and transferring to downstream tasks. We show that CLIP significantly outperforms widely-used visual encoders trained with in-domain annotated data, such as BottomUp-TopDown. We achieve competitive or better results on diverse V&L tasks, while establishing new state-of-the-art results on Visual Question Answering, Visual Entailment, and V&L Navigation tasks.
| https://openreview.net/pdf/f0691be3c885b77bb697dff205313230d0be1163.pdf |
Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect | https://openreview.net/forum?id=3tbDrs77LJ5 | https://openreview.net/forum?id=3tbDrs77LJ5 | Yuqing Wang,Minshuo Chen,Tuo Zhao,Molei Tao | ICLR 2022,Poster | Recent empirical advances show that training deep models with large learning rate often improves generalization performance. However, theoretical justifications on the benefits of large learning rate are highly limited, due to challenges in analysis. In this paper, we consider using Gradient Descent (GD) with a large learning rate on a homogeneous matrix factorization problem, i.e., $\min_{X, Y} \|A - XY^\top\|_{\sf F}^2$. We prove a convergence theory for constant large learning rates well beyond $2/L$, where $L$ is the largest eigenvalue of Hessian at the initialization. Moreover, we rigorously establish an implicit bias of GD induced by such a large learning rate, termed `balancing', meaning that magnitudes of $X$ and $Y$ at the limit of GD iterations will be close even if their initialization is significantly unbalanced. Numerical experiments are provided to support our theory. | https://openreview.net/pdf/1e0a0df7af8a11a430889c2866c6de87982e1227.pdf |
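A small numerical probe of the matrix-factorization objective $\min_{X, Y} \|A - XY^\top\|_{\sf F}^2$ studied in the entry above, assuming plain gradient descent from a deliberately unbalanced initialization. The sizes and step size are demo values, not the paper's settings; varying `lr` lets one observe how the loss and the two Frobenius norms evolve relative to each other.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 10, 10, 2
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank-r target

# Unbalanced initialization: X starts much larger than Y.
X = 1.0 * rng.standard_normal((n, r))
Y = 0.01 * rng.standard_normal((m, r))

lr = 0.05   # demo value; the balancing effect is predicted for large step sizes
for t in range(3001):
    R = X @ Y.T - A                                   # residual
    X, Y = X - lr * R @ Y, Y - lr * R.T @ X           # GD on 0.5*||A - XY^T||_F^2
    if t % 500 == 0:
        print(f"step {t:5d}  loss {np.linalg.norm(R)**2:9.4f}  "
              f"||X||_F {np.linalg.norm(X):6.3f}  ||Y||_F {np.linalg.norm(Y):6.3f}")
```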
Demystifying Limited Adversarial Transferability in Automatic Speech Recognition Systems | https://openreview.net/forum?id=l5aSHXi8jG5 | https://openreview.net/forum?id=l5aSHXi8jG5 | Hadi Abdullah,Aditya Karlekar,Vincent Bindschaedler,Patrick Traynor | ICLR 2022,Poster | The targeted transferability of adversarial samples enables attackers to exploit black-box models in the real-world. The most popular method to produce these adversarial samples is optimization attacks, which have been shown to achieve a high level of transferability in some domains. However, recent research has demonstrated that these attack samples fail to transfer when applied to Automatic Speech Recognition Systems (ASRs). In this paper, we investigate factors preventing this transferability via exhaustive experimentation. To do so, we perform an ablation study on each stage of the ASR pipeline. We discover and quantify six factors (i.e., input type, MFCC, RNN, output type, and vocabulary and sequence sizes) that impact the targeted transferability of optimization attacks against ASRs. Future research can leverage our findings to build ASRs that are more robust to other transferable attack types (e.g., signal processing attacks), or to modify architectures in other domains to reduce their exposure to targeted transferability of optimization attacks. | https://openreview.net/pdf/1a5b08e254c26f966237f4cd25b8f69482718015.pdf |
PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication | https://openreview.net/forum?id=kSwqMH0zn1F | https://openreview.net/forum?id=kSwqMH0zn1F | Cheng Wan,Youjie Li,Cameron R. Wolfe,Anastasios Kyrillidis,Nam Sung Kim,Yingyan Lin | ICLR 2022,Poster | Graph Convolutional Networks (GCNs) are the state-of-the-art method for learning graph-structured data, and training large-scale GCNs requires distributed training across multiple accelerators such that each accelerator is able to hold a partitioned subgraph. However, distributed GCN training incurs prohibitive overhead of communicating node features and feature gradients among partitions for every GCN layer during each training iteration, limiting the achievable training efficiency and model scalability. To this end, we propose PipeGCN, a simple yet effective scheme that hides the communication overhead by pipelining inter-partition communication with intra-partition computation. Pipelining for efficient GCN training is non-trivial, as communicated node features/gradients become stale and can thus harm convergence, negating the pipeline benefit. Notably, little is known regarding the convergence rate of GCN training with both stale features and stale feature gradients. This work not only provides a theoretical convergence analysis but also finds the convergence rate of PipeGCN to be close to that of the vanilla distributed GCN training without any staleness. Furthermore, we develop a smoothing method to further improve PipeGCN's convergence. Extensive experiments show that PipeGCN can largely boost the training throughput (1.7×~28.5×) while achieving the same accuracy as its vanilla counterpart and existing full-graph training methods. The code is available at https://github.com/RICE-EIC/PipeGCN. | https://openreview.net/pdf/ba8be163ee26a6f4bd01d0d635bc721c022fd88a.pdf |
Learning Neural Contextual Bandits through Perturbed Rewards | https://openreview.net/forum?id=7inCJ3MhXt3 | https://openreview.net/forum?id=7inCJ3MhXt3 | Yiling Jia,Weitong ZHANG,Dongruo Zhou,Quanquan Gu,Hongning Wang | ICLR 2022,Poster | Thanks to the power of representation learning, neural contextual bandit algorithms demonstrate remarkable performance improvement against their classical counterparts. But because their exploration has to be performed in the entire neural network parameter space to obtain nearly optimal regret, the resulting computational cost is prohibitively high.
We propose to perturb the rewards when updating the neural network to eliminate the need for explicit exploration and the corresponding computational overhead. We prove that a $\tilde{O}(\tilde{d}\sqrt{T})$ regret upper bound is still achievable under standard regularity conditions, where $T$ is the number of rounds of interactions and $\tilde{d}$ is the effective dimension of a neural tangent kernel matrix.
Extensive comparisons with several benchmark contextual bandit algorithms, including two recent neural contextual bandit models, demonstrate the effectiveness and computational efficiency of our proposed neural bandit algorithm. | https://openreview.net/pdf/03d7a2872a95102e45f49ccf6a1bd08bab2a19d8.pdf |
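A minimal sketch of the reward-perturbation idea summarized above, assuming a small PyTorch reward network trained on Gaussian-perturbed rewards with greedy arm selection. The synthetic linear environment, the noise level `sigma`, and the one-pass-per-round update are illustrative simplifications, not the paper's algorithm or its guarantees.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_arms, T, sigma = 8, 5, 300, 0.5

# Synthetic environment: linear expected rewards plus observation noise.
theta = torch.randn(d)
net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

contexts, rewards, regret = [], [], 0.0
for t in range(T):
    arms = torch.randn(n_arms, d)                 # context of each arm at round t
    with torch.no_grad():
        chosen = net(arms).squeeze(-1).argmax()   # greedy choice, no explicit UCB
    mean_r = arms @ theta
    r = mean_r[chosen] + 0.1 * torch.randn(())
    regret += (mean_r.max() - mean_r[chosen]).item()

    contexts.append(arms[chosen]); rewards.append(r)
    # One gradient step over history with freshly perturbed rewards.
    X = torch.stack(contexts)
    y = torch.stack(rewards) + sigma * torch.randn(len(rewards))   # perturbation
    opt.zero_grad()
    loss = ((net(X).squeeze(-1) - y) ** 2).mean()
    loss.backward()
    opt.step()

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```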
Adversarial Unlearning of Backdoors via Implicit Hypergradient | https://openreview.net/forum?id=MeeQkFYVbzW | https://openreview.net/forum?id=MeeQkFYVbzW | Yi Zeng,Si Chen,Won Park,Zhuoqing Mao,Ming Jin,Ruoxi Jia | ICLR 2022,Poster | We propose a minimax formulation for removing backdoors from a given poisoned model based on a small set of clean data. This formulation encompasses much of prior work on backdoor removal. We propose the Implicit Backdoor Adversarial Unlearning (I-BAU) algorithm to solve the minimax. Unlike previous work, which breaks down the minimax into separate inner and outer problems, our algorithm utilizes the implicit hypergradient to account for the interdependence between inner and outer optimization. We theoretically analyze its convergence and the generalizability of the robustness gained by solving minimax on clean data to unseen test data. In our evaluation, we compare I-BAU with six state-of-the-art backdoor defenses on eleven backdoor attacks over two datasets and various attack settings, including the common setting where the attacker targets one class as well as important but underexplored settings where multiple classes are targeted. I-BAU's performance is comparable to and most often significantly better than the best baseline. Particularly, its performance is more robust to the variation on triggers, attack settings, poison ratio, and clean data size. Moreover, I-BAU requires less computation to take effect; particularly, it is more than $13\times$ faster than the most efficient baseline in the single-target attack setting. Furthermore, it can remain effective in the extreme case where the defender can only access 100 clean samples---a setting where all the baselines fail to produce acceptable results. | https://openreview.net/pdf/6aeb6e81c9d0eadbb4cfbefb6caac0f155d561ea.pdf |
Maximizing Ensemble Diversity in Deep Reinforcement Learning | https://openreview.net/forum?id=hjd-kcpDpf2 | https://openreview.net/forum?id=hjd-kcpDpf2 | Hassam Sheikh,Mariano Phielipp,Ladislau Boloni | ICLR 2022,Poster | Modern deep reinforcement learning (DRL) has been successful in solving a range of challenging sequential decision-making problems. Most of these algorithms use an ensemble of neural networks as their backbone structure and benefit from the diversity among the neural networks to achieve optimal results. Unfortunately, the members of the ensemble can converge to the same point in either the parameter space or the representation space during the training phase, thereby losing all the leverage of an ensemble. In this paper, we describe Maximize Ensemble Diversity in Reinforcement Learning (MED-RL), a set of regularization methods inspired by economics and consensus optimization that improve diversity in ensemble-based deep reinforcement learning methods by encouraging inequality between the networks during training. We integrated MED-RL into five of the most common ensemble-based deep RL algorithms for both continuous and discrete control tasks and evaluated it on six Mujoco environments and six Atari games. Our results show that MED-RL-augmented algorithms significantly outperform their unregularized counterparts and in some cases achieve more than 300$\%$ performance gains. | https://openreview.net/pdf/01f7a1ad9dd4d2d9285af9a2c926b0cc1a282f4f.pdf |
Graph Neural Networks with Learnable Structural and Positional Representations | https://openreview.net/forum?id=wTTjnvGphYj | https://openreview.net/forum?id=wTTjnvGphYj | Vijay Prakash Dwivedi,Anh Tuan Luu,Thomas Laurent,Yoshua Bengio,Xavier Bresson | ICLR 2022,Poster | Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information of nodes, which decreases the representation power of GNNs to distinguish e.g. isomorphic nodes and other graph symmetries. One approach to tackle this issue is to introduce a Positional Encoding (PE) of nodes and inject it into the input layer, as in Transformers. Possible graph PEs are the Laplacian eigenvectors. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two essential properties. We introduce a novel generic architecture which we call \texttt{LSPE} (Learnable Structural and Positional Encodings). We investigate several sparse and fully-connected (Transformer-like) GNNs, and observe a performance increase for molecular datasets, from $1.79\%$ up to $64.14\%$ when considering learnable PE for both GNN classes. | https://openreview.net/pdf/d2f6438ccb5d7ec7570953e93a19f994a5894c93.pdf |
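A small NumPy sketch of the Laplacian-eigenvector positional encodings mentioned in the entry above, assuming a dense adjacency matrix; the number of PE dimensions is an arbitrary choice, and the eigenvector sign ambiguity (commonly handled with random sign flips during training) is left unresolved here.

```python
import numpy as np

def laplacian_pe(adj, k=4):
    """Return k non-trivial eigenvectors of the symmetric normalized Laplacian
    as node positional encodings (shape: n_nodes x k)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    L = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    # Drop the trivial eigenvector (eigenvalue ~ 0); keep the next k.
    return eigvecs[:, 1:k + 1]

# Toy usage: a 6-node cycle graph.
n = 6
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
pe = laplacian_pe(adj, k=2)
print(pe.shape)   # (6, 2); eigenvector signs are arbitrary
```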
Zero-Shot Self-Supervised Learning for MRI Reconstruction | https://openreview.net/forum?id=085y6YPaYjP | https://openreview.net/forum?id=085y6YPaYjP | Burhaneddin Yaman,Seyed Amir Hossein Hosseini,Mehmet Akcakaya | ICLR 2022,Poster | Deep learning (DL) has emerged as a powerful tool for accelerated MRI reconstruction, but often necessitates a database of fully-sampled measurements for training. Recent self-supervised and unsupervised learning approaches enable training without fully-sampled data. However, a database of undersampled measurements may not be available in many scenarios, especially for scans involving contrast or translational acquisitions in development. Moreover, recent studies show that database-trained models may not generalize well when the unseen measurements differ in terms of sampling pattern, acceleration rate, SNR, image contrast, and anatomy. Such challenges necessitate a new methodology to enable subject-specific DL MRI reconstruction without external training datasets, since it is clinically imperative to provide high-quality reconstructions that can be used to identify lesions/disease for $\textit{every individual}$. In this work, we propose a zero-shot self-supervised learning approach to perform subject-specific accelerated DL MRI reconstruction to tackle these issues. The proposed approach partitions the available measurements from a single scan into three disjoint sets. Two of these sets are used to enforce data consistency and define loss during training for self-supervision, while the last set serves to self-validate, establishing an early stopping criterion. In the presence of models pre-trained on a database with different image characteristics, we show that the proposed approach can be combined with transfer learning for faster convergence time and reduced computational complexity. | https://openreview.net/pdf/72bc6e074441b3be4d8499c16c02fbae92fad23f.pdf |
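A minimal sketch of the three-way measurement split described in the entry above, assuming the acquired k-space locations of a single scan are given as flattened indices; the split fractions are illustrative choices, not the paper's settings.

```python
import numpy as np

def partition_measurements(sampled_idx, frac_loss=0.3, frac_val=0.2, seed=0):
    """Split the acquired k-space locations of one scan into three disjoint
    sets: data-consistency, training-loss, and self-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(sampled_idx)
    n = len(idx)
    n_val = int(frac_val * n)
    n_loss = int(frac_loss * n)
    val, loss, dc = idx[:n_val], idx[n_val:n_val + n_loss], idx[n_val + n_loss:]
    return dc, loss, val

# Toy usage: 400 sampled k-space locations out of a 256x256 grid (flattened).
sampled = np.random.default_rng(1).choice(256 * 256, size=400, replace=False)
dc, loss, val = partition_measurements(sampled)
print(len(dc), len(loss), len(val))   # disjoint by construction
```

In this sketch the `val` set would drive the early-stopping criterion, while `dc` and `loss` play the roles of the data-consistency and self-supervision sets.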
Policy Smoothing for Provably Robust Reinforcement Learning | https://openreview.net/forum?id=mwdfai8NBrJ | https://openreview.net/forum?id=mwdfai8NBrJ | Aounon Kumar,Alexander Levine,Soheil Feizi | ICLR 2022,Poster | The study of provable adversarial robustness for deep neural networks (DNNs) has mainly focused on $\textit{static}$ supervised learning tasks such as image classification. However, DNNs have been used extensively in real-world $\textit{adaptive}$ tasks such as reinforcement learning (RL), making such systems vulnerable to adversarial attacks as well. Prior works in provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting. But in the real world, an RL adversary can infer the defense strategy used by the victim agent by observing the states, actions, etc. from previous time-steps and adapt itself to produce stronger attacks in future steps (e.g., by focusing more on states critical to the agent's performance). We present an efficient procedure, designed specifically to defend against an adaptive RL adversary, that can directly certify the total reward without requiring the policy to be robust at each time-step. Focusing on randomized smoothing based defenses, our main theoretical contribution is to prove an $\textit{adaptive version}$ of the Neyman-Pearson Lemma -- a key lemma for smoothing-based certificates -- where the adversarial perturbation at a particular time can be a stochastic function of current and previous observations and states as well as previous actions. Building on this result, we propose $\textit{policy smoothing}$ where the agent adds a Gaussian noise to its observation at each time-step before passing it through the policy function. Our robustness certificates guarantee that the final total reward obtained by policy smoothing remains above a certain threshold, even though the actions at intermediate time-steps may change under the attack. We show that our certificates are $\textit{tight}$ by constructing a worst-case scenario that achieves the bounds derived in our analysis. Our experiments on various environments like Cartpole, Pong, Freeway and Mountain Car show that our method can yield meaningful robustness guarantees in practice.
| https://openreview.net/pdf/b1ed375c6d8559126ca3c590cf47feff1ae81aeb.pdf |
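A minimal sketch of the policy-smoothing defense described in the entry above: isotropic Gaussian noise is added to every observation before it reaches the policy. The base policy and the noise scale are placeholders, and the certification step (the adaptive Neyman-Pearson argument) is not shown.

```python
import numpy as np

class SmoothedPolicy:
    """Wraps a base policy so every observation is perturbed with
    isotropic Gaussian noise before action selection."""

    def __init__(self, base_policy, sigma=0.2, seed=0):
        self.base_policy = base_policy
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def act(self, obs):
        noisy_obs = obs + self.sigma * self.rng.standard_normal(obs.shape)
        return self.base_policy(noisy_obs)

# Placeholder base policy: bang-bang control on the first observation coordinate.
base = lambda obs: 1 if obs[0] > 0.0 else 0

policy = SmoothedPolicy(base, sigma=0.2)
obs = np.array([0.05, -0.3, 0.0, 0.1])   # e.g., a CartPole-like observation
actions = [policy.act(obs) for _ in range(10)]
print(actions)   # actions become stochastic near the decision boundary
```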
The Close Relationship Between Contrastive Learning and Meta-Learning | https://openreview.net/forum?id=gICys3ITSmj | https://openreview.net/forum?id=gICys3ITSmj | Renkun Ni,Manli Shu,Hossein Souri,Micah Goldblum,Tom Goldstein | ICLR 2022,Poster | Contrastive learning has recently taken off as a paradigm for learning from unlabeled data. In this paper, we discuss the close relationship between contrastive learning and meta-learning under a certain task distribution. We complement this observation by showing that established meta-learning methods, such as Prototypical Networks, achieve comparable performance to SimCLR when paired with this task distribution. This relationship can be leveraged by taking established techniques from meta-learning, such as task-based data augmentation, and showing that they benefit contrastive learning as well. These tricks also benefit state-of-the-art self-supervised learners without using negative pairs such as BYOL, which achieves 94.6\% accuracy on CIFAR-10 using a self-supervised ResNet-18 feature extractor trained with our meta-learning tricks. We conclude that existing advances designed for contrastive learning or meta-learning can be exploited to benefit the other, and it is better for contrastive learning researchers to take lessons from the meta-learning literature (and vice-versa) than to reinvent the wheel. | https://openreview.net/pdf/50e04918f102be63b77173d7b13e7c3a95d4d4b7.pdf |
Towards Understanding Generalization via Decomposing Excess Risk Dynamics | https://openreview.net/forum?id=rS9-7AuPKWK | https://openreview.net/forum?id=rS9-7AuPKWK | Jiaye Teng,Jianhao Ma,Yang Yuan | ICLR 2022,Poster | Generalization is one of the fundamental issues in machine learning. However, traditional techniques like uniform convergence may be unable to explain generalization under overparameterization \citep{nagarajan2019uniform}. As alternative approaches, techniques based on stability analyze the training dynamics and derive algorithm-dependent generalization bounds. Unfortunately, the stability-based bounds are still far from explaining the surprising generalization in deep learning since neural networks usually suffer from unsatisfactory stability. This paper proposes a novel decomposition framework to improve the stability-based bounds via a more fine-grained analysis of the signal and noise, inspired by the observation that neural networks converge relatively slowly when fitting noise (which indicates better stability). Concretely, we decompose the excess risk dynamics and apply the stability-based bound only on the noise component. The decomposition framework performs well in both linear regimes (overparameterized linear regression) and non-linear regimes (diagonal matrix recovery). Experiments on neural networks verify the utility of the decomposition framework. | https://openreview.net/pdf/fb22a53f794740f074cfc57fbfeb13cb402a0dc1.pdf |
Graph Auto-Encoder via Neighborhood Wasserstein Reconstruction | https://openreview.net/forum?id=ATUh28lnSuW | https://openreview.net/forum?id=ATUh28lnSuW | Mingyue Tang,Pan Li,Carl Yang | ICLR 2022,Poster | Graph neural networks (GNNs) have drawn significant research attention recently, mostly under the setting of semi-supervised learning. When task-agnostic representations are preferred or supervision is simply unavailable, the auto-encoder framework comes in handy with a natural graph reconstruction objective for unsupervised GNN training. However, existing graph auto-encoders are designed to reconstruct the direct links, so GNNs trained in this way are only optimized towards proximity-oriented graph mining tasks, and will fall short when the topological structures matter. In this work, we revisit the graph encoding process of GNNs which essentially learns to encode the neighborhood information of each node into an embedding vector, and propose a novel graph decoder to reconstruct the entire neighborhood information regarding both proximity and structure via Neighborhood Wasserstein Reconstruction (NWR). Specifically, from the GNN embedding of each node, NWR jointly predicts its node degree and neighbor feature distribution, where the distribution prediction adopts an optimal-transport loss based on the Wasserstein distance. Extensive experiments on both synthetic and real-world network datasets show that the unsupervised node representations learned with NWR are much more advantageous in structure-oriented graph mining tasks, while also achieving competitive performance in proximity-oriented ones. | https://openreview.net/pdf/f6c2facccd48113154042dcd9e300784da586675.pdf |
FairCal: Fairness Calibration for Face Verification | https://openreview.net/forum?id=nRj0NcmSuxb | https://openreview.net/forum?id=nRj0NcmSuxb | Tiago Salvador,Stephanie Cairns,Vikram Voleti,Noah Marshall,Adam M Oberman | ICLR 2022,Poster | Despite being widely used, face recognition models suffer from bias: the probability of a false positive (incorrect face match) strongly depends on sensitive attributes such as the ethnicity of the face. As a result, these models can disproportionately and negatively impact minority groups, particularly when used by law enforcement. The majority of bias reduction methods have several drawbacks: they use an end-to-end retraining approach, may not be feasible due to privacy issues, and often reduce accuracy. An alternative approach is post-processing methods that build fairer decision classifiers using the features of pre-trained models, thus avoiding the cost of retraining. However, they still have drawbacks: they reduce accuracy (AGENDA, FTC), or require retuning for different false positive rates (FSN). In this work, we introduce the Fairness Calibration (FairCal) method, a post-training approach that simultaneously: (i) increases model accuracy (improving the state-of-the-art), (ii) produces fairly-calibrated probabilities, (iii) significantly reduces the gap in the false positive rates, (iv) does not require knowledge of the sensitive attribute, and (v) does not require retraining, training an additional model or retuning. We apply it to the task of Face Verification, and obtain state-of-the-art results with all the above advantages. | https://openreview.net/pdf/ddb357dbb2226fb417398315a7b416c8f611f59b.pdf |
Cross-Lingual Transfer with Class-Weighted Language-Invariant Representations | https://openreview.net/forum?id=k7-s5HSSPE5 | https://openreview.net/forum?id=k7-s5HSSPE5 | Ruicheng Xian,Heng Ji,Han Zhao | ICLR 2022,Poster | Recent advances in neural modeling have produced deep multilingual language models capable of extracting cross-lingual knowledge from non-parallel texts and enabling zero-shot downstream transfer. While their success is often attributed to shared representations, quantitative analyses are limited. Towards a better understanding, through empirical analyses, we show that the invariance of feature representations across languages—an effect of shared representations—strongly correlates with transfer performance. We also observe that distributional shifts in class priors between source and target language task data negatively affect performance, a largely overlooked issue that could cause negative transfer with existing unsupervised approaches. Based on these findings, we propose and evaluate a method for unsupervised transfer, called importance-weighted domain alignment (IWDA), that performs representation alignment with prior shift estimation and correction using unlabeled target language task data. Experiments demonstrate its superiority under large prior shifts, and show further performance gains when combined with existing semi-supervised learning techniques. | https://openreview.net/pdf/c75daaf7c5ca8ed7ab01e92c4cc16d55f5d6aff5.pdf |
ComPhy: Compositional Physical Reasoning of Objects and Events from Videos | https://openreview.net/forum?id=PgNEYaIc81Q | https://openreview.net/forum?id=PgNEYaIc81Q | Zhenfang Chen,Kexin Yi,Yunzhu Li,Mingyu Ding,Antonio Torralba,Joshua B. Tenenbaum,Chuang Gan | ICLR 2022,Poster | Objects' motions in nature are governed by complex interactions and their properties. While some properties, such as shape and material, can be identified via the object's visual appearances, others like mass and electric charge are not directly visible. The compositionality between the visible and hidden properties poses unique challenges for AI models to reason about the physical world, whereas humans can effortlessly infer them from limited observations. Existing studies on video reasoning mainly focus on visually observable elements such as object appearance, movement, and contact interaction. In this paper, we take an initial step to highlight the importance of inferring the hidden physical properties not directly observable from visual appearances, by introducing the Compositional Physical Reasoning (ComPhy) dataset. For a given set of objects, ComPhy includes a few videos of them moving and interacting under different initial conditions. The model is evaluated based on its capability to unravel the compositional hidden properties, such as mass and charge, and use this knowledge to answer a set of questions posed on one of the videos. Evaluation results of several state-of-the-art video reasoning models on ComPhy show unsatisfactory performance as they fail to capture these hidden properties. We further propose an oracle neural-symbolic framework named Compositional Physics Learner (CPL), combining visual perception, physical property learning, dynamic prediction, and symbolic execution into a unified framework. CPL can effectively identify objects' physical properties from their interactions and predict their dynamics to answer questions. | https://openreview.net/pdf/2609709ef581b49b8b74d7d8cfd86d47897465ef.pdf |
An Information Fusion Approach to Learning with Instance-Dependent Label Noise | https://openreview.net/forum?id=ecH2FKaARUp | https://openreview.net/forum?id=ecH2FKaARUp | Zhimeng Jiang,Kaixiong Zhou,Zirui Liu,Li Li,Rui Chen,Soo-Hyun Choi,Xia Hu | ICLR 2022,Poster | Instance-dependent label noise (IDN) widely exists in real-world datasets and usually misleads the training of deep neural networks. Noise transition matrix (NTM) (i.e., the probability that clean labels flip into noisy labels) is used to characterize the label noise and can be adopted to bridge the gap between clean and noisy underlying data distributions. However, most instances are long-tail, i.e., the number of occurrences of each instance is usually limited, which leads to the gap between the underlying distribution and the empirical distribution. Therefore, the genuine problem caused by IDN is \emph{empirical}, instead of underlying, \emph{data distribution mismatch} during training. To directly tackle the empirical distribution mismatch problem, we propose \emph{posterior transition matrix} (PTM) to posteriorly model label noise given limited observed noisy labels, which achieves \emph{statistically consistent classifiers}. Note that even if an instance is corrupted by the same NTM, the intrinsic randomness incurs different noisy labels, and thus requires different correction methods. Motivated by this observation, we propose an \textbf{I}nformation \textbf{F}usion (IF) approach to fine-tune the NTM based on the estimated PTM. Specifically, we adopt the noisy labels and model predicted probabilities to estimate the PTM and then correct the NTM in \emph{forward propagation}. Empirical evaluations on synthetic and real-world datasets demonstrate that our method is superior to the state-of-the-art approaches, and achieves more stable training for instance-dependent label noise. | https://openreview.net/pdf/a975e6d9572bb58e97d4302f7bd2b003d925154b.pdf |
On Redundancy and Diversity in Cell-based Neural Architecture Search | https://openreview.net/forum?id=rFJWoYoxrDB | https://openreview.net/forum?id=rFJWoYoxrDB | Xingchen Wan,Binxin Ru,Pedro M Esperança,Zhenguo Li | ICLR 2022,Poster | Searching for the architecture cells is a dominant paradigm in NAS. However, little attention has been devoted to the analysis of the cell-based search spaces even though it is highly important for the continual development of NAS. In this work, we conduct an empirical post-hoc analysis of architectures from the popular cell-based search spaces and find that the existing search spaces contain a high degree of redundancy: the architecture performance is less sensitive to changes at large parts of the cells, and universally adopted design rules, like the explicit search for a reduction cell, significantly increase the complexities but have very limited impact on the performance. Across architectures found by a diverse set of search strategies, we consistently find that the parts of the cells that do matter for architecture performance often follow similar and simple patterns. By constraining cells to include these patterns, randomly sampled architectures can match or even outperform the state of the art. These findings cast doubt on our ability to discover truly novel architectures in the existing cell-based search spaces and inspire our suggestions for improvement to guide future NAS research. Code is available at https://github.com/xingchenwan/cell-based-NAS-analysis. | https://openreview.net/pdf/9f2dd3fa246906ee1dfabf21cda76b01d386c017.pdf |
Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers | https://openreview.net/forum?id=U0k7XNTiFEq | https://openreview.net/forum?id=U0k7XNTiFEq | Guodong Zhang,Aleksandar Botev,James Martens | ICLR 2022,Poster | Training very deep neural networks is still an extremely challenging task. The common solution is to use shortcut connections and normalization layers, which are both crucial ingredients in the popular ResNet architecture. However, there is strong evidence to suggest that ResNets behave more like ensembles of shallower networks than truly deep ones. Recently, it was shown that deep vanilla networks (i.e.~networks without normalization layers or shortcut connections) can be trained as fast as ResNets by applying certain transformations to their activation functions. However, this method (called Deep Kernel Shaping) isn't fully compatible with ReLUs, and produces networks that overfit significantly more than ResNets on ImageNet. In this work, we rectify this situation by developing a new type of transformation that is fully compatible with a variant of ReLUs -- Leaky ReLUs. We show in experiments that our method, which introduces negligible extra computational cost, achieves validation accuracies with deep vanilla networks that are competitive with ResNets (of the same width/depth), and significantly higher than those obtained with the Edge of Chaos (EOC) method. And unlike with EOC, the validation accuracies we obtain do not get worse with depth. | https://openreview.net/pdf/e96e7c563d49e5a3e11edbc1d1106d452cd8f97b.pdf |
Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias | https://openreview.net/forum?id=y_op4lLLaWL | https://openreview.net/forum?id=y_op4lLLaWL | Frederic Koehler,Viraj Mehta,Chenghui Zhou,Andrej Risteski | ICLR 2022,Poster | Variational Autoencoders (VAEs) are one of the most commonly used generative models, particularly for image data. A prominent difficulty in training VAEs is data that is supported on a lower dimensional manifold. Recent work by Dai and Wipf (2020) proposes a two-stage training algorithm for VAEs, based on a conjecture that in standard VAE training the generator will converge to a solution with 0 variance which is correctly supported on the ground truth manifold. They gave partial support for this conjecture by showing that some optima of the VAE loss do satisfy this property, but did not analyze the training dynamics. In this paper, we show that for linear encoders/decoders, the conjecture is true—that is the VAE training does recover a generator with support equal to the ground truth manifold—and does so due to an implicit bias of gradient descent rather than merely the VAE loss itself. In the nonlinear case, we show that VAE training frequently learns a higher-dimensional manifold which is a superset of the ground truth manifold. | https://openreview.net/pdf/5fc178a777c9341b35f8427741eb86911e4b256a.pdf |
No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models | https://openreview.net/forum?id=cuvga_CiVND | https://openreview.net/forum?id=cuvga_CiVND | Chen Liang,Haoming Jiang,Simiao Zuo,Pengcheng He,Xiaodong Liu,Jianfeng Gao,Weizhu Chen,Tuo Zhao | ICLR 2022,Poster | Recent research has shown the existence of significant redundancy in large Transformer models. One can prune the redundant parameters without significantly sacrificing the generalization performance. However, we question whether the redundant parameters could have contributed more if they were properly trained. To answer this question, we propose a novel training strategy that encourages all parameters to be trained sufficiently. Specifically, we adaptively adjust the learning rate for each parameter according to its sensitivity, a robust gradient-based measure reflecting this parameter's contribution to the model performance. A parameter with low sensitivity is redundant, and we improve its fitting by increasing its learning rate. In contrast, a parameter with high sensitivity is well-trained, and we regularize it by decreasing its learning rate to prevent further overfitting. We conduct extensive experiments on natural language understanding, neural machine translation, and image classification to demonstrate the effectiveness of the proposed schedule. Analysis shows that the proposed schedule indeed reduces the redundancy and improves generalization performance. | https://openreview.net/pdf/f29d145db699800c70bb362bb205f16575e30db7.pdf |
SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations | https://openreview.net/forum?id=aBsCjcPu_tE | https://openreview.net/forum?id=aBsCjcPu_tE | Chenlin Meng,Yutong He,Yang Song,Jiaming Song,Jiajun Wu,Jun-Yan Zhu,Stefano Ermon | ICLR 2022,Poster | Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user inputs (e.g., hand-drawn colored strokes) and realism of the synthesized images. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide in a form of manipulating RGB pixels, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing. | https://openreview.net/pdf/e3d44fdd105753b51dcb91a908082d6317713ae9.pdf |
Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation | https://openreview.net/forum?id=xNOVfCCvDpM | https://openreview.net/forum?id=xNOVfCCvDpM | Julius Adebayo,Michael Muelly,Harold Abelson,Been Kim | ICLR 2022,Poster | We investigate whether three types of post hoc model explanations–feature attribution, concept activation, and training point ranking–are effective for detecting a model’s reliance on spurious signals in the training data. Specifically, we consider the scenario where the spurious signal to be detected is unknown, at test-time, to the user of the explanation method. We design an empirical methodology that uses semi-synthetic datasets along with pre-specified spurious artifacts to obtain models that verifiably rely on these spurious training signals. We then provide a suite of metrics that assess an explanation method’s reliability for spurious signal detection under various conditions. We find that the post hoc explanation methods tested are ineffective when the spurious artifact is unknown at test-time especially for non-visible artifacts like a background blur. Further, we find that feature attribution methods are susceptible to erroneously indicating dependence on spurious signals even when the model being explained does not rely on spurious artifacts. This finding casts doubt on the utility of these approaches, in the hands of a practitioner, for detecting a model’s reliance on spurious signals. | https://openreview.net/pdf/25d1d4e7a5f415eccf5a40f584ff83b2d54db779.pdf |
Generalizing Few-Shot NAS with Gradient Matching | https://openreview.net/forum?id=_jMtny3sMKU | https://openreview.net/forum?id=_jMtny3sMKU | Shoukang Hu,Ruochen Wang,Lanqing HONG,Zhenguo Li,Cho-Jui Hsieh,Jiashi Feng | ICLR 2022,Poster | Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search. One-Shot methods tackle this challenge by training one supernet to approximate the performance of every architecture in the search space via weight-sharing, thereby drastically reducing the search cost. However, due to coupled optimization between child architectures caused by weight-sharing, One-Shot supernet's performance estimation could be inaccurate, leading to degraded search outcomes. To address this issue, Few-Shot NAS reduces the level of weight-sharing by splitting the One-Shot supernet into multiple separated sub-supernets via edge-wise (layer-wise) exhaustive partitioning. Since each partition of the supernet is not equally important, it necessitates the design of a more effective splitting criterion. In this work, we propose a gradient matching score (GM) that leverages gradient information at the shared weight for making informed splitting decisions. Intuitively, gradients from different child models can be used to identify whether they agree on how to update the shared modules, and subsequently to decide if they should share weight. Compared with exhaustive partitioning, the proposed criterion significantly reduces the branching factor per edge. This allows us to split more edges (layers) for a given budget, resulting in substantially improved performance as NAS search spaces usually include dozens of edges (layers). Extensive empirical evaluations of the proposed method on a wide range of search spaces (NASBench-201, DARTS, MobileNet Space), datasets (cifar10, cifar100, ImageNet) and search algorithms (DARTS, SNAS, RSPS, ProxylessNAS, OFA) demonstrate that it significantly outperforms its Few-Shot counterparts while surpassing previous comparable methods in terms of the accuracy of derived architectures. Our code is available at https://github.com/skhu101/GM-NAS. | https://openreview.net/pdf/ad7965c59fc903823d27880126728fb09cd8e4dd.pdf |
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training | https://openreview.net/forum?id=VBZJ_3tz-t | https://openreview.net/forum?id=VBZJ_3tz-t | Shiwei Liu,Tianlong Chen,Xiaohan Chen,Li Shen,Decebal Constantin Mocanu,Zhangyang Wang,Mykola Pechenizkiy | ICLR 2022,Poster | Random pruning is arguably the most naive way to attain sparsity in neural networks, but has been deemed uncompetitive by either post-training pruning or sparse training. In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks. Without any delicate pruning criteria or carefully pursued sparsity structures, we empirically demonstrate that sparsely training a randomly pruned network from scratch can match the performance of its dense equivalent. There are two key factors that contribute to this revival: (i) $the network sizes matter$: as the original dense networks grow wider and deeper, the performance of training a randomly pruned sparse network will quickly grow to matching that of its dense equivalent, even at high sparsity ratios; (ii) $appropriate layer-wise sparsity ratios$ can be pre-chosen for sparse training, which shows to be another important performance booster. Simple as it looks, a randomly pruned subnetwork of Wide ResNet-50 can be sparsely trained to outperform a dense Wide ResNet-50, on ImageNet. We also observed such randomly pruned networks outperform dense counterparts in other favorable aspects, such as out-of-distribution detection, uncertainty estimation, and adversarial robustness. Overall, our results strongly suggest there is larger-than-expected room for sparse training at scale, and the benefits of sparsity might be more universal beyond carefully designed pruning. Our source code can be found at https://github.com/VITA-Group/Random_Pruning. | https://openreview.net/pdf/92cd52f575aab3aac6f6934474fa0fb6f56652b9.pdf |
switch-GLAT: Multilingual Parallel Machine Translation Via Code-Switch Decoder | https://openreview.net/forum?id=5HvpvYd68b | https://openreview.net/forum?id=5HvpvYd68b | Zhenqiao Song,Hao Zhou,Lihua Qian,Jingjing Xu,Shanbo Cheng,Mingxuan Wang,Lei Li | ICLR 2022,Poster | Multilingual machine translation aims to develop a single model for multiple language directions. However, existing multilingual models based on Transformer are limited in terms of both translation performance and inference speed. In this paper, we propose switch-GLAT, a non-autoregressive multilingual machine translation model with a code-switch decoder. It can generate contextual code-switched translations for a given source sentence, and perform code-switch back-translation, greatly boosting multilingual translation performance. In addition, its inference is highly efficient thanks to its parallel decoder. Experiments show that our proposed switch-GLAT outperforms the multilingual Transformer with as much as 0.74 BLEU improvement and 6.2x faster decoding speed in inference. | https://openreview.net/pdf/24886b6f1aa837364f7c14635d0af1b9dadb0fe7.pdf |
DictFormer: Tiny Transformer with Shared Dictionary | https://openreview.net/forum?id=GWQWAeE9EpB | https://openreview.net/forum?id=GWQWAeE9EpB | Qian Lou,Ting Hua,Yen-Chang Hsu,Yilin Shen,Hongxia Jin | ICLR 2022,Poster | We introduce DictFormer with an efficient shared dictionary to provide a compact, fast, and accurate transformer model. DictFormer significantly reduces the redundancy in the transformer's parameters by replacing the prior transformer's parameters with a compact shared dictionary, a few unshared coefficients, and indices. Also, DictFormer enables faster computations since expensive weight multiplications are converted into cheap shared look-ups on the dictionary and a few linear projections. Training the dictionary and coefficients is not trivial since the indices used to look up the dictionary are not differentiable. We adopt a sparse-constraint training with $l_1$ norm relaxation to learn coefficients and indices in DictFormer. DictFormer is flexible to support different model sizes by dynamically changing the dictionary size. Compared to existing lightweight Transformers, DictFormer consistently reduces model size over Transformer on multiple tasks, e.g., machine translation, abstractive summarization, and language modeling. Extensive experiments show that DictFormer reduces $3.6\times$ to $8.9\times$ model size with similar accuracy over multiple tasks, compared to Transformer. | https://openreview.net/pdf/63e0216a03407ae67011d9a68ba92781cd196e13.pdf |
Training Transition Policies via Distribution Matching for Complex Tasks | https://openreview.net/forum?id=6vkzF28Hur8 | https://openreview.net/forum?id=6vkzF28Hur8 | JU-SEUNG BYUN,Andrew Perrault | ICLR 2022,Poster | Humans decompose novel complex tasks into simpler ones to exploit previously learned skills. Analogously, hierarchical reinforcement learning seeks to leverage lower-level policies for simple tasks to solve complex ones. However, because each lower-level policy induces a different distribution of states, transitioning from one lower-level policy to another may fail due to an unexpected starting state. We introduce transition policies that smoothly connect lower-level policies by producing a distribution of states and actions that matches what is expected by the next policy. Training transition policies is challenging because the natural reward signal—whether the next policy can execute its subtask successfully—is sparse. By training transition policies via adversarial inverse reinforcement learning to match the distribution of expected states and actions, we avoid relying on task-based reward. To further improve performance, we use deep Q-learning with a binary action space to determine when to switch from a transition policy to the next pre-trained policy, using the success or failure of the next subtask as the reward. Although the reward is still sparse, the problem is less severe due to the simple binary action space. We demonstrate our method on continuous bipedal locomotion and arm manipulation tasks that require diverse skills. We show that it smoothly connects the lower-level policies, achieving higher success rates than previous methods that search for successful trajectories based on a reward function, but do not match the state distribution. | https://openreview.net/pdf/b0fad4cf84b44e02b873a928fb82cf77473641fa.pdf |
GDA-AM: ON THE EFFECTIVENESS OF SOLVING MINIMAX OPTIMIZATION VIA ANDERSON MIXING | https://openreview.net/forum?id=3YqeuCVwy1d | https://openreview.net/forum?id=3YqeuCVwy1d | Huan He,Shifan Zhao,Yuanzhe Xi,Joyce Ho,Yousef Saad | ICLR 2022,Poster | Many modern machine learning algorithms such as generative adversarial networks (GANs) and adversarial training can be formulated as minimax optimization. Gradient descent ascent (GDA) is the most commonly used algorithm due to its simplicity. However, GDA can converge to non-optimal minimax points. We propose a new minimax optimization framework, GDA-AM, that views the GDA dynamics as a fixed-point iteration and solves it using Anderson Mixing to converge to the local minimax. It addresses the diverging issue of simultaneous GDA and accelerates the convergence of alternating GDA. We show theoretically that the algorithm can achieve global convergence for bilinear problems under mild conditions. We also empirically show that GDA-AM solves a variety of minimax problems and improves GAN training on several datasets. | https://openreview.net/pdf/0c1222f9d7edda5670c919e0cbd20b7afc79e30e.pdf |
On feature learning in neural networks with global convergence guarantees | https://openreview.net/forum?id=PQTW3iG4sC- | https://openreview.net/forum?id=PQTW3iG4sC- | Zhengdao Chen,Eric Vanden-Eijnden,Joan Bruna | ICLR 2022,Poster | We study the gradient flow optimization of over-parameterized neural networks (NNs) in a setup that allows feature learning while admitting non-asymptotic global convergence guarantees. First, we prove that for wide shallow NNs under the mean-field (MF) scaling and with a general class of activation functions, when the input dimension is at least the size of the training set, the training loss converges to zero at a linear rate under gradient flow. Building upon this analysis, we study a model of wide multi-layer NNs with random and untrained weights in earlier layers, and also prove a linear-rate convergence of the training loss to zero, regardless of the input dimension. We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart. | https://openreview.net/pdf/7ae8138e2ef895c84726f09da2670108d7ddfa81.pdf |
The Three Stages of Learning Dynamics in High-dimensional Kernel Methods | https://openreview.net/forum?id=EQmAP4F859 | https://openreview.net/forum?id=EQmAP4F859 | Nikhil Ghosh,Song Mei,Bin Yu | ICLR 2022,Poster | To understand how deep learning works, it is crucial to understand the training dynamics of neural networks. Several interesting hypotheses about these dynamics have been made based on empirically observed phenomena, but there exists a limited theoretical understanding of when and why such phenomena occur. In this paper, we consider the training dynamics of gradient flow on kernel least-squares objectives, which is a limiting dynamics of SGD trained neural networks. Using precise high-dimensional asymptotics, we characterize the dynamics of the fitted model in two “worlds”: in the Oracle World the model is trained on the population distribution and in the Empirical World the model is trained on an i.i.d. finite dataset. We show that under mild conditions on the kernel and $L^2$ target regression function the training dynamics have three stages that are based on the behaviors of the models in the two worlds. Our theoretical results also mathematically formalize some interesting deep learning phenomena. Specifically, in our setting we show that SGD progressively learns more complex functions and that there is a "deep bootstrap" phenomenon: during the second stage, the test error of both worlds remains close despite the empirical training error being much smaller. Finally, we give a concrete example comparing the dynamics of two different kernels which shows that faster training is not necessary for better generalization. | https://openreview.net/pdf/273ff13b48f42a02263556d771e28712207508bc.pdf |
When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently? | https://openreview.net/forum?id=6MmiS0HUJHR | https://openreview.net/forum?id=6MmiS0HUJHR | Ziang Song,Song Mei,Yu Bai | ICLR 2022,Poster | Multi-agent reinforcement learning has made substantial empirical progress in solving games with a large number of players. However, theoretically, the best known sample complexity for finding a Nash equilibrium in general-sum games scales exponentially in the number of players due to the size of the joint action space, and there is a matching exponential lower bound. This paper investigates what learning goals admit better sample complexities in the setting of $m$-player general-sum Markov games with $H$ steps, $S$ states, and $A_i$ actions per player. First, we design algorithms for learning an $\epsilon$-Coarse Correlated Equilibrium (CCE) in $\widetilde{\mathcal{O}}(H^5S\max_{i\le m} A_i / \epsilon^2)$ episodes, and an $\epsilon$-Correlated Equilibrium (CE) in $\widetilde{\mathcal{O}}(H^6S\max_{i\le m} A_i^2 / \epsilon^2)$ episodes. This is the first line of results for learning CCE and CE with sample complexities polynomial in $\max_{i\le m} A_i$. Our algorithm for learning CE integrates an adversarial bandit subroutine which minimizes a weighted swap regret, along with several novel designs in the outer loop. Second, we consider the important special case of Markov Potential Games, and design an algorithm that learns an $\epsilon$-approximate Nash equilibrium within $\widetilde{\mathcal{O}}(S\sum_{i\le m} A_i / \epsilon^3)$ episodes (when only highlighting the dependence on $S$, $A_i$, and $\epsilon$), which only depends linearly in $\sum_{i\le m} A_i$ and significantly improves over the existing efficient algorithm in the $\epsilon$ dependence. Overall, our results shed light on what equilibria or structural assumptions on the game may enable sample-efficient learning with many players. | https://openreview.net/pdf/4897aa8eae5efc42c6f0c8f6dcc2f6ce0989de42.pdf |
Neural Networks as Kernel Learners: The Silent Alignment Effect | https://openreview.net/forum?id=1NvflqAdoom | https://openreview.net/forum?id=1NvflqAdoom | Alexander Atanasov,Blake Bordelon,Cengiz Pehlevan | ICLR 2022,Poster | Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolves in eigenstructure while small and before the loss appreciably decreases, and grows only in overall scale afterwards. We show that such an effect takes place in homogenous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training, and then evolves in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on the depth. We also demonstrate that non-whitened data can weaken the silent alignment effect. | https://openreview.net/pdf/06f8c9a443cfe06f78852aa3be34a2ff89b75293.pdf |
Learning Object-Oriented Dynamics for Planning from Text | https://openreview.net/forum?id=B6EIcyp-Rb7 | https://openreview.net/forum?id=B6EIcyp-Rb7 | Guiliang Liu,Ashutosh Adhikari,Amir-massoud Farahmand,Pascal Poupart | ICLR 2022,Poster | The advancement of dynamics models enables model-based planning in complex environments. Existing dynamics models commonly study image-based games with fully observable states. Generalizing these models to Text-Based Games (TBGs), which commonly describe the partially observable states with noisy text observations, is challenging. In this work, we propose an Object-Oriented Text Dynamics (OOTD) model that enables planning algorithms to solve decision-making problems in text domains. OOTD predicts a memory graph that dynamically remembers the history of object observations and filters object-irrelevant information. To facilitate the robustness of dynamics, our OOTD model identifies the objects influenced by input actions and predicts the belief of object states with independently parameterized transition layers. We develop variational objectives under the object-supervised and self-supervised settings to model the stochasticity of predicted dynamics. Empirical results show OOTD-based planner significantly outperforms model-free baselines in terms of sample efficiency and running scores. | https://openreview.net/pdf/af2da968af3f640735e0884e14b4c9f80134a9cf.pdf |
An Operator Theoretic View On Pruning Deep Neural Networks | https://openreview.net/forum?id=pWBNOgdeURp | https://openreview.net/forum?id=pWBNOgdeURp | William T Redman,MARIA FONOBEROVA,Ryan Mohr,Yannis Kevrekidis,Igor Mezic | ICLR 2022,Poster | The discovery of sparse subnetworks that are able to perform as well as full models has found broad applied and theoretical interest. While many pruning methods have been developed to this end, the naïve approach of removing parameters based on their magnitude has been found to be as robust as more complex, state-of-the-art algorithms. The lack of theory behind magnitude pruning's success, especially pre-convergence, and its relation to other pruning methods, such as gradient based pruning, are outstanding open questions in the field that are in need of being addressed. We make use of recent advances in dynamical systems theory, namely Koopman operator theory, to define a new class of theoretically motivated pruning algorithms. We show that these algorithms can be equivalent to magnitude and gradient based pruning, unifying these seemingly disparate methods, and find that they can be used to shed light on magnitude pruning's performance during the early part of training. | https://openreview.net/pdf/732335090e9773ee61b1e0375641a2c765eeaf33.pdf |
Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views? | https://openreview.net/forum?id=_4GFbtOuWq- | https://openreview.net/forum?id=_4GFbtOuWq- | Matthew Farrell,Blake Bordelon,Shubhendu Trivedi,Cengiz Pehlevan | ICLR 2022,Poster | Equivariance has emerged as a desirable property of representations of objects subject to identity-preserving transformations that constitute a group, such as translations and rotations. However, the expressivity of a representation constrained by group equivariance is still not fully understood. We address this gap by providing a generalization of Cover's Function Counting Theorem that quantifies the number of linearly separable and group-invariant binary dichotomies that can be assigned to equivariant representations of objects. We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action. We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling. While other operations do not change the fraction of separable dichotomies, local pooling decreases the fraction, despite being a highly nonlinear operation. Finally, we test our theory on intermediate representations of randomly initialized and fully trained convolutional neural networks and find perfect agreement. | https://openreview.net/pdf/6d3dd8ced80d47d56918b5e4c597c76e355e44fb.pdf |
Tuformer: Data-driven Design of Transformers for Improved Generalization or Efficiency | https://openreview.net/forum?id=V0A5g83gdQ_ | https://openreview.net/forum?id=V0A5g83gdQ_ | Xiaoyu Liu,Jiahao Su,Furong Huang | ICLR 2022,Poster | Transformers are neural network architectures that achieve remarkable performance in many areas. However, the core component of Transformers, multi-head self-attention (MHSA), is mainly derived from heuristics, and the interactions across its components are not well understood. To address the problem, we first introduce a mathematically rigorous and yet intuitive tensor diagram representation of MHSA. Guided by tensor diagram representations, we propose a novel design, namely Tunable Transformers (Tuformers), by allowing data-driven weights across heads, whereas MHSA adopts pre-defined and fixed weights across heads, as will be explained in our paper. Tuformers naturally reveal a flexible design space in which a user, depending on their needs, can choose a structure that has either improved performance (generalization error) or higher model efficiency. Any pre-trained Transformer can be an initialization of the corresponding Tuformer with a trainable number of heads for efficient training and fine-tuning. Tuformers universally outperform Transformers on various tasks across multiple domains under a wide range of model sizes. | https://openreview.net/pdf/aab5b494ebbbe94524322c8629b269dd1c4a75fe.pdf |
Learning Weakly-supervised Contrastive Representations | https://openreview.net/forum?id=MSwEFaztwkE | https://openreview.net/forum?id=MSwEFaztwkE | Yao-Hung Hubert Tsai,Tianqin Li,Weixin Liu,Peiyuan Liao,Ruslan Salakhutdinov,Louis-Philippe Morency | ICLR 2022,Poster | We argue that a form of the valuable information provided by the auxiliary information is its implied data clustering information. For instance, considering hashtags as auxiliary information, we can hypothesize that an Instagram image will be semantically more similar to images with the same hashtags. With this intuition, we present a two-stage weakly-supervised contrastive learning approach. The first stage is to cluster data according to its auxiliary information. The second stage is to learn similar representations within the same cluster and dissimilar representations for data from different clusters. Our empirical experiments suggest the following three contributions. First, compared to conventional self-supervised representations, the auxiliary-information-infused representations bring the performance closer to the supervised representations, which use direct downstream labels as supervision signals. Second, our approach performs the best in most cases, when comparing our approach with other baseline representation learning methods that also leverage auxiliary data information. Third, we show that our approach also works well with unsupervised constructed clusters (e.g., no auxiliary information), resulting in a strong unsupervised representation learning approach. | https://openreview.net/pdf/08e660baa5f1b4d95675e564853ce70378ebe5fd.pdf |
Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression | https://openreview.net/forum?id=Vs5NK44aP9P | https://openreview.net/forum?id=Vs5NK44aP9P | Bae Seong Park,Se Jung Kwon,Daehwan Oh,Byeongwook Kim,Dongsoo Lee | ICLR 2022,Poster | Even though fine-grained pruning techniques achieve a high compression ratio, conventional sparsity representations (such as CSR) associated with irregular sparsity degrade parallelism significantly. Practical pruning methods, thus, usually lower pruning rates (by structured pruning) to improve parallelism. In this paper, we study fixed-to-fixed (lossless) encoding architecture/algorithm to support fine-grained pruning methods such that sparse neural networks can be stored in a highly regular structure. We first estimate the maximum compression ratio of encoding-based compression using entropy. Then, as an effort to push the compression ratio to the theoretical maximum (by entropy), we propose a sequential fixed-to-fixed encoding scheme. We demonstrate that our proposed compression scheme achieves almost the maximum compression ratio for the Transformer and ResNet-50 pruned by various fine-grained pruning methods. | https://openreview.net/pdf/ab9b739fc11bc2802199a375c6146e8512c81fa5.pdf |
An Experimental Design Perspective on Model-Based Reinforcement Learning | https://openreview.net/forum?id=0no8Motr-zO | https://openreview.net/forum?id=0no8Motr-zO | Viraj Mehta,Biswajit Paria,Jeff Schneider,Stefano Ermon,Willie Neiswanger | ICLR 2022,Poster | In many practical applications of RL, it is expensive to observe state transitions from the environment. For example, in the problem of plasma control for nuclear fusion, computing the next state for a given state-action pair requires querying an expensive transition function which can lead to many hours of computer simulation or dollars of scientific research. Such expensive data collection prohibits application of standard RL algorithms which usually require a large number of observations to learn. In this work, we address the problem of efficiently learning a policy while making a minimal number of state-action queries to the transition function. In particular, we leverage ideas from Bayesian optimal experimental design to guide the selection of state-action queries for efficient learning. We propose an \emph{acquisition function} that quantifies how much information a state-action pair would provide about the optimal solution to a Markov decision process. At each iteration, our algorithm maximizes this acquisition function, to choose the most informative state-action pair to be queried, thus yielding a data-efficient RL approach. We experiment with a variety of simulated continuous control problems and show that our approach learns an optimal policy with up to $5$ -- $1,000\times$ less data than model-based RL baselines and $10^3$ -- $10^5\times$ less data than model-free RL baselines. We also provide several ablated comparisons which point to substantial improvements arising from the principled method of obtaining data. | https://openreview.net/pdf/63dd95d2070f3aa102826f2f0581987c13f5d0cf.pdf |
BAM: Bayes with Adaptive Memory | https://openreview.net/forum?id=NdOoQnYPj_ | https://openreview.net/forum?id=NdOoQnYPj_ | Josue Nassar,Jennifer Rogers Brennan,Ben Evans,Kendall Lowrey | ICLR 2022,Poster | Online learning via Bayes' theorem allows new data to be continuously integrated into an agent's current beliefs. However, a naive application of Bayesian methods in non-stationary environments leads to slow adaptation and results in state estimates that may converge confidently to the wrong parameter value. A common solution when learning in changing environments is to discard/downweight past data; however, this simple mechanism of "forgetting" fails to account for the fact that many real-world environments involve revisiting similar states. We propose a new framework, Bayes with Adaptive Memory (BAM), that takes advantage of past experience by allowing the agent to choose which past observations to remember and which to forget. We demonstrate that BAM generalizes many popular Bayesian update rules for non-stationary environments. Through a variety of experiments, we demonstrate the ability of BAM to continuously adapt in an ever-changing world. | https://openreview.net/pdf/570533db54609ccaa349705c913de9ad4439ee1d.pdf |
Unsupervised Learning of Full-Waveform Inversion: Connecting CNN and Partial Differential Equation in a Loop | https://openreview.net/forum?id=izvwgBic9q | https://openreview.net/forum?id=izvwgBic9q | Peng Jin,Xitong Zhang,Yinpeng Chen,Sharon X Huang,Zicheng Liu,Youzuo Lin | ICLR 2022,Poster | This paper investigates unsupervised learning of Full-Waveform Inversion (FWI), which has been widely used in geophysics to estimate subsurface velocity maps from seismic data. This problem is mathematically formulated by a second order partial differential equation (PDE), but is hard to solve. Moreover, acquiring velocity maps is extremely expensive, making it impractical to scale up a supervised approach to train the mapping from seismic data to velocity maps with convolutional neural networks (CNN). We address these difficulties by $\textit{integrating PDE and CNN in a loop}$, thus shifting the paradigm to unsupervised learning that only requires seismic data. In particular, we use finite difference to approximate the forward modeling of PDE as a differentiable operator (from velocity map to seismic data) and model its inversion by CNN (from seismic data to velocity map). Hence, we transform the supervised inversion task into an unsupervised seismic data reconstruction task. We also introduce a new large-scale dataset $\textit{OpenFWI}$ to establish a more challenging benchmark for the community. Experiment results show that our model (using seismic data alone) yields comparable accuracy to the supervised counterpart (using both seismic data and velocity map). Furthermore, it outperforms the supervised model when involving more seismic data. | https://openreview.net/pdf/2b601798926b31583beb679577563de37154b91f.pdf |
Conditional Contrastive Learning with Kernel | https://openreview.net/forum?id=AAJLBoGt0XM | https://openreview.net/forum?id=AAJLBoGt0XM | Yao-Hung Hubert Tsai,Tianqin Li,Martin Q. Ma,Han Zhao,Kun Zhang,Louis-Philippe Morency,Ruslan Salakhutdinov | ICLR 2022,Poster | Conditional contrastive learning frameworks consider the conditional sampling procedure that constructs positive or negative data pairs conditioned on specific variables. Fair contrastive learning constructs negative pairs, for example, from the same gender (conditioning on sensitive information), which in turn reduces undesirable information from the learned representations; weakly supervised contrastive learning constructs positive pairs with similar annotative attributes (conditioning on auxiliary information), which in turn are incorporated into the representations. Although conditional contrastive learning enables many applications, the conditional sampling procedure can be challenging if we cannot obtain sufficient data pairs for some values of the conditioning variable. This paper presents Conditional Contrastive Learning with Kernel (CCL-K) that converts existing conditional contrastive objectives into alternative forms that mitigate the insufficient data problem. Instead of sampling data according to the value of the conditioning variable, CCL-K uses the Kernel Conditional Embedding Operator that samples data from all available data and assigns weights to each sampled data given the kernel similarity between the values of the conditioning variable. We conduct experiments using weakly supervised, fair, and hard negatives contrastive learning, showing CCL-K outperforms state-of-the-art baselines. | https://openreview.net/pdf/b1faf52b79128f7491658b68cc175c78d5313668.pdf |
ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning | https://openreview.net/forum?id=zRJu6mU2BaE | https://openreview.net/forum?id=zRJu6mU2BaE | Debasmit Das,Sungrack Yun,Fatih Porikli | ICLR 2022,Poster | Most current few-shot learning methods train a model from abundantly labeled base category data and then transfer and adapt the model to sparsely labeled novel category data. These methods mostly generalize well on novel categories from the same domain as the base categories but perform poorly for distant domain categories. In this paper, we propose a framework for few-shot learning coined as ConFeSS (Contrastive Learning and Feature Selection System) that tackles large domain shift between base and novel categories. The first step of our framework trains a feature extracting backbone with the contrastive loss on the base category data. Since the contrastive loss does not use supervision, the features can generalize better to distant target domains. For the second step, we train a masking module to select relevant features that are more suited to target domain classification. Finally, a classifier is fine-tuned along with the backbone such that the backbone produces features similar to the relevant ones. To evaluate our framework, we tested it on a recently introduced cross-domain few-shot learning benchmark. Experimental results demonstrate that our framework outperforms all meta-learning approaches and produces competitive results against recent cross-domain methods. Additional analyses are also performed to better understand our framework. | https://openreview.net/pdf/9a6c5c7a1d3338348f0f985794c41857eabbb501.pdf |
Granger causal inference on DAGs identifies genomic loci regulating transcription | https://openreview.net/forum?id=nZOUYEN6Wvy | https://openreview.net/forum?id=nZOUYEN6Wvy | Alexander P Wu,Rohit Singh,Bonnie Berger | ICLR 2022,Poster | When a dynamical system can be modeled as a sequence of observations, Granger causality is a powerful approach for detecting predictive interactions between its variables. However, traditional Granger causal inference has limited utility in domains where the dynamics need to be represented as directed acyclic graphs (DAGs) rather than as a linear sequence, such as with cell differentiation trajectories. Here, we present GrID-Net, a framework based on graph neural networks with lagged message passing for Granger causal inference on DAG-structured systems. Our motivating application is the analysis of single-cell multimodal data to identify genomic loci that mediate the regulation of specific genes. To our knowledge, GrID-Net is the first single-cell analysis tool that accounts for the temporal lag between a genomic locus becoming accessible and its downstream effect on a target gene's expression. We applied GrID-Net on multimodal single-cell assays that profile chromatin accessibility (ATAC-seq) and gene expression (RNA-seq) in the same cell and show that it dramatically outperforms existing methods for inferring regulatory locus-gene links, achieving up to 71% greater agreement with independent population genetics-based estimates. By extending Granger causality to DAG-structured dynamical systems, our work unlocks new domains for causal analyses and, more specifically, opens a path towards elucidating gene regulatory interactions relevant to cellular differentiation and complex human diseases at unprecedented scale and resolution. | https://openreview.net/pdf/85cdf969129ce01319414cf3e7f1ffc801bb8db0.pdf |