title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Behavior Proximal Policy Optimization | https://openreview.net/forum?id=3c13LptpIph | https://openreview.net/forum?id=3c13LptpIph | Zifeng Zhuang,Kun LEI,Jinxin Liu,Donglin Wang,Yilang Guo | ICLR 2023,Poster | Offline reinforcement learning (RL) is a challenging setting where existing off-policy actor-critic methods perform poorly due to overestimation of out-of-distribution state-action pairs. Thus, various additional augmentations are proposed to keep the learned policy close to the offline dataset (or the behavior policy). In this work, starting from the analysis of offline monotonic policy improvement, we reach the surprising conclusion that online on-policy algorithms are naturally able to solve offline RL. Specifically, the inherent conservatism of these on-policy algorithms is exactly what the offline RL method needs to overcome the overestimation. Based on this, we propose Behavior Proximal Policy Optimization (BPPO), which solves offline RL without any extra constraint or regularization introduced compared to PPO. Extensive experiments on the D4RL benchmark empirically show that this extremely succinct method outperforms state-of-the-art offline RL algorithms. Our implementation is available at https://github.com/Dragon-Zhuang/BPPO. | https://openreview.net/pdf/bfe504f95073a72f5a9a90c36e6d53b9b5f87dd4.pdf |
Actionable Neural Representations: Grid Cells from Minimal Constraints | https://openreview.net/forum?id=xfqDe72zh41 | https://openreview.net/forum?id=xfqDe72zh41 | Will Dorrell,Peter E. Latham,Timothy E. J. Behrens,James C. R. Whittington | ICLR 2023,Poster | To afford flexible behaviour, the brain must build internal representations that mirror the structure of variables in the external world. For example, 2D space obeys rules: the same set of actions combine in the same way everywhere (step north, then south, and you won't have moved, wherever you start). We suggest the brain must represent this consistent meaning of actions across space, as it allows you to find new short-cuts and navigate in unfamiliar settings. We term this representation an `actionable representation'. We formulate actionable representations using group and representation theory, and show that, when combined with biological and functional constraints - non-negative firing, bounded neural activity, and precise coding - multiple modules of hexagonal grid cells are the optimal representation of 2D space. We support this claim with intuition, analytic justification, and simulations. Our analytic results normatively explain a set of surprising grid cell phenomena, and make testable predictions for future experiments. Lastly, we highlight the generality of our approach beyond just understanding 2D space. Our work characterises a new principle for understanding and designing flexible internal representations: they should be actionable, allowing animals and machines to predict the consequences of their actions, rather than just encode. | https://openreview.net/pdf/da785b626e7697b7ef2446f8cd28e584a45bc5b9.pdf |
Mole-BERT: Rethinking Pre-training Graph Neural Networks for Molecules | https://openreview.net/forum?id=jevY-DtiZTR | https://openreview.net/forum?id=jevY-DtiZTR | Jun Xia,Chengshuai Zhao,Bozhen Hu,Zhangyang Gao,Cheng Tan,Yue Liu,Siyuan Li,Stan Z. Li | ICLR 2023,Poster | Recent years have witnessed the prosperity of pre-training graph neural networks (GNNs) for molecules. Typically, atom types as node attributes are randomly masked, and GNNs are then trained to predict masked types as in AttrMask \citep{hu2020strategies}, following the Masked Language Modeling (MLM) task of BERT~\citep{devlin2019bert}. However, unlike MLM with a large vocabulary, the AttrMask pre-training does not learn informative molecular representations due to the small and unbalanced atom `vocabulary'. To address this problem, we propose a variant of VQ-VAE~\citep{van2017neural} as a context-aware tokenizer to encode atom attributes into chemically meaningful discrete codes. This can enlarge the atom vocabulary size and mitigate the quantitative divergence between dominant (e.g., carbons) and rare atoms (e.g., phosphorus). With the enlarged atom `vocabulary', we propose a novel node-level pre-training task, dubbed Masked Atoms Modeling (\textbf{MAM}), to mask some discrete codes randomly and then pre-train GNNs to predict them. MAM also mitigates another issue of AttrMask, namely negative transfer. It can be easily combined with various pre-training tasks to improve their performance. Furthermore, we propose triplet masked contrastive learning (\textbf{TMCL}) for graph-level pre-training to model the heterogeneous semantic similarity between molecules for effective molecule retrieval. MAM and TMCL constitute a novel pre-training framework, \textbf{Mole-BERT}, which can match or outperform state-of-the-art methods in a fully data-driven manner. We release the code at \textcolor{magenta}{\url{https://github.com/junxia97/Mole-BERT}}. | https://openreview.net/pdf/21b1918178090348ffb159460ee696cfe8360dd2.pdf |
Geometrically regularized autoencoders for non-Euclidean data | https://openreview.net/forum?id=_q7A0m3vXH0 | https://openreview.net/forum?id=_q7A0m3vXH0 | Cheongjae Jang,Yonghyeon Lee,Yung-Kyun Noh,Frank C. Park | ICLR 2023,Poster | Regularization is almost {\it de rigueur} when designing autoencoders that are sparse and robust to noise. Given the recent surge of interest in machine learning problems involving non-Euclidean data, in this paper we address the regularization of autoencoders on curved spaces. We show that by ignoring the underlying geometry of the data and applying standard vector space regularization techniques, autoencoder performance can be severely degraded, or worse, training can fail to converge. Assuming that both the data space and latent space can be modeled as Riemannian manifolds, we show how to construct regularization terms in a coordinate-invariant way, and develop geometric generalizations of the denoising autoencoder and reconstruction contractive autoencoder such that the essential properties that enable the estimation of the derivative of the log-probability density are preserved. Drawing upon various non-Euclidean data sets, we show that our geometric autoencoder regularization techniques can have important performance advantages over vector-spaced methods while avoiding other breakdowns that can result from failing to account for the underlying geometry. | https://openreview.net/pdf/285c31b2f066fb69e4b9845688dc7dcb84db2bd7.pdf |
A Message Passing Perspective on Learning Dynamics of Contrastive Learning | https://openreview.net/forum?id=VBTJqqWjxMv | https://openreview.net/forum?id=VBTJqqWjxMv | Yifei Wang,Qi Zhang,Tianqi Du,Jiansheng Yang,Zhouchen Lin,Yisen Wang | ICLR 2023,Poster | In recent years, contrastive learning has achieved impressive results on self-supervised visual representation learning, but a rigorous understanding of its learning dynamics is still lacking. In this paper, we show that if we cast a contrastive objective equivalently into the feature space, then its learning dynamics admits an interpretable form. Specifically, we show that its gradient descent corresponds to a specific message passing scheme on the corresponding augmentation graph. Based on this perspective, we theoretically characterize how contrastive learning gradually learns discriminative features with the alignment update and the uniformity update. Meanwhile, this perspective also establishes an intriguing connection between contrastive learning and Message Passing Graph Neural Networks (MP-GNNs). This connection not only provides a unified understanding of many techniques independently developed in each community, but also enables us to borrow techniques from MP-GNNs to design new contrastive learning variants, such as graph attention, graph rewiring, jumping knowledge techniques, etc. We believe that our message passing perspective not only provides a new theoretical understanding of contrastive learning dynamics, but also bridges the two seemingly independent areas together, which could inspire more interleaving studies to benefit from each other. The code is available at https://github.com/PKU-ML/Message-Passing-Contrastive-Learning. | https://openreview.net/pdf/2fc5d0468a10ff4c201bd357964c545383fa6b3d.pdf |
Zeroth-Order Optimization with Trajectory-Informed Derivative Estimation | https://openreview.net/forum?id=n1bLgxHW6jW | https://openreview.net/forum?id=n1bLgxHW6jW | Yao Shu,Zhongxiang Dai,Weicong Sng,Arun Verma,Patrick Jaillet,Bryan Kian Hsiang Low | ICLR 2023,Poster | Zeroth-order (ZO) optimization, in which the derivative is unavailable, has recently succeeded in many important machine learning applications. Existing algorithms rely on finite difference (FD) methods for derivative estimation and gradient descent (GD)-based approaches for optimization. However, these algorithms suffer from query inefficiency because many additional function queries are required for derivative estimation in their every GD update, which typically hinders their deployment in real-world applications where every function query is expensive. To this end, we propose a trajectory-informed derivative estimation method which only employs the optimization trajectory (i.e., the history of function queries during optimization) and hence can eliminate the need for additional function queries to estimate a derivative. Moreover, based on our derivative estimation, we propose the technique of dynamic virtual updates, which allows us to reliably perform multiple steps of GD updates without reapplying derivative estimation. Based on these two contributions, we introduce the zeroth-order optimization with trajectory-informed derivative estimation (ZoRD) algorithm for query-efficient ZO optimization. We theoretically demonstrate that our trajectory-informed derivative estimation and our ZoRD algorithm improve over existing approaches, which is then supported by our real-world experiments such as black-box adversarial attack, non-differentiable metric optimization, and derivative-free reinforcement learning. | https://openreview.net/pdf/83be469cfb05989fdaa09e9ae079b8dd86eecccc.pdf |
Uniform-in-time propagation of chaos for the mean-field gradient Langevin dynamics | https://openreview.net/forum?id=_JScUk9TBUn | https://openreview.net/forum?id=_JScUk9TBUn | Taiji Suzuki,Atsushi Nitanda,Denny Wu | ICLR 2023,Poster | The mean-field Langevin dynamics is characterized by a stochastic differential equation that arises from (noisy) gradient descent on an infinite-width two-layer neural network, which can be viewed as an interacting particle system. In this work, we establish a quantitative weak propagation of chaos result for the system, with a finite-particle discretization error of $\mathcal{O}(1/N)$ \textit{uniformly over time}, where $N$ is the width of the neural network. This allows us to directly transfer the optimization guarantee for infinite-width networks to practical finite-width models without excessive overparameterization. On the technical side, our analysis differs from most existing studies on similar mean field dynamics in that we do not require the interaction between particles to be sufficiently weak to obtain a uniform propagation of chaos, because such assumptions may not be satisfied in neural network optimization. Instead, we make use of a logarithmic Sobolev-type condition which can be verified in appropriate regularized risk minimization settings. | https://openreview.net/pdf/cc7a07e8f312e9e4d0dfcf0899b4a9ed00163f0b.pdf |
Asynchronous Distributed Bilevel Optimization | https://openreview.net/forum?id=_i0-12XqVJZ | https://openreview.net/forum?id=_i0-12XqVJZ | Yang Jiao,Kai Yang,Tiancheng Wu,Dongjin Song,Chengtao Jian | ICLR 2023,Poster | Bilevel optimization plays an essential role in many machine learning tasks, ranging from hyperparameter optimization to meta-learning. Existing studies on bilevel optimization, however, focus on either the centralized or the synchronous distributed setting. Centralized bilevel optimization approaches require collecting a massive amount of data at a single server, which inevitably incurs significant communication expenses and may give rise to data privacy risks. Synchronous distributed bilevel optimization algorithms, on the other hand, often face the straggler problem and will immediately stop working if a few workers fail to respond. As a remedy, we propose the Asynchronous Distributed Bilevel Optimization (ADBO) algorithm. The proposed ADBO can tackle bilevel optimization problems with both nonconvex upper-level and lower-level objective functions, and its convergence is theoretically guaranteed. Furthermore, it is revealed through theoretical analysis that the iteration complexity of ADBO to obtain an $\epsilon$-stationary point is upper bounded by $\mathcal{O}(\frac{1}{\epsilon^2})$. Thorough empirical studies on public datasets have been conducted to elucidate the effectiveness and efficiency of the proposed ADBO. | https://openreview.net/pdf/7751c6dceeca5e43226f9ae20bb2a2ab9915aa40.pdf |
Confidence-Based Feature Imputation for Graphs with Partially Known Features | https://openreview.net/forum?id=YPKBIILy-Kt | https://openreview.net/forum?id=YPKBIILy-Kt | Daeho Um,Jiwoong Park,Seulki Park,Jin young Choi | ICLR 2023,Poster | This paper investigates a missing feature imputation problem for graph learning tasks. Several methods have previously addressed learning tasks on graphs with missing features. However, in cases of high rates of missing features, they were unable to avoid significant performance degradation. To overcome this limitation, we introduce a novel concept of channel-wise confidence in a node feature, which is assigned to each imputed channel feature of a node for reflecting the certainty of the imputation. We then design pseudo-confidence using the channel-wise shortest path distance between a missing-feature node and its nearest known-feature node to replace unavailable true confidence in an actual learning process. Based on the pseudo-confidence, we propose a novel feature imputation scheme that performs channel-wise inter-node diffusion and node-wise inter-channel propagation. The scheme can endure even at an exceedingly high missing rate (e.g., 99.5\%) and it achieves state-of-the-art accuracy for both semi-supervised node classification and link prediction on various datasets containing a high rate of missing features. Codes are available at https://github.com/daehoum1/pcfi. | https://openreview.net/pdf/6b7818868746d16b35135bb54aff95b4fca6cb8c.pdf |
LiftedCL: Lifting Contrastive Learning for Human-Centric Perception | https://openreview.net/forum?id=WHlt5tLz12T | https://openreview.net/forum?id=WHlt5tLz12T | Ziwei Chen,Qiang Li,Xiaofeng Wang,Wankou Yang | ICLR 2023,Poster | Human-centric perception targets the understanding of human body pose, shape, and segmentation. Pre-training the model on large-scale datasets and fine-tuning it on specific tasks has become a well-established paradigm in human-centric perception. Recently, self-supervised learning methods have re-investigated contrastive learning to achieve superior performance on various downstream tasks. When handling human-centric perception, there still remains untapped potential since 3D human structure information is neglected during the task-agnostic pre-training. In this paper, we propose Lifting Contrastive Learning (LiftedCL) to obtain 3D-aware human-centric representations which absorb 3D human structure information. In particular, to induce the learning process, a set of 3D skeletons is randomly sampled by resorting to a 3D human kinematic prior. With this set of generic 3D samples, 3D human structure information can be absorbed into 3D-aware representations through adversarial learning. Empirical results demonstrate that LiftedCL outperforms state-of-the-art self-supervised methods on four human-centric downstream tasks, including 2D and 3D human pose estimation (0.4% mAP and 1.8 mm MPJPE improvement on COCO 2D pose estimation and Human3.6M 3D pose estimation), human shape recovery and human parsing. | https://openreview.net/pdf/004de323e98dfc211dd5355353b56d5ead9d130b.pdf |
Individual Privacy Accounting with Gaussian Differential Privacy | https://openreview.net/forum?id=JmC_Tld3v-f | https://openreview.net/forum?id=JmC_Tld3v-f | Antti Koskela,Marlon Tobaben,Antti Honkela | ICLR 2023,Poster | Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis. This can be informative as often the individual privacy losses are considerably smaller than those indicated by the DP bounds that are based on considering worst-case bounds at each data access. In order to account for the individual losses in a principled manner, we need a privacy accountant for adaptive compositions of mechanisms, where the loss incurred at a given data access is allowed to be smaller than the worst-case loss. This kind of analysis has been carried out for the Rényi differential privacy by Feldman and Zrnic (2021), however not yet for the so-called optimal privacy accountants. We make first steps in this direction by providing a careful analysis using the Gaussian differential privacy which gives optimal bounds for the Gaussian mechanism, one of the most versatile DP mechanisms. This approach is based on determining a certain supermartingale for the hockey-stick divergence and on extending the Rényi divergence-based fully adaptive composition results by Feldman and Zrnic (2021). We also consider measuring the individual $(\varepsilon,\delta)$-privacy losses using the so-called privacy loss distributions. Using the Blackwell theorem, we can then use the results of Feldman and Zrnic (2021) to construct an approximative individual $(\varepsilon,\delta)$-accountant. We also show how to speed up the FFT-based individual DP accounting using the Plancherel theorem. | https://openreview.net/pdf/61df3e0808cc0744f590bb515d62b73a91d529a0.pdf |
Evolving Populations of Diverse RL Agents with MAP-Elites | https://openreview.net/forum?id=CBfYffLqWqb | https://openreview.net/forum?id=CBfYffLqWqb | Thomas PIERROT,Arthur Flajolet | ICLR 2023,Poster | Quality Diversity (QD) has emerged as a powerful alternative optimization paradigm that aims at generating large and diverse collections of solutions, notably with its flagship algorithm MAP-ELITES (ME), which evolves solutions through mutations and crossovers. While very effective for some unstructured problems, early ME implementations relied exclusively on random search to evolve the population of solutions, rendering them notoriously sample-inefficient for high-dimensional problems, such as when evolving neural networks. Follow-up works considered exploiting gradient information to guide the search in order to address these shortcomings through techniques borrowed from either Black-Box Optimization (BBO) or Reinforcement Learning (RL). While mixing RL techniques with ME unlocked state-of-the-art performance for robotics control problems that require a good amount of exploration, it also plagued these ME variants with limitations common among RL algorithms that ME was free of, such as hyperparameter sensitivity, high stochasticity, as well as training instability, including when the population size increases as some components are shared across the population in recent approaches. Furthermore, existing approaches mixing ME with RL tend to be tied to a specific RL algorithm, which effectively prevents their use on problems where the corresponding RL algorithm fails. To address these shortcomings, we introduce a flexible framework that allows the use of any RL algorithm and alleviates the aforementioned limitations by evolving populations of agents (whose definition includes hyperparameters and all learnable parameters) instead of just policies. We demonstrate the benefits brought about by our framework through extensive numerical experiments on a number of robotics control problems, some of which have deceptive rewards, taken from the QD-RL literature. We open-source an efficient JAX-based implementation of our algorithm in the QDax library. | https://openreview.net/pdf/7647093a5e985828f881513eb78bf7d36bde7a04.pdf |
Gray-Box Gaussian Processes for Automated Reinforcement Learning | https://openreview.net/forum?id=rmoMvptXK7M | https://openreview.net/forum?id=rmoMvptXK7M | Gresa Shala,André Biedenkapp,Frank Hutter,Josif Grabocka | ICLR 2023,Poster | Despite having achieved spectacular milestones in an array of important real-world applications, most Reinforcement Learning (RL) methods are very brittle concerning their hyperparameters. Notwithstanding the crucial importance of setting the hyperparameters in training state-of-the-art agents, the task of hyperparameter optimization (HPO) in RL is understudied. In this paper, we propose a novel gray-box Bayesian Optimization technique for HPO in RL, that enriches Gaussian Processes with reward curve estimations based on generalized logistic functions. In a very large-scale experimental protocol, comprising 5 popular RL methods (DDPG, A2C, PPO, SAC, TD3), dozens of environments (Atari, Mujoco), and 7 HPO baselines, we demonstrate that our method significantly outperforms current HPO practices in RL. | https://openreview.net/pdf/07c32d1c893bba5b63fecd20c32849da78820520.pdf |
Protein Sequence and Structure Co-Design with Equivariant Translation | https://openreview.net/forum?id=pRCMXcfdihq | https://openreview.net/forum?id=pRCMXcfdihq | Chence Shi,Chuanrui Wang,Jiarui Lu,Bozitao Zhong,Jian Tang | ICLR 2023,Poster | Proteins are macromolecules that perform essential functions in all living organisms. Designing novel proteins with specific structures and desired functions has been a long-standing challenge in the field of bioengineering. Existing approaches generate both protein sequence and structure using either autoregressive models or diffusion models, both of which suffer from high inference costs. In this paper, we propose a new approach capable of protein sequence and structure co-design, which iteratively translates both protein sequence and structure into the desired state from random initialization, based on context features given a priori. Our model consists of a trigonometry-aware encoder that reasons geometrical constraints and interactions from context features, and a roto-translation equivariant decoder that translates protein sequence and structure interdependently. Notably, all protein amino acids are updated in one shot in each translation step, which significantly accelerates the inference process. Experimental results across multiple tasks show that our model outperforms previous state-of-the-art baselines by a large margin, and is able to design proteins of high fidelity as regards both sequence and structure, with running time orders of magnitude less than sampling-based methods. | https://openreview.net/pdf/eb4c9a122dd34e10a2a9eb419a4ba937bd7b32e5.pdf |
Learning in temporally structured environments | https://openreview.net/forum?id=z0_V5O9cmNw | https://openreview.net/forum?id=z0_V5O9cmNw | Matt Jones,Tyler R. Scott,Mengye Ren,Gamaleldin Fathy Elsayed,Katherine Hermann,David Mayo,Michael Curtis Mozer | ICLR 2023,Poster | Natural environments have temporal structure at multiple timescales. This property is reflected in biological learning and memory but typically not in machine learning systems. We advance a multiscale learning method in which each weight in a neural network is decomposed as a sum of subweights with different learning and decay rates. Thus knowledge becomes distributed across different timescales, enabling rapid adaptation to task changes while avoiding catastrophic interference. First, we prove previous models that learn at multiple timescales, but with complex coupling between timescales, are equivalent to multiscale learning via a reparameterization that eliminates this coupling. The same analysis yields a new characterization of momentum learning, as a fast weight with a negative learning rate. Second, we derive a model of Bayesian inference over $1/f$ noise, a common temporal pattern in many online learning domains that involves long-range (power law) autocorrelations. The generative side of the model expresses $1/f$ noise as a sum of diffusion processes at different timescales, and the inferential side tracks these latent processes using a Kalman filter. We then derive a variational approximation to the Bayesian model and show how it is an extension of the multiscale learner. The result is an optimizer that can be used as a drop-in replacement in an arbitrary neural network architecture. Third, we evaluate the ability of these methods to handle nonstationarity by testing them in online prediction tasks characterized by $1/f$ noise in the latent parameters. We find that the Bayesian model significantly outperforms online stochastic gradient descent and two batch heuristics that rely preferentially or exclusively on more recent data. Moreover, the variational approximation performs nearly as well as the full Bayesian model, and with memory requirements that are linear in the size of the network. | https://openreview.net/pdf/6d432541cac1d0dc3c94ca91d2c13b4555f6c19b.pdf |
RandProx: Primal-Dual Optimization Algorithms with Randomized Proximal Updates | https://openreview.net/forum?id=cB4N3G5udUS | https://openreview.net/forum?id=cB4N3G5udUS | Laurent Condat,Peter Richtárik | ICLR 2023,Poster | Proximal splitting algorithms are well suited to solving large-scale nonsmooth optimization problems, in particular those arising in machine learning. We propose a new primal–dual algorithm, in which the dual update is randomized; equivalently, the proximity operator of one of the functions in the problem is replaced by a stochastic oracle. For instance, some randomly chosen dual variables, instead of all, are updated at each iteration. Or, the proximity operator of a function is called with some small probability only. A nonsmooth variance-reduction technique is implemented so that the algorithm finds an exact minimizer of the general problem involving smooth and nonsmooth functions, possibly composed with linear operators. We derive linear convergence results in the presence of strong convexity; these results are new even in the deterministic case, when our algorithm reverts to the recently proposed Primal–Dual Davis–Yin algorithm. Some randomized algorithms from the literature are also recovered as particular cases (e.g., Point-SAGA). But our randomization technique is general and encompasses many unbiased mechanisms beyond sampling and probabilistic updates, including compression. Since the convergence speed depends on the slowest among the primal and dual contraction mechanisms, the iteration complexity might remain the same when randomness is used. On the other hand, the computation complexity can be significantly reduced. Overall, randomness helps in obtaining faster algorithms. This has long been known for stochastic-gradient-type algorithms, and our work shows that this fully applies in the more general primal–dual setting as well. | https://openreview.net/pdf/5e50256872fcd53bc8d464a9685777de7f78b1ba.pdf |
Preserving Pre-trained Features Helps Calibrate Fine-tuned Language Models | https://openreview.net/forum?id=NI7StoWHJPT | https://openreview.net/forum?id=NI7StoWHJPT | Guande He,Jianfei Chen,Jun Zhu | ICLR 2023,Poster | Large pre-trained language models (PLMs) have demonstrated strong performance on natural language understanding (NLU) tasks through fine-tuning. However, fine-tuned models still suffer from overconfident predictions, especially in out-of-domain settings. In this paper, we tackle the problem of calibrating fine-tuned language models. We demonstrate that the PLMs are well-calibrated on the masked language modeling task with robust predictive confidence under domain shift, yet the fine-tuned models fail to retain such property due to catastrophic forgetting, which impacts the calibration on the downstream classification task. In light of these observations, we evaluate the calibration of several methods that preserve pre-trained features and show that preserving pre-trained features can improve the calibration of fine-tuned language models. Among these methods, our proposed method that encourages the fine-tuned model to learn generative representations with auxiliary language modeling objective achieves competitive accuracy and the lowest expected calibration error compared to several strong baselines under both in-domain and out-of-domain settings on three downstream NLU tasks. | https://openreview.net/pdf/032565002e39f747d69e618dba2713aea4a88f39.pdf |
Fast Nonlinear Vector Quantile Regression | https://openreview.net/forum?id=UxqUgchwXkK | https://openreview.net/forum?id=UxqUgchwXkK | Aviv A. Rosenberg,Sanketh Vedula,Yaniv Romano,Alexander Bronstein | ICLR 2023,Poster | Quantile regression (QR) is a powerful tool for estimating one or more conditional quantiles of a target variable $\mathrm{Y}$ given explanatory features $\boldsymbol{\mathrm{X}}$. A limitation of QR is that it is only defined for scalar target variables, due to the formulation of its objective function, and since the notion of quantiles has no standard definition for multivariate distributions. Recently, vector quantile regression (VQR) was proposed as an extension of QR for vector-valued target variables, thanks to a meaningful generalization of the notion of quantiles to multivariate distributions via optimal transport. Despite its elegance, VQR is arguably not applicable in practice due to several limitations: (i) it assumes a linear model for the quantiles of the target $\boldsymbol{\mathrm{Y}}$ given the features $\boldsymbol{\mathrm{X}}$; (ii) its exact formulation is intractable even for modestly-sized problems in terms of target dimensions, number of regressed quantile levels, or number of features, and its relaxed dual formulation may violate the monotonicity of the estimated quantiles; (iii) no fast or scalable solvers for VQR currently exist. In this work we fully address these limitations, namely: (i) we extend VQR to the non-linear case, showing substantial improvement over linear VQR; (ii) we propose vector monotone rearrangement, a method which ensures the quantile functions estimated by VQR are monotone functions; (iii) we provide fast, GPU-accelerated solvers for linear and nonlinear VQR which maintain a fixed memory footprint, and demonstrate that they scale to millions of samples and thousands of quantile levels; (iv) we release an optimized Python package of our solvers to promote the widespread use of VQR in real-world applications. | https://openreview.net/pdf/b8d396ce8d5574aa62871ab0dcbee49063a352b2.pdf |
Leveraging Large Language Models for Multiple Choice Question Answering | https://openreview.net/forum?id=yKbprarjc5B | https://openreview.net/forum?id=yKbprarjc5B | Joshua Robinson,David Wingate | ICLR 2023,Poster | While large language models (LLMs) like GPT-3 have achieved impressive results on multiple choice question answering (MCQA) tasks in the zero, one, and few-shot settings, they generally lag behind the MCQA state of the art (SOTA). MCQA tasks have traditionally been presented to LLMs as cloze tasks. An LLM is conditioned on a question (without the associated answer options) and its chosen option is the one assigned the highest probability after normalization (for length, etc.). A more natural prompting approach is to present the question and answer options to the LLM jointly and have it output the symbol (e.g., “A”) associated with its chosen answer option. This approach allows the model to explicitly compare answer options, reduces computational costs, and mitigates the effects of tokenization scheme and answer option representations on answer selection. For the natural approach to be effective, the LLM it is used with must be able to associate answer options with the symbols that represent them. The LLM needs what we term multiple choice symbol binding (MCSB) ability. This ability varies greatly by model. We show that a model with high MCSB ability performs much better with the natural approach than with the traditional approach across 20 diverse datasets and largely closes the gap with the SOTA, suggesting that the MCQA ability of LLMs has been previously underestimated. | https://openreview.net/pdf/1ae9b0968aa2c0f898d5082be403c3070b0c09fc.pdf |
Regression with Label Differential Privacy | https://openreview.net/forum?id=h9O0wsmL-cT | https://openreview.net/forum?id=h9O0wsmL-cT | Badih Ghazi,Pritish Kamath,Ravi Kumar,Ethan Leeman,Pasin Manurangsi,Avinash Varadarajan,Chiyuan Zhang | ICLR 2023,Poster | We study the task of training regression models with the guarantee of _label_ differential privacy (DP). Based on a global prior distribution of label values, which could be obtained privately, we derive a label DP randomization mechanism that is optimal under a given regression loss function. We prove that the optimal mechanism takes the form of a "randomized response on bins", and propose an efficient algorithm for finding the optimal bin values. We carry out a thorough experimental evaluation on several datasets demonstrating the efficacy of our algorithm. | https://openreview.net/pdf/3b8644d8c7fbebf7b519a2e229d90c1dffc71a6b.pdf |
Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement | https://openreview.net/forum?id=fGG6vHp3W9W | https://openreview.net/forum?id=fGG6vHp3W9W | Michael Chang,Alyssa Li Dayan,Franziska Meier,Thomas L. Griffiths,Sergey Levine,Amy Zhang | ICLR 2023,Poster | Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations. Worse, the representations of these entities are unknown and must be inferred from sensory percepts. We present a hierarchical abstraction approach to uncover these underlying entities and achieve combinatorial generalization from unstructured visual inputs. By constructing a factorized transition graph over clusters of entity representations inferred from pixels, we show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment. We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects, which outperforms current offline deep RL methods when evaluated on simulated rearrangement tasks. | https://openreview.net/pdf/3fe199764971a79f6ca90f8e713263343dea0eec.pdf |
Selective Frequency Network for Image Restoration | https://openreview.net/forum?id=tyZ1ChGZIKO | https://openreview.net/forum?id=tyZ1ChGZIKO | Yuning Cui,Yi Tao,Zhenshan Bing,Wenqi Ren,Xinwei Gao,Xiaochun Cao,Kai Huang,Alois Knoll | ICLR 2023,Poster | Image restoration aims to reconstruct the latent sharp image from its corrupted counterpart. Besides dealing with this long-standing task in the spatial domain, a few approaches seek solutions in the frequency domain in consideration of the large discrepancy between spectra of sharp/degraded image pairs. However, these works commonly utilize transformation tools, e.g., wavelet transform, to split features into several frequency parts, which is not flexible enough to select the most informative frequency component to recover. In this paper, we exploit a multi-branch and content-aware module to decompose features into separate frequency subbands dynamically and locally, and then accentuate the useful ones via channel-wise attention weights. In addition, to handle large-scale degradation blurs, we propose an extremely simple decoupling and modulation module to enlarge the receptive field via global and window-based average pooling. Integrating two developed modules into a U-Net backbone, the proposed Selective Frequency Network (SFNet) performs favorably against state-of-the-art algorithms on five image restoration tasks, including single-image defocus deblurring, image dehazing, image motion deblurring, image desnowing, and image deraining. | https://openreview.net/pdf/71e16caeaf36c1dec29e526029de3709d632d4bd.pdf |
Improving Differentiable Neural Architecture Search by Encouraging Transferability | https://openreview.net/forum?id=Tl8OmiibP99 | https://openreview.net/forum?id=Tl8OmiibP99 | Parth Sheth,Pengtao Xie | ICLR 2023,Poster | Differentiable neural architecture search methods are increasingly popular due to their computational efficiency. However, these methods have unsatisfactory generalizability and stability. Their searched architectures are often degenerate with a dominant number of skip connections and perform unsatisfactorily on test data. Existing methods for solving this problem have a variety of limitations, such as being unable to prevent architecture degeneration, being excessively restrictive in setting the number of skip connections, etc. To address these limitations, we propose a new approach for improving the generalizability and stability of differentiable NAS, by developing a transferability-encouraging tri-level optimization framework which improves the architecture of a main model by encouraging good transferability to an auxiliary model. Our framework involves three stages performed end-to-end: 1) train network weights of a main model; 2) transfer knowledge from the main model to an auxiliary model; 3) optimize the architecture of the main model by maximizing its transferability to the auxiliary model. We propose a new knowledge transfer approach based on matching quadruple relative similarities. Experiments on several datasets demonstrate the effectiveness of our method. | https://openreview.net/pdf/f8c3c5ed3e60eaf450623930b9a0fff29ac63fd5.pdf |
MA-BERT: Towards Matrix Arithmetic-only BERT Inference by Eliminating Complex Non-Linear Functions | https://openreview.net/forum?id=HtAfbHa7LAL | https://openreview.net/forum?id=HtAfbHa7LAL | Neo Wei Ming,Zhehui Wang,Cheng Liu,Rick Siow Mong Goh,Tao Luo | ICLR 2023,Poster | Due to their superior results, Transformer-based models such as BERT have become de facto standards in many Natural Language Processing (NLP) applications. However, the intensive use of complex non-linear functions within the Transformer architecture impairs its computing efficiency and complicates corresponding accelerator designs, because non-linear functions are generally computation-intensive and require special hardware support. In light of this, we propose MA-BERT, which allows matrix arithmetic-only operations in Transformer-based NLP models and achieves efficient inference with negligible accuracy loss. Specifically, we propose four correlated techniques that include approximating softmax with a two-layer neural network, replacing GELU with ReLU, fusing normalization layers with adjacent linear layers, and leveraging knowledge transfer from baseline models. Through these techniques, we are able to eliminate the major non-linear functions in Transformer-based models and obtain MA-BERT with only matrix arithmetic and trivial ReLU operations without compromising on accuracy. With mainly regular matrix arithmetic operations, MA-BERT enables hardware-friendly processing on various computing engines, including CPUs and GPUs. Our experimental results show that MA-BERT achieves up to 27% and 41% reduction in inference time on CPU and GPU, respectively, with comparable accuracy on many downstream tasks compared to the baseline BERT models. | https://openreview.net/pdf/f33a68790cb4eec0f661b655af2303d9e9058d26.pdf |
Efficient Certified Training and Robustness Verification of Neural ODEs | https://openreview.net/forum?id=KyoVpYvWWnK | https://openreview.net/forum?id=KyoVpYvWWnK | Mustafa Zeqiri,Mark Niklas Mueller,Marc Fischer,Martin Vechev | ICLR 2023,Poster | Neural Ordinary Differential Equations (NODEs) are a novel neural architecture, built around initial value problems with learned dynamics which are solved during inference. Thought to be inherently more robust against adversarial perturbations, they were recently shown to be vulnerable to strong adversarial attacks, highlighting the need for formal guarantees. However, despite significant progress in robustness verification for standard feed-forward architectures, the verification of high dimensional NODEs remains an open problem. In this work we address this challenge and propose GAINS, an analysis framework for NODEs combining three key ideas: (i) a novel class of ODE solvers, based on variable but discrete time steps, (ii) an efficient graph representation of solver trajectories, and (iii) a novel abstraction algorithm operating on this graph representation. Together, these advances enable the efficient analysis and certified training of high-dimensional NODEs, by reducing the runtime from an intractable $\mathcal{O}(\exp(d)+\exp(T))$ to $\mathcal{O}(d+T^2\log^2T)$ in the dimensionality $d$ and integration time $T$. In an extensive evaluation on computer vision (MNIST and Fashion-MNIST) and time-series forecasting (Physio-Net) problems, we demonstrate the effectiveness of both our certified training and verification methods. | https://openreview.net/pdf/0b92d93ffeb2c0fc6a508ddf561204f7187a8a69.pdf |
Arbitrary Virtual Try-on Network: Characteristics Representation and Trade-off between Body and Clothing | https://openreview.net/forum?id=d8mr8lKIZ3n | https://openreview.net/forum?id=d8mr8lKIZ3n | Yu Liu,Mingbo Zhao,Zhao Zhang,Jicong Fan,Yang Lou,Shuicheng Yan | ICLR 2023,Poster | Deep learning based virtual try-on system has achieved some encouraging progress recently, but there still remain several big challenges that need to be solved, such as trying on arbitrary clothes of all types, trying on the clothes from one category to another and generating image-realistic results with few artifacts. To handle this issue, we propose the Arbitrary Virtual Try-On Network (AVTON) that is utilized for all-type clothes, which can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person. Our approach includes three modules: 1) Limbs Prediction Module, which is utilized for predicting the human body parts by preserving the characteristics of the reference person. This is especially good for handling cross-category try-on task (e.g., long sleeves \(\leftrightarrow\) short sleeves or long pants \(\leftrightarrow\) skirts, etc.), where the exposed arms or legs with the skin colors and details can be reasonably predicted; 2) Improved Geometric Matching Module, which is designed to warp clothes according to the geometry of the target person. We improve the TPS-based warping method with a compactly supported radial function (Wendland's \(\Psi\)-function); 3) Trade-Off Fusion Module, which is to trade off the characteristics of the warped clothes and the reference person. This module is to make the generated try-on images look more natural and realistic based on a fine-tuning symmetry of the network structure. Extensive simulations are conducted and our approach can achieve better performance compared with the state-of-the-art virtual try-on methods. | https://openreview.net/pdf/b555725c914e1f9fb64a37dce8fe336643c34b60.pdf |
UL2: Unifying Language Learning Paradigms | https://openreview.net/forum?id=6ruVLB727MC | https://openreview.net/forum?id=6ruVLB727MC | Yi Tay,Mostafa Dehghani,Vinh Q. Tran,Xavier Garcia,Jason Wei,Xuezhi Wang,Hyung Won Chung,Dara Bahri,Tal Schuster,Steven Zheng,Denny Zhou,Neil Houlsby,Donald Metzler | ICLR 2023,Poster | Existing pre-trained models are generally geared towards a particular class of problems. To date, there still seems to be no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes from pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms together. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto-frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. By scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks spanning language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding, and information retrieval. Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. Finally, we show that UL2 20B works well with chain-of-thought prompting and reasoning, making it an appealing choice for research into reasoning at a small to medium scale of 20B parameters. We release Flax-based T5X model checkpoints for the 20B model publicly. | https://openreview.net/pdf/8ac86c3590d8d02420c1aec52a1ee763b2f5166d.pdf |
CASR: Generating Complex Sequences with Autoregressive Self-Boost Refinement | https://openreview.net/forum?id=SVl1w1u3InX | https://openreview.net/forum?id=SVl1w1u3InX | Hongwei Han,Mengyu Zhou,Shi Han,Xiu Li,Dongmei Zhang | ICLR 2023,Poster | There are sequence generation tasks where the best order to generate the target sequence is not left-to-right. For example, an answer to the Sudoku game, a structured code like s-expression, and even a logical natural language answer where the analysis may be generated after the decision. We define the target sequences of those tasks as complex sequences. Obviously, a complex sequence should be constructed with multiple logical steps, and has dependencies among its parts (e.g., decisions depend on analyses). It's a great challenge for the classic left-to-right autoregressive generation system to generate complex sequences. Current approaches improve one-pass left-to-right generation on NLG tasks by generating different heuristic intermediate sequences in multiple stages. However, for complex sequences, the heuristic rules used to break them down may hurt performance and introduce additional exposure bias. To tackle these challenges, we propose a PLM-friendly autoregressive self-boost refinement framework, CASR. When training, CASR inputs the predictions generated by the model itself at the previous refinement step (instead of those produced by heuristic rules). To find an optimal design, we also discuss model architecture, parameter efficiency and initialization strategy. By evaluating CASR on Sudoku, WebQSP, MTOP and KVRET through controlled experiments and empirical studies, we find that CASR produces high-quality outputs. CASR also improves accuracy on Sudoku (70.93% --> 97.28%) and achieves state-of-the-art performance on KVRET with Micro F1 score (67.88% --> 70.00%). | https://openreview.net/pdf/46ef25cefed16d70e9ebd2d445c5e3a4aa7f054e.pdf |
Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts | https://openreview.net/forum?id=2QzNuaRHn4Z | https://openreview.net/forum?id=2QzNuaRHn4Z | Amrith Setlur,Don Dennis,Benjamin Eysenbach,Aditi Raghunathan,Chelsea Finn,Virginia Smith,Sergey Levine | ICLR 2023,Poster | Training machine learning models robust to distribution shifts is critical for real-world applications. Some robust training algorithms (e.g., Group DRO) specialize to group shifts and require group information on all training points. Other methods (e.g., CVaR DRO) that do not need group annotations can be overly conservative, since they naively upweight high loss points which may form a contrived set that does not correspond to any meaningful group in the real world (e.g., when the high loss points are randomly mislabeled training points). In this work, we address limitations in prior approaches by assuming a more nuanced form of group shift: conditioned on the label, we assume that the true group function (indicator over group) is simple. For example, we may expect that group shifts occur along low bitrate features (e.g., image background, lighting). Thus, we aim to learn a model that maintains high accuracy on simple group functions realized by these low bitrate features, and need not spend valuable model capacity achieving high accuracy on contrived groups of examples. Based on this, we consider the two-player game formulation of DRO where the adversary's capacity is bitrate-constrained. Our resulting practical algorithm, Bitrate-Constrained DRO, does not require group information on training samples yet matches the performance of Group DRO on datasets that have training group annotations and that of CVaR DRO on long-tailed distributions. Our theoretical analysis reveals that in some settings the Bitrate-Constrained DRO objective can provably yield statistically efficient and less conservative solutions than unconstrained CVaR DRO. | https://openreview.net/pdf/0aecebad8cc0de12f0d43f94fdcf97c873962ac4.pdf |
Feature selection and low test error in shallow low-rotation ReLU networks | https://openreview.net/forum?id=swEskiem99 | https://openreview.net/forum?id=swEskiem99 | Matus Telgarsky | ICLR 2023,Poster | This work establishes low test error of gradient flow (GF) and stochastic gradient descent (SGD) on two-layer ReLU networks with standard initialization scale, in three regimes where key sets of weights rotate little (either naturally due to GF and SGD, or due to an artificial constraint), and making use of margins as the core analysis technique. The first regime is near initialization, specifically until the weights have moved by $\mathcal{O}(\sqrt m)$, where $m$ denotes the network width, which is in sharp contrast to the $\mathcal{O}(1)$ weight motion allowed by the Neural Tangent Kernel (NTK); here it is shown that GF and SGD only need a network width and number of samples inversely proportional to the NTK margin, and moreover that GF attains at least the NTK margin itself and in particular escapes bad KKT points of the margin objective, whereas prior work could only establish nondecreasing but arbitrarily small margins. The second regime is the Neural Collapse (NC) setting, where data lies in well-separated groups, and the sample complexity scales with the number of groups; here the contribution over prior work is an analysis of the entire GF trajectory from initialization. Lastly, if the inner layer weights are constrained to change in norm only and can not rotate, then GF with large widths achieves globally maximal margins, and its sample complexity scales with their inverse; this is in contrast to prior work, which required infinite width and a tricky dual convergence assumption. | https://openreview.net/pdf/fcbfd7f8d46076e87300ed63c39d60a6e8c11a7e.pdf |
Backpropagation through Combinatorial Algorithms: Identity with Projection Works | https://openreview.net/forum?id=JZMR727O29 | https://openreview.net/forum?id=JZMR727O29 | Subham Sekhar Sahoo,Anselm Paulus,Marin Vlastelica,Vít Musil,Volodymyr Kuleshov,Georg Martius | ICLR 2023,Poster | Embedding discrete solvers as differentiable layers has given modern deep learning architectures combinatorial expressivity and discrete reasoning capabilities. The derivative of these solvers is zero or undefined, therefore a meaningful replacement is crucial for effective gradient-based learning. Prior works rely on smoothing the solver with input perturbations, relaxing the solver to continuous problems, or interpolating the loss landscape with techniques that typically require additional solver calls, introduce extra hyper-parameters, or compromise performance. We propose a principled approach to exploit the geometry of the discrete solution space to treat the solver as a negative identity on the backward pass and further provide a theoretical justification. Our experiments demonstrate that such a straightforward hyper-parameter-free approach is able to compete with previous more complex methods on numerous experiments such as backpropagation through discrete samplers, deep graph matching, and image retrieval. Furthermore, we substitute the previously proposed problem-specific and label-dependent margin with a generic regularization procedure that prevents cost collapse and increases robustness. | https://openreview.net/pdf/1fd744076d10f6b78516e5b369737d7e4ce6811f.pdf |
Coupled Multiwavelet Operator Learning for Coupled Differential Equations | https://openreview.net/forum?id=kIo_C6QmMOM | https://openreview.net/forum?id=kIo_C6QmMOM | Xiongye Xiao,Defu Cao,Ruochen Yang,Gaurav Gupta,Gengshuo Liu,Chenzhong Yin,Radu Balan,Paul Bogdan | ICLR 2023,Poster | Solving coupled partial differential equations (PDEs) is a key task in modeling the complex dynamics of many physical processes. Recently, neural operators have shown the ability to solve PDEs by learning the integral kernel directly in Fourier/Wavelet space, so the difficulty of solving the coupled PDEs depends on dealing with the coupled mappings between the functions. Towards this end, we propose a \textit{coupled multiwavelets neural operator} (CMWNO) learning scheme by decoupling the coupled integral kernels during the multiwavelet decomposition and reconstruction procedures in the Wavelet space. The proposed model achieves significantly higher accuracy compared to previous learning-based solvers in solving the coupled PDEs including the Gray-Scott (GS) equations and the non-local mean field game (MFG) problem. According to our experimental results, the proposed model exhibits a $2\times$ to $4\times$ improvement in relative $L^2$ error compared to the best results from the state-of-the-art models. | https://openreview.net/pdf/ee0dc763a2dfe94ef4a6a84b6f02e67537508fbb.pdf |
Mid-Vision Feedback | https://openreview.net/forum?id=4oLK1_k71Tz | https://openreview.net/forum?id=4oLK1_k71Tz | Michael Maynord,Eadom T Dessalene,Cornelia Fermuller,Yiannis Aloimonos | ICLR 2023,Poster | Feedback plays a prominent role in biological vision, where perception is modulated based on agents' evolving expectations and world model. We introduce a novel mechanism which modulates perception based on high level categorical expectations: Mid-Vision Feedback (MVF). MVF associates high level contexts with linear transformations. When a context is "expected" its associated linear transformation is applied over feature vectors in a mid level of a network. The result is that mid-level network representations are biased towards conformance with high level expectations, improving overall accuracy and contextual consistency. Additionally, during training mid-level feature vectors are biased through introduction of a loss term which increases the distance between feature vectors associated with different contexts. MVF is agnostic as to the source of contextual expectations, and can serve as a mechanism for top down integration of symbolic systems with deep vision architectures. We show the superior performance of MVF to post-hoc filtering for incorporation of contextual knowledge, and show superior performance of configurations using predicted context (when no context is known a priori) over configurations with no context awareness. | https://openreview.net/pdf/15ea78c27dbe1409283f448e49cd22140b311618.pdf |
Safe Reinforcement Learning From Pixels Using a Stochastic Latent Representation | https://openreview.net/forum?id=b39dQt_uffW | https://openreview.net/forum?id=b39dQt_uffW | Yannick Hogewind,Thiago D. Simão,Tal Kachman,Nils Jansen | ICLR 2023,Poster | We address the problem of safe reinforcement learning from pixel observations. Inherent challenges in such settings are (1) a trade-off between reward optimization and adhering to safety constraints, (2) partial observability, and (3) high-dimensional observations. We formalize the problem in a constrained, partially observable Markov decision process framework, where an agent obtains distinct reward and safety signals. To address the curse of dimensionality, we employ a novel safety critic using the stochastic latent actor-critic (SLAC) approach. The latent variable model predicts rewards and safety violations, and we use the safety critic to train safe policies. Using well-known benchmark environments, we demonstrate competitive performance over existing approaches regarding computational requirements, final reward return, and satisfying the safety constraints. | https://openreview.net/pdf/9c4b07fd4a61d7c3cbe4abfa26f9ce1b7f127301.pdf |
TrojText: Test-time Invisible Textual Trojan Insertion | https://openreview.net/forum?id=ja4Lpp5mqc2 | https://openreview.net/forum?id=ja4Lpp5mqc2 | Qian Lou,Yepeng Liu,Bo Feng | ICLR 2023,Poster | In Natural Language Processing (NLP), intelligent neuron models can be susceptible to textual Trojan attacks. Such attacks occur when Trojan models behave normally for standard inputs but generate malicious output for inputs that contain a specific trigger. Syntactic-structure triggers, which are invisible, are becoming more popular for Trojan attacks because they are difficult to detect and defend against. However, these types of attacks require a large corpus of training data to generate poisoned samples with the necessary syntactic structures for Trojan insertion. Obtaining such data can be difficult for attackers, and the process of generating syntactic poisoned triggers and inserting Trojans can be time-consuming. This paper proposes a solution called TrojText, which aims to determine whether invisible textual Trojan attacks can be performed more efficiently and cost-effectively without training data. The proposed approach, called the Representation-Logit Trojan Insertion (RLI) algorithm, uses smaller sampled test data instead of large training data to achieve the desired attack. The paper also introduces two additional techniques, namely the accumulated gradient ranking (AGR) and Trojan Weights Pruning (TWP), to reduce the number of tuned parameters and the attack overhead. The TrojText approach was evaluated on three datasets (AG’s News, SST-2, and OLID) using three NLP models (BERT, XLNet, and DeBERTa). The experiments demonstrated that the TrojText approach achieved a 98.35% classification accuracy for test sentences in the target class on the BERT model for the AG’s News dataset. The source code for TrojText is available at https://github.com/UCF-ML-Research/TrojText. | https://openreview.net/pdf/090c1fa0cc728fa6eb032fe3c74b8b5125be7e94.pdf |
Improved Training of Physics-Informed Neural Networks Using Energy-Based Priors: a Study on Electrical Impedance Tomography | https://openreview.net/forum?id=zqkfJA6R1-r | https://openreview.net/forum?id=zqkfJA6R1-r | Akarsh Pokkunuru,Pedram Rooshenas,Thilo Strauss,Anuj Abhishek,Taufiquar Khan | ICLR 2023,Poster | Physics-informed neural networks (PINNs) are attracting significant attention for solving partial differential equation (PDE) based inverse problems, including electrical impedance tomography (EIT). EIT is non-linear, and its inverse problem in particular is highly ill-posed. Therefore, successful training of PINNs is extremely sensitive to the interplay between different loss terms and hyper-parameters, including the learning rate. In this work, we propose a Bayesian approach that uses a data-driven energy-based model (EBM) as a prior to improve the overall accuracy and quality of tomographic reconstruction. In particular, the EBM is trained over the possible solutions of the PDEs with different boundary conditions. By imparting such a prior onto physics-based training, PINN convergence to the PDE’s solution is expedited more than tenfold. Evaluation outcomes show that our proposed method is more robust for solving the EIT problem. Our code is available at: https://rooshenasgroup.github.io/eit_ebprior. | https://openreview.net/pdf/606fb09bd84f729bc308ee0e859bca508d2d5c14.pdf
Ordered GNN: Ordering Message Passing to Deal with Heterophily and Over-smoothing | https://openreview.net/forum?id=wKPmPBHSnT6 | https://openreview.net/forum?id=wKPmPBHSnT6 | Yunchong Song,Chenghu Zhou,Xinbing Wang,Zhouhan Lin | ICLR 2023,Poster | Most graph neural networks follow the message passing mechanism. However, it faces the over-smoothing problem when message passing is applied to a graph multiple times, causing indistinguishable node representations and preventing the model from effectively learning dependencies between farther-away nodes. On the other hand, features of neighboring nodes with different labels are likely to be falsely mixed, resulting in the heterophily problem. In this work, we propose to order the messages passed into the node representation, with specific blocks of neurons targeted for message passing within specific hops. This is achieved by aligning the hierarchy of the rooted-tree of a central node with the ordered neurons in its node representation. Experimental results on an extensive set of datasets show that our model can simultaneously achieve state-of-the-art performance in both homophily and heterophily settings, without any targeted design. Moreover, its performance holds up well as the model becomes very deep, effectively preventing the over-smoothing problem. Finally, visualizing the gating vectors shows that our model learns to behave differently between homophily and heterophily settings, providing an explainable graph neural model. | https://openreview.net/pdf/eebe1da00d2578a09fe08d67012037b1bcf8deea.pdf
Sparse Distributed Memory is a Continual Learner | https://openreview.net/forum?id=JknGeelZJpHP | https://openreview.net/forum?id=JknGeelZJpHP | Trenton Bricken,Xander Davies,Deepak Singh,Dmitry Krotov,Gabriel Kreiman | ICLR 2023,Poster | Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a core neural circuit with the powerful Transformer model, we create a modified Multi-Layered Perceptron (MLP) that is a strong continual learner. We find that every component of our MLP variant translated from biology is necessary for continual learning. Our solution is also free from any memory replay or task information, and introduces novel methods to train sparse networks that may be broadly applicable. | https://openreview.net/pdf/56ddb3d15021d5342d872d0058e6754318b2a8cf.pdf |
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning | https://openreview.net/forum?id=Xo2E217_M4n | https://openreview.net/forum?id=Xo2E217_M4n | Kaiyuan Zhang,Guanhong Tao,Qiuling Xu,Siyuan Cheng,Shengwei An,Yingqi Liu,Shiwei Feng,Guangyu Shen,Pin-Yu Chen,Shiqing Ma,Xiangyu Zhang | ICLR 2023,Poster | Federated Learning (FL) is a distributed learning paradigm that enables different parties to train a model together for high quality and strong privacy protection. In this scenario, individual participants may get compromised and perform backdoor attacks by poisoning the data (or gradients). Existing work on robust aggregation and certified FL robustness does not study how hardening benign clients can affect the global model (and the malicious clients). In this work, we theoretically analyze the connection among cross-entropy loss, attack success rate, and clean accuracy in this setting. Moreover, we propose a trigger reverse engineering based defense and show that our method can achieve robustness improvement with guarantee (i.e., reducing the attack success rate) without affecting benign accuracy. We conduct comprehensive experiments across different datasets and attack settings. Our results on nine competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks. Code is available at https://github.com/KaiyuanZh/FLIP. | https://openreview.net/pdf/6731b5784520aedd43f4da6cb01e5587b66819be.pdf |
UniMax: Fairer and More Effective Language Sampling for Large-Scale Multilingual Pretraining | https://openreview.net/forum?id=kXwdL1cWOAi | https://openreview.net/forum?id=kXwdL1cWOAi | Hyung Won Chung,Xavier Garcia,Adam Roberts,Yi Tay,Orhan Firat,Sharan Narang,Noah Constant | ICLR 2023,Poster | Pretrained multilingual large language models have typically used heuristic temperature-based sampling to balance between different languages. However previous work has not systematically evaluated the efficacy of different pretraining language distributions across model scales. In this paper, we propose a new sampling method, UniMax, that delivers more uniform coverage of head languages while mitigating overfitting on tail languages by explicitly capping the number of repeats over each language's corpus. We perform an extensive series of ablations testing a range of sampling strategies on a suite of multilingual benchmarks, while varying model scale. We find that UniMax outperforms standard temperature-based sampling, and the benefits persist as scale increases. As part of our contribution, we release: (i) an improved and refreshed mC4 multilingual corpus consisting of 29 trillion characters across 107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained with UniMax sampling. | https://openreview.net/pdf/f9db93a34d7f56ce156cb253feba6c638acc2b21.pdf |
GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks | https://openreview.net/forum?id=rqq6Dh8t4d | https://openreview.net/forum?id=rqq6Dh8t4d | Xiaoqi Wang,Han Wei Shen | ICLR 2023,Poster | Recently, Graph Neural Networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this technological breakthrough makes people wonder: how does a GNN make such decisions, and can we trust its prediction with high confidence? When it comes to some critical fields, such as biomedicine, where making wrong decisions can have severe consequences, it is crucial to interpret the inner working mechanisms of GNNs before applying them. In this paper, we propose a model-agnostic model-level explanation method for different GNNs that follow the message passing scheme, GNNInterpreter, to explain the high-level decision-making process of the GNN model. More specifically, GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect when making a certain prediction by optimizing a novel objective function specifically designed for the model-level explanation for GNNs. Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features, without introducing another blackbox or requiring manually specified domain-specific rules. In addition, the experimental studies conducted on four different datasets demonstrate that the explanation graphs generated by GNNInterpreter match the desired graph pattern if the model is ideal; otherwise, potential model pitfalls can be revealed by the explanation. | https://openreview.net/pdf/11a78bf7dcfad6dafbcf6d00ea84bbb4382f962e.pdf |
Rethinking Symbolic Regression: Morphology and Adaptability in the Context of Evolutionary Algorithms | https://openreview.net/forum?id=OPGy07PojsZ | https://openreview.net/forum?id=OPGy07PojsZ | Kei Sen Fong,Shelvia Wongso,Mehul Motani | ICLR 2023,Poster | Symbolic Regression (SR) is the well-studied problem of finding closed-form analytical expressions that describe the relationship between variables in a measurement dataset. In this paper, we rethink SR from two perspectives: morphology and adaptability. Morphology: Current SR algorithms typically use several man-made heuristics to influence the morphology (or structure) of the expressions in the search space. These man-made heuristics may introduce unintentional bias and data leakage, especially with the relatively few equation-recovery benchmark problems available for evaluating SR approaches. To address this, we formulate a novel minimalistic approach, based on constructing a depth-aware mathematical language model trained on terminal walks of expression trees, as a replacement for these heuristics. Adaptability: Current SR algorithms tend to select expressions based on only a single fitness function (e.g., MSE on the training set). We promote the use of an adaptability framework in evolutionary SR which uses fitness functions that alternate across generations. This leads to robust expressions that perform well on the training set and are close to the true functional form. We demonstrate this by alternating fitness functions that quantify faithfulness to values (via MSE) and empirical derivatives (via a novel theoretically justified fitness metric coined MSEDI). Proof-of-concept: We combine these ideas into a minimalistic evolutionary SR algorithm that outperforms all benchmark and state-of-the-art SR algorithms in problems with unknown constants added, which we claim are more reflective of SR performance for real-world applications. Our claim is then strengthened by reproducing the superior performance on real-world regression datasets from SRBench. For researchers interested in equation-recovery problems, we also propose a set of conventions that can be used to promote fairness in comparison across SR methods and to reduce unintentional bias. | https://openreview.net/pdf/7af58da043c44659d490a741c427285fdc8346d6.pdf
On Pre-training Language Model for Antibody | https://openreview.net/forum?id=zaq4LV55xHl | https://openreview.net/forum?id=zaq4LV55xHl | Danqing Wang,Fei YE,Hao Zhou | ICLR 2023,Poster | Antibodies are vital proteins offering robust protection for the human body from pathogens. The development of both general protein and antibody-specific pre-trained language models facilitates antibody prediction tasks. However, there have been limited studies that comprehensively explore the representation capability of distinct pre-trained language models on different antibody tasks. To investigate the problem, we aim to answer several key questions in this paper, such as how pre-trained language models perform in antibody tasks with different specificity and how introducing specific biological mechanisms to the pre-training process can benefit the model. Additionally, we evaluate whether the learned antibody pre-trained representations can be applied to real-world antibody problems, like drug discovery and immune process understanding. Previously, the lack of an available benchmark largely hindered efforts to answer these questions. To aid in our investigation, we provide an AnTibody Understanding Evaluation (ATUE) benchmark. We comprehensively evaluate the performance of protein pre-trained language models through an empirical study, along with conclusions and new insights. Our ATUE and code are released at https://github.com/dqwang122/EATLM. | https://openreview.net/pdf/c0deacf9e860ac34392346ea2fea727ac90bec4a.pdf
Learning to reason over visual objects | https://openreview.net/forum?id=uR6x8Be7o_M | https://openreview.net/forum?id=uR6x8Be7o_M | Shanka Subhra Mondal,Taylor Whittington Webb,Jonathan Cohen | ICLR 2023,Poster | A core component of human intelligence is the ability to identify abstract patterns inherent in complex, high-dimensional perceptual data, as exemplified by visual reasoning tasks such as Raven’s Progressive Matrices (RPM). Motivated by the goal of designing AI systems with this capacity, recent work has focused on evaluating whether neural networks can learn to solve RPM-like problems. Previous work has generally found that strong performance on these problems requires the incorporation of inductive biases that are specific to the RPM problem format, raising the question of whether such models might be more broadly useful. Here, we investigated the extent to which a general-purpose mechanism for processing visual scenes in terms of objects might help promote abstract visual reasoning. We found that a simple model, consisting only of an object-centric encoder and a transformer reasoning module, achieved state-of-the-art results on both of two challenging RPM-like benchmarks (PGM and I-RAVEN), as well as a novel benchmark with greater visual complexity (CLEVR-Matrices). These results suggest that an inductive bias for object-centric processing may be a key component of abstract visual reasoning, obviating the need for problem-specific inductive biases. | https://openreview.net/pdf/a254cbbe48d67b9d3a5f5a0c75a1f6f6d56e99f7.pdf |
Imitating Graph-Based Planning with Goal-Conditioned Policies | https://openreview.net/forum?id=6lUEy1J5R7p | https://openreview.net/forum?id=6lUEy1J5R7p | Junsu Kim,Younggyo Seo,Sungsoo Ahn,Kyunghwan Son,Jinwoo Shin | ICLR 2023,Poster | Recently, graph-based planning algorithms have gained much attention to solve goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target-goal, and the agents learn to execute subgoal-conditioned policies. However, the sample-efficiency of such RL schemes still remains a challenge, particularly for long-horizon tasks. To address this issue, we present a simple yet effective self-imitation scheme which distills a subgoal-conditioned policy into the target-goal-conditioned policy. Our intuition here is that to reach a target-goal, an agent should pass through a subgoal, so target-goal- and subgoal- conditioned policies should be similar to each other. We also propose a novel scheme of stochastically skipping executed subgoals in a planned path, which further improves performance. Unlike prior methods that only utilize graph-based planning in an execution phase, our method transfers knowledge from a planner along with a graph into policy learning. We empirically show that our method can significantly boost the sample-efficiency of the existing goal-conditioned RL methods under various long-horizon control tasks. | https://openreview.net/pdf/fa60437d007be2312f49bfe0e1aebfab94534575.pdf |
A theoretical study of inductive biases in contrastive learning | https://openreview.net/forum?id=AuEgNlEAmed | https://openreview.net/forum?id=AuEgNlEAmed | Jeff Z. HaoChen,Tengyu Ma | ICLR 2023,Poster | Understanding self-supervised learning is important but challenging. Previous theoretical works study the role of pretraining losses, and view neural networks as general black boxes. However, the recent work of [Saunshi et al.] argues that the model architecture --- a component largely ignored by previous works --- also has significant influences on the downstream performance of self-supervised learning. In this work, we provide the first theoretical analysis of self-supervised learning that incorporates the effect of inductive biases originating from the model class. In particular, we focus on contrastive learning --- a popular self-supervised learning method that is widely used in the vision domain. We show that when the model has limited capacity, contrastive representations would recover certain special clustering structures that are compatible with the model architecture, but ignore many other clustering structures in the data distribution. As a result, our theory can capture the more realistic setting where contrastive representations have much lower dimensionality than the number of clusters in the data distribution. We instantiate our theory on several synthetic data distributions, and provide empirical evidence to support the theory. | https://openreview.net/pdf/c0e1dd5361d9fbb78aee85364df2ab49854653b4.pdf |
Combinatorial Pure Exploration of Causal Bandits | https://openreview.net/forum?id=pBBsrPzq7aF | https://openreview.net/forum?id=pBBsrPzq7aF | Nuoya Xiong,Wei Chen | ICLR 2023,Poster | The combinatorial pure exploration of causal bandits is the following online learning task: given a causal graph with unknown causal inference distributions, in each round we choose a subset of variables to intervene or do no intervention, and observe the random outcomes of all random variables, with the goal that using as few rounds as possible, we can output an intervention that gives the best (or almost best) expected outcome on the reward variable $Y$ with probability at least $1-\delta$, where $\delta$ is a given confidence level. We provide the first gap-dependent and fully adaptive pure exploration algorithms on two types of causal models --- the binary generalized linear model (BGLM) and general graphs. For BGLM, our algorithm is the first to be designed specifically for this setting and achieves polynomial sample complexity, while all existing algorithms for general graphs have either sample complexity exponential to the graph size or some unreasonable assumptions. For general graphs, our algorithm provides a significant improvement on sample complexity, and it nearly matches the lower bound we prove. Our algorithms achieve such improvement by a novel integration of prior causal bandit algorithms and prior adaptive pure exploration algorithms, the former of which utilize the rich observational feedback in causal bandits but are not adaptive to reward gaps, while the latter of which have the issue in reverse. | https://openreview.net/pdf/ce0582529b21e3a09fd78c91ca34e1a860955e4c.pdf |
Computational Language Acquisition with Theory of Mind | https://openreview.net/forum?id=C2ulri4duIs | https://openreview.net/forum?id=C2ulri4duIs | Andy Liu,Hao Zhu,Emmy Liu,Yonatan Bisk,Graham Neubig | ICLR 2023,Poster | Unlike current state-of-the-art language models, young children actively acquire language through interactions with their surrounding environment and caretakers. One mechanism that has been argued to be critical to language learning is the ability to infer the mental states of other agents in social environments, coined Theory of Mind (ToM) by Premack & Woodruff (1978). Drawing inspiration from the modern operationalized versions of ToM implemented in Rabinowitz et al. (2018) and Zhu et al. (2021), we build language-learning agents equipped with ToM, and measure its effects on the learning process. We model ToM by giving the speaker agent an internal listener model that is trained alongside the speaker and used to rerank potential utterances. We experiment with varying task difficulty, hypothesizing that models will acquire more complex language to adapt to stronger environmental pressures. We find that training speakers with a highly weighted ToM listener component leads to performance gains in our image referential game setting. We also find some evidence that increasing task difficulty in the training process results in more fluent and precise utterances in evaluation. This suggests the potential utility of further incorporating ToM, as well as other insights from child language acquisition, into computational models of language acquisition. | https://openreview.net/pdf/b8215e9ec231405a7f97d58eb05eb515dbef7abe.pdf |
Pareto Invariant Risk Minimization: Towards Mitigating the Optimization Dilemma in Out-of-Distribution Generalization | https://openreview.net/forum?id=esFxSb_0pSL | https://openreview.net/forum?id=esFxSb_0pSL | Yongqiang Chen,Kaiwen Zhou,Yatao Bian,Binghui Xie,Bingzhe Wu,Yonggang Zhang,MA KAILI,Han Yang,Peilin Zhao,Bo Han,James Cheng | ICLR 2023,Poster | Recently, there has been a growing surge of interest in enabling machine learning systems to generalize well to Out-of-Distribution (OOD) data. Most efforts are devoted to advancing optimization objectives that regularize models to capture the underlying invariance; however, there often are compromises in the optimization process of these OOD objectives: i) Many OOD objectives have to be relaxed as penalty terms of Empirical Risk Minimization (ERM) for the ease of optimization, while the relaxed forms can weaken the robustness of the original objective; ii) The penalty terms also require careful tuning of the penalty weights due to the intrinsic conflicts between ERM and OOD objectives. Consequently, these compromises could easily lead to suboptimal performance of either the ERM or OOD objective. To address these issues, we introduce a multi-objective optimization (MOO) perspective to understand the OOD optimization process, and propose a new optimization scheme called PAreto Invariant Risk Minimization (PAIR). PAIR improves the robustness of OOD objectives by cooperatively optimizing with other OOD objectives, thereby bridging the gaps caused by the relaxations. Then PAIR approaches a Pareto optimal solution that trades off the ERM and OOD objectives properly. Extensive experiments on challenging benchmarks, WILDS, show that PAIR alleviates the compromises and yields top OOD performances. | https://openreview.net/pdf/df932dac4f2586ee97ae1e6e9371a256d7a727e2.pdf |
What Makes Convolutional Models Great on Long Sequence Modeling? | https://openreview.net/forum?id=TGJSPbRpJX- | https://openreview.net/forum?id=TGJSPbRpJX- | Yuhong Li,Tianle Cai,Yi Zhang,Deming Chen,Debadeepta Dey | ICLR 2023,Poster | Convolutional models have been widely used in multiple domains. However, most existing models only use local convolution, making the model unable to handle long-range dependencies efficiently. Attention overcomes this problem by aggregating global information based on the pair-wise attention score but also makes the computational complexity quadratic to the sequence length. Recently, Gu et al. proposed a model called S4 inspired by the state space model. S4 can be efficiently implemented as a global convolutional model whose kernel size equals the input sequence length. With Fast Fourier Transform, S4 can model much longer sequences than Transformers and achieve significant gains over SoTA on several long-range tasks. Despite its empirical success, S4 is involved. It requires sophisticated parameterization and initialization schemes that combine the wisdom from several prior works. As a result, S4 is less intuitive and hard to use for researchers with limited prior knowledge. Here we aim to demystify S4 and extract basic principles that contribute to the success of S4 as a global convolutional model. We focus on the structure of the convolution kernel and identify two critical but intuitive principles enjoyed by S4 that are sufficient to make up an effective global convolutional model: 1) The parameterization of the convolutional kernel needs to be efficient in the sense that the number of parameters should scale sub-linearly with sequence length. 2) The kernel needs to satisfy a decaying structure that the weights for convolving with closer neighbors are larger than the more distant ones. Based on the two principles, we propose a simple yet effective convolutional model called Structured Global Convolution (SGConv). SGConv exhibits strong empirical performance over several tasks: 1) With faster speed, SGConv surpasses the previous SoTA on Long Range Arena and Speech Command datasets. 2) When plugging SGConv into standard language and vision models, it shows the potential to improve both efficiency and performance. | https://openreview.net/pdf/fdfaa06c7ace0e9ad63349721d8d79419929c11f.pdf |
Editing models with task arithmetic | https://openreview.net/forum?id=6t0Kwf8-jrj | https://openreview.net/forum?id=6t0Kwf8-jrj | Gabriel Ilharco,Marco Tulio Ribeiro,Mitchell Wortsman,Ludwig Schmidt,Hannaneh Hajishirzi,Ali Farhadi | ICLR 2023,Poster | Changing how pre-trained models behave---e.g., improving their performance on a downstream task or mitigating biases learned during pre-training---is a common practice when developing machine learning systems. In this work, we propose a new paradigm for steering the behavior of neural networks, centered around task vectors. A task vector specifies a direction in the weight space of a pre-trained model, such that movement in that direction improves performance on the task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined together through arithmetic operations such as negation and addition, and the behavior of the resulting model is steered accordingly. Moreover, task vectors can be added together to improve performance on multiple tasks at once. Finally, when tasks are linked by an analogy relationship of the form ``A is to B as C is to D", combining task vectors from three of the tasks can improve performance on the fourth, even when no data from the fourth task is used for training. | https://openreview.net/pdf/0776550849b74d70586738db037bf1c9e9707c63.pdf |
Neural Systematic Binder | https://openreview.net/forum?id=ZPHE4fht19t | https://openreview.net/forum?id=ZPHE4fht19t | Gautam Singh,Yeongbin Kim,Sungjin Ahn | ICLR 2023,Poster | The key to high-level cognition is believed to be the ability to systematically manipulate and compose knowledge pieces. While token-like structured knowledge representations are naturally provided in text, it is elusive how to obtain them for unstructured modalities such as scene images. In this paper, we propose a neural mechanism called Neural Systematic Binder or SysBinder for constructing a novel structured representation called Block-Slot Representation. In Block-Slot Representation, object-centric representations known as slots are constructed by composing a set of independent factor representations called blocks, to facilitate systematic generalization. SysBinder obtains this structure in an unsupervised way by alternatingly applying two different binding principles: spatial binding for spatial modularity across the full scene and factor binding for factor modularity within an object. SysBinder is a simple, deterministic, and general-purpose layer that can be applied as a drop-in module in any arbitrary neural network and on any modality. In experiments, we find that SysBinder provides significantly better factor disentanglement within the slots than the conventional object-centric methods, including, for the first time, in visually complex scene images such as CLEVR-Tex. Furthermore, we demonstrate factor-level systematicity in controlled scene generation by decoding unseen factor combinations. | https://openreview.net/pdf/f75c56d6770f4153c94168fa78ea11746a264c4f.pdf |
Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis | https://openreview.net/forum?id=PUIqjT4rzq7 | https://openreview.net/forum?id=PUIqjT4rzq7 | Weixi Feng,Xuehai He,Tsu-Jui Fu,Varun Jampani,Arjun Reddy Akula,Pradyumna Narayana,Sugato Basu,Xin Eric Wang,William Yang Wang | ICLR 2023,Poster | Large-scale diffusion models have achieved state-of-the-art results on text-to-image synthesis (T2I) tasks. Despite their ability to generate high-quality yet creative images, we observe that attribute binding and compositional capabilities remain major challenges, especially when multiple objects are involved. Attribute binding requires the model to associate objects with the correct attribute descriptions, and compositional skills require the model to combine and generate multiple concepts into a single image. In this work, we improve these two aspects of T2I models to achieve more accurate image compositions. To do this, we incorporate linguistic structures into the diffusion guidance process, based on the controllable properties of manipulating cross-attention layers in diffusion-based T2I models. We observe that keys and values in cross-attention layers have strong semantic meanings associated with object layouts and content. Therefore, by manipulating the cross-attention representations based on linguistic insights, we can better preserve the compositional semantics in the generated image. Built upon Stable Diffusion, a SOTA T2I model, our structured cross-attention design is efficient and requires no additional training samples. We achieve better compositional skills in qualitative and quantitative results, leading to a significant 5-8\% advantage in head-to-head user comparison studies. Lastly, we conduct an in-depth analysis to reveal potential causes of incorrect image compositions and justify the properties of cross-attention layers in the generation process. | https://openreview.net/pdf/e1ae37e998417bc8a2fe61c08e82494f2db8b53e.pdf
Can Agents Run Relay Race with Strangers? Generalization of RL to Out-of-Distribution Trajectories | https://openreview.net/forum?id=ipflrGaf7ry | https://openreview.net/forum?id=ipflrGaf7ry | Li-Cheng Lan,Huan Zhang,Cho-Jui Hsieh | ICLR 2023,Poster | In this paper, we evaluate and improve the generalization performance for reinforcement learning (RL) agents on the set of ``controllable'' states, where good policies exist on these states to achieve the goal. An RL agent that generally masters a task should reach its goal starting from any controllable state of the environment instead of memorizing a small set of trajectories. To practically evaluate this type of generalization, we propose relay evaluation, which starts the test agent from the middle of other independently well-trained stranger agents' trajectories. With extensive experimental evaluation, we show the prevalence of generalization failure on controllable states from stranger agents. For example, in the Humanoid environment, we observed that a well-trained Proximal Policy Optimization (PPO) agent, with only 3.9\% failure rate during regular testing, failed on 81.6\% of the states generated by well-trained stranger PPO agents. To improve "relay generalization," we propose a novel method called Self-Trajectory Augmentation (STA), which will reset the environment to the agent's old states according to the Q function during training. After applying STA to the Soft Actor Critic's (SAC) training procedure, we reduced the failure rate of SAC under relay-evaluation by more than three times in most settings without impacting agent performance and increasing the needed number of environment interactions. Our code is available at https://github.com/lan-lc/STA. | https://openreview.net/pdf/80245884d3c21d7b21166281784b35962b9f3e1f.pdf |
CktGNN: Circuit Graph Neural Network for Electronic Design Automation | https://openreview.net/forum?id=NE2911Kq1sp | https://openreview.net/forum?id=NE2911Kq1sp | Zehao Dong,Weidong Cao,Muhan Zhang,Dacheng Tao,Yixin Chen,Xuan Zhang | ICLR 2023,Poster | The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. In the past decades, intensive research efforts have only been paid to automate the transistor sizing with a given circuit topology. By recognizing the graph nature of circuits, this paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates the circuit topology generation and device sizing based on the encoder-dependent optimization subroutines. Particularly, CktGNN encodes circuit graphs using a two-level GNN framework (of nested GNN) where circuits are represented as combinations of subgraphs in a known subgraph basis. In this way, it significantly improves efficiency by reducing the number of subgraphs to perform message passing. Nonetheless, another critical roadblock to advancing learning-assisted circuit design automation is a lack of public benchmarks to perform canonical assessment and reproducible research. To tackle the challenge, we introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains $10$K distinct operational amplifiers with carefully-extracted circuit specifications from physical implementations. OCB also equips with communicative circuit generation and evaluation capabilities such that it can be used to generalize the applicability of CktGNN to design various analog circuits by efficiently producing corresponding datasets. Experiments on OCB show the extraordinary advantages of CktGNN through representation-based optimization frameworks over other recent powerful GNN baselines and manual design from human experts. Our work paves the way toward a learning-based open-sourced design automation flow for analog circuits. | https://openreview.net/pdf/bb4750b65866d1d9d4d1088f8e0de8b60e3d554c.pdf
Specformer: Spectral Graph Neural Networks Meet Transformers | https://openreview.net/forum?id=0pdSt3oyJa1 | https://openreview.net/forum?id=0pdSt3oyJa1 | Deyu Bo,Chuan Shi,Lele Wang,Renjie Liao | ICLR 2023,Poster | Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions. However, most existing spectral graph filters are scalar-to-scalar functions, i.e., mapping a single eigenvalue to a single filtered value, thus ignoring the global pattern of the spectrum. Furthermore, these filters are often constructed based on some fixed-order polynomials, which have limited expressiveness and flexibility. To tackle these issues, we introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain, leading to a learnable set-to-set spectral filter. We also design a decoder with learnable bases to enable non-local graph convolution. Importantly, Specformer is equivariant to permutation. By stacking multiple Specformer layers, one can build a powerful spectral GNN. On synthetic datasets, we show that our Specformer can better recover ground-truth spectral filters than other spectral GNNs. Extensive experiments of both node-level and graph-level tasks on real-world graph datasets show that our Specformer outperforms state-of-the-art GNNs and learns meaningful spectrum patterns. Code and data are available at https://github.com/bdy9527/Specformer. | https://openreview.net/pdf/4bf614d9277dc7ba7ec87eb0ccfccf6b765d3979.pdf |
Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought | https://openreview.net/forum?id=qFVVBzXxR2V | https://openreview.net/forum?id=qFVVBzXxR2V | Abulhair Saparov,He He | ICLR 2023,Poster | Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks measure reasoning ability indirectly, by evaluating accuracy on downstream tasks such as mathematical reasoning. However, it is unclear how these models obtain the answers and whether they rely on simple heuristics rather than the generated chain-of-thought. To enable systematic exploration of the reasoning ability of LLMs, we present a new synthetic question-answering dataset called PrOntoQA, where each example is generated from a synthetic world model represented in first-order logic. This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis. Our analysis on InstructGPT and GPT-3 shows that LLMs are quite capable of making correct individual deduction steps, and so are generally capable of reasoning, even in fictional contexts. However, they have difficulty with proof planning: When multiple valid deduction steps are available, they are not able to systematically explore the different options. | https://openreview.net/pdf/e73172f359a19430928855ff049b5dd1e7a4d987.pdf |
Recursive Time Series Data Augmentation | https://openreview.net/forum?id=5lgD4vU-l24s | https://openreview.net/forum?id=5lgD4vU-l24s | Amine Mohamed Aboussalah,Minjae Kwon,Raj G Patel,Cheng Chi,Chi-Guhn Lee | ICLR 2023,Poster | Time series observations can be seen as realizations of an underlying dynamical system governed by rules that we typically do not know. For time series learning tasks we create our model using available data. Training on available realizations, where data is limited, often induces severe over-fitting thereby preventing generalization. To address this issue, we introduce a general recursive framework for time series augmentation, which we call the Recursive Interpolation Method (RIM). New augmented time series are generated using a recursive interpolation function from the original time series for use in training. We perform theoretical analysis to characterize the proposed RIM and to guarantee its performance under certain conditions. We apply RIM to diverse synthetic and real-world time series cases to achieve strong performance over non-augmented data on a variety of learning tasks. Our method is also computationally more efficient and leads to better performance when compared to state of the art time series data augmentation. | https://openreview.net/pdf/59b64e2daa15ab42b1105a5c3ae7a533e5612b26.pdf
Auto-Encoding Goodness of Fit | https://openreview.net/forum?id=JjCAdMUlu9v | https://openreview.net/forum?id=JjCAdMUlu9v | Aaron Palmer,Zhiyi Chi,Derek Aguiar,Jinbo Bi | ICLR 2023,Poster | For generative autoencoders to learn a meaningful latent representation for data generation, a careful balance must be achieved between reconstruction error and how close the distribution in the latent space is to the prior. However, this balance is challenging to achieve due to a lack of criteria that work both at the mini-batch (local) and aggregated posterior (global) level. In this work, we develop the Goodness of Fit Autoencoder (GoFAE), which incorporates hypothesis tests at two levels. At the mini-batch level, it uses GoF test statistics as regularization objectives. At a more global level, it selects a regularization coefficient based on higher criticism, i.e., a test on the uniformity of the local GoF p-values. We justify the use of GoF tests by providing a relaxed $L_2$-Wasserstein bound on the distance between the latent distribution and target prior. We propose to use GoF tests and prove that optimization based on these tests can be done with stochastic gradient descent (SGD) on a compact Riemannian manifold. Empirically, we show that our higher criticism parameter selection procedure balances reconstruction and generation using mutual information and uniformity of p-values respectively. Finally, we show that GoFAE achieves comparable FID scores and mean squared errors with competing deep generative models while retaining statistical indistinguishability from Gaussian in the latent space based on a variety of hypothesis tests. | https://openreview.net/pdf/c3b00c5ffafb88c2f16c3b4e5f0dda8db2bc9653.pdf
Understanding the Covariance Structure of Convolutional Filters | https://openreview.net/forum?id=WGApODQvwRg | https://openreview.net/forum?id=WGApODQvwRg | Asher Trockman,Devin Willmott,J Zico Kolter | ICLR 2023,Poster | Neural network weights are typically initialized at random from univariate distributions, controlling just the variance of individual weights even in highly-structured operations like convolutions. Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions whose learned filters have notable structure; this presents an opportunity to study their empirical covariances. In this work, we first observe that such learned filters have highly-structured covariance matrices, and moreover, we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks of different depths, widths, patch sizes, and kernel sizes, indicating a degree of model-independence to the covariance structure. Motivated by these findings, we then propose a learning-free multivariate initialization scheme for convolutional filters using a simple, closed-form construction of their covariance. Models using our initialization outperform those using traditional univariate initializations, and typically meet or exceed the performance of those initialized from the covariances of learned filters; in some cases, this improvement can be achieved without training the depthwise convolutional filters at all. Our code is available at https://github.com/locuslab/convcov. | https://openreview.net/pdf/304a7da98b79c8c15e8baa9f038951976a9ef764.pdf |
Masked Distillation with Receptive Tokens | https://openreview.net/forum?id=mWRngkvIki3 | https://openreview.net/forum?id=mWRngkvIki3 | Tao Huang,Yuan Zhang,Shan You,Fei Wang,Chen Qian,Jian Cao,Chang Xu | ICLR 2023,Poster | Distilling from the feature maps can be fairly effective for dense prediction tasks since both the feature discriminability and localization information can be well transferred. However, not every pixel contributes equally to the performance, and a good student should learn from what really matters to the teacher. In this paper, we introduce a learnable embedding dubbed receptive token to locate the pixels of interests (PoIs) in the feature map, with a distillation mask generated via pixel-wise attention. Then the masked distillation will be performed via the pixel-wise reconstruction. In this way, a distillation mask refers to a pattern of pixel dependencies. We thus adopt multiple receptive tokens to investigate more sophisticated and informative pixel dependencies within feature maps to enhance the distillation. To obtain a group of masks, the receptive tokens are learned via the regular task loss but with teacher fixed, and we also leverage a Dice loss to enrich the diversity of obtained masks. Our method dubbed MasKD is simple and practical, and needs no priors of ground-truth labels, which can apply to various dense prediction tasks. Experiments show that our MasKD can achieve state-of-the-art performance consistently on object detection and semantic segmentation benchmarks. | https://openreview.net/pdf/d920dd32aa477f7dff9d60f655f87639d56705b5.pdf |
Robust Multivariate Time-Series Forecasting: Adversarial Attacks and Defense Mechanisms | https://openreview.net/forum?id=ctmLBs8lITa | https://openreview.net/forum?id=ctmLBs8lITa | Linbo Liu,Youngsuk Park,Trong Nghia Hoang,Hilaf Hasson,Luke Huan | ICLR 2023,Poster | This work studies the threats of adversarial attack on multivariate probabilistic forecasting models and viable defense mechanisms. Our studies discover a new attack pattern that negatively impacts the forecasting of a target time series via making strategic, sparse (imperceptible) modifications to the past observations of a small number of other time series. To mitigate the impact of such attacks, we have developed two defense strategies. First, we extend a previously developed randomized smoothing technique in classification to multivariate forecasting scenarios. Second, we develop an adversarial training algorithm that learns to create adversarial examples and at the same time optimizes the forecasting model to improve its robustness against such adversarial simulation. Extensive experiments on real-world datasets confirm that our attack schemes are powerful and our defense algorithms are more effective compared with baseline defense mechanisms. | https://openreview.net/pdf/8d7106908729c2d36678b74879e04ee16d9e147b.pdf
TextShield: Beyond Successfully Detecting Adversarial Sentences in text classification | https://openreview.net/forum?id=xIWfWvKM7aQ | https://openreview.net/forum?id=xIWfWvKM7aQ | Lingfeng Shen,Ze Zhang,Haiyun Jiang,Ying Chen | ICLR 2023,Poster | Adversarial attacks pose a major challenge for neural network models in NLP, which precludes their deployment in safety-critical applications. A recent line of work, detection-based defense, aims to distinguish adversarial sentences from benign ones. However, the core limitation of previous detection methods is that, unlike defense methods from other paradigms, they are incapable of giving correct predictions on adversarial sentences. To solve this issue, this paper proposes TextShield: (1) we discover a link between text attack and saliency information, and then we propose a saliency-based detector, which can effectively detect whether an input sentence is adversarial or not. (2) We design a saliency-based corrector, which converts the detected adversarial sentences to benign ones. By combining the saliency-based detector and corrector, TextShield extends the detection-only paradigm to a detection-correction paradigm, thus filling the gap in the existing detection-based defense. Comprehensive experiments show that (a) TextShield consistently achieves performance higher than or comparable to state-of-the-art defense methods across various attacks on different benchmarks, and (b) our saliency-based detector outperforms existing detectors for detecting adversarial sentences. | https://openreview.net/pdf/40ef2e2daf7fd2712ff9dc55e56eb90b342fdaba.pdf
Efficient Deep Reinforcement Learning Requires Regulating Overfitting | https://openreview.net/forum?id=14-kr46GvP- | https://openreview.net/forum?id=14-kr46GvP- | Qiyang Li,Aviral Kumar,Ilya Kostrikov,Sergey Levine | ICLR 2023,Poster | Deep reinforcement learning algorithms that learn policies by trial-and-error must learn from limited amounts of data collected by actively interacting with the environment. While many prior works have shown that proper regularization techniques are crucial for enabling data-efficient RL, a general understanding of the bottlenecks in data-efficient RL has remained unclear. Consequently, it has been difficult to devise a universal technique that works well across all domains. In this paper, we attempt to understand the primary bottleneck in sample-efficient deep RL by examining several potential hypotheses such as non-stationarity, excessive action distribution shift, and overfitting. We perform thorough empirical analysis on state-based DeepMind control suite (DMC) tasks in a controlled and systematic way to show that high temporal-difference (TD) error on the validation set of transitions is the main culprit that severely affects the performance of deep RL algorithms, and prior methods that lead to good performance do in fact, control the validation TD error to be low. This observation gives us a robust principle for making deep RL efficient: we can hill-climb on the validation TD error by utilizing any form of regularization techniques from supervised learning. We show that a simple online model selection method that targets the validation TD error is effective across state-based DMC and Gym tasks. | https://openreview.net/pdf/971c9a6302832063d4fc1590b444b3ccd8f33e44.pdf |
Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient | https://openreview.net/forum?id=6jfbOWzWTcE | https://openreview.net/forum?id=6jfbOWzWTcE | Ming Yin,Mengdi Wang,Yu-Xiang Wang | ICLR 2023,Poster | Offline reinforcement learning, which aims at optimizing sequential decision-making strategies with historical data, has been extensively applied in real-life applications. State-of-the-art algorithms usually leverage powerful function approximators (e.g. neural networks) to alleviate the sample complexity hurdle for better empirical performances. Despite the successes, a more systematic understanding of the statistical complexity for function approximation remains lacking. Towards bridging the gap, we take a step by considering offline reinforcement learning with differentiable function class approximation (DFA). This function class naturally incorporates a wide range of models with nonlinear/nonconvex structures. We show offline RL with differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning (PFQL) algorithm, and our results provide the theoretical basis for understanding a variety of practical heuristics that rely on Fitted Q-Iteration style design. In addition, we further improve our guarantee with a tighter instance-dependent characterization. We hope our work could draw interest in studying reinforcement learning with differentiable function approximation beyond the scope of current research. | https://openreview.net/pdf/b38ef477c7170fbb93d3cc004ca6860e5537faa9.pdf
Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks | https://openreview.net/forum?id=oGDKSt9JrZi | https://openreview.net/forum?id=oGDKSt9JrZi | Jesse Farebrother,Joshua Greaves,Rishabh Agarwal,Charline Le Lan,Ross Goroshin,Pablo Samuel Castro,Marc G Bellemare | ICLR 2023,Poster | Auxiliary tasks improve the representations learned by deep reinforcement learning agents. Analytically, their effect is reasonably well-understood; in practice, however, their primary use remains in support of a main learning objective, rather than as a method for learning representations. This is perhaps surprising given that many auxiliary tasks are defined procedurally, and hence can be treated as an essentially infinite source of information about the environment. Based on this observation, we study the effectiveness of auxiliary tasks for learning rich representations, focusing on the setting where the number of tasks and the size of the agent’s network are simultaneously increased. For this purpose, we derive a new family of auxiliary tasks based on the successor measure. These tasks are easy to implement and have appealing theoretical properties. Combined with a suitable off-policy learning rule, the result is a representation learning algorithm that can be understood as extending Mahadevan & Maggioni (2007)’s proto-value functions to deep reinforcement learning – accordingly, we call the resulting object proto-value networks. Through a series of experiments on the Arcade Learning Environment, we demonstrate that proto-value networks produce rich features that may be used to obtain performance comparable to established algorithms, using only linear approximation and a small number (~4M) of interactions with the environment’s reward function. | https://openreview.net/pdf/53dd973b091c37ecf6340230fe3534cbd485e01c.pdf
Robust Algorithms on Adaptive Inputs from Bounded Adversaries | https://openreview.net/forum?id=I29Kt0RwChs | https://openreview.net/forum?id=I29Kt0RwChs | Yeshwanth Cherapanamjeri,Sandeep Silwal,David Woodruff,Fred Zhang,Qiuyi Zhang,Samson Zhou | ICLR 2023,Poster | We study dynamic algorithms robust to adaptive input generated from sources with bounded capabilities, such as sparsity or limited interaction. For example, we consider robust linear algebraic algorithms when the updates to the input are sparse but given by an adversary with access to a query oracle. We also study robust algorithms in the standard centralized setting, where an adversary queries an algorithm in an adaptive manner, but the number of interactions between the adversary and the algorithm is bounded. We first recall a unified framework of (Hassidim et al., 2020; Beimel et al., 2022; Attias et al., 2023) for answering $Q$ adaptive queries that incurs $\widetilde{\mathcal{O}}(\sqrt{Q})$ overhead in space, which is roughly a quadratic improvement over the na\"{i}ve implementation, and only incurs a logarithmic overhead in query time. Although the general framework has diverse applications in machine learning and data science, such as adaptive distance estimation, kernel density estimation, linear regression, range queries, point queries, and serves as a preliminary benchmark, we demonstrate even better algorithmic improvements for (1) reducing the pre-processing time for adaptive distance estimation and (2) permitting an unlimited number of adaptive queries for kernel density estimation. Finally, we complement our theoretical results with additional empirical evaluations. | https://openreview.net/pdf/cdf4cf6994619ced75a82c2d7b911908931db7f4.pdf |
Chasing All-Round Graph Representation Robustness: Model, Training, and Optimization | https://openreview.net/forum?id=7jk5gWjC18M | https://openreview.net/forum?id=7jk5gWjC18M | Chunhui Zhang,Yijun Tian,Mingxuan Ju,Zheyuan Liu,Yanfang Ye,Nitesh Chawla,Chuxu Zhang | ICLR 2023,Poster | Graph Neural Networks (GNNs) have achieved state-of-the-art results on a variety of graph learning tasks, however, it has been demonstrated that they are vulnerable to adversarial attacks, raising serious security concerns. A lot of studies have been developed to train GNNs in a noisy environment and increase their robustness against adversarial attacks. However, existing methods have not uncovered a principled difficulty: the convoluted mixture distribution between clean and attacked data samples, which leads to sub-optimal model design and limits their frameworks’ robustness. In this work, we first begin by identifying the root cause of mixture distribution, then, for tackling it, we propose a novel method GAME - Graph Adversarial Mixture of Experts to enlarge the model capacity and enrich the representation diversity of adversarial samples, from three perspectives of model, training, and optimization. Specifically, we first propose a plug-and-play GAME layer that can be easily incorporated into any GNNs and enhance their adversarial learning capabilities. Second, we design a decoupling-based graph adversarial training in which the component of the model used to generate adversarial graphs is separated from the component used to update weights. Third, we introduce a graph diversity regularization that enables the model to learn diverse representation and further improves model performance. Extensive experiments demonstrate the effectiveness and advantages of GAME over the state-of-the-art adversarial training methods across various datasets given different attacks. | https://openreview.net/pdf/72be5211ae144eba1ed976ac26d8d57be9b21b56.pdf
On Representing Mixed-Integer Linear Programs by Graph Neural Networks | https://openreview.net/forum?id=4gc3MGZra1d | https://openreview.net/forum?id=4gc3MGZra1d | Ziang Chen,Jialin Liu,Xinshang Wang,Wotao Yin | ICLR 2023,Poster | While mixed-integer linear programming (MILP) is NP-hard in general, practical MILP has received a roughly 100-fold speedup in the past twenty years. Still, many classes of MILPs quickly become unsolvable as their sizes increase, motivating researchers to seek new acceleration techniques for MILPs. With deep learning, they have obtained strong empirical results, and many results were obtained by applying graph neural networks (GNNs) to making decisions in various stages of MILP solution processes. This work discovers a fundamental limitation: there exist feasible and infeasible MILPs that all GNNs will, however, treat equally, indicating GNNs' lack of power to express general MILPs. Then, we show that, by restricting the MILPs to unfoldable ones or by adding random features, there exist GNNs that can reliably predict MILP feasibility, optimal objective values, and optimal solutions up to prescribed precision. We conducted small-scale numerical experiments to validate our theoretical findings. | https://openreview.net/pdf/f99badef1c0c3cf54868baa13faa171eca3b2f50.pdf
On the Importance and Applicability of Pre-Training for Federated Learning | https://openreview.net/forum?id=fWWFv--P0xP | https://openreview.net/forum?id=fWWFv--P0xP | Hong-You Chen,Cheng-Hao Tu,Ziwei Li,Han Wei Shen,Wei-Lun Chao | ICLR 2023,Poster | Pre-training is prevalent in nowadays deep learning to improve the learned model's performance. However, in the literature on federated learning (FL), neural networks are mostly initialized with random weights. These attract our interest in conducting a systematic study to explore pre-training for FL. Across multiple visual recognition benchmarks, we found that pre-training can not only improve FL, but also close its accuracy gap to the counterpart centralized learning, especially in the challenging cases of non-IID clients' data. To make our findings applicable to situations where pre-trained models are not directly available, we explore pre-training with synthetic data or even with clients' data in a decentralized manner, and found that they can already improve FL notably. Interestingly, many of the techniques we explore are complementary to each other to further boost the performance, and we view this as a critical result toward scaling up deep FL for real-world applications. We conclude our paper with an attempt to understand the effect of pre-training on FL. We found that pre-training enables the learned global models under different clients' data conditions to converge to the same loss basin, and makes global aggregation in FL more stable. Nevertheless, pre-training seems to not alleviate local model drifting, a fundamental problem in FL under non-IID data. | https://openreview.net/pdf/d1a6a32bc6d3abc4be477317cdca0f63fe17a19b.pdf |
Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth | https://openreview.net/forum?id=yVqC6gCNf4d | https://openreview.net/forum?id=yVqC6gCNf4d | Filipe de Avila Belbute-Peres,J Zico Kolter | ICLR 2023,Poster | Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions. Despite their promise, particularly for learning implicit models, their training behavior is not yet fully understood, leading to a number of empirical design choices that are not well justified. In this work, we first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis. We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth. Finally, we utilize these insights to inform the sinusoidal network initialization, optimizing their performance for each of a series of tasks, including learning implicit models and solving differential equations. | https://openreview.net/pdf/4fd4637304bd7aded6822c08645dc5d8a48e38b7.pdf |
The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation | https://openreview.net/forum?id=29V3AWjVAFi | https://openreview.net/forum?id=29V3AWjVAFi | Huancheng Chen,Chaining Wang,Haris Vikalo | ICLR 2023,Poster | Heterogeneity of data distributed across clients limits the performance of global models trained through federated learning, especially in the settings with highly imbalanced class distributions of local datasets. In recent years, personalized federated learning (pFL) has emerged as a potential solution to the challenges presented by heterogeneous data. However, existing pFL methods typically enhance performance of local models at the expense of the global model's accuracy. We propose FedHKD (Federated Hyper-Knowledge Distillation), a novel FL algorithm in which clients rely on knowledge distillation (KD) to train local models. In particular, each client extracts and sends to the server the means of local data representations and the corresponding soft predictions -- information that we refer to as ``hyper-knowledge". The server aggregates this information and broadcasts it to the clients in support of local training. Notably, unlike other KD-based pFL methods, FedHKD does not rely on a public dataset, nor does it deploy a generative model at the server. We analyze convergence of FedHKD and conduct extensive experiments on visual datasets in a variety of scenarios, demonstrating that FedHKD provides significant improvement in both personalized as well as global model performance compared to state-of-the-art FL methods designed for heterogeneous data settings. | https://openreview.net/pdf/333b4b6fba8d961b0d290a1fc082f7c5c2899ddc.pdf
Over-Training with Mixup May Hurt Generalization | https://openreview.net/forum?id=JmkjrlVE-DG | https://openreview.net/forum?id=JmkjrlVE-DG | Zixuan Liu,Ziqiao Wang,Hongyu Guo,Yongyi Mao | ICLR 2023,Poster | Mixup, which creates synthetic training instances by linearly interpolating random sample pairs, is a simple and yet effective regularization technique to boost the performance of deep models trained with SGD. In this work, we report a previously unobserved phenomenon in Mixup training: on a number of standard datasets, the performance of Mixup-trained models starts to decay after training for a large number of epochs, giving rise to a U-shaped generalization curve. This behavior is further aggravated when the size of the original dataset is reduced. To help understand such a behavior of Mixup, we show theoretically that Mixup training may introduce undesired data-dependent label noises to the synthesized data. Via analyzing a least-square regression problem with a random feature model, we explain why noisy labels may cause the U-shaped curve to occur: Mixup improves generalization through fitting the clean patterns at the early training stage, but as training progresses, Mixup becomes over-fitting to the noise in the synthetic data. Extensive experiments are performed on a variety of benchmark datasets, validating this explanation. | https://openreview.net/pdf/9b1abc99da7f78ec360a4288ad20cecabb1eed6a.pdf
HiCLIP: Contrastive Language-Image Pretraining with Hierarchy-aware Attention | https://openreview.net/forum?id=0eTTKOOOQkV | https://openreview.net/forum?id=0eTTKOOOQkV | Shijie Geng,Jianbo Yuan,Yu Tian,Yuxiao Chen,Yongfeng Zhang | ICLR 2023,Poster | The success of large-scale contrastive vision-language pretraining (CLIP) has benefited both visual recognition and multimodal content understanding. The concise design brings CLIP the advantage in inference efficiency against other vision-language models with heavier cross-attention fusion layers, making it a popular choice for a wide spectrum of downstream tasks. However, CLIP does not explicitly capture the hierarchical nature of high-level and fine-grained semantics conveyed in images and texts, which is arguably critical to vision-language understanding and reasoning. To this end, we equip both the visual and language branches in CLIP with hierarchy-aware attentions, namely Hierarchy-aware CLIP (HiCLIP), to progressively discover semantic hierarchies layer-by-layer from both images and texts in an unsupervised manner. As a result, such hierarchical aggregation significantly improves the cross-modal alignment. To demonstrate the advantages of HiCLIP, we conduct qualitative analysis on its unsupervised hierarchy induction during inference, as well as extensive quantitative experiments on both visual recognition and vision-language downstream tasks. | https://openreview.net/pdf/2e082778e9c948cf856dc93b13cc4d0734583c61.pdf |
Quantile Risk Control: A Flexible Framework for Bounding the Probability of High-Loss Predictions | https://openreview.net/forum?id=p6jsTidUkPx | https://openreview.net/forum?id=p6jsTidUkPx | Jake Snell,Thomas P Zollo,Zhun Deng,Toniann Pitassi,Richard Zemel | ICLR 2023,Poster | Rigorous guarantees about the performance of predictive algorithms are necessary in order to ensure their responsible use. Previous work has largely focused on bounding the expected loss of a predictor, but this is not sufficient in many risk-sensitive applications where the distribution of errors is important. In this work, we propose a flexible framework to produce a family of bounds on quantiles of the loss distribution incurred by a predictor. Our method takes advantage of the order statistics of the observed loss values rather than relying on the sample mean alone. We show that a quantile is an informative way of quantifying predictive performance, and that our framework applies to a variety of quantile-based metrics, each targeting important subsets of the data distribution. We analyze the theoretical properties of our proposed method and demonstrate its ability to rigorously control loss quantiles on several real-world datasets. | https://openreview.net/pdf/982a7ad6a22caf7e77aba86eae6cfd1395299bab.pdf |
The Tilted Variational Autoencoder: Improving Out-of-Distribution Detection | https://openreview.net/forum?id=YlGsTZODyjz | https://openreview.net/forum?id=YlGsTZODyjz | Griffin Floto,Stefan Kremer,Mihai Nica | ICLR 2023,Poster | A problem with using the Gaussian distribution as a prior for the variational autoencoder (VAE) is that the set on which Gaussians have high probability density is small as the latent dimension increases. This is an issue because VAEs try to attain both a high likelihood with respect to a prior distribution and at the same time, separation between points for better reconstruction. Therefore, a small volume in the high-density region of the prior is problematic because it restricts the separation of latent points. To ameliorate this, we propose a simple generalization of the Gaussian distribution, called the tilted Gaussian, which has a maximum probability density occurring on a sphere instead of a single point. The tilted Gaussian has exponentially more volume in high-density regions than the standard Gaussian as a function of the distribution dimension. We empirically demonstrate that this simple change in the prior distribution improves VAE performance on the task of detecting unsupervised out-of-distribution (OOD) samples. We also introduce a new OOD testing procedure, called the Will-It-Move test, where the tilted Gaussian achieves remarkable OOD performance. | https://openreview.net/pdf/ddcfada43c19a37098f4bac6359901c459890a8e.pdf |
Stateful Active Facilitator: Coordination and Environmental Heterogeneity in Cooperative Multi-Agent Reinforcement Learning | https://openreview.net/forum?id=B4maZQLLW0_ | https://openreview.net/forum?id=B4maZQLLW0_ | Dianbo Liu,Vedant Shah,Oussama Boussif,Cristian Meo,Anirudh Goyal,Tianmin Shu,Michael Curtis Mozer,Nicolas Heess,Yoshua Bengio | ICLR 2023,Poster | In cooperative multi-agent reinforcement learning, a team of agents works together to achieve a common goal. Different environments or tasks may require varying degrees of coordination among agents in order to achieve the goal in an optimal way. The nature of coordination will depend on properties of the environment—its spatial layout, distribution of obstacles, dynamics, etc. We term this variation of properties within an environment as heterogeneity. Existing literature has not sufficiently addressed the fact that different environments may have different levels of heterogeneity. We formalize the notions of coordination level and heterogeneity level of an environment and present HECOGrid, a suite of multi-agent RL environments that facilitates empirical evaluation of different MARL approaches across different levels of coordination and environmental heterogeneity by providing a quantitative control over coordination and heterogeneity levels of the environment. Further, we propose a Centralized Training Decentralized Execution learning approach called Stateful Active Facilitator (SAF) that enables agents to work efficiently in high-coordination and high-heterogeneity environments through a differentiable and shared knowledge source used during training and dynamic selection from a shared pool of policies. We evaluate SAF and compare its performance against baselines IPPO and MAPPO on HECOGrid. Our results show that SAF consistently outperforms the baselines across different tasks and different heterogeneity and coordination levels. | https://openreview.net/pdf/e91b82d1a670376c4dd37b3ff6ed712eff719b12.pdf
Learning Achievement Structure for Structured Exploration in Domains with Sparse Reward | https://openreview.net/forum?id=NDWl9qcUpvy | https://openreview.net/forum?id=NDWl9qcUpvy | Zihan Zhou,Animesh Garg | ICLR 2023,Poster | We propose Structured Exploration with Achievements (SEA), a multi-stage reinforcement learning algorithm designed for achievement-based environments, a particular type of environment with an internal achievement set. SEA first uses offline data to learn a representation of the known achievements with a determinant loss function, then recovers the dependency graph of the learned achievements with a heuristic algorithm, and finally interacts with the environment online to learn policies that master known achievements and explore new ones with a controller built with the recovered dependency graph. We empirically demonstrate that SEA can recover the achievement structure accurately and improve exploration in hard domains such as Crafter that are procedurally generated with high-dimensional observations like images. | https://openreview.net/pdf/8462e2f1cef7cf3f3aede9507b7197a763a436ed.pdf |
PINTO: Faithful Language Reasoning Using Prompt-Generated Rationales | https://openreview.net/forum?id=WBXbRs63oVu | https://openreview.net/forum?id=WBXbRs63oVu | PeiFeng Wang,Aaron Chan,Filip Ilievski,Muhao Chen,Xiang Ren | ICLR 2023,Poster | Neural language models (LMs) have achieved impressive results on various language-based reasoning tasks by utilizing latent knowledge encoded in their own pretrained parameters. To make this reasoning process more explicit, recent works retrieve a rationalizing LM's internal knowledge by training or prompting it to generate free-text rationales, which can be used to guide task predictions made by either the same LM or a separate reasoning LM. However, rationalizing LMs require expensive rationale annotation and/or computation, without any assurance that their generated rationales improve LM task performance or faithfully reflect LM decision-making. In this paper, we propose PINTO, an LM pipeline that rationalizes via prompt-based learning, and learns to faithfully reason over rationales via counterfactual regularization. First, PINTO maps out a suitable reasoning process for the task input by prompting a frozen rationalizing LM to generate a free-text rationale. Second, PINTO's reasoning LM is fine-tuned to solve the task using the generated rationale as context, while regularized to output less confident predictions when the rationale is perturbed. Across four datasets, we show that PINTO significantly improves the generalization ability of the reasoning LM, yielding higher performance on both in-distribution and out-of-distribution test sets. Also, we find that PINTO's rationales are more faithful to its task predictions than those generated by competitive baselines. | https://openreview.net/pdf/7e3d881a1ec0910d26a1dcbaea914860cb610c81.pdf |
Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods | https://openreview.net/forum?id=6doXHqwMayf | https://openreview.net/forum?id=6doXHqwMayf | Shunta Akiyama,Taiji Suzuki | ICLR 2023,Poster | While deep learning has outperformed other methods for various tasks, theoretical frameworks that explain its reason have not been fully established. We investigate the excess risk of two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. Especially, we consider the student network that has the same width as the teacher network and is trained in two phases: first by noisy gradient descent and then by the vanilla gradient descent. Our result shows that the student network provably reaches a near-global optimal solution and outperforms any kernel methods estimator (more generally, linear estimators), including neural tangent kernel approach, random feature model, and other kernel methods, in a sense of the minimax optimal rate. The key concept inducing this superiority is the non-convexity of the neural network models. Even though the loss landscape is highly non-convex, the student network adaptively learns the teacher neurons. | https://openreview.net/pdf/3e8fe254871b5629d8668c996c29e58c0a2415c4.pdf |
Linearly Mapping from Image to Text Space | https://openreview.net/forum?id=8tYRqb05pVn | https://openreview.net/forum?id=8tYRqb05pVn | Jack Merullo,Louis Castricato,Carsten Eickhoff,Ellie Pavlick | ICLR 2023,Poster | The extent to which text-only language models (LMs) learn to represent the physical, non-linguistic world is an open question. Prior work has shown that pretrained LMs can be taught to ``understand'' visual inputs when the models' parameters are updated on image captioning tasks. We test a stronger hypothesis: that the conceptual representations learned by text-only models are functionally equivalent (up to a linear transformation) to those learned by models trained on vision tasks. Specifically, we show that the image representations from vision models can be transferred as continuous prompts to frozen LMs by training only a single linear projection. Using these to prompt the LM achieves competitive performance on captioning and visual question answering tasks compared to models that tune both the image encoder and text decoder (such as the MAGMA model). We compare three image encoders with increasing amounts of linguistic supervision seen during pretraining: BEIT (no linguistic information), NF-ResNET (lexical category information), and CLIP (full natural language descriptions). We find that all three encoders perform equally well at transferring visual property information to the language model (e.g., whether an animal is large or small), but that image encoders pretrained with linguistic supervision more saliently encode category information (e.g., distinguishing hippo vs.\ elephant) and thus perform significantly better on benchmark language-and-vision tasks. Our results indicate that LMs encode conceptual information structurally similarly to vision-based models, even those that are solely trained on images. | https://openreview.net/pdf/bb73f5907bc91ecfb1c8ee44e7e84b62e3f33c49.pdf |
Characterizing intrinsic compositionality in transformers with Tree Projections | https://openreview.net/forum?id=sAOOeI878Ns | https://openreview.net/forum?id=sAOOeI878Ns | Shikhar Murty,Pratyusha Sharma,Jacob Andreas,Christopher D Manning | ICLR 2023,Poster | When trained on language data, do transformers learn some arbitrary computation that utilizes the full capacity of the architecture or do they learn a simpler, tree-like computation, hypothesized to underlie compositional meaning systems like human languages? There is an apparent tension between compositional accounts of human language understanding, which are based on a restricted bottom-up computational process, and the enormous success of neural models like transformers, which can route information arbitrarily between different parts of their input. One possibility is that these models, while extremely flexible in principle, in practice learn to interpret language hierarchically, ultimately building sentence representations close to those predictable by a bottom-up, tree-structured model. To evaluate this possibility, we describe an unsupervised and parameter-free method to \emph{functionally project} the behavior of any transformer into the space of tree-structured networks. Given an input sentence, we produce a binary tree that approximates the transformer's representation-building process and a score that captures how ``tree-like'' the transformer's behavior is on the input. While calculation of this score does not require training any additional models, it provably upper-bounds the fit between a transformer and any tree-structured approximation. Using this method, we show that transformers for three different tasks become more tree-like over the course of training, in some cases unsupervisedly recovering the same trees as supervised parsers. These trees, in turn, are predictive of model behavior, with more tree-like models generalizing better on tests of compositional generalization. | https://openreview.net/pdf/a956f4e4759ae722c095dbea6bc4a342ff06edb5.pdf |
Augmentation Component Analysis: Modeling Similarity via the Augmentation Overlaps | https://openreview.net/forum?id=5vM51iamNeL | https://openreview.net/forum?id=5vM51iamNeL | Lu Han,Han-Jia Ye,De-Chuan Zhan | ICLR 2023,Poster | Self-supervised learning aims to learn an embedding space where semantically similar samples are close. Contrastive learning methods pull views of samples together and push different samples away, which utilizes semantic invariance of augmentation but ignores the relationship between samples. To better exploit the power of augmentation, we observe that semantically similar samples are more likely to have similar augmented views. Therefore, we can take the augmented views as a special description of a sample. In this paper, we model such a description as the augmentation distribution, and we call it augmentation feature. The similarity in augmentation feature reflects how much the views of two samples overlap and is related to their semantic similarity. Without computational burdens to explicitly estimate values of the augmentation feature, we propose Augmentation Component Analysis (ACA) with a contrastive-like loss to learn principal components and an on-the-fly projection loss to embed data. ACA equals an efficient dimension reduction by PCA and extracts low-dimensional embeddings, theoretically preserving the similarity of augmentation distribution between samples. Empirical results show that our method can achieve competitive results against various traditional contrastive learning methods on different benchmarks. | https://openreview.net/pdf/7b2fdf9a5f1f475782b119a0d0275eb0392c1f05.pdf
Replicable Bandits | https://openreview.net/forum?id=gcD2UtCGMc2 | https://openreview.net/forum?id=gcD2UtCGMc2 | Hossein Esfandiari,Alkis Kalavasis,Amin Karbasi,Andreas Krause,Vahab Mirrokni,Grigoris Velegkas | ICLR 2023,Poster | In this paper, we introduce the notion of replicable policies in the context of stochastic bandits, one of the canonical problems in interactive learning. A policy in the bandit environment is called replicable if it pulls, with high probability, the exact same sequence of arms in two different and independent executions (i.e., under independent reward realizations). We show that not only do replicable policies exist, but also they achieve almost the same optimal (non-replicable) regret bounds in terms of the time horizon. More specifically, in the stochastic multi-armed bandits setting, we develop a policy with an optimal problem-dependent regret bound whose dependence on the replicability parameter is also optimal. Similarly, for stochastic linear bandits (with finitely and infinitely many arms) we develop replicable policies that achieve the best-known problem-independent regret bounds with an optimal dependency on the replicability parameter. Our results show that even though randomization is crucial for the exploration-exploitation trade-off, an optimal balance can still be achieved while pulling the exact same arms in two different rounds of executions. | https://openreview.net/pdf/8c96db0688585a0ce4e160b10379dd61b2b8caee.pdf |
Neural Bregman Divergences for Distance Learning | https://openreview.net/forum?id=nJ3Vx78Nf7p | https://openreview.net/forum?id=nJ3Vx78Nf7p | Fred Lu,Edward Raff,Francis Ferraro | ICLR 2023,Poster | Many metric learning tasks, such as triplet learning, nearest neighbor retrieval, and visualization, are treated primarily as embedding tasks where the ultimate metric is some variant of the Euclidean distance (e.g., cosine or Mahalanobis), and the algorithm must learn to embed points into the pre-chosen space. The study of non-Euclidean geometries is often not explored, which we believe is due to a lack of tools for learning non-Euclidean measures of distance. Recent work has shown that Bregman divergences can be learned from data, opening a promising approach to learning asymmetric distances. We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks and show that it overcomes significant limitations of previous works. We also demonstrate that our method more faithfully learns divergences over a set of both new and previously studied tasks, including asymmetric regression, ranking, and clustering. Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification, showing the general utility of our approach for asymmetric learning. | https://openreview.net/pdf/a23fc2d7b89a04de3e67ff79cc425897fe1cdaac.pdf
Bias Propagation in Federated Learning | https://openreview.net/forum?id=V7CYzdruWdm | https://openreview.net/forum?id=V7CYzdruWdm | Hongyan Chang,Reza Shokri | ICLR 2023,Poster | We show that participating in federated learning can be detrimental to group fairness. In fact, the bias of a few parties against under-represented groups (identified by sensitive attributes such as gender or race) can propagate through the network to all the parties in the network. We analyze and explain bias propagation in federated learning on naturally partitioned real-world datasets. Our analysis reveals that biased parties unintentionally yet stealthily encode their bias in a small number of model parameters, and throughout the training, they steadily increase the dependence of the global model on sensitive attributes. What is important to highlight is that the experienced bias in federated learning is higher than what parties would otherwise encounter in centralized training with a model trained on the union of all their data. This indicates that the bias is due to the algorithm. Our work calls for auditing group fairness in federated learning and designing learning algorithms that are robust to bias propagation. | https://openreview.net/pdf/cd842625432bee710e8d97bcd6d111c02078164e.pdf
Causal Confusion and Reward Misidentification in Preference-Based Reward Learning | https://openreview.net/forum?id=R0Xxvr_X3ZA | https://openreview.net/forum?id=R0Xxvr_X3ZA | Jeremy Tien,Jerry Zhi-Yang He,Zackory Erickson,Anca Dragan,Daniel S. Brown | ICLR 2023,Poster | Learning policies via preference-based reward learning is an increasingly popular method for customizing agent behavior, but has been shown anecdotally to be prone to spurious correlations and reward hacking behaviors. While much prior work focuses on causal confusion in reinforcement learning and behavioral cloning, we focus on a systematic study of causal confusion and reward misidentification when learning from preferences. In particular, we perform a series of sensitivity and ablation analyses on several benchmark domains where rewards learned from preferences achieve minimal test error but fail to generalize to out-of-distribution states---resulting in poor policy performance when optimized. We find that the presence of non-causal distractor features, noise in the stated preferences, and partial state observability can all exacerbate reward misidentification. We also identify a set of methods with which to interpret misidentified learned rewards. In general, we observe that optimizing misidentified rewards drives the policy off the reward's training distribution, resulting in high predicted (learned) rewards but low true rewards. These findings illuminate the susceptibility of preference learning to reward misidentification and causal confusion---failure to consider even one of many factors can result in unexpected, undesirable behavior. | https://openreview.net/pdf/f41368bc311fd0e894120cf88134acdbc361ec94.pdf |
UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph | https://openreview.net/forum?id=Z63RvyAZ2Vh | https://openreview.net/forum?id=Z63RvyAZ2Vh | Jinhao Jiang,Kun Zhou,Xin Zhao,Ji-Rong Wen | ICLR 2023,Poster | Multi-hop Question Answering over Knowledge Graph~(KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question on a large-scale Knowledge Graph (KG). To cope with the vast search space, existing work usually adopts a two-stage approach: it first retrieves a relatively small subgraph related to the question and then performs the reasoning on the subgraph to find the answer entities accurately. Although these two stages are highly related, previous work employs very different technical solutions for developing the retrieval and reasoning models, neglecting their relatedness in task essence. In this paper, we propose UniKGQA, a novel approach for the multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning. For model architecture, UniKGQA consists of a semantic matching module based on a pre-trained language model~(PLM) for question-relation semantic matching, and a matching information propagation module to propagate the matching information along the directed edges on KGs. For parameter learning, we design a shared pre-training task based on question-relation matching for both retrieval and reasoning models, and then propose retrieval- and reasoning-oriented fine-tuning strategies. Compared with previous studies, our approach is more unified, tightly relating the retrieval and reasoning stages. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our method on the multi-hop KGQA task. Our codes and data are publicly available at~\url{https://github.com/RUCAIBox/UniKGQA}. | https://openreview.net/pdf/6a73e95053a44e33e96d38a7d3fc15c14bbe50d9.pdf
Faster Last-iterate Convergence of Policy Optimization in Zero-Sum Markov Games | https://openreview.net/forum?id=bRwBpKrNzF7 | https://openreview.net/forum?id=bRwBpKrNzF7 | Shicong Cen,Yuejie Chi,Simon Shaolei Du,Lin Xiao | ICLR 2023,Poster | Multi-Agent Reinforcement Learning (MARL)---where multiple agents learn to interact in a shared dynamic environment---permeates across a wide range of critical applications. While there has been substantial progress on understanding the global convergence of policy optimization methods in single-agent RL, designing and analysis of efficient policy optimization algorithms in the MARL setting present significant challenges and new desiderata, which unfortunately, remain highly inadequately addressed by existing theory. In this paper, we focus on the most basic setting of competitive multi-agent RL, namely two-player zero-sum Markov games, and study equilibrium finding algorithms in both the infinite-horizon discounted setting and the finite-horizon episodic setting. We propose a single-loop policy optimization method with symmetric updates from both agents, where the policy is updated via the entropy-regularized optimistic multiplicative weights update (OMWU) method and the value is updated on a slower timescale. We show that, in the full-information tabular setting, the proposed method achieves a finite-time last-iterate linear convergence to the quantal response equilibrium of the regularized problem, which translates to a sublinear convergence to the Nash equilibrium by controlling the amount of regularization. Our convergence results improve upon the best known iteration complexities, and lead to a better understanding of policy optimization in competitive Markov games. | https://openreview.net/pdf/d08b35cf66974eeaf9743746b69c38f772753301.pdf |
Memorization Capacity of Neural Networks with Conditional Computation | https://openreview.net/forum?id=rB3zRN0lBYr | https://openreview.net/forum?id=rB3zRN0lBYr | Erdem Koyuncu | ICLR 2023,Poster | Many empirical studies have demonstrated the performance benefits of conditional computation in neural networks, including reduced inference time and power consumption. We study the fundamental limits of neural conditional computation from the perspective of memorization capacity. For Rectified Linear Unit (ReLU) networks without conditional computation, it is known that memorizing a collection of $n$ input-output relationships can be accomplished via a neural network with $O(\sqrt{n})$ neurons. Calculating the output of this neural network can be accomplished using $O(\sqrt{n})$ elementary arithmetic operations of additions, multiplications and comparisons for each input. Using a conditional ReLU network, we show that the same task can be accomplished using only $O(\log n)$ operations per input. This represents an almost exponential improvement as compared to networks without conditional computation. We also show that the $\Theta(\log n)$ rate is the best possible. Our achievability result utilizes a general methodology to synthesize a conditional network out of an unconditional network in a computationally-efficient manner, bridging the gap between unconditional and conditional architectures. | https://openreview.net/pdf/469cf18bab296b8d78b2e87f5e40ebabe7e71ef5.pdf |
Weighted Clock Logic Point Process | https://openreview.net/forum?id=YfUICnZMwk7 | https://openreview.net/forum?id=YfUICnZMwk7 | Ruixuan Yan,Yunshi Wen,Debarun Bhattacharjya,Ronny Luss,Tengfei Ma,Achille Fokoue,Anak Agung Julius | ICLR 2023,Poster | Datasets involving multivariate event streams are prevalent in numerous applications. We present a novel framework for modeling temporal point processes called clock logic neural networks (CLNN) which learn weighted clock logic (wCL) formulas as interpretable temporal rules by which some events promote or inhibit other events. Specifically, CLNN models temporal relations between events using conditional intensity rates informed by a set of wCL formulas, which are more expressive than related prior work. Unlike conventional approaches of searching for generative rules through expensive combinatorial optimization, we design smooth activation functions for components of wCL formulas that enable a continuous relaxation of the discrete search space and efficient learning of wCL formulas using gradient-based methods. Experiments on synthetic datasets manifest our model's ability to recover the ground-truth rules and improve computational efficiency. In addition, experiments on real-world datasets show that our models perform competitively when compared with state-of-the-art models. | https://openreview.net/pdf/eb9a99d990427c8b4ac5187f36e3bf4c618b20bf.pdf |
Simple Emergent Action Representations from Multi-Task Policy Training | https://openreview.net/forum?id=NUl0ylt7SM | https://openreview.net/forum?id=NUl0ylt7SM | Pu Hua,Yubei Chen,Huazhe Xu | ICLR 2023,Poster | The low-level sensory and motor signals in deep reinforcement learning, which exist in high-dimensional spaces such as image observations or motor torques, are inherently challenging to understand or utilize directly for downstream tasks. While sensory representations have been extensively studied, the representations of motor actions are still an area of active exploration. Our work reveals that a space containing meaningful action representations emerges when a multi-task policy network takes as inputs both states and task embeddings. Moderate constraints are added to improve its representation ability. Therefore, interpolated or composed embeddings can function as a high-level interface within this space, providing instructions to the agent for executing meaningful action sequences. Empirical results demonstrate that the proposed action representations are effective for intra-action interpolation and inter-action composition with limited or no additional learning. Furthermore, our approach exhibits superior task adaptation ability compared to strong baselines in Mujoco locomotion tasks. Our work sheds light on the promising direction of learning action representations for efficient, adaptable, and composable RL, forming the basis of abstract action planning and the understanding of motor signal space. Project page: https://sites.google.com/view/emergent-action-representation/ | https://openreview.net/pdf/af859d60f1af73991c631bf27224a456c44ce94a.pdf |
Interaction-Based Disentanglement of Entities for Object-Centric World Models | https://openreview.net/forum?id=JQc2VowqCzz | https://openreview.net/forum?id=JQc2VowqCzz | Akihiro Nakano,Masahiro Suzuki,Yutaka Matsuo | ICLR 2023,Poster | Perceiving the world compositionally in terms of space and time is essential to understanding object dynamics and solving downstream tasks. Object-centric learning using generative models has improved in its ability to learn distinct representations of individual objects and predict their interactions, and how to utilize the learned representations to solve untrained, downstream tasks is a focal question. However, as models struggle to predict object interactions and track the objects accurately, especially for unseen configurations, using object-centric representations in downstream tasks is still a challenge. This paper proposes STEDIE, a new model that disentangles object representations, based on interactions, into interaction-relevant relational features and interaction-irrelevant global features without supervision. Empirical evaluation shows that the proposed model factorizes global features, which are unaffected by interactions, from relational features that are necessary to predict the outcome of interactions. We also show that STEDIE achieves better performance in planning tasks and understanding causal relationships. In both tasks, our model not only achieves better performance in terms of reconstruction ability but also utilizes the disentangled representations to solve the tasks in a structured manner. | https://openreview.net/pdf/80652bebd3a8b4e2454b1a34ba3c63c68e480c4c.pdf
Neural Image-based Avatars: Generalizable Radiance Fields for Human Avatar Modeling | https://openreview.net/forum?id=-ng-FXFlzgK | https://openreview.net/forum?id=-ng-FXFlzgK | YoungJoong Kwon,Dahun Kim,Duygu Ceylan,Henry Fuchs | ICLR 2023,Poster | We present a method that enables synthesizing novel views and novel poses of arbitrary human performers from sparse multi-view images. A key ingredient of our method is a hybrid appearance blending module that combines the advantages of the implicit body NeRF representation and image-based rendering. Existing generalizable human NeRF methods that are conditioned on the body model have shown robustness against the geometric variation of arbitrary human performers. Yet they often exhibit blurry results when generalized onto unseen identities. Meanwhile, image-based rendering shows high-quality results when sufficient observations are available, whereas it suffers artifacts in sparse-view settings. We propose Neural Image-based Avatars (NIA) that exploits the best of those two methods: to maintain robustness under new articulations and self-occlusions while directly leveraging the available (sparse) source view colors to preserve appearance details of new subject identities. Our hybrid design outperforms recent methods on both in-domain identity generalization as well as challenging cross-dataset generalization settings. Also, in terms of the pose generalization, our method outperforms even the per-subject optimized animatable NeRF methods. | https://openreview.net/pdf/5d6559de435716b5db2c92eef6e3b4b0b8ed2bb9.pdf |
Federated Neural Bandits | https://openreview.net/forum?id=38m4h8HcNRL | https://openreview.net/forum?id=38m4h8HcNRL | Zhongxiang Dai,Yao Shu,Arun Verma,Flint Xiaofeng Fan,Bryan Kian Hsiang Low,Patrick Jaillet | ICLR 2023,Poster | Recent works on neural contextual bandits have achieved compelling performances due to their ability to leverage the strong representation power of neural networks (NNs) for reward prediction. Many applications of contextual bandits involve multiple agents who collaborate without sharing raw observations, thus giving rise to the setting of federated contextual bandits. Existing works on federated contextual bandits rely on linear or kernelized bandits, which may fall short when modeling complex real-world reward functions. So, this paper introduces the federated neural-upper confidence bound (FN-UCB) algorithm. To better exploit the federated setting, FN-UCB adopts a weighted combination of two UCBs: $\text{UCB}^{a}$ allows every agent to additionally use the observations from the other agents to accelerate exploration (without sharing raw observations), while $\text{UCB}^{b}$ uses an NN with aggregated parameters for reward prediction in a similar way to federated averaging for supervised learning. Notably, the weight between the two UCBs required by our theoretical analysis is amenable to an interesting interpretation, which emphasizes $\text{UCB}^{a}$ initially for accelerated exploration and relies more on $\text{UCB}^{b}$ later after enough observations have been collected to train the NNs for accurate reward prediction (i.e., reliable exploitation). We prove sub-linear upper bounds on both the cumulative regret and the number of communication rounds of FN-UCB, and empirically demonstrate its competitive performance. | https://openreview.net/pdf/8ac7b7db42b45091cff07d9f5d520796a17d3efa.pdf
Compositional Task Representations for Large Language Models | https://openreview.net/forum?id=6axIMJA7ME3 | https://openreview.net/forum?id=6axIMJA7ME3 | NAN SHAO,Zefan Cai,Hanwei xu,Chonghua Liao,Yanan Zheng,Zhilin Yang | ICLR 2023,Poster | Large language models have shown a remarkable cross-task generalization ability. Most prior work assumed that prompts effectively extract knowledge from language models to facilitate generalization to new tasks. This perspective led to numerous studies on improving prompts. In contrast, we introduce a new perspective, compositional generalization, that views each task as a composition of latent codes and generalizes to test tasks by a new composition of seen codes. To this end, we propose a novel prompt-free approach, Compositional Task Representations (CTR), that employs multi-task training to learn a discrete, compositional codebook. Empirically, our CTR substantially outperforms prompt-based methods in zero-label learning on average. According to our analysis, some of the learned CTR codes are interpretable to humans and demonstrate a certain degree of controllability. | https://openreview.net/pdf/ef7361f4ac0604d204d4c3f22d833e3e5d4c3163.pdf
REPAIR: REnormalizing Permuted Activations for Interpolation Repair | https://openreview.net/forum?id=gU5sJ6ZggcX | https://openreview.net/forum?id=gU5sJ6ZggcX | Keller Jordan,Hanie Sedghi,Olga Saukh,Rahim Entezari,Behnam Neyshabur | ICLR 2023,Poster | In this paper we empirically investigate the conjecture from Entezari et al. (2021) which states that if permutation invariance is taken into account, then there should be no loss barrier to the linear interpolation between SGD solutions. We conduct our investigation using standard computer vision architectures trained on CIFAR-10 and ImageNet. First, we observe a general phenomenon in which interpolated deep networks suffer a collapse in the variance of their activations. We demonstrate that an appropriate rescaling of the pre-activations of the interpolated networks ameliorates this problem and significantly reduces the barrier. Second, by combining this with an algorithm for finding permutations based on maximizing correlations between the activations of matched neurons, we are able to reduce the interpolation barrier for a standard ResNet18 trained on CIFAR-10 to 1.5% absolute test error. We explore the interaction between our method and the choice of normalization layer, and demonstrate its robustness across a variety of architectures and training sets. | https://openreview.net/pdf/381917473060028f559d623409e7f06d9ae61d3e.pdf |
Diffusion-GAN: Training GANs with Diffusion | https://openreview.net/forum?id=HZf7UbpWHuA | https://openreview.net/forum?id=HZf7UbpWHuA | Zhendong Wang,Huangjie Zheng,Pengcheng He,Weizhu Chen,Mingyuan Zhou | ICLR 2023,Poster | Generative adversarial networks (GANs) are challenging to train stably, and a promising remedy of injecting instance noise into the discriminator input has not been very effective in practice. In this paper, we propose Diffusion-GAN, a novel GAN framework that leverages a forward diffusion chain to generate Gaussian-mixture distributed instance noise. Diffusion-GAN consists of three components, including an adaptive diffusion process, a diffusion timestep-dependent discriminator, and a generator. Both the observed and generated data are diffused by the adaptive diffusion process via different noise-to-data ratios at each timestep. The timestep-dependent discriminator learns to distinguish the diffused real data from the diffused generated data at each diffusion timestep. The generator learns from the discriminator's feedback by backpropagating through the forward diffusion chain, whose length is adaptively adjusted to balance the noise and data levels. We theoretically show that the discriminator's timestep-dependent strategy gives consistent and helpful guidance to the generator, enabling it to match the true data distribution. We demonstrate the advantages of Diffusion-GAN over strong GAN baselines on various datasets, showing that it can produce more realistic images with higher stability and data efficiency than state-of-the-art GANs. | https://openreview.net/pdf/40959e2540c461dcff0f54468915dfe64c6cfef8.pdf |
Mind the Pool: Convolutional Neural Networks Can Overfit Input Size | https://openreview.net/forum?id=cWmtUcsYC3V | https://openreview.net/forum?id=cWmtUcsYC3V | Bilal Alsallakh,David Yan,Narine Kokhlikyan,Vivek Miglani,Orion Reblitz-Richardson,Pamela Bhattacharya | ICLR 2023,Poster | We demonstrate how convolutional neural networks can overfit the input size: The accuracy drops significantly when using certain sizes, compared with favorable ones. This issue is inherent to pooling arithmetic, with standard downsampling layers playing a major role in favoring certain input sizes and skewing the weights accordingly. We present a solution to this problem by depriving these layers from the arithmetic cues they use to overfit the input size. Through various examples, we show how our proposed spatially-balanced pooling improves the generalization of the network to arbitrary input sizes and its robustness to translational shifts. | https://openreview.net/pdf/43498b197a3176acfb479e3d500111f3a47c7094.pdf |
Reparameterization through Spatial Gradient Scaling | https://openreview.net/forum?id=Kpdewuy7RU6 | https://openreview.net/forum?id=Kpdewuy7RU6 | Alexander Detkov,Mohammad Salameh,Muhammad Fetrat,Jialin Zhang,Robin Luwei,SHANGLING JUI,Di Niu | ICLR 2023,Poster | Reparameterization aims to improve the generalization of deep neural networks by transforming a convolution operation into equivalent multi-branched structures during training. However, there exists a gap in understanding how reparameterization may change and benefit learning processes for neural networks. In this paper, we present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional neural networks. We prove that spatial gradient scaling achieves the same learning dynamics as a branched reparameterization yet without introducing structural changes into the network. We further propose an analytical approach that dynamically learns scalings for each convolutional layer based on the spatial characteristics of its input feature map gauged by mutual information. Experiments on CIFAR-10, CIFAR-100, and ImageNet show that without searching for reparameterized structures, our proposed scaling method outperforms the state-of-the-art reparameterization methods at a lower computational cost. | https://openreview.net/pdf/1f4adabaa2c99c6fee15ec87965010130e01f2a2.pdf |