Dataset schema: title: string (15–138 chars); url: string (42 chars); detail_url: string (42 chars); authors: string (7–526 chars); tags: string (3 classes); abstract: string (480–3.09k chars); pdf: string (71 chars)
Realistic Evaluation of Semi-supervised Learning Algorithms in Open Environments
https://openreview.net/forum?id=RvUVMjfp8i
https://openreview.net/forum?id=RvUVMjfp8i
Lin-Han Jia,Lan-Zhe Guo,Zhi Zhou,Yu-Feng Li
ICLR 2024,Spotlight
Semi-supervised learning (SSL) is a powerful paradigm for leveraging unlabeled data and has been proven successful across various tasks. Conventional SSL studies typically assume closed environment scenarios where labeled and unlabeled examples are independently sampled from the same distribution. However, real-world tasks often involve open environment scenarios where the data distribution, label space, and feature space can differ between labeled and unlabeled data. This inconsistency introduces robustness challenges for SSL algorithms. In this paper, we first propose several robustness metrics for SSL based on the Robustness Analysis Curve (RAC); second, we establish a theoretical framework for studying the generalization performance and robustness of SSL algorithms in open environments; third, we re-implement widely adopted SSL algorithms within a unified SSL toolkit and evaluate their performance on proposed open environment SSL benchmarks spanning image, text, and tabular datasets. By investigating the empirical and theoretical results, we present insightful discussions on enhancing the robustness of SSL algorithms in open environments. The re-implementation and benchmark datasets are all publicly available. More details can be found at https://ygzwqzd.github.io/Robust-SSL-Benchmark.
https://openreview.net/pdf/3ab2c500841b15c2298e78836e9a91caa20ef54d.pdf
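A minimal sketch of how RAC-style robustness summaries could be computed, assuming the curve records accuracy as a function of the degree of labeled/unlabeled inconsistency; the paper's exact metric definitions are not given in the abstract, so the names and formulas below are illustrative:

```python
import numpy as np

def rac_metrics(inconsistency_levels, accuracies):
    """Summarize a Robustness Analysis Curve (accuracy vs. degree of
    labeled/unlabeled inconsistency) with scalar robustness metrics.
    Hypothetical helper; the paper's metrics may be defined differently."""
    x = np.asarray(inconsistency_levels, dtype=float)
    y = np.asarray(accuracies, dtype=float)
    return {
        "area_under_rac": np.trapz(y, x) / (x[-1] - x[0]),  # normalized AUC
        "worst_case_accuracy": float(y.min()),
        "accuracy_drop": float(y[0] - y[-1]),               # total degradation
    }

# Accuracy measured at 0%, 25%, 50%, 75%, 100% distribution inconsistency
print(rac_metrics([0.0, 0.25, 0.5, 0.75, 1.0],
                  [0.92, 0.88, 0.81, 0.70, 0.55]))
```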
Efficient Inverse Multiagent Learning
https://openreview.net/forum?id=JzvIWvC9MG
https://openreview.net/forum?id=JzvIWvC9MG
Denizalp Goktas,Amy Greenwald,Sadie Zhao,Alec Koppel,Sumitra Ganesh
ICLR 2024,Spotlight
In this paper, we study inverse game theory (resp. inverse multiagent learning) in which the goal is to find parameters of a game’s payoff functions for which the expected (resp. sampled) behavior is an equilibrium. We formulate these problems as generative-adversarial (i.e., min-max) optimization problems, which we develop polynomial-time algorithms to solve, the former of which relies on an exact first-order oracle, and the latter, a stochastic one. We extend our approach to solve inverse multiagent simulacral learning in polynomial time and number of samples. In these problems, we seek a simulacrum, meaning parameters and an associated equilibrium that replicate the given observations in expectation. We find that our approach outperforms the widely-used ARIMA method in predicting prices in Spanish electricity markets based on time-series data.
https://openreview.net/pdf/a6e9bb008de3d4a2686aa0016448ee0ae913b390.pdf
On the Role of Discrete Tokenization in Visual Representation Learning
https://openreview.net/forum?id=WNLAkjUm19
https://openreview.net/forum?id=WNLAkjUm19
Tianqi Du,Yifei Wang,Yisen Wang
ICLR 2024,Spotlight
In the realm of self-supervised learning (SSL), masked image modeling (MIM) has gained popularity alongside contrastive learning methods. MIM involves reconstructing masked regions of input images using their unmasked portions. A notable subset of MIM methodologies employs discrete tokens as the reconstruction target, but the theoretical underpinnings of this choice remain underexplored. In this paper, we explore the role of these discrete tokens, aiming to unravel their benefits and limitations. Building upon the connection between MIM and contrastive learning, we provide a comprehensive theoretical understanding of how discrete tokenization affects the model's generalization capabilities. Furthermore, we propose a novel metric named TCAS, which is specifically designed to assess the effectiveness of discrete tokens within the MIM framework. Inspired by this metric, we contribute an innovative tokenizer design and propose a corresponding MIM method named ClusterMIM. It demonstrates superior performance on a variety of benchmark datasets and ViT backbones. Code is available at \url{https://github.com/PKU-ML/ClusterMIM}.
https://openreview.net/pdf/df6a2badfb1ca27a043b219c9e61e43688458fdf.pdf
The Consensus Game: Language Model Generation via Equilibrium Search
https://openreview.net/forum?id=n9xeGcI4Yg
https://openreview.net/forum?id=n9xeGcI4Yg
Athul Paul Jacob,Yikang Shen,Gabriele Farina,Jacob Andreas
ICLR 2024,Spotlight
When applied to question answering and other text generation tasks, language models (LMs) may be queried generatively (by sampling answers from their output distribution) or discriminatively (by using them to score or rank a set of candidate answers). These procedures sometimes yield very different predictions. How do we reconcile mutually incompatible scoring procedures to obtain coherent LM predictions? We introduce a new, training-free, game-theoretic procedure for language model decoding. Our approach casts language model decoding as a regularized imperfect-information sequential signaling game—which we term the consensus game—in which a generator seeks to communicate an abstract correctness parameter using natural language sentences to a discriminator. We develop computational procedures for finding approximate equilibria of this game, resulting in a decoding algorithm we call equilibrium-ranking. Applied to a large number of tasks (including reading comprehension, commonsense reasoning, mathematical problem-solving, and assistive dialog), equilibrium-ranking consistently improves performance over existing LM decoding procedures. These improvements are sometimes substantial—on multiple benchmarks, we observe that applying equilibrium-ranking to LLaMA-7B outperforms the much larger LLaMA-65B and PaLM-540B models.
https://openreview.net/pdf/6a766aa0bf6a7a4f5d339309db677987d04377ce.pdf
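A toy sketch of the equilibrium-ranking idea over a fixed candidate-answer set, assuming access to the LM's generative scores (logits over answers given a correctness signal v) and discriminative scores (logits over v given an answer). The paper finds equilibria via regularized no-regret dynamics in a sequential signaling game; the damped, KL-regularized best-response loop below is only an illustrative stand-in, not the paper's algorithm:

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def equilibrium_ranking(gen_scores, disc_scores, iters=200, lam=0.1):
    """gen_scores:  shape (2, n) -- generator logits over n answers, row 0
                    for v = correct, row 1 for v = incorrect.
    disc_scores: shape (2, n) -- discriminator logits over v per answer.
    Illustrative stand-in for the paper's regularized equilibrium search."""
    pi_g0 = softmax(gen_scores, axis=1)   # initial generator policy
    pi_d0 = softmax(disc_scores, axis=0)  # initial discriminator policy
    pi_g, pi_d = pi_g0.copy(), pi_d0.copy()
    for _ in range(iters):
        # Best-respond to the other player, KL-regularized toward the
        # initial policy (lam controls the regularization strength).
        pi_g = softmax(np.log(pi_g0) + pi_d / lam, axis=1)
        pi_d = softmax(np.log(pi_d0) + pi_g / lam, axis=0)
    return pi_g[0] * pi_d[0]              # consensus score for v = correct

# Rank 3 candidate answers (toy logits); best candidate first
gen = np.array([[0.1, 0.5, 0.2], [0.3, 0.1, 0.4]])
disc = np.array([[0.2, 0.9, 0.1], [0.8, 0.1, 0.9]])
print(np.argsort(-equilibrium_ranking(gen, disc)))
```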
AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents
https://openreview.net/forum?id=M6XWoEdmwf
https://openreview.net/forum?id=M6XWoEdmwf
Jake Grigsby,Linxi Fan,Yuke Zhu
ICLR 2024,Spotlight
We introduce AMAGO, an in-context Reinforcement Learning (RL) agent that uses sequence models to tackle the challenges of generalization, long-term memory, and meta-learning. Recent works have shown that off-policy learning can make in-context RL with recurrent policies viable. Nonetheless, these approaches require extensive tuning and limit scalability by creating key bottlenecks in agents' memory capacity, planning horizon, and model size. AMAGO revisits and redesigns the off-policy in-context approach to successfully train long-sequence Transformers over entire rollouts in parallel with end-to-end RL. Our agent is scalable and applicable to a wide range of problems, and we demonstrate its strong performance empirically in meta-RL and long-term memory domains. AMAGO's focus on sparse rewards and off-policy data also allows in-context learning to extend to goal-conditioned problems with challenging exploration. When combined with a multi-goal hindsight relabeling scheme, AMAGO can solve a previously difficult category of open-world domains, where agents complete many possible instructions in procedurally generated environments.
https://openreview.net/pdf/6ffd1eb5dc0bd2b144d5d0309763b3ed5e114e8b.pdf
PILOT: An $\mathcal{O}(1/K)$-Convergent Approach for Policy Evaluation with Nonlinear Function Approximation
https://openreview.net/forum?id=OkHHJcMroY
https://openreview.net/forum?id=OkHHJcMroY
Zhuqing Liu,Xin Zhang,Jia Liu,Zhengyuan Zhu,Songtao Lu
ICLR 2024,Spotlight
Learning an accurate value function for a given policy is a critical step in solving reinforcement learning (RL) problems. So far, however, the convergence speed and sample complexity performances of most existing policy evaluation algorithms remain unsatisfactory, particularly with non-linear function approximation. This challenge motivates us to develop a new path-integrated primal-dual stochastic gradient (PILOT) method that achieves a fast convergence speed for RL policy evaluation with nonlinear function approximation. To alleviate the periodic full gradient evaluation requirement, we further propose an enhanced method with an adaptive-batch adjustment called PILOT$^+$. The main advantages of our methods include: i) PILOT allows the use of {\em{constant}} step sizes and achieves the $\mathcal{O}(1/K)$ convergence rate to first-order stationary points of non-convex policy evaluation problems; ii) PILOT is a generic {\em{single}}-timescale algorithm that is also applicable for solving a large class of non-convex strongly-concave minimax optimization problems; iii) by adaptively adjusting the batch size via historical stochastic gradient information, PILOT$^+$ is more sample-efficient empirically without loss of theoretical convergence rate. Our extensive numerical experiments verify our theoretical findings and showcase the high efficiency of the proposed PILOT and PILOT$^+$ algorithms compared with the state-of-the-art methods.
https://openreview.net/pdf/902e07eaba32bb9c32d1b7969eb59578e7e928ca.pdf
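A hedged sketch of the path-integrated (SPIDER-style) estimator pattern behind PILOT's constant-step-size updates: a periodic full gradient anchors the estimator, and between anchors the estimator is corrected along the iterate path using the same sample evaluated at consecutive iterates. PILOT itself is primal-dual over a min-max policy-evaluation objective; only the generic primal pattern is shown, with assumed helper signatures:

```python
import numpy as np

def pilot_style_loop(grad_full, grad_one, n, x0, steps=500, q=50, alpha=0.01, seed=0):
    """grad_full(x): full gradient over all n samples; grad_one(x, i): gradient
    on sample i. Variance-reduced loop in the spirit of PILOT's estimator;
    not the paper's actual primal-dual algorithm."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    x_prev, v = None, None
    for t in range(steps):
        if t % q == 0:
            v = grad_full(x)                      # periodic full-gradient anchor
        else:
            i = rng.integers(n)                   # same sample at both iterates
            v = v + grad_one(x, i) - grad_one(x_prev, i)
        x_prev = x.copy()
        x = x - alpha * v                         # constant step size
    return x
```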
Confronting Reward Model Overoptimization with Constrained RLHF
https://openreview.net/forum?id=gkfUvn0fLU
https://openreview.net/forum?id=gkfUvn0fLU
Ted Moskovitz,Aaditya K Singh,DJ Strouse,Tuomas Sandholm,Ruslan Salakhutdinov,Anca Dragan,Stephen Marcus McAleer
ICLR 2024,Spotlight
Large language models are typically aligned with human preferences by optimizing reward models (RMs) fitted to human feedback. However, human preferences are multi-faceted, and it is increasingly common to derive reward from a composition of simpler reward models which each capture a different aspect of language quality. This itself presents a challenge, as it is difficult to appropriately weight these component RMs when combining them. Compounding this difficulty, because any RM is only a proxy for human evaluation, this process is vulnerable to *overoptimization*, wherein past a certain point, accumulating higher reward is associated with worse human ratings. In this paper, we perform the first study on overoptimization in composite RMs, showing that correlation between component RMs has a significant effect on the locations of these points. We then introduce an approach to solve this issue using constrained reinforcement learning as a means of preventing the agent from exceeding each RM's threshold of usefulness. Our method addresses the problem of weighting component RMs by learning dynamic weights, naturally given by the Lagrange multipliers. As a result, each RM stays within the range at which it is an effective proxy, improving evaluation performance. Finally, we introduce an adaptive method using gradient-free optimization to identify and optimize towards these points during a single run.
https://openreview.net/pdf/9110e24405b3d1c469f8710d548cf6e5b7867692.pdf
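A minimal sketch of the constrained-RL mechanism the abstract describes, assuming each component reward model r_i has a usefulness threshold tau_i and its Lagrange multiplier acts as a dynamic weight; the exact Lagrangian in the paper (derived from a constrained MDP) may differ from this form:

```python
import torch

def constrained_reward(component_rewards, lambdas, thresholds):
    """Lagrangian reward for constraints r_i <= tau_i (stop optimizing an RM
    past its usefulness threshold). Each multiplier effectively down-weights
    its RM once the threshold is crossed. Illustrative form only."""
    return sum(r - lam * (r - tau)
               for lam, r, tau in zip(lambdas, component_rewards, thresholds))

@torch.no_grad()
def dual_ascent_step(lambdas, component_rewards, thresholds, lr=1e-3):
    """Dual update: grow lambda_i while RM i exceeds tau_i, shrink it toward
    zero otherwise; multipliers stay nonnegative."""
    for lam, r, tau in zip(lambdas, component_rewards, thresholds):
        lam.add_(lr * (r.mean() - tau))  # gradient of Lagrangian w.r.t. lambda
        lam.clamp_(min=0.0)
```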
LiDAR: Sensing Linear Probing Performance in Joint Embedding SSL Architectures
https://openreview.net/forum?id=f3g5XpL9Kb
https://openreview.net/forum?id=f3g5XpL9Kb
Vimal Thilak,Chen Huang,Omid Saremi,Laurent Dinh,Hanlin Goh,Preetum Nakkiran,Joshua M. Susskind,Etai Littwin
ICLR 2024,Spotlight
Joint embedding (JE) architectures have emerged as a promising avenue for acquiring transferable data representations. A key obstacle to using JE methods, however, is the inherent challenge of evaluating learned representations without access to a downstream task, and an annotated dataset. Without efficient and reliable evaluation, it is difficult to iterate on architectural and training choices for JE methods. In this paper, we introduce LiDAR (Linear Discriminant Analysis Rank), a metric designed to measure the quality of representations within JE architectures. Our metric addresses several shortcomings of recent approaches based on feature covariance rank by discriminating between informative and uninformative features. In essence, LiDAR quantifies the rank of the Linear Discriminant Analysis (LDA) matrix associated with the surrogate SSL task—a measure that intuitively captures the information content as it pertains to solving the SSL task. We empirically demonstrate that LiDAR significantly surpasses naive rank based approaches in its predictive power of optimal hyperparameters. Our proposed criterion presents a more robust and intuitive means of assessing the quality of representations within JE architectures, which we hope facilitates broader adoption of these powerful techniques in various domains.
https://openreview.net/pdf/fee7013fbdfbbccda18d5123b30300919a05a18f.pdf
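A sketch of a LiDAR-style score as the effective rank of the LDA matrix $\Sigma_w^{-1}\Sigma_b$ computed from embeddings, where "classes" are the surrogate-task groupings (e.g., augmentations of the same image). The paper's exact estimator (regularization, smoothing) may differ; this is an illustrative implementation:

```python
import numpy as np

def lidar_score(Z, labels, eps=1e-6):
    """Effective rank (exponentiated entropy of normalized eigenvalues) of the
    LDA matrix built from embeddings Z (n, d) and surrogate-class labels."""
    mu = Z.mean(axis=0)
    d = Z.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        Zc = Z[labels == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)          # within-class scatter
        diff = (mc - mu)[:, None]
        Sb += len(Zc) * (diff @ diff.T)        # between-class scatter
    Sw = Sw / len(Z) + eps * np.eye(d)         # regularize within-class cov
    Sb /= len(Z)
    evals = np.linalg.eigvals(np.linalg.solve(Sw, Sb)).real.clip(min=0)
    p = evals / (evals.sum() + eps)
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))  # effective rank
```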
Improved Efficiency Based on Learned Saccade and Continuous Scene Reconstruction From Foveated Visual Sampling
https://openreview.net/forum?id=lOwkOIUJtx
https://openreview.net/forum?id=lOwkOIUJtx
Jiayang Liu,Yiming Bu,Daniel Tso,Qinru Qiu
ICLR 2024,Spotlight
High accuracy, low latency and high energy efficiency represent a set of contradictory goals when searching for system solutions for image classification and detection. While high-quality images naturally result in more precise detection and classification, they also result in a heavier computational workload for imaging and processing, reduce camera refresh rates, and increase the volume of data communication between the camera and processor. Taking inspiration from the foveal-peripheral sampling and saccade mechanisms observed in the human visual system, and from the brain's filling-in phenomenon, we have developed an active scene reconstruction architecture based on multiple foveal views. This model stitches together information from foveal and peripheral vision, which are sampled from multiple glances. Assisted by a reinforcement learning-based saccade mechanism, our model reduces the required input pixels by over 90\% per frame while maintaining the same level of performance in image recognition as with the original images. We evaluated the effectiveness of our model using the GTSRB dataset and the ImageNet dataset. Using an equal number of input pixels, our study demonstrates a 5\% higher image recognition accuracy compared to state-of-the-art foveal-peripheral vision systems. Furthermore, we demonstrate that our foveal sampling/saccadic scene reconstruction model exhibits significantly lower complexity and higher data efficiency during the training phase compared to existing approaches.
https://openreview.net/pdf/ad265fc85f17aaf422d2fd23b3440143ba832adc.pdf
Overthinking the Truth: Understanding how Language Models Process False Demonstrations
https://openreview.net/forum?id=Tigr1kMDZy
https://openreview.net/forum?id=Tigr1kMDZy
Danny Halawi,Jean-Stanislas Denain,Jacob Steinhardt
ICLR 2024,Spotlight
Modern language models can imitate complex patterns through few-shot learning, enabling them to complete challenging tasks without fine-tuning. However, imitation can also lead models to reproduce inaccuracies or harmful content if present in the context. We study harmful imitation through the lens of a model’s internal representations, and identify two related phenomena: overthinking and false induction heads. The first phenomenon, overthinking, appears when we decode predictions from intermediate layers, given correct vs. incorrect few-shot demonstrations. At early layers, both demonstrations induce similar model behavior, but the behavior diverges sharply at some “critical layer”, after which the accuracy given incorrect demonstrations progressively decreases. The second phenomenon, false induction heads, is a possible mechanistic cause of overthinking: these are heads in late layers that attend to and copy false information from previous demonstrations, and whose ablation reduces overthinking. Beyond scientific understanding, our results suggest that studying intermediate model computations could be a promising avenue for understanding and guarding against harmful model behaviors.
https://openreview.net/pdf/55ea326a66e2cf61319ebe01bdea1b4ebbd8d775.pdf
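Decoding predictions from intermediate layers, as the abstract describes, can be done with a standard "logit lens" probe: apply the final layer norm and unembedding to mid-layer hidden states. The paper's exact probing setup may differ; a minimal sketch with GPT-2 and a stand-in prompt:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "2 + 2 ="  # stand-in for a few-shot prompt with (in)correct demos
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)
    for layer, h in enumerate(out.hidden_states):
        # Final layer norm + unembedding applied to this layer's last position
        logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
        print(layer, tok.decode(logits.argmax(-1)))
```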
MT-Ranker: Reference-free machine translation evaluation by inter-system ranking
https://openreview.net/forum?id=Rry1SeSOQL
https://openreview.net/forum?id=Rry1SeSOQL
Ibraheem Muhammad Moosa,Rui Zhang,Wenpeng Yin
ICLR 2024,Spotlight
Traditionally, Machine Translation (MT) Evaluation has been treated as a regression problem -- producing an absolute translation-quality score. This approach has two limitations: i) the scores lack interpretability, and human annotators struggle with giving consistent scores; ii) most scoring methods are based on (reference, translation) pairs, limiting their applicability in real-world scenarios where references are absent. In practice, we often care about whether a new MT system is better or worse than some competitors. In addition, reference-free MT evaluation is increasingly practical and necessary. Unfortunately, these two practical considerations have yet to be jointly explored. In this work, we formulate reference-free MT evaluation as a pairwise ranking problem. Given the source sentence and a pair of translations, our system predicts which translation is better. In addition to proposing this new formulation, we further show that this new paradigm can demonstrate superior correlation with human judgments by merely using indirect supervision from natural language inference and weak supervision from our synthetic data. In the context of reference-free evaluation, MT-Ranker, trained without any human annotations, achieves state-of-the-art results on the WMT Shared Metrics Task benchmarks DARR20, MQM20, and MQM21. On a more challenging benchmark, ACES, which contains fine-grained evaluation criteria such as addition, omission, and mistranslation errors, MT-Ranker sets a new state-of-the-art against reference-free as well as reference-based baselines.
https://openreview.net/pdf/fa181eb3a2cd5c3485b73e3829ad16f3dffa5faa.pdf
MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning
https://openreview.net/forum?id=jenyYQzue1
https://openreview.net/forum?id=jenyYQzue1
Zayne Rea Sprague,Xi Ye,Kaj Bostrom,Swarat Chaudhuri,Greg Durrett
ICLR 2024,Spotlight
While large language models (LLMs) equipped with techniques like chain-of-thought prompting have demonstrated impressive capabilities, they still fall short in their ability to reason robustly in complex settings. However, evaluating LLM reasoning is challenging because system capabilities continue to grow while benchmark datasets for tasks like logical deduction have remained static. We introduce MuSR, a dataset for evaluating language models on multistep soft reasoning tasks specified in a natural language narrative. This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm, enabling the construction of complex reasoning instances that challenge GPT-4 (e.g., murder mysteries roughly 1000 words in length) and which can be scaled further as more capable LLMs are released. Second, our data instances are free text narratives corresponding to real-world domains of reasoning; this makes it simultaneously much more challenging than other synthetically-crafted benchmarks while remaining realistic and tractable for human annotators to solve with high accuracy. We evaluate a range of LLMs and prompting techniques on this dataset and characterize the gaps that remain for techniques like chain-of-thought to perform robust reasoning.
https://openreview.net/pdf/0fd545b50f3dd67f4d965d2b37b07aa5d08aba77.pdf
Harnessing Density Ratios for Online Reinforcement Learning
https://openreview.net/forum?id=THJEa8adBn
https://openreview.net/forum?id=THJEa8adBn
Philip Amortila,Dylan J Foster,Nan Jiang,Ayush Sekhari,Tengyang Xie
ICLR 2024,Spotlight
The theories of offline and online reinforcement learning, despite having evolved in parallel, have begun to show signs of the possibility for a unification, with algorithms and analysis techniques for one setting often having natural counterparts in the other. However, the notion of *density ratio modeling*, an emerging paradigm in offline RL, has been largely absent from online RL, perhaps for good reason: the very existence and boundedness of density ratios relies on access to an exploratory dataset with good coverage, but the core challenge in online RL is to collect such a dataset without having one to start. In this work we show---perhaps surprisingly---that density ratio-based algorithms have online counterparts. Assuming only the existence of an exploratory distribution with good coverage, a structural condition known as *coverability* (Xie et al., 2023), we give a new algorithm (GLOW) that uses density ratio realizability and value function realizability to perform sample-efficient online exploration. GLOW addresses unbounded density ratios via careful use of truncation, and combines this with optimism to guide exploration. GLOW is computationally inefficient; we complement it with a more efficient counterpart, HyGLOW, for the Hybrid RL setting (Song et al., 2023) wherein online RL is augmented with additional offline data. HyGLOW is derived as a special case of a more general meta-algorithm that provides a provable black-box reduction from hybrid RL to offline RL, which may be of independent interest.
https://openreview.net/pdf/2fffb35b07edd292dea91e77c2c874abdf3e831f.pdf
Predictive, scalable and interpretable knowledge tracing on structured domains
https://openreview.net/forum?id=NgaLU2fP5D
https://openreview.net/forum?id=NgaLU2fP5D
Hanqi Zhou,Robert Bamler,Charley M Wu,Álvaro Tejero-Cantero
ICLR 2024,Spotlight
Intelligent tutoring systems optimize the selection and timing of learning materials to enhance understanding and long-term retention. This requires estimates of both the learner's progress ("knowledge tracing"; KT), and the prerequisite structure of the learning domain ("knowledge mapping"). While recent deep learning models achieve high KT accuracy, they do so at the expense of the interpretability of psychologically-inspired models. In this work, we present a solution to this trade-off. PSI-KT is a hierarchical generative approach that explicitly models how both individual cognitive traits and the prerequisite structure of knowledge influence learning dynamics, thus achieving interpretability by design. Moreover, by using scalable Bayesian inference, PSI-KT targets the real-world need for efficient personalization even with a growing body of learners and interaction data. Evaluated on three datasets from online learning platforms, PSI-KT achieves superior multi-step **p**redictive accuracy and **s**calable inference in continual-learning settings, all while providing **i**nterpretable representations of learner-specific traits and the prerequisite structure of knowledge that causally supports learning. In sum, predictive, scalable and interpretable knowledge tracing with solid knowledge mapping lays a key foundation for effective personalized learning to make education accessible to a broad, global audience.
https://openreview.net/pdf/52c2533f0586eecd513971405b1c38d7810ec28b.pdf
From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
https://openreview.net/forum?id=vngVydDWft
https://openreview.net/forum?id=vngVydDWft
Irene Cannistraci,Luca Moschella,Marco Fumero,Valentino Maiorca,Emanuele Rodolà
ICLR 2024,Spotlight
It has been observed that representations learned by distinct neural networks conceal structural similarities when the models are trained under similar inductive biases. From a geometric perspective, identifying the classes of transformations and the related invariances that connect these representations is fundamental to unlocking applications, such as merging, stitching, and reusing different neural modules. However, estimating task-specific transformations a priori can be challenging and expensive due to several factors (e.g., weights initialization, training hyperparameters, or data modality). To this end, we introduce a versatile method to directly incorporate a set of invariances into the representations, constructing a product space of invariant components on top of the latent representations without requiring prior knowledge about the optimal invariance to infuse. We validate our solution on classification and reconstruction tasks, observing consistent latent similarity and downstream performance improvements in a zero-shot stitching setting. The experimental analysis comprises three modalities (vision, text, and graphs), twelve pretrained foundational models, nine benchmarks, and several architectures trained from scratch.
https://openreview.net/pdf/4643421d3e88ae6516cd76daa15145a0b3f490d5.pdf
Proximal Policy Gradient Arborescence for Quality Diversity Reinforcement Learning
https://openreview.net/forum?id=TFKIfhvdmZ
https://openreview.net/forum?id=TFKIfhvdmZ
Sumeet Batra,Bryon Tjanaka,Matthew Christopher Fontaine,Aleksei Petrenko,Stefanos Nikolaidis,Gaurav S. Sukhatme
ICLR 2024,Spotlight
Training generally capable agents that thoroughly explore their environment and learn new and diverse skills is a long-term goal of robot learning. Quality Diversity Reinforcement Learning (QD-RL) is an emerging research area that blends the best aspects of both fields – Quality Diversity (QD) provides a principled form of exploration and produces collections of behaviorally diverse agents, while Reinforcement Learning (RL) provides a powerful performance improvement operator enabling generalization across tasks and dynamic environments. Existing QD-RL approaches have been constrained to sample efficient, deterministic off-policy RL algorithms and/or evolution strategies and struggle with highly stochastic environments. In this work, we, for the first time, adapt on-policy RL, specifically Proximal Policy Optimization (PPO), to the Differentiable Quality Diversity (DQD) framework and propose several changes that enable efficient optimization and discovery of novel skills on high-dimensional, stochastic robotics tasks. Our new algorithm, Proximal Policy Gradient Arborescence (PPGA), achieves state-of-the-art results, including a 4x improvement in best reward over baselines on the challenging humanoid domain.
https://openreview.net/pdf/f3442a5e81e1db8cb186b12c6fe204d3de0490dc.pdf
Memorization Capacity of Multi-Head Attention in Transformers
https://openreview.net/forum?id=MrR3rMxqqv
https://openreview.net/forum?id=MrR3rMxqqv
Sadegh Mahdavi,Renjie Liao,Christos Thrampoulidis
ICLR 2024,Spotlight
Transformers have become the go-to architecture for language and vision tasks, yet their theoretical properties, especially memorization capacity, remain elusive. This paper investigates the memorization abilities of multi-head attention mechanisms, examining how many example sequences they can memorize, as a function of the number of heads and sequence length. Motivated by experimental findings on vision transformers, we introduce novel assumptions about the linear independence of input data, distinct from the commonly used general-position assumption. Under these assumptions, we demonstrate that an attention layer with $H$ heads, dimension $d$, and context size $n < d,$ featuring $\Theta(Hd^2)$ parameters, can memorize $\Omega(Hn)$ examples. Our analysis sheds light on how different attention heads handle various example sequences, aided by the softmax operator’s saturation property. We validate our findings through experiments on synthetic data.
https://openreview.net/pdf/30b070a1d5057982a67ece4fdb61fca629dea9e6.pdf
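To make the $\Theta(Hd^2)$-parameters / $\Omega(Hn)$-examples scaling concrete, a quick plug-in with GPT-2-like sizes; the constants hidden by $\Theta/\Omega$ are unknown, so these are order-of-magnitude counts only, not exact capacities:

```python
# Illustrative plug-in of the paper's scaling laws (constants suppressed by
# Theta/Omega are unknown; order-of-magnitude only).
H, d, n = 12, 768, 512          # heads, dimension, context size (n < d holds)
params = H * d**2               # attention-layer parameter count, ~Theta(H d^2)
examples = H * n                # memorizable example sequences, ~Omega(H n)
print(f"~{params:,} parameters memorize on the order of {examples:,} examples")
print(f"parameters per memorized example: ~{params // examples:,}")
```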
Circuit Component Reuse Across Tasks in Transformer Language Models
https://openreview.net/forum?id=fpoAYV6Wsk
https://openreview.net/forum?id=fpoAYV6Wsk
Jack Merullo,Carsten Eickhoff,Ellie Pavlick
ICLR 2024,Spotlight
Recent work in mechanistic interpretability has shown that behaviors in language models can be successfully reverse-engineered through circuit analysis. A common criticism, however, is that each circuit is task-specific, and thus such analysis cannot contribute to understanding the models at a higher level. In this work, we present evidence that insights (both low-level findings about specific heads and higher-level findings about general algorithms) can indeed generalize across tasks. Specifically, we study the circuit discovered in (Wang, 2022) for the Indirect Object Identification (IOI) task and 1.) show that it reproduces on a larger GPT2 model, and 2.) that it is mostly reused to solve a seemingly different task: Colored Objects (Ippolito & Callison-Burch, 2023). We provide evidence that the process underlying both tasks is functionally very similar, and contains about a 78% overlap in in-circuit attention heads. We further present a proof-of-concept intervention experiment, in which we adjust four attention heads in middle layers in order to ‘repair’ the Colored Objects circuit and make it behave like the IOI circuit. In doing so, we boost accuracy from 49.6% to 93.7% on the Colored Objects task and explain most sources of error. The intervention affects downstream attention heads in specific ways predicted by their interactions in the IOI circuit, indicating that this subcircuit behavior is invariant to the different task inputs. Overall, our results provide evidence that it may yet be possible to explain large language models' behavior in terms of a relatively small number of interpretable task-general algorithmic building blocks and computational components.
https://openreview.net/pdf/20152cb1b27ee48c1edca998e2aa13b4249cabaa.pdf
Likelihood Training of Cascaded Diffusion Models via Hierarchical Volume-preserving Maps
https://openreview.net/forum?id=sojpn00o8z
https://openreview.net/forum?id=sojpn00o8z
Henry Li,Ronen Basri,Yuval Kluger
ICLR 2024,Spotlight
Cascaded models are multi-scale generative models with a marked capacity for producing perceptually impressive samples at high resolutions. In this work, we show that they can also be excellent likelihood models, so long as we overcome a fundamental difficulty with probabilistic multi-scale models: the intractability of the likelihood function. Chiefly, in cascaded models each intermediary scale introduces extraneous variables that cannot be tractably marginalized out for likelihood evaluation. This issue vanishes by modeling the diffusion process on latent spaces induced by a class of transformations we call hierarchical volume-preserving maps, which decompose spatially structured data in a hierarchical fashion without introducing local distortions in the latent space. We demonstrate that two such maps are well-known in the literature for multiscale modeling: Laplacian pyramids and wavelet transforms. Not only do such reparameterizations allow the likelihood function to be directly expressed as a joint likelihood over the scales; we show that the Laplacian pyramid and wavelet transform also produce significant improvements to the state-of-the-art on a selection of benchmarks in likelihood modeling, including density estimation, lossless compression, and out-of-distribution detection. Investigating the theoretical basis of our empirical gains, we uncover deep connections to score matching under the Earth Mover's Distance (EMD), which is a well-known surrogate for perceptual similarity.
https://openreview.net/pdf/865dfe741f8cb5f9543b9889e222096ffcafed42.pdf
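A self-contained toy of one of the two maps named in the abstract, the Laplacian pyramid, using 2x average-pool downsampling and nearest-neighbor upsampling in place of the usual Gaussian smoothing kernels. It illustrates the exactly invertible multi-scale decomposition on which a joint likelihood over scales can be defined; the paper's construction differs in its choice of filters:

```python
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose img into per-scale detail bands plus a coarse residual."""
    bands, cur = [], img.astype(float)
    for _ in range(levels):
        h, w = cur.shape
        down = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # downsample
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)       # upsample
        bands.append(cur - up)   # detail band at this scale
        cur = down
    return bands, cur

def reconstruct(bands, coarse):
    cur = coarse
    for band in reversed(bands):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + band
    return cur

x = np.random.rand(32, 32)
bands, coarse = laplacian_pyramid(x)
assert np.allclose(reconstruct(bands, coarse), x)  # exact inversion
```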
Empirical Analysis of Model Selection for Heterogeneous Causal Effect Estimation
https://openreview.net/forum?id=yuy6cGt3KL
https://openreview.net/forum?id=yuy6cGt3KL
Divyat Mahajan,Ioannis Mitliagkas,Brady Neal,Vasilis Syrgkanis
ICLR 2024,Spotlight
We study the problem of model selection in causal inference, specifically for conditional average treatment effect (CATE) estimation. Unlike machine learning, there is no perfect analogue of cross-validation for model selection as we do not observe the counterfactual potential outcomes. Towards this, a variety of surrogate metrics have been proposed for CATE model selection that use only observed data. However, we do not have a good understanding regarding their effectiveness due to limited comparisons in prior studies. We conduct an extensive empirical analysis to benchmark the surrogate model selection metrics introduced in the literature, as well as the novel ones introduced in this work. We ensure a fair comparison by tuning the hyperparameters associated with these metrics via AutoML, and provide more detailed trends by incorporating realistic datasets via generative modeling. Our analysis suggests novel model selection strategies based on careful hyperparameter selection of CATE estimators and causal ensembling.
https://openreview.net/pdf/68047fc672c9a6819fb21499f7e2b6e8191790b9.pdf
Confidential-DPproof: Confidential Proof of Differentially Private Training
https://openreview.net/forum?id=PQY2v6VtGe
https://openreview.net/forum?id=PQY2v6VtGe
Ali Shahin Shamsabadi,Gefei Tan,Tudor Ioan Cebere,Aurélien Bellet,Hamed Haddadi,Nicolas Papernot,Xiao Wang,Adrian Weller
ICLR 2024,Spotlight
Post hoc privacy auditing techniques can be used to test the privacy guarantees of a model, but come with several limitations: (i) they can only establish lower bounds on the privacy loss, (ii) the intermediate model updates and some data must be shared with the auditor to get a better approximation of the privacy loss, and (iii) the auditor typically faces a steep computational cost to run a large number of attacks. In this paper, we propose to proactively generate a cryptographic certificate of privacy during training to forego such auditing limitations. We introduce Confidential-DPproof, a framework for Confidential Proof of Differentially Private Training, which enhances training with a certificate of the $(\varepsilon,\delta)$-DP guarantee achieved. To obtain this certificate without revealing information about the training data or model, we design a customized zero-knowledge proof protocol tailored to the requirements introduced by differentially private training, including random noise addition and privacy amplification by subsampling. In experiments on CIFAR-10, Confidential-DPproof trains a model achieving state-of-the-art $91$% test accuracy with a certified privacy guarantee of $(\varepsilon=0.55,\delta=10^{-5})$-DP in approximately 100 hours.
https://openreview.net/pdf/94732346a3d36701b0a68d02f2366498641c54ee.pdf
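For context, a minimal sketch of the DP-SGD operations (per-example clipping plus Gaussian noise) that, together with subsampling, such a zero-knowledge certificate must attest were really applied during training; this is not the paper's protocol, only the training step it certifies:

```python
import torch

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, sigma=1.0):
    """One DP-SGD-style update. per_example_grads: list (one entry per
    example) of lists of gradient tensors matching params."""
    clipped = []
    for grads in per_example_grads:
        norm = torch.norm(torch.stack([g.norm() for g in grads]))  # full L2 norm
        scale = min(1.0, clip / (norm.item() + 1e-12))             # clip to <= clip
        clipped.append([g * scale for g in grads])
    n = len(clipped)
    for i, p in enumerate(params):
        avg = torch.stack([ex[i] for ex in clipped]).mean(dim=0)
        noise = torch.randn_like(avg) * (sigma * clip / n)         # Gaussian noise
        p.data.add_(avg + noise, alpha=-lr)                        # noisy step
```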
In-Context Pretraining: Language Modeling Beyond Document Boundaries
https://openreview.net/forum?id=LXVswInHOo
https://openreview.net/forum?id=LXVswInHOo
Weijia Shi,Sewon Min,Maria Lomeli,Chunting Zhou,Margaret Li,Xi Victoria Lin,Noah A. Smith,Luke Zettlemoyer,Wen-tau Yih,Mike Lewis
ICLR 2024,Spotlight
Language models are currently trained to predict tokens given document prefixes, enabling them to perform zero-shot long-form generation and prompting-style tasks which can be reduced to document completion. We instead present IN-CONTEXT PRETRAINING, a new approach where language models are trained on a sequence of related documents, thereby explicitly encouraging them to read and reason across document boundaries. Our approach builds on the fact that current pipelines train by concatenating random sets of shorter documents to create longer context windows; this improves efficiency even though the prior documents provide no signal for predicting the next document. Given this fact, we can do IN-CONTEXT PRETRAINING by simply changing the document ordering so that each context contains related documents, and directly applying existing pretraining pipelines. However, this document sorting problem is challenging. There are billions of documents and we would like the sort to maximize contextual similarity for every document without repeating any data. To do this, we introduce approximate algorithms for finding related documents with efficient nearest neighbor search and constructing coherent batches with a graph cover algorithm. Our experiments show IN-CONTEXT PRETRAINING offers a scalable and simple approach to significantly enhance LM performance: we see notable improvements in tasks that require more complex contextual reasoning, including in-context learning (+8%), reading comprehension (+15%), faithfulness to previous contexts (+16%), long-context reasoning (+5%), and retrieval augmentation (+9%).
https://openreview.net/pdf/a7de9c0b47acd0a990190c7e40945c1f335d4201.pdf
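The document-sorting objective can be illustrated with a brute-force greedy nearest-neighbor tour over document embeddings. The paper's pipeline uses approximate nearest-neighbor search and a graph cover algorithm to do this at billion-document scale without repeating data; the small-corpus sketch below only conveys the idea:

```python
import numpy as np

def greedy_related_ordering(doc_embs):
    """Order documents so each one is followed by its most similar unvisited
    neighbor (cosine similarity), visiting every document exactly once."""
    E = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    n = len(E)
    visited = np.zeros(n, dtype=bool)
    order = [0]
    visited[0] = True
    for _ in range(n - 1):
        sims = E @ E[order[-1]]
        sims[visited] = -np.inf          # never repeat a document
        nxt = int(np.argmax(sims))       # most similar unvisited document
        order.append(nxt)
        visited[nxt] = True
    return order
```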
What's In My Big Data?
https://openreview.net/forum?id=RvfPnOkPV4
https://openreview.net/forum?id=RvfPnOkPV4
Yanai Elazar,Akshita Bhagia,Ian Helgi Magnusson,Abhilasha Ravichander,Dustin Schwenk,Alane Suhr,Evan Pete Walsh,Dirk Groeneveld,Luca Soldaini,Sameer Singh,Hannaneh Hajishirzi,Noah A. Smith,Jesse Dodge
ICLR 2024,Spotlight
Large text corpora are the backbone of language models. However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion of evaluation data (contamination). In this work, we propose What's In My Big Data? (WIMBD), a platform and a set of sixteen analyses that allow us to reveal and compare the contents of large text corpora. WIMBD builds on two basic capabilities---count and search---*at scale*, which allows us to analyze more than 35 terabytes on a standard compute node. We apply WIMBD to ten different corpora used to train popular language models, including *C4*, *The Pile*, and *RedPajama*. Our analysis uncovers several surprising and previously undocumented findings about these corpora, including the high prevalence of duplicate, synthetic, and low-quality content, personally identifiable information, toxic language, and benchmark contamination. For instance, we find that about 50% of the documents in *RedPajama* and *LAION-2B-en* are duplicates. In addition, several datasets used for benchmarking models trained on such corpora are contaminated with respect to important benchmarks, including the Winograd Schema Challenge and parts of GLUE and SuperGLUE. We open-source WIMBD's code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them.
https://openreview.net/pdf/8e645356fd6998459b2c368f65c8a2b3a44206af.pdf
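A tiny illustration of the "count" capability applied to exact-duplicate detection, one of the analyses behind the ~50% duplicate figure above. WIMBD implements count and search at tens-of-terabytes scale; this in-memory version with a hypothetical helper name shows only the idea:

```python
import hashlib
from collections import Counter

def duplicate_rate(documents):
    """Fraction of documents that are exact duplicates (beyond the first copy),
    via content hashing after light normalization."""
    counts = Counter(
        hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        for doc in documents
    )
    n = sum(counts.values())
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / n if n else 0.0

print(duplicate_rate(["a cat", "a dog", "A cat ", "a cat"]))  # 0.5
```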
On Diffusion Modeling for Anomaly Detection
https://openreview.net/forum?id=lR3rk7ysXz
https://openreview.net/forum?id=lR3rk7ysXz
Victor Livernoche,Vineet Jain,Yashar Hezaveh,Siamak Ravanbakhsh
ICLR 2024,Spotlight
Known for their impressive performance in generative modeling, diffusion models are attractive candidates for density-based anomaly detection. This paper investigates different variations of diffusion modeling for unsupervised and semi-supervised anomaly detection. In particular, we find that Denoising Diffusion Probabilistic Models (DDPM) are performant on anomaly detection benchmarks yet computationally expensive. By simplifying DDPM in application to anomaly detection, we are naturally led to an alternative approach called Diffusion Time Estimation (DTE). DTE estimates the distribution over diffusion time for a given input and uses the mode or mean of this distribution as the anomaly score. We derive an analytical form for this density and leverage a deep neural network to improve inference efficiency. Through empirical evaluations on the ADBench benchmark, we demonstrate that all diffusion-based anomaly detection methods perform competitively for both semi-supervised and unsupervised settings. Notably, DTE achieves orders of magnitude faster inference time than DDPM, while outperforming it on this benchmark. These results establish diffusion-based anomaly detection as a scalable alternative to traditional methods and recent deep-learning techniques for standard unsupervised and semi-supervised anomaly detection settings.
https://openreview.net/pdf/c6480e4c58a2924ca498ff399ce467bb1e61ed7b.pdf
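A sketch of the DTE idea under a standard variance-preserving noise schedule: train a regressor to predict the diffusion timestep from a noised input, then use the predicted timestep on clean test inputs as the anomaly score (points far from the data look "more diffused"). DTE proper works with the full distribution over diffusion time and a derived analytical density; hyperparameters and the helper name below are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_dte_scorer(X_train, T=1000, n_aug=20, seed=0):
    """Fit a timestep regressor on diffused copies of the training data and
    return a scoring function (higher predicted t => more anomalous)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, T)
    alpha_bar = np.cumprod(1.0 - betas)
    Xs, ts = [], []
    for _ in range(n_aug):
        t = rng.integers(0, T, size=len(X_train))
        a = alpha_bar[t][:, None]
        eps = rng.standard_normal(X_train.shape)
        Xs.append(np.sqrt(a) * X_train + np.sqrt(1 - a) * eps)  # forward diffusion
        ts.append(t)
    reg = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200)
    reg.fit(np.vstack(Xs), np.concatenate(ts) / T)  # predict normalized t
    return reg.predict
```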
Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community
https://openreview.net/forum?id=tjn2YZSHUv
https://openreview.net/forum?id=tjn2YZSHUv
Arman Isajanyan,Artur Shatveryan,David Kocharian,Zhangyang Wang,Humphrey Shi
ICLR 2024,Spotlight
Social reward as a form of community recognition provides a strong source of motivation for users of online platforms to actively engage and contribute content to accumulate peers' approval. In the realm of text-conditioned image synthesis, the recent surge in progress has ushered in a collaborative era where users and AI systems coalesce to refine visual creations. This co-creative process in the landscape of online social networks empowers users to craft original visual artworks seeking community validation. Nevertheless, assessing these models in the context of collective community preference introduces distinct challenges. Existing evaluation methods predominantly center on limited-size user studies guided by image quality and alignment with prompts. This work pioneers a paradigm shift, unveiling Social Reward - an innovative reward modeling framework that leverages implicit feedback from social network users engaged in creative editing of generated images. We embark on an extensive journey of dataset curation and refinement, drawing from Picsart: an online visual creation and editing platform, yielding a first million-user-scale dataset of implicit human preferences for user-generated visual art named Picsart Image-Social. Our analysis exposes the shortcomings of current metrics in modeling community creative preference of text-to-image models' outputs, compelling us to introduce a novel predictive model explicitly tailored to address these limitations. Rigorous quantitative experiments and a user study show that our Social Reward model aligns better with social popularity than existing metrics. Furthermore, we utilize Social Reward to fine-tune text-to-image models, yielding images that are more favored by not only Social Reward, but also other established metrics. These findings highlight the relevance and effectiveness of Social Reward in assessing community appreciation for AI-generated artworks, establishing a closer alignment with users' creative goals: creating popular visual art. Codes can be accessed at https://github.com/Picsart-AI-Research/Social-Reward
https://openreview.net/pdf/2deafb9f8640664b5840f46d75dd6e361f54bd88.pdf
Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs
https://openreview.net/forum?id=AfnsTnYphT
https://openreview.net/forum?id=AfnsTnYphT
Aakash Lahoti,Stefani Karp,Ezra Winston,Aarti Singh,Yuanzhi Li
ICLR 2024,Spotlight
Vision tasks are characterized by the properties of locality and translation invariance. The superior performance of convolutional neural networks (CNNs) on these tasks is widely attributed to the inductive bias of locality and weight sharing baked into their architecture. Existing attempts to quantify the statistical benefits of these biases in CNNs over locally connected convolutional neural networks (LCNs) and fully connected neural networks (FCNs) fall into one of the following categories: either they disregard the optimizer and only provide uniform convergence upper bounds with no separating lower bounds, or they consider simplistic tasks that do not truly mirror the locality and translation invariance as found in real-world vision tasks. To address these deficiencies, we introduce the Dynamic Signal Distribution (DSD) classification task that models an image as consisting of $k$ patches, each of dimension $d$, and the label is determined by a $d$-sparse signal vector that can freely appear in any one of the $k$ patches. On this task, for any orthogonally equivariant algorithm like gradient descent, we prove that CNNs require $\tilde{O}(k+d)$ samples, whereas LCNs require $\Omega(kd)$ samples, establishing the statistical advantages of weight sharing in translation invariant tasks. Furthermore, LCNs need $\tilde{O}(k(k+d))$ samples, compared to $\Omega(k^2d)$ samples for FCNs, showcasing the benefits of locality in local tasks. Additionally, we develop information theoretic tools for analyzing randomized algorithms, which may be of interest for statistical research.
https://openreview.net/pdf/8956f1209e23a0bd6029fbbe9b6d595916060b56.pdf
Lion Secretly Solves a Constrained Optimization: As Lyapunov Predicts
https://openreview.net/forum?id=e4xS9ZarDr
https://openreview.net/forum?id=e4xS9ZarDr
Lizhang Chen,Bo Liu,Kaizhao Liang,qiang liu
ICLR 2024,Spotlight
Lion (Evolved Sign Momentum), a new optimizer discovered through program search, has shown promising results in training large AI models. It achieves results comparable to AdamW but with greater memory efficiency. As one might expect from an algorithm discovered by random program search, Lion blends a number of elements from existing algorithms, including signed momentum, decoupled weight decay, and Polyak and Nesterov momentum, but it doesn't fit into any existing category of theoretically grounded optimizers. Thus, even though Lion appears to perform well as a general-purpose optimizer for a wide range of tasks, its theoretical basis remains uncertain. This absence of theoretical clarity limits opportunities to further enhance and expand Lion's efficacy. This work aims to demystify Lion. Using both continuous-time and discrete-time analysis, we demonstrate that Lion is a novel and theoretically grounded approach for minimizing a general loss function $f(x)$ while enforcing a bound constraint $||x||_\infty \leq 1/\lambda$. Lion achieves this through the incorporation of decoupled weight decay, where $\lambda$ represents the weight decay coefficient. Our analysis is facilitated by the development of a new Lyapunov function for the Lion updates. It applies to a wide range of Lion-$\phi$ algorithms, where the $sign(\cdot)$ operator in Lion is replaced by the subgradient of a convex function $\phi$, leading to the solution of the general composite optimization problem $\min_x f(x) + \phi^*(x)$. Our findings provide valuable insights into the dynamics of Lion and pave the way for further enhancements and extensions of Lion-related algorithms.
https://openreview.net/pdf/415e0c57e06c454b4abc460e3311a5bc3e5c9b6b.pdf
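For reference, the Lion update as commonly stated (sign of an interpolated momentum plus decoupled weight decay); per the analysis above, the decay coefficient $\lambda$ is what enforces the bound $||x||_\infty \leq 1/\lambda$. A minimal PyTorch sketch:

```python
import torch

@torch.no_grad()
def lion_step(params, grads, momenta, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.1):
    """One Lion update per parameter tensor; wd plays the role of lambda."""
    for p, g, m in zip(params, grads, momenta):
        update = torch.sign(beta1 * m + (1 - beta1) * g)  # interpolated momentum
        p.add_(update + wd * p, alpha=-lr)                # decoupled weight decay
        m.mul_(beta2).add_(g, alpha=1 - beta2)            # momentum update
```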
Distributionally Robust Optimization with Bias and Variance Reduction
https://openreview.net/forum?id=TTrzgEZt9s
https://openreview.net/forum?id=TTrzgEZt9s
Ronak Mehta,Vincent Roulet,Krishna Pillutla,Zaid Harchaoui
ICLR 2024,Spotlight
We consider the distributionally robust optimization (DRO) problem, wherein a learner optimizes the worst-case empirical risk achievable by reweighing the observed training examples. We present Prospect, a stochastic gradient-based algorithm that only requires tuning a single learning rate hyperparameter, and prove that it enjoys linear convergence for smooth regularized losses. This contrasts with previous algorithms that either require tuning multiple hyperparameters or potentially fail to converge due to biased gradient estimates or inadequate regularization. Empirically, we show that Prospect can converge 2-3x faster than baselines such as SGD and stochastic saddle-point methods on distribution shift and fairness benchmarks spanning tabular, vision, and language domains.
https://openreview.net/pdf/6c3d461c90f544421c04e52861860e354c20c157.pdf
A Benchmark for Learning to Translate a New Language from One Grammar Book
https://openreview.net/forum?id=tbVWug9f2h
https://openreview.net/forum?id=tbVWug9f2h
Garrett Tanzer,Mirac Suzgun,Eline Visser,Dan Jurafsky,Luke Melas-Kyriazi
ICLR 2024,Spotlight
Large language models (LLMs) can perform impressive feats with in-context learning or lightweight finetuning. It is natural to wonder how well these models adapt to genuinely new tasks, but how does one find tasks that are unseen in internet-scale training sets? We turn to a field that is explicitly motivated and bottlenecked by a scarcity of web data: low-resource languages. In this paper, we introduce MTOB (Machine Translation from One Book), a benchmark for learning to translate between English and Kalamang—a language with fewer than 200 speakers and therefore virtually no presence on the web—using several hundred pages of field linguistics reference materials. This task framing is novel in that it asks a model to learn a language from a single human-readable book of grammar explanations, rather than a large mined corpus of in-domain data, more akin to L2 language learning than L1 language acquisition. We demonstrate that baselines using current LLMs are promising but fall short of human performance, achieving 44.7 chrF on Kalamang to English translation and 45.8 chrF on English to Kalamang translation, compared to 51.6 and 57.0 chrF by a human who learned Kalamang from the same reference materials. We hope that MTOB will help measure LLM capabilities along a new dimension, and that the methods developed to solve it could help expand access to language technology for underserved communities by leveraging qualitatively different kinds of data than traditional machine translation.
https://openreview.net/pdf/410fe170bf3313698302de77b1641b5a9ea9eaa3.pdf
Improving Offline RL by Blending Heuristics
https://openreview.net/forum?id=MCl0TLboP1
https://openreview.net/forum?id=MCl0TLboP1
Sinong Geng,Aldo Pacchiano,Andrey Kolobov,Ching-An Cheng
ICLR 2024,Spotlight
We propose **H**e**u**ristic **Bl**ending (HUBL), a simple performance-improving technique for a broad class of offline RL algorithms based on value bootstrapping. HUBL modifies the Bellman operators used in these algorithms, partially replacing the bootstrapped values with heuristic ones that are estimated with Monte-Carlo returns. For trajectories with higher returns, HUBL relies more on the heuristic values and less on bootstrapping; otherwise, it leans more heavily on bootstrapping. HUBL is very easy to combine with many existing offline RL implementations by relabeling the offline datasets with adjusted rewards and discount factors. We derive a theory that explains HUBL's effect on offline RL as reducing offline RL's complexity and thus increasing its finite-sample performance. Furthermore, we empirically demonstrate that HUBL consistently improves the policy quality of four state-of-the-art bootstrapping-based offline RL algorithms (ATAC, CQL, TD3+BC, and IQL), by 9% on average over 27 datasets of the D4RL and Meta-World benchmarks.
https://openreview.net/pdf/16cac64e07aa3a54bc305bc6025b1b1e3326c1f7.pdf
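The relabeling trick in the abstract can be sketched in a few lines: blending the Bellman backup with a Monte-Carlo heuristic $h$ can be implemented purely through adjusted rewards and discounts, roughly $\tilde{r}_t = r_t + \lambda\gamma h(s_{t+1})$ and $\tilde{\gamma} = (1-\lambda)\gamma$. A constant blending factor $\lambda$ is assumed here for illustration; the paper's factor may be trajectory-dependent:

```python
import numpy as np

def hubl_relabel(rewards, mc_returns, gamma=0.99, lam=0.5):
    """Relabel one trajectory's rewards and discount so that any bootstrapping
    offline RL algorithm implicitly blends (1-lam) bootstrap + lam heuristic.
    mc_returns: Monte-Carlo return-to-go per step, used as the heuristic h."""
    h_next = np.append(mc_returns[1:], 0.0)       # heuristic value of next state
    new_rewards = rewards + lam * gamma * h_next  # fold heuristic into reward
    new_gamma = (1.0 - lam) * gamma               # shrink bootstrapping
    return new_rewards, new_gamma
```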
Decision ConvFormer: Local Filtering in MetaFormer is Sufficient for Decision Making
https://openreview.net/forum?id=af2c8EaKl8
https://openreview.net/forum?id=af2c8EaKl8
Jeonghye Kim,Suyoung Lee,Woojun Kim,Youngchul Sung
ICLR 2024,Spotlight
The recent success of Transformer in natural language processing has sparked its use in various domains. In offline reinforcement learning (RL), Decision Transformer (DT) is emerging as a promising model based on Transformer. However, we discovered that the attention module of DT is not appropriate to capture the inherent local dependence pattern in trajectories of RL modeled as a Markov decision process. To overcome the limitations of DT, we propose a novel action sequence predictor, named Decision ConvFormer (DC), based on the architecture of MetaFormer, which is a general structure to process multiple entities in parallel and understand the interrelationship among the multiple entities. DC employs local convolution filtering as the token mixer and can effectively capture the inherent local associations of the RL dataset. In extensive experiments, DC achieved state-of-the-art performance across various standard RL benchmarks while requiring fewer resources. Furthermore, we show that DC better understands the underlying meaning in data and exhibits enhanced generalization capability.
https://openreview.net/pdf/d34443d3c25936596abdb44faed04a15cdf3e290.pdf
How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression?
https://openreview.net/forum?id=vSh5ePa0ph
https://openreview.net/forum?id=vSh5ePa0ph
Jingfeng Wu,Difan Zou,Zixiang Chen,Vladimir Braverman,Quanquan Gu,Peter Bartlett
ICLR 2024,Spotlight
Transformers pretrained on diverse tasks exhibit remarkable in-context learning (ICL) capabilities, enabling them to solve unseen tasks solely based on input contexts without adjusting model parameters. In this paper, we study ICL in one of its simplest setups: pretraining a single-layer linear attention model for linear regression with a Gaussian prior. We establish a statistical task complexity bound for the attention model pretraining, showing that effective pretraining only requires a small number of independent tasks. Furthermore, we prove that the pretrained model closely matches the Bayes optimal algorithm, i.e., optimally tuned ridge regression, by achieving nearly Bayes optimal risk on unseen tasks under a fixed context length. These theoretical findings complement prior experimental research and shed light on the statistical foundations of ICL.
https://openreview.net/pdf/2afad356cf9372d0067c51eea7b8c4169effbe2c.pdf
Tool-Augmented Reward Modeling
https://openreview.net/forum?id=d94x0gWTUX
https://openreview.net/forum?id=d94x0gWTUX
Lei Li,Yekun Chai,Shuohuan Wang,Yu Sun,Hao Tian,Ningyu Zhang,Hua Wu
ICLR 2024,Spotlight
Reward modeling (*a.k.a.*, preference modeling) is instrumental for aligning large language models with human preferences, particularly within the context of reinforcement learning from human feedback (RLHF). While conventional reward models (RMs) have exhibited remarkable scalability, they often struggle with fundamental functionality such as arithmetic computation, code execution, and factual lookup. In this paper, we propose a tool-augmented preference modeling approach, named Themis, to address these limitations by empowering RMs with access to external environments, including calculators and search engines. This approach not only fosters synergy between tool utilization and reward grading but also enhances interpretive capacity and scoring reliability. Our study delves into the integration of external tools into RMs, enabling them to interact with diverse external sources and construct task-specific tool engagement and reasoning traces in an autoregressive manner. We validate our approach across a wide range of domains, incorporating seven distinct external tools. Our experimental results demonstrate a noteworthy overall improvement of 17.7% across eight tasks in preference ranking. Furthermore, our approach outperforms Gopher 280B by 7.3% on the TruthfulQA task in zero-shot evaluation. In human evaluations, RLHF trained with Themis attains an average win rate of 32% when compared to baselines across four distinct tasks. Additionally, we provide a comprehensive collection of tool-related RM datasets, incorporating data from seven distinct tool APIs, totaling 15,000 instances. We have made the code, data, and model checkpoints publicly available to facilitate and inspire further research advancements (https://github.com/ernie-research/Tool-Augmented-Reward-Model).
https://openreview.net/pdf/65b055ecf7ec43b68562fc8ca3ce916f8c400085.pdf
Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning
https://openreview.net/forum?id=GSBHKiw19c
https://openreview.net/forum?id=GSBHKiw19c
Fan-Ming Luo,Tian Xu,Xingchen Cao,Yang Yu
ICLR 2024,Spotlight
Learning a precise dynamics model can be crucial for offline reinforcement learning, which, unfortunately, has been found to be quite challenging. Dynamics models that are learned by fitting historical transitions often struggle to generalize to unseen transitions. In this study, we identify a hidden but pivotal factor termed dynamics reward that remains consistent across transitions, offering a pathway to better generalization. Therefore, we propose the idea of reward-consistent dynamics models: any trajectory generated by the dynamics model should maximize the dynamics reward derived from the data. We implement this idea as the MOREC (Model-based Offline reinforcement learning with Reward Consistency) method, which can be seamlessly integrated into previous offline model-based reinforcement learning (MBRL) methods. MOREC learns a generalizable dynamics reward function from offline data, which is subsequently employed as a transition filter in any offline MBRL method: when generating transitions, the dynamics model generates a batch of transitions and selects the one with the highest dynamics reward value. On a synthetic task, we visualize that MOREC has a strong generalization ability and can surprisingly recover some distant unseen transitions. On 21 offline tasks in D4RL and NeoRL benchmarks, MOREC improves the previous state-of-the-art performance by a significant margin, i.e., 4.6\% on D4RL tasks and 25.9\% on NeoRL tasks. Notably, MOREC is the first method that can achieve above 95\% online RL performance in 6 out of 12 D4RL tasks and 3 out of 9 NeoRL tasks. Code is available at https://github.com/polixir/morec.
https://openreview.net/pdf/b755072e1d772c902d9b57e9ad4a0ff78b0df063.pdf
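The transition-filtering step described in the MOREC entry above is simple to state in code. The sketch below is a hypothetical illustration, not the authors' implementation: `dynamics_models`, `dynamics_reward`, and the tensor shapes are all assumptions.

```python
# Hypothetical sketch of MOREC-style transition filtering; all names are
# illustrative. An ensemble of learned dynamics models proposes candidate
# transitions, and the learned dynamics-reward function keeps the best one.
import torch

def filtered_step(dynamics_models, dynamics_reward, state, action):
    candidates = []
    for model in dynamics_models:                       # ensemble of learned models
        next_state, env_reward = model(state, action)   # one candidate transition
        candidates.append((next_state, env_reward))
    # Score each candidate with the dynamics reward r_dyn(s, a, s')
    scores = torch.stack([dynamics_reward(state, action, ns)
                          for ns, _ in candidates])
    best = int(torch.argmax(scores))
    return candidates[best]                             # highest dynamics-reward transition
```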
Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback
https://openreview.net/forum?id=6yv8UHVJn4
https://openreview.net/forum?id=6yv8UHVJn4
Haolin Liu,Chen-Yu Wei,Julian Zimmert
ICLR 2024,Spotlight
We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback. We introduce two algorithms that achieve improved regret performance compared to existing approaches. The first algorithm, although computationally inefficient, achieves a regret of $\widetilde{O}(\sqrt{K})$ without relying on simulators, where $K$ is the number of episodes. This is the first rate-optimal result in the considered setting. The second algorithm is computationally efficient and achieves a regret of $\widetilde{O}(K^{\frac{3}{4}})$ . These results significantly improve over the prior state-of-the-art: a computationally inefficient algorithm by Kong et al. (2023) with $\widetilde{O}(K^{\frac{4}{5}}+1/\lambda_{\min})$ regret, and a computationally efficient algorithm by Sherman et al. (2023b) with $\widetilde{O}(K^{\frac{6}{7}})$ regret.
https://openreview.net/pdf/364346cf68aee0e5e5761a8aad4c6a42391b9e05.pdf
Dual RL: Unification and New Methods for Reinforcement and Imitation Learning
https://openreview.net/forum?id=xt9Bu66rqv
https://openreview.net/forum?id=xt9Bu66rqv
Harshit Sikchi,Qinqing Zheng,Amy Zhang,Scott Niekum
ICLR 2024,Spotlight
The goal of reinforcement learning (RL) is to find a policy that maximizes the expected cumulative return. It has been shown that this objective can be represented as an optimization problem of state-action visitation distribution under linear constraints. The dual problem of this formulation, which we refer to as *dual RL*, is unconstrained and easier to optimize. In this work, we first cast several state-of-the-art offline RL and offline imitation learning (IL) algorithms as instances of dual RL approaches with shared structures. Such unification allows us to identify the root cause of the shortcomings of prior methods. For offline IL, our analysis shows that prior methods are based on a restrictive coverage assumption that greatly limits their performance in practice. To fix this limitation, we propose a new discriminator-free method ReCOIL that learns to imitate from arbitrary off-policy data to obtain near-expert performance. For offline RL, our analysis frames a recent offline RL method XQL in the dual framework, and we further propose a new method $f$-DVL that provides alternative choices to the Gumbel regression loss that fixes the known training instability issue of XQL. The performance improvements by both of our proposed methods, ReCOIL and $f$-DVL, in IL and RL are validated on an extensive suite of simulated robot locomotion and manipulation tasks.
https://openreview.net/pdf/981af9a85be76008d6063c6ddd1477450c3bf463.pdf
Out-Of-Domain Unlabeled Data Improves Generalization
https://openreview.net/forum?id=Bo6GpQ3B9a
https://openreview.net/forum?id=Bo6GpQ3B9a
seyed amir hossein saberi,Amir Najafi,Alireza Heidari,Mohammad Hosein Movasaghinia,Abolfazl Motahari,Babak Khalaj
ICLR 2024,Spotlight
We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, where scenarios involving the minimization of either i) adversarially robust or ii) non-robust loss functions have been considered. Notably, we allow the unlabeled samples to deviate slightly (in total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework on the classification problem of a mixture of two Gaussians in $\mathbb{R}^d$, where in addition to the $m$ independent and labeled samples from the true distribution, a set of $n$ (usually with $n\gg m$) out-of-domain and unlabeled samples are given as well. Using only the labeled data, it is known that the generalization error can be bounded by $\propto\left(d/m\right)^{1/2}$. However, using our method on both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds which show substantial improvement on the generalization error compared to ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the "cluster assumption", and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets.
https://openreview.net/pdf/db772b639656e2c5123c187ceacfb10805558c17.pdf
Catastrophic Jailbreak of Open-source LLMs via Exploiting Generation
https://openreview.net/forum?id=r42tSSCHPh
https://openreview.net/forum?id=r42tSSCHPh
Yangsibo Huang,Samyak Gupta,Mengzhou Xia,Kai Li,Danqi Chen
ICLR 2024,Spotlight
The rapid progress in open-source large language models (LLMs) is significantly advancing AI development. Extensive efforts have been made before model release to align their behavior with human values, with the primary goal of ensuring their helpfulness and harmlessness. However, even carefully aligned models can be manipulated maliciously, leading to unintended behaviors, known as ``jailbreaks". These jailbreaks are typically triggered by specific text inputs, often referred to as adversarial prompts. In this work, we propose the generation exploitation attack, an extremely simple approach that disrupts model alignment by only manipulating variations of decoding methods. By exploiting different generation strategies, including varying decoding hyper-parameters and sampling methods, we increase the attack success rate from $0\%$ to more than $95\%$ across 11 language models including LLaMA2, Vicuna, Falcon, and MPT families, outperforming state-of-the-art attacks with $30\times$ lower computational cost. Finally, we propose an effective alignment method that explores diverse generation strategies, which can reasonably reduce the attack success rate under our attack. Altogether, our study underscores a major failure in current safety evaluation and alignment procedures for open-source LLMs, strongly advocating for more comprehensive red teaming and better alignment before releasing such models.
https://openreview.net/pdf/721e3e663ee57023772f0b3b63424ccc43e71e44.pdf
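The attack described in the entry above needs no prompt engineering at all; it simply sweeps decoding configurations. A minimal sketch using the Hugging Face `generate` API follows; the model name, the hyperparameter grid, and the safety check are placeholder assumptions, not the paper's setup.

```python
# Sketch of a generation-exploitation sweep (illustrative values throughout).
import itertools
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"   # placeholder model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tok("<some fixed prompt>", return_tensors="pt")  # no adversarial suffix
for temp, top_p, top_k in itertools.product([0.7, 1.0, 1.5],
                                            [0.7, 0.9, 1.0],
                                            [20, 50, 100]):
    out = model.generate(**inputs, do_sample=True, temperature=temp,
                         top_p=top_p, top_k=top_k, max_new_tokens=64)
    text = tok.decode(out[0], skip_special_tokens=True)
    # score `text` with a harmfulness classifier; any unsafe sample is a success
```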
PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters
https://openreview.net/forum?id=y21ZO6M86t
https://openreview.net/forum?id=y21ZO6M86t
Jingyu Chen,Runlin Lei,Zhewei Wei
ICLR 2024,Spotlight
Recently, Graph Contrastive Learning (GCL) has achieved significantly superior performance in self-supervised graph representation learning. However, existing GCL techniques have inherently smooth characteristics because of their low-pass GNN encoders and objectives based on the homophily assumption, which poses a challenge when applying them to heterophilic graphs. In supervised learning tasks, spectral GNNs with polynomial approximation excel in both homophilic and heterophilic settings by adaptively fitting graph filters of arbitrary shapes. Yet, their applications in unsupervised learning are rarely explored. Based on the above analysis, a natural question arises: Can we incorporate the excellent properties of spectral polynomial filters into graph contrastive learning? In this paper, we address the question by studying the necessity of introducing high-pass information for heterophily from a spectral perspective. We propose PolyGCL, a GCL pipeline that utilizes polynomial filters to achieve contrastive learning between the low-pass and high-pass views. Specifically, PolyGCL utilizes polynomials with learnable filter functions to generate different spectral views and an objective that incorporates high-pass information through a linear combination. We theoretically prove that PolyGCL outperforms previous GCL paradigms when applied to graphs with varying levels of homophily. We conduct extensive experiments on both synthetic and real-world datasets, which demonstrate the promising performance of PolyGCL on homophilic and heterophilic graphs.
https://openreview.net/pdf/e0bdb5536d418b614a12c003721153c1e6fbaf4b.pdf
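To make the "learnable polynomial filter" idea from the entry above concrete, here is a hedged sketch of applying a polynomial of the graph Laplacian to node features; the parameterization (`PolyFilter`, raw monomial coefficients) is an illustrative simplification of what the paper actually learns.

```python
import torch
import torch.nn as nn

class PolyFilter(nn.Module):
    """Apply g(L)X = sum_k c_k L^k X with learnable coefficients c_k."""
    def __init__(self, order: int):
        super().__init__()
        self.coeffs = nn.Parameter(0.1 * torch.randn(order + 1))

    def forward(self, L: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        out, Z = torch.zeros_like(X), X
        for c in self.coeffs:      # Z holds L^k X via iterated products
            out = out + c * Z
            Z = L @ Z
        return out

# Two filters give the contrastive low-pass and high-pass views; a learnable
# scalar can then mix high-pass information into the objective.
low_view, high_view = PolyFilter(10), PolyFilter(10)
```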
Solving Homogeneous and Heterogeneous Cooperative Tasks with Greedy Sequential Execution
https://openreview.net/forum?id=hB2hXtxIPH
https://openreview.net/forum?id=hB2hXtxIPH
Shanqi Liu,Dong Xing,Pengjie Gu,Xinrun Wang,Bo An,Yong Liu
ICLR 2024,Spotlight
Cooperative multi-agent reinforcement learning (MARL) is extensively used for solving complex cooperative tasks, and value decomposition methods are a prevalent approach for this domain. However, these methods have not been successful in addressing both homogeneous and heterogeneous tasks simultaneously, which is a crucial aspect for the practical application of cooperative agents. On one hand, value decomposition methods demonstrate superior performance in homogeneous tasks. Nevertheless, they tend to produce agents with similar policies, which is unsuitable for heterogeneous tasks. On the other hand, solutions based on personalized observation or assigned roles are well-suited for heterogeneous tasks. However, they often lead to a trade-off situation where the agent's performance in homogeneous scenarios is negatively affected due to the aggregation of distinct policies. An alternative approach is to adopt sequential execution policies, which offer a flexible form for learning both types of tasks. However, learning sequential execution policies poses challenges in terms of credit assignment, and the limited information about subsequently executed agents can lead to sub-optimal solutions, which is known as the relative over-generalization problem. To tackle these issues, this paper proposes Greedy Sequential Execution (GSE) as a solution to learn the optimal policy that covers both scenarios. In the proposed GSE framework, we introduce an individual utility function into the framework of value decomposition to consider the complex interactions between agents. This function is capable of representing both the homogeneous and heterogeneous optimal policies. Furthermore, we utilize greedy marginal contribution calculated by the utility function as the credit value of the sequential execution policy to address the credit assignment and relative over-generalization problem. We evaluated GSE in both homogeneous and heterogeneous scenarios. The results demonstrate that GSE achieves significant improvement in performance across multiple domains, especially in scenarios involving both homogeneous and heterogeneous tasks.
https://openreview.net/pdf/e91e10a20d5e17ec8a9a469f872e7d0ec680cb37.pdf
Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data
https://openreview.net/forum?id=Xkf2EBj4w3
https://openreview.net/forum?id=Xkf2EBj4w3
Chongyi Zheng,Benjamin Eysenbach,Homer Rich Walke,Patrick Yin,Kuan Fang,Ruslan Salakhutdinov,Sergey Levine
ICLR 2024,Spotlight
Robotic systems that rely primarily on self-supervised learning have the potential to decrease the amount of human annotation and engineering effort required to learn control strategies. In the same way that prior robotic systems have leveraged self-supervised techniques from computer vision (CV) and natural language processing (NLP), our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem: learning to reach any goal without human-specified rewards or labels. Despite the seeming appeal, little (if any) prior work has demonstrated how self-supervised RL methods can be practically deployed on robotic systems. By first studying a challenging simulated version of this task, we discover design decisions about architectures and hyperparameters that increase the success rate by $2 \times$. These findings lay the groundwork for our main result: we demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks, with tasks being specified by a single goal image provided after training.
https://openreview.net/pdf/d61ac5544c95c3239516e7c46f315bce6b00ce8b.pdf
Multi-View Causal Representation Learning with Partial Observability
https://openreview.net/forum?id=OGtnhKQJms
https://openreview.net/forum?id=OGtnhKQJms
Dingling Yao,Danru Xu,Sebastien Lachapelle,Sara Magliacane,Perouz Taslakian,Georg Martius,Julius von Kügelgen,Francesco Locatello
ICLR 2024,Spotlight
We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as different data modalities. We allow a partially observed setting in which each view constitutes a nonlinear mixture of a subset of underlying latent variables, which can be causally related. We prove that the information shared across all subsets of any number of views can be learned up to a smooth bijection using contrastive learning and a single encoder per view. We also provide graphical criteria indicating which latent variables can be identified through a simple set of rules, which we refer to as identifiability algebra. Our general framework and theoretical results unify and extend several previous works on multi-view nonlinear ICA, disentanglement, and causal representation learning. We experimentally validate our claims on numerical, image, and multi-modal data sets. Further, we demonstrate that the performance of prior methods is recovered in different special cases of our setup. Overall, we find that access to multiple partial views offers unique opportunities for identifiable representation learning, enabling the discovery of latent structures from purely observational data.
https://openreview.net/pdf/c4477da8d3ff31861069faeb5e0c7ebdb054e07f.pdf
CABINET: Content Relevance-based Noise Reduction for Table Question Answering
https://openreview.net/forum?id=SQrHpTllXa
https://openreview.net/forum?id=SQrHpTllXa
Sohan Patnaik,Heril Changwal,Milan Aggarwal,Sumit Bhatia,Yaman Kumar,Balaji Krishnamurthy
ICLR 2024,Spotlight
Table understanding capability of Large Language Models (LLMs) has been extensively studied through the task of question-answering (QA) over tables. Typically, only a small part of the whole table is relevant to derive the answer for a given question. The irrelevant parts act as noise and are distracting information, resulting in sub-optimal performance due to the vulnerability of LLMs to noise. To mitigate this, we propose CABINET (Content RelevAnce-Based NoIse ReductioN for TablE QuesTion-Answering) – a framework to enable LLMs to focus on relevant tabular data by suppressing extraneous information. CABINET comprises an Unsupervised Relevance Scorer (URS), trained differentially with the QA LLM, that weighs the table content based on its relevance to the input question before feeding it to the question answering LLM (QA LLM). To further aid the relevance scorer, CABINET employs a weakly supervised module that generates a parsing statement describing the criteria of rows and columns relevant to the question and highlights the content of corresponding table cells. CABINET significantly outperforms various tabular LLM baselines, as well as GPT3-based in-context learning methods, is more robust to noise, maintains outperformance on tables of varying sizes, and establishes new SoTA performance on WikiTQ, FeTaQA, and WikiSQL datasets. We release our code and datasets here.
https://openreview.net/pdf/0a15c1a222a5d423ce19524261f01484f4e7b695.pdf
Safe RLHF: Safe Reinforcement Learning from Human Feedback
https://openreview.net/forum?id=TyFrPOKYXw
https://openreview.net/forum?id=TyFrPOKYXw
Josef Dai,Xuehai Pan,Ruiyang Sun,Jiaming Ji,Xinbo Xu,Mickel Liu,Yizhou Wang,Yaodong Yang
ICLR 2024,Spotlight
With the development of large language models (LLMs), striking a balance between the performance and safety of AI systems has never been more critical. However, the inherent tension between the objectives of helpfulness and harmlessness presents a significant challenge during LLM training. To address this issue, we propose Safe Reinforcement Learning from Human Feedback (Safe RLHF), a novel algorithm for human value alignment. Safe RLHF explicitly decouples human preferences regarding helpfulness and harmlessness, effectively avoiding the crowd workers' confusion about the tension and allowing us to train separate reward and cost models. We formalize the safety concern of LLMs as an optimization task of maximizing the reward function while satisfying specified cost constraints. Leveraging the Lagrangian method to solve this constrained problem, Safe RLHF dynamically adjusts the balance between the two objectives during fine-tuning. Through a three-round fine-tuning using Safe RLHF, we demonstrate a superior ability to mitigate harmful responses while enhancing model performance compared to existing value-aligned algorithms. Experimentally, we fine-tuned the Alpaca-7B using Safe RLHF and aligned it with collected human preferences, significantly improving its helpfulness and harmlessness according to human evaluations. Code is available at https://github.com/PKU-Alignment/safe-rlhf. Warning: This paper contains example data that may be offensive or harmful.
https://openreview.net/pdf/4db509db9de557fb05dd265958739cb86ea87827.pdf
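The constrained objective at the heart of the Safe RLHF entry above, maximize reward subject to a cost limit, is typically handled with a Lagrangian relaxation that alternates policy updates with a multiplier update. The sketch below is a minimal illustration under that reading; the variable names, learning rates, and PPO-style surrogate are assumptions, not the paper's released code.

```python
import torch

log_lam = torch.zeros(1, requires_grad=True)        # lambda >= 0 via exp
lam_opt = torch.optim.Adam([log_lam], lr=1e-3)
cost_limit = 0.0                                    # constraint J_C(pi) <= d

def policy_loss(reward_adv, cost_adv, logp_ratio):
    lam = log_lam.exp().detach()
    adv = (reward_adv - lam * cost_adv) / (1.0 + lam)  # combined advantage
    return -(logp_ratio * adv).mean()                  # PPO-style surrogate

def multiplier_step(mean_episode_cost: float):
    # Ascent on lambda: the multiplier grows while the constraint is violated
    lam_opt.zero_grad()
    (-log_lam.exp() * (mean_episode_cost - cost_limit)).backward()
    lam_opt.step()
```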
Benchmarking Algorithms for Federated Domain Generalization
https://openreview.net/forum?id=wprSv7ichW
https://openreview.net/forum?id=wprSv7ichW
Ruqi Bai,Saurabh Bagchi,David I. Inouye
ICLR 2024,Spotlight
While prior federated learning (FL) methods mainly consider client heterogeneity, we focus on the *Federated Domain Generalization (DG)* task, which introduces train-test heterogeneity in the FL context. Existing evaluations in this field are limited in terms of the scale of the clients and dataset diversity. Thus, we propose a Federated DG benchmark that aims to test the limits of current methods with high client heterogeneity, large numbers of clients, and diverse datasets. Towards this objective, we introduce a novel data partition method that allows us to distribute any domain dataset among few or many clients while controlling client heterogeneity. We then introduce and apply our methodology to evaluate 14 DG methods, which include centralized DG methods adapted to the FL context, FL methods that handle client heterogeneity, and methods designed specifically for Federated DG on 7 datasets. Our results suggest that, despite some progress, significant performance gaps remain in Federated DG, especially when evaluating with a large number of clients, high client heterogeneity, or more realistic datasets. Furthermore, our extendable benchmark code will be publicly released to aid in benchmarking future Federated DG approaches.
https://openreview.net/pdf/216358074a4ed3ebfdc3beb60b624a2b6647445d.pdf
CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity
https://openreview.net/forum?id=PczQtTsTIX
https://openreview.net/forum?id=PczQtTsTIX
Aditya Bhatt,Daniel Palenicek,Boris Belousov,Max Argus,Artemij Amiranashvili,Thomas Brox,Jan Peters
ICLR 2024,Spotlight
Sample efficiency is a crucial problem in deep reinforcement learning. Recent algorithms, such as REDQ and DroQ, found a way to improve the sample efficiency by increasing the update-to-data (UTD) ratio to 20 gradient update steps on the critic per environment sample. However, this comes at the expense of a greatly increased computational cost. To reduce this computational burden, we introduce CrossQ: A lightweight algorithm for continuous control tasks that makes careful use of Batch Normalization and removes target networks to surpass the current state-of-the-art in sample efficiency while maintaining a low UTD ratio of 1. Notably, CrossQ does not rely on advanced bias-reduction schemes used in current methods. CrossQ's contributions are threefold: (1) it matches or surpasses current state-of-the-art methods in terms of sample efficiency, (2) it substantially reduces the computational cost compared to REDQ and DroQ, (3) it is easy to implement, requiring just a few lines of code on top of SAC.
https://openreview.net/pdf/750ae12418a1dc0f2dd3d9ff5ef5013234515fe6.pdf
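CrossQ's central trick, per the entry above, is to drop target networks and pass current and next state-action pairs through the BatchNorm critic in a single joint batch, so the normalization statistics cover both distributions. A simplified sketch of that critic update follows; tensor shapes and the surrounding SAC machinery are elided assumptions.

```python
import torch
import torch.nn.functional as F

def crossq_critic_loss(critic, policy, batch, gamma=0.99):
    s, a, r, s2, done = batch                     # tensors of shape (B, ...)
    with torch.no_grad():
        a2 = policy(s2)                           # next action, current policy
    joint = torch.cat([torch.cat([s, a], -1),
                       torch.cat([s2, a2], -1)], dim=0)
    q_all = critic(joint)                         # one pass -> shared BN stats
    q, q_next = q_all.chunk(2, dim=0)
    target = r + gamma * (1.0 - done) * q_next.detach()
    return F.mse_loss(q, target)                  # no target network anywhere
```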
Blending Imitation and Reinforcement Learning for Robust Policy Improvement
https://openreview.net/forum?id=eJ0dzPJq1F
https://openreview.net/forum?id=eJ0dzPJq1F
Xuefeng Liu,Takuma Yoneda,Rick Stevens,Matthew Walter,Yuxin Chen
ICLR 2024,Spotlight
While reinforcement learning (RL) has shown promising performance, its sample complexity continues to be a substantial hurdle, restricting its broader application across a variety of domains. Imitation learning (IL) utilizes oracles to improve sample efficiency, yet it is often constrained by the quality of the oracles deployed. To address the demand for robust policy improvement in real-world scenarios, we introduce a novel algorithm, Robust Policy Improvement (RPI), which actively interleaves between IL and RL based on an online estimate of their performance. RPI draws on the strengths of IL, using oracle queries to facilitate exploration—an aspect that is notably challenging in sparse-reward RL—particularly during the early stages of learning. As learning unfolds, RPI gradually transitions to RL, effectively treating the learned policy as an improved oracle. This algorithm is capable of learning from and improving upon a diverse set of black-box oracles. Integral to RPI are Robust Active Policy Selection (RAPS) and Robust Policy Gradient (RPG), both of which reason over whether to perform state-wise imitation from the oracles or learn from its own value function when the learner’s performance surpasses that of the oracles in a specific state. Empirical evaluations and theoretical analysis validate that RPI excels in comparison to existing state-of-the-art methodologies, demonstrating superior performance across various benchmark domains.
https://openreview.net/pdf/bdad46427f76d0c9b2c72d8012d5b33aeddc4e8e.pdf
H-GAP: Humanoid Control with a Generalist Planner
https://openreview.net/forum?id=LYG6tBlEX0
https://openreview.net/forum?id=LYG6tBlEX0
zhengyao jiang,Yingchen Xu,Nolan Wagener,Yicheng Luo,Michael Janner,Edward Grefenstette,Tim Rocktäschel,Yuandong Tian
ICLR 2024,Spotlight
Humanoid control is an important research challenge offering avenues for integration into human-centric infrastructures and enabling physics-driven humanoid animations. The daunting challenges in this field stem from the difficulty of optimizing in high-dimensional action spaces and the instability introduced by the bipedal morphology of humanoids. However, the extensive collection of human motion-captured data and the derived datasets of humanoid trajectories, such as MoCapAct, paves the way to tackle these challenges. In this context, we present Humanoid Generalist Autoencoding Planner (H-GAP), a state-action trajectory generative model trained on humanoid trajectories derived from human motion-captured data, capable of adeptly handling downstream control tasks with Model Predictive Control (MPC). For a humanoid with 56 degrees of freedom, we empirically demonstrate that H-GAP learns to represent and generate a wide range of motor behaviors. Further, without any learning from online interactions, it can also flexibly transfer these behaviours to solve novel downstream control tasks via planning. Notably, H-GAP surpasses established MPC baselines that have access to the ground truth model, and is superior or comparable to offline RL methods trained for individual tasks. Finally, we conduct a series of empirical studies on the scaling properties of H-GAP, showing the potential for performance gains from additional data but not from additional compute.
https://openreview.net/pdf/5572333360c59f829af61902b8f2157f4a2e4109.pdf
Unlocking the Power of Representations in Long-term Novelty-based Exploration
https://openreview.net/forum?id=OwtMhMSybu
https://openreview.net/forum?id=OwtMhMSybu
Alaa Saade,Steven Kapturowski,Daniele Calandriello,Charles Blundell,Pablo Sprechmann,Leopoldo Sarra,Oliver Groth,Michal Valko,Bilal Piot
ICLR 2024,Spotlight
We introduce Robust Exploration via Clustering-based Online Density Estimation (RECODE), a non-parametric method for novelty-based exploration that estimates visitation counts for clusters of states based on their similarity in a chosen embedding space. By adapting classical clustering to the nonstationary setting of Deep RL, RECODE can efficiently track state visitation counts over thousands of episodes. We further propose a novel generalization of the inverse dynamics loss, which leverages masked transformer architectures for multi-step prediction and, in conjunction with RECODE, achieves a new state-of-the-art in a suite of challenging 3D-exploration tasks in DM-Hard-8. RECODE also sets a new state-of-the-art in hard exploration Atari games, and is the first agent to reach the end screen in "Pitfall!"
https://openreview.net/pdf/97bd16e3685d8761abc10cf118ffd94e8ae31b77.pdf
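A rough, illustrative rendering of the clustering-based count estimation described above: embeddings within a radius of an existing cluster increment its count, novel embeddings spawn new clusters, and the intrinsic reward decays with the count. The radius rule and the 1/sqrt(n) bonus are common conventions, not the paper's exact scheme.

```python
import numpy as np

class ClusterCounts:
    def __init__(self, dim: int, radius: float = 1.0):
        self.centers = np.zeros((0, dim))
        self.counts = []
        self.radius = radius

    def bonus(self, z: np.ndarray) -> float:
        """Update counts with embedding z and return an intrinsic reward."""
        if self.counts:
            d = np.linalg.norm(self.centers - z, axis=1)
            i = int(d.argmin())
            if d[i] < self.radius:          # visit to an existing cluster
                self.counts[i] += 1
                return 1.0 / np.sqrt(self.counts[i])
        self.centers = np.vstack([self.centers, z])   # novel region: new cluster
        self.counts.append(1)
        return 1.0
```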
Accelerating Data Generation for Neural Operators via Krylov Subspace Recycling
https://openreview.net/forum?id=UpgRVWexaD
https://openreview.net/forum?id=UpgRVWexaD
Hong Wang,Zhongkai Hao,Jie Wang,Zijie Geng,Zhen Wang,Bin Li,Feng Wu
ICLR 2024,Spotlight
Learning neural operators for solving partial differential equations (PDEs) has attracted great attention due to its high inference efficiency. However, training such operators requires generating a substantial amount of labeled data, i.e., PDE problems together with their solutions. The data generation process is exceptionally time-consuming, as it involves solving numerous systems of linear equations to obtain numerical solutions to the PDEs. Many existing methods solve these systems independently without considering their inherent similarities, resulting in extremely redundant computations. To tackle this problem, we propose a novel method, namely **S**orting **K**rylov **R**ecycling (**SKR**), to boost the efficiency of solving these systems, thus significantly accelerating data generation for neural operator training. To the best of our knowledge, SKR is the first attempt to address the time-consuming nature of data generation for learning neural operators. The workhorse of SKR is Krylov subspace recycling, a powerful technique for solving a series of interrelated systems by leveraging their inherent similarities. Specifically, SKR employs a sorting algorithm to arrange these systems in a sequence, where adjacent systems exhibit high similarities. Then it equips a solver with Krylov subspace recycling to solve the systems sequentially instead of independently, thus effectively enhancing the solving efficiency. Both theoretical analysis and extensive experiments demonstrate that SKR can significantly accelerate neural operator data generation, achieving a remarkable speedup of up to 13.9 times.
https://openreview.net/pdf/ff0efe03064ef3409c6562ca6ccdf6ff02fb9cba.pdf
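The SKR pipeline above, order related linear systems so neighbors are similar, then solve them in sequence while reusing information, can be approximated in a few lines. In this sketch, full Krylov subspace recycling is replaced by a much cruder stand-in, warm-starting GMRES from the previous solution, and the sort key is a naive proxy for system similarity.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def solve_sorted(systems):
    """systems: list of (A, b) pairs from discretized PDE instances."""
    order = sorted(range(len(systems)),
                   key=lambda i: np.linalg.norm(systems[i][1]))  # naive proxy
    x_prev, solutions = None, {}
    for i in order:
        A, b = systems[i]
        x0 = x_prev if x_prev is not None and x_prev.shape == b.shape else None
        x, info = gmres(A, b, x0=x0)     # info == 0 means converged
        solutions[i] = x_prev = x
    return solutions
```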
Deep Orthogonal Hypersphere Compression for Anomaly Detection
https://openreview.net/forum?id=cJs4oE4m9Q
https://openreview.net/forum?id=cJs4oE4m9Q
Yunhe Zhang,Yan Sun,Jinyu Cai,Jicong Fan
ICLR 2024,Spotlight
Many well-known and effective anomaly detection methods assume that a reasonable decision boundary has a hypersphere shape, which however is difficult to obtain in practice and is not sufficiently compact, especially when the data are in high-dimensional spaces. In this paper, we first propose a novel deep anomaly detection model that improves the original hypersphere learning through an orthogonal projection layer, which ensures that the training data distribution is consistent with the hypersphere hypothesis, thereby increasing the true positive rate and decreasing the false negative rate. Moreover, we propose a bi-hypersphere compression method to obtain a hyperspherical shell that yields a more compact decision region than a hyperball, which is demonstrated theoretically and numerically. The proposed methods are not confined to common datasets such as image and tabular data, but are also extended to a more challenging but promising scenario, graph-level anomaly detection, which learns graph representation with maximum mutual information between the substructure and global structure features while exploring orthogonal single- or bi-hypersphere anomaly decision boundaries. The numerical and visualization results on benchmark datasets demonstrate the superiority of our methods in comparison to many baselines and state-of-the-art methods.
https://openreview.net/pdf/b8052c3c7a3cfeccc963ae2d0d24045831b4d84e.pdf
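The bi-hypersphere compression in the entry above replaces a hyperball decision region with a shell between two radii. An illustrative loss and score under that geometry are sketched below; the margin handling and weighting in the paper may differ.

```python
import torch

def shell_loss(z, center, r, R):
    """Push normal embeddings into the shell r <= ||z - c|| <= R."""
    d = torch.norm(z - center, dim=1)
    inner = torch.clamp(r - d, min=0.0) ** 2   # penalty for points inside r
    outer = torch.clamp(d - R, min=0.0) ** 2   # penalty for points beyond R
    return (inner + outer).mean()

def anomaly_score(z, center, r, R):
    d = torch.norm(z - center, dim=-1)
    return torch.maximum(r - d, d - R)         # positive outside the shell
```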
On the Role of General Function Approximation in Offline Reinforcement Learning
https://openreview.net/forum?id=JSS9rKHySk
https://openreview.net/forum?id=JSS9rKHySk
Chenjie Mao,Qiaosheng Zhang,Zhen Wang,Xuelong Li
ICLR 2024,Spotlight
We study offline reinforcement learning (RL) with general function approximation. General function approximation is a powerful tool for algorithm design and analysis, but its adaptation to offline RL encounters several challenges due to varying approximation targets and assumptions that obscure the real meaning of function assumptions. In this paper, we try to formulate and clarify the treatment of general function approximation in offline RL in two aspects: (1) analyzing different types of assumptions and their practical usage, and (2) understanding its role as a restriction on underlying MDPs from information-theoretic perspectives. Additionally, we introduce a new insight for establishing lower bounds: one can exploit model-realizability to establish general-purpose lower bounds that can be generalized to other function classes. Building upon this insight, we propose two generic lower bounds that contribute to a better understanding of offline RL with general function approximation.
https://openreview.net/pdf/8621c59f51343401b65a5d8c4ba33ef5a631dd93.pdf
Analyzing Feed-Forward Blocks in Transformers through the Lens of Attention Maps
https://openreview.net/forum?id=mYWsyTuiRp
https://openreview.net/forum?id=mYWsyTuiRp
Goro Kobayashi,Tatsuki Kuribayashi,Sho Yokoi,Kentaro Inui
ICLR 2024,Spotlight
Transformers are ubiquitous across a wide range of tasks, and interpreting their internals is a pivotal goal. Nevertheless, certain of their components, the feed-forward (FF) blocks, have typically received less analysis despite accounting for a substantial share of the parameters. We analyze the input contextualization effects of FF blocks by rendering them in the attention maps as a human-friendly visualization scheme. Our experiments with both masked- and causal-language models reveal that FF networks modify the input contextualization to emphasize specific types of linguistic compositions. In addition, FF and its surrounding components tend to cancel out each other's effects, suggesting potential redundancy in the processing of the Transformer layer.
https://openreview.net/pdf/bf95c5609d1e4a6df3810208c09e97d13c14a953.pdf
Asymptotically Free Sketched Ridge Ensembles: Risks, Cross-Validation, and Tuning
https://openreview.net/forum?id=i9Vs5NGDpk
https://openreview.net/forum?id=i9Vs5NGDpk
Pratik Patil,Daniel LeJeune
ICLR 2024,Spotlight
We employ random matrix theory to establish consistency of generalized cross validation (GCV) for estimating prediction risks of sketched ridge regression ensembles, enabling efficient and consistent tuning of regularization and sketching parameters. Our results hold for a broad class of asymptotically free sketches under very mild data assumptions. For squared prediction risk, we provide a decomposition into an unsketched equivalent implicit ridge bias and a sketching-based variance, and prove that the risk can be globally optimized by only tuning sketch size in infinite ensembles. For general subquadratic prediction risk functionals, we extend GCV to construct consistent risk estimators, and thereby obtain distributional convergence of the GCV-corrected predictions in Wasserstein-2 metric. This in particular allows construction of prediction intervals with asymptotically correct coverage conditional on the training data. We also propose an "ensemble trick" whereby the risk for unsketched ridge regression can be efficiently estimated via GCV using small sketched ridge ensembles. We empirically validate our theoretical results using both synthetic and real large-scale datasets with practical sketches including CountSketch and subsampled randomized discrete cosine transforms.
https://openreview.net/pdf/28ddc17b08b8df3bdcb23d2393cdb2d882b39eef.pdf
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models
https://openreview.net/forum?id=zmJDzPh1Dm
https://openreview.net/forum?id=zmJDzPh1Dm
Shuai Fu,Xiequn Wang,Qiushi Huang,Yu Zhang
ICLR 2024,Spotlight
With the prevalence of large-scale pretrained vision-language models (VLMs), such as CLIP, soft-prompt tuning has become a popular method for adapting these models to various downstream tasks. However, few works delve into the inherent properties of learnable soft-prompt vectors, specifically the impact of their norms on the performance of VLMs. This motivates us to pose an unexplored research question: ``Do we need to normalize the soft prompts in VLMs?'' To fill this research gap, we first uncover a phenomenon, called the $\textbf{Low-Norm Effect}$, by performing extensive corruption experiments, suggesting that reducing the norms of certain learned prompts occasionally enhances the performance of VLMs, while increasing them often degrades it. To harness this effect, we propose a novel method named $\textbf{N}$ormalizing th$\textbf{e}$ soft-pro$\textbf{m}$pt v$\textbf{e}$ctors of vi$\textbf{si}$on-language model$\textbf{s}$ ($\textbf{Nemesis}$) to normalize soft-prompt vectors in VLMs. To the best of our knowledge, our work is the first to systematically investigate the role of the norms of soft-prompt vectors in VLMs, offering valuable insights for future research in soft-prompt tuning.
https://openreview.net/pdf/13b62a20df81ff095be76d919e6602f6f86780bf.pdf
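A minimal sketch of soft-prompt normalization in the spirit of the Low-Norm Effect is shown below; the exact normalization operator and where it is applied in Nemesis may differ, and `target_norm` is an assumed hyperparameter.

```python
import torch

def normalize_prompts(soft_prompts: torch.Tensor,
                      target_norm: float = 1.0, eps: float = 1e-8):
    """soft_prompts: (num_context_tokens, dim) learnable vectors."""
    norms = soft_prompts.norm(dim=-1, keepdim=True)
    return soft_prompts * (target_norm / (norms + eps))  # fixed per-token norm
```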
Towards Understanding Factual Knowledge of Large Language Models
https://openreview.net/forum?id=9OevMUdods
https://openreview.net/forum?id=9OevMUdods
Xuming Hu,Junzhe Chen,Xiaochuan Li,Yufei Guo,Lijie Wen,Philip S. Yu,Zhijiang Guo
ICLR 2024,Spotlight
Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks. The factual knowledge acquired during pretraining and instruction tuning can be useful in various downstream tasks, such as question answering and language generation. Unlike conventional Knowledge Bases (KBs) that explicitly store factual knowledge, LLMs implicitly store facts in their parameters. Content generated by the LLMs can often exhibit inaccuracies or deviations from the truth, due to facts that can be incorrectly induced or become obsolete over time. To this end, we aim to explore the extent and scope of factual knowledge within LLMs by designing the benchmark Pinocchio. Pinocchio contains 20K diverse factual questions that span different sources, timelines, domains, regions, and languages. Furthermore, we investigate whether LLMs can compose multiple facts, update factual knowledge temporally, reason over multiple pieces of facts, identify subtle factual differences, and resist adversarial examples. Extensive experiments on different sizes and types of LLMs show that existing LLMs still lack factual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing trustworthy artificial intelligence. The dataset Pinocchio and our codes are publicly available at: https://github.com/THU-BPM/Pinocchio.
https://openreview.net/pdf/025c1d2beaec72d3724b68ca610dd61083362fdc.pdf
CAS: A Probability-Based Approach for Universal Condition Alignment Score
https://openreview.net/forum?id=E78OaH2s3f
https://openreview.net/forum?id=E78OaH2s3f
Chunsan Hong,ByungHee Cha,Tae-Hyun Oh
ICLR 2024,Spotlight
Recent conditional diffusion models have shown remarkable advancements and have been widely applied in fascinating real-world applications. However, samples generated by these models often do not strictly comply with user-provided conditions. Due to this, there have been a few attempts to evaluate this alignment via pre-trained scoring models to select well-generated samples. Nonetheless, current studies are confined to the text-to-image domain and require large training datasets. This suggests that crafting alignment scores for various conditions will demand considerable resources in the future. In this context, we introduce a universal condition alignment score that leverages the conditional probability measurable through the diffusion process. Our technique operates across all conditions and requires no additional models beyond the diffusion model used for generation, effectively enabling self-rejection. Our experiments validate that our metric effectively applies in diverse conditional generations, such as text-to-image, {instruction, image}-to-image, edge-/scribble-to-image, and text-to-audio.
https://openreview.net/pdf/f985d33f22b4298cf2016099ca2640c09738d070.pdf
Demystifying CLIP Data
https://openreview.net/forum?id=5BCFlnfE1g
https://openreview.net/forum?id=5BCFlnfE1g
Hu Xu,Saining Xie,Xiaoqing Tan,Po-Yao Huang,Russell Howes,Vasu Sharma,Shang-Wen Li,Gargi Ghosh,Luke Zettlemoyer,Christoph Feichtenhofer
ICLR 2024,Spotlight
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative models. We believe that the main ingredient to the success of CLIP is its \textit{data} and \textit{not} the \textit{model} architecture or pre-training {objective}. However, CLIP only provides very limited information about its data and how it has been collected, leading to works that aim to reproduce CLIP's data by filtering with its model parameters. In this work, we intend to reveal CLIP's data curation approach and, in our pursuit of making it open to the community, introduce Metadata-Curated Language-Image Pre-training (MetaCLIP). MetaCLIP takes a raw data pool and metadata (derived from CLIP's concepts) and yields a balanced subset over the metadata distribution. Our experimental study rigorously isolates the model and training settings, concentrating solely on data. MetaCLIP applied to CommonCrawl with 400M image-text data pairs outperforms CLIP's data on multiple standard benchmarks. In zero-shot ImageNet classification, MetaCLIP achieves 70.8\% accuracy, surpassing CLIP's 68.3\% on \mbox{ViT-B} models. Scaling to 1B data, while maintaining the same training budget, attains \textbf{72.4\%}. Our observations hold across various model sizes, exemplified by ViT-H achieving \textbf{80.5\%}, without any bells-and-whistles. Curation code and training data distribution over metadata will be made available.
https://openreview.net/pdf/9c2e91f6f4a65a812be7ea55d1d89e0fd4844e26.pdf
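The curation recipe above, match captions against metadata entries and balance the counts per entry, can be sketched compactly. The substring matching and per-entry cap `t` below follow the description in the abstract, but the helper itself is an illustration rather than the released code.

```python
import random
from collections import defaultdict

def curate(pairs, metadata, t=20_000):
    """pairs: (image_url, caption) tuples; metadata: list of concept strings."""
    buckets = defaultdict(list)
    for pair in pairs:
        caption = pair[1].lower()
        for entry in metadata:
            if entry.lower() in caption:    # substring match against metadata
                buckets[entry].append(pair)
    curated = []
    for matched in buckets.values():
        k = min(len(matched), t)            # cap head entries, keep the tail
        curated.extend(random.sample(matched, k))
    return curated
```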
Adversarial AutoMixup
https://openreview.net/forum?id=o8tjamaJ80
https://openreview.net/forum?id=o8tjamaJ80
Huafeng Qin,Xin Jin,Yun Jiang,Mounîm El-Yacoubi,Xinbo Gao
ICLR 2024,Spotlight
Data mixing augmentation has been widely applied to improve the generalization ability of deep neural networks. Recently, offline data mixing augmentation, e.g. handcrafted and saliency information-based mixup, has been gradually replaced by automatic mixing approaches. Through minimizing two sub-tasks, namely, mixed sample generation and mixup classification in an end-to-end way, AutoMix significantly improves accuracy on image classification tasks. However, as the optimization objective is consistent for the two sub-tasks, this approach is prone to generating consistent instead of diverse mixed samples, which results in overfitting for target task training. In this paper, we propose AdAutomixup, an adversarial automatic mixup augmentation approach that generates challenging samples to train a robust classifier for image classification, by alternatively optimizing the classifier and the mixup sample generator. AdAutomixup comprises two modules, a mixed example generator, and a target classifier. The mixed sample generator aims to produce hard mixed examples to challenge the target classifier, while the target classifier's aim is to learn robust features from hard mixed examples to improve generalization. To prevent the collapse of the inherent meanings of images, we further introduce an exponential moving average (EMA) teacher and cosine similarity to train AdAutomixup in an end-to-end way. Extensive experiments on seven image benchmarks consistently prove that our approach outperforms the state of the art in various classification scenarios. The source code is available at https://github.com/JinXins/Adversarial-AutoMixup.
https://openreview.net/pdf/47a0f838e2a5b9d0158f56d2adb6432f9f878803.pdf
Spatially-Aware Transformers for Embodied Agents
https://openreview.net/forum?id=Ts95eXsPBc
https://openreview.net/forum?id=Ts95eXsPBc
Junmo Cho,Jaesik Yoon,Sungjin Ahn
ICLR 2024,Spotlight
Episodic memory plays a crucial role in various cognitive processes, such as the ability to mentally recall past events. While cognitive science emphasizes the significance of spatial context in the formation and retrieval of episodic memory, the current primary approach to implementing episodic memory in AI systems is through transformers that store temporally ordered experiences, which overlooks the spatial dimension. As a result, it is unclear how the underlying structure could be extended to incorporate the spatial axis beyond temporal order alone and thereby what benefits can be obtained. To address this, this paper explores the use of Spatially-Aware Transformer models that incorporate spatial information. These models enable the creation of place-centric episodic memory that considers both temporal and spatial dimensions. Adopting this approach, we demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks. Additionally, we propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning that aims to optimize efficiency of memory utilization. Our experiments demonstrate the advantages of our proposed model in various environments and across multiple downstream tasks, including prediction, generation, reasoning, and reinforcement learning. The source code for our models and experiments will be available at https://github.com/spatially_aware_transformer.
https://openreview.net/pdf/1dc0ff45874e872749597c38bcfc2df0fa7ed0d6.pdf
Grounding Language Plans in Demonstrations Through Counterfactual Perturbations
https://openreview.net/forum?id=qoHeuRAcSl
https://openreview.net/forum?id=qoHeuRAcSl
Yanwei Wang,Tsun-Hsuan Wang,Jiayuan Mao,Michael Hagenow,Julie Shah
ICLR 2024,Spotlight
Grounding the common-sense reasoning of Large Language Models in physical domains remains a pivotal yet unsolved problem for embodied AI. Whereas prior works have focused on leveraging LLMs directly for planning in symbolic spaces, this work uses LLMs to guide the search of task structures and constraints implicit in multi-step demonstrations. Specifically, we borrow from manipulation planning literature the concept of mode families, which group robot configurations by specific motion constraints, to serve as an abstraction layer between the high-level language representations of an LLM and the low-level physical trajectories of a robot. By replaying a few human demonstrations with synthetic perturbations, we generate coverage over the demonstrations' state space with additional successful executions as well as counterfactuals that fail the task. Our explanation-based learning framework trains an end-to-end differentiable neural network to predict successful trajectories from failures and as a by-product learns classifiers that ground low-level states and images in mode families without dense labeling. The learned grounding classifiers can further be used to translate language plans into reactive policies in the physical domain in an interpretable manner. We show our approach improves the interpretability and reactivity of imitation learning through 2D navigation and simulated and real robot manipulation tasks. Website: https://yanweiw.github.io/glide/
https://openreview.net/pdf/095205c11ca0cd7a485da10923a605bbfd899160.pdf
Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies
https://openreview.net/forum?id=DFTHW0MyiW
https://openreview.net/forum?id=DFTHW0MyiW
Xiangyu Liu,Chenghao Deng,Yanchao Sun,Yongyuan Liang,Furong Huang
ICLR 2024,Spotlight
In light of the burgeoning success of reinforcement learning (RL) in diverse real-world applications, considerable focus has been directed towards ensuring RL policies are robust to adversarial attacks during test time. Current approaches largely revolve around solving a minimax problem to prepare for potential worst-case scenarios. While effective against strong attacks, these methods often compromise performance in the absence of attacks or the presence of only weak attacks. To address this, we study policy robustness under the well-accepted state-adversarial attack model, extending our focus beyond merely worst-case attacks. We first formalize this task at test time as a regret minimization problem and establish its intrinsic difficulty in achieving sublinear regret when the baseline policy is from a general continuous policy class, $\Pi$. This finding prompts us to \textit{refine} the baseline policy class $\Pi$ prior to test time, aiming for efficient adaptation within a compact, finite policy class $\tilde{\Pi}$, which can resort to an adversarial bandit subroutine. In light of the importance of a finite and compact $\tilde{\Pi}$, we propose a novel training-time algorithm to iteratively discover \textit{non-dominated policies}, forming a near-optimal and minimal $\tilde{\Pi}$, thereby ensuring both robustness and test-time efficiency. Empirical validation on MuJoCo corroborates the superiority of our approach in terms of natural and robust performance, as well as adaptability to various attack scenarios.
https://openreview.net/pdf/a7e93789afc93e70c42ccbaecde9d7a6bb824d01.pdf
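Once the finite refined class $\tilde{\Pi}$ exists, test-time adaptation can fall back on a standard adversarial bandit such as EXP3. The snippet below is a generic EXP3 sketch over candidate policies, assuming per-episode returns normalized to [0, 1]; it illustrates the subroutine's role, not the paper's exact algorithm.

```python
import numpy as np

def exp3_over_policies(n_policies, run_episode, T, eta=0.1):
    w = np.ones(n_policies)
    for _ in range(T):
        p = (1 - eta) * w / w.sum() + eta / n_policies   # mix in exploration
        i = np.random.choice(n_policies, p=p)
        reward = run_episode(i)          # normalized return under the attack
        w[i] *= np.exp(eta * reward / (p[i] * n_policies))
    return int(np.argmax(w))
```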
Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy
https://openreview.net/forum?id=eFWG9Cy3WK
https://openreview.net/forum?id=eFWG9Cy3WK
Pingzhi Li,Zhenyu Zhang,Prateek Yadav,Yi-Lin Sung,Yu Cheng,Mohit Bansal,Tianlong Chen
ICLR 2024,Spotlight
Sparsely activated Mixture-of-Experts (SMoE) has shown promise in scaling up the learning capacity of neural networks; however, SMoE models have issues such as: ($a$) $\textit{High Memory Usage,}$ due to duplication of the network layers into multiple copies as experts; and ($b$) $\textit{Redundancy in Experts,}$ as common learning-based routing policies suffer from representational collapse. Therefore, vanilla SMoE models are memory inefficient and non-scalable, especially for resource-constrained downstream scenarios. In this paper, we ask: Can we craft a compact SMoE model by consolidating expert information? What is the best recipe to merge multiple experts into fewer but more knowledgeable experts? Our pilot investigation reveals that conventional model merging methods fail to be effective in such expert merging for SMoE. The potential reasons are: ($1$) redundant information overshadows critical experts; ($2$) appropriate neuron permutation for each expert is missing to bring all of them in alignment. To address these challenges, we propose a novel merging algorithm for SMoE, $\textit{i.e.}$, $\texttt{M-SMoE}$, which leverages routing statistics to guide expert merging. Specifically, it starts with neuron permutation alignment for experts; then, dominant experts and their "group members" are formed based on routing policies; lastly, every expert group is merged into a single expert by utilizing each expert's activation frequency as their weight for merging, thus diminishing the impact of insignificant experts. Moreover, we draw an interesting observation that our proposed merging promotes a low dimensionality in the merged expert's weight space, naturally paving the way for additional compression. Hence, our final method, $\texttt{MC-SMoE}$ ($\textit{i.e.}$, Merge, then Compress SMoE), further decomposes the merged experts into low-rank and structural sparse alternatives. Extensive experiments across $8$ benchmarks validate the effectiveness of our proposals. For instance, our $\texttt{MC-SMoE}$ achieves up to $80\%$ memory and a $20\%$ FLOPs reduction, with virtually no loss in performance. Our code is provided as supplementary material.
https://openreview.net/pdf/93b33eef04948ce274bfc922884b3a34062628c7.pdf
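Frequency-weighted expert merging, the central step of M-SMoE described above, reduces a group of experts to one by averaging their parameters with routing activation frequencies as weights. The hedged sketch below omits the neuron-permutation alignment step and assumes all parameters are float tensors.

```python
import torch

def merge_group(expert_state_dicts, activation_freqs):
    """Average one expert group; rarely-routed experts contribute less."""
    total = float(sum(activation_freqs))
    merged = {k: torch.zeros_like(v) for k, v in expert_state_dicts[0].items()}
    for sd, f in zip(expert_state_dicts, activation_freqs):
        for k, v in sd.items():
            merged[k] += (f / total) * v
    return merged
```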
On Bias-Variance Alignment in Deep Models
https://openreview.net/forum?id=i2Phucne30
https://openreview.net/forum?id=i2Phucne30
Lin Chen,Michal Lukasik,Wittawat Jitkrittum,Chong You,Sanjiv Kumar
ICLR 2024,Spotlight
Classical wisdom in machine learning holds that the generalization error can be decomposed into bias and variance, and these two terms exhibit a \emph{trade-off}. However, in this paper, we show that for an ensemble of deep learning based classification models, bias and variance are \emph{aligned} at a sample level, where squared bias is approximately \emph{equal} to variance for correctly classified sample points. We present empirical evidence confirming this phenomenon in a variety of deep learning models and datasets. Moreover, we study this phenomenon from two theoretical perspectives: calibration and neural collapse. We first show theoretically that under the assumption that the models are well calibrated, we can observe the bias-variance alignment. Second, starting from the picture provided by the neural collapse theory, we show an approximate correlation between bias and variance.
https://openreview.net/pdf/178a29fe5352a6013ac367fcb6cc4e69d501cb81.pdf
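The per-sample quantities in the alignment claim above are straightforward to estimate from an ensemble. A small illustration follows, using the usual squared-error decomposition over predicted probability vectors; the axis conventions are assumptions.

```python
import numpy as np

def bias_variance(probs, y_onehot):
    """probs: (n_models, n_samples, n_classes); y_onehot: (n_samples, n_classes)."""
    mean_pred = probs.mean(axis=0)
    bias_sq = ((mean_pred - y_onehot) ** 2).sum(axis=-1)             # squared bias
    variance = ((probs - mean_pred) ** 2).sum(axis=-1).mean(axis=0)  # variance
    return bias_sq, variance   # alignment: bias_sq ~ variance, sample-wise
```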
SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases
https://openreview.net/forum?id=3oTPsORaDH
https://openreview.net/forum?id=3oTPsORaDH
Yang Liu,Jiashun Cheng,Haihong Zhao,Tingyang Xu,Peilin Zhao,Fugee Tsung,Jia Li,Yu Rong
ICLR 2024,Spotlight
Graph Neural Networks (GNNs) with equivariant properties have emerged as powerful tools for modeling complex dynamics of multi-object physical systems. However, their generalization ability is limited by the inadequate consideration of physical inductive biases: (1) Existing studies overlook the continuity of transitions among system states, opting to employ several discrete transformation layers to learn the direct mapping between two adjacent states; (2) Most models only account for first-order velocity information, despite the fact that many physical systems are governed by second-order motion laws. To incorporate these inductive biases, we propose the Second-order Equivariant Graph Neural Ordinary Differential Equation (SEGNO). Specifically, we show how the second-order continuity can be incorporated into GNNs while maintaining the equivariant property. Furthermore, we offer theoretical insights into SEGNO, highlighting that it can learn a unique trajectory between adjacent states, which is crucial for model generalization. Additionally, we prove that the discrepancy between this learned trajectory of SEGNO and the true trajectory is bounded. Extensive experiments on complex dynamical systems including molecular dynamics and motion capture demonstrate that our model yields a significant improvement over the state-of-the-art baselines.
https://openreview.net/pdf/4acee75d595246c171ea11abc7187435b10862ea.pdf
Spectrally Transformed Kernel Regression
https://openreview.net/forum?id=OeQE9zsztS
https://openreview.net/forum?id=OeQE9zsztS
Runtian Zhai,Rattana Pukdee,Roger Jin,Maria Florina Balcan,Pradeep Kumar Ravikumar
ICLR 2024,Spotlight
Unlabeled data is a key component of modern machine learning. In general, the role of unlabeled data is to impose a form of smoothness, usually from the similarity information encoded in a base kernel, such as the ϵ-neighbor kernel or the adjacency matrix of a graph. This work revisits the classical idea of spectrally transformed kernel regression (STKR), and provides a new class of general and scalable STKR estimators able to leverage unlabeled data. Intuitively, via spectral transformation, STKR exploits the data distribution for which unlabeled data can provide additional information. First, we show that STKR is a principled and general approach, by characterizing a universal type of “target smoothness”, and proving that any sufficiently smooth function can be learned by STKR. Second, we provide scalable STKR implementations for the inductive setting and a general transformation function, while prior work is mostly limited to the transductive setting. Third, we derive statistical guarantees for two scenarios: STKR with a known polynomial transformation, and STKR with kernel PCA when the transformation is unknown. Overall, we believe that this work helps deepen our understanding of how to work with unlabeled data, and its generality makes it easier to inspire new methods.
https://openreview.net/pdf/cb1d12d196c77a2a26e09caff4a57bf370c27871.pdf
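In the standard notation for spectrally transformed kernels, which the entry above builds on, a transformation $s(\cdot)$ reshapes the eigenvalues of the base kernel's Mercer decomposition, and regression proceeds in the induced RKHS. The display below is a generic rendering of that construction, not the paper's exact formulation.

```latex
% Generic STKR construction (standard notation; an illustration only)
\[
k(x, x') = \sum_{i} \lambda_i \, \psi_i(x)\, \psi_i(x'),
\qquad
k_s(x, x') = \sum_{i} s(\lambda_i)\, \psi_i(x)\, \psi_i(x'),
\]
\[
\hat{f} = \operatorname*{arg\,min}_{f \in \mathcal{H}_{k_s}}
\frac{1}{m} \sum_{j=1}^{m} \bigl(f(x_j) - y_j\bigr)^2
+ \beta \, \lVert f \rVert_{\mathcal{H}_{k_s}}^2 .
\]
```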
Online GNN Evaluation Under Test-time Graph Distribution Shifts
https://openreview.net/forum?id=KbetDM33YG
https://openreview.net/forum?id=KbetDM33YG
Xin Zheng,Dongjin Song,Qingsong Wen,Bo Du,Shirui Pan
ICLR 2024,Spotlight
Evaluating the performance of a well-trained GNN model on real-world graphs is a pivotal step for reliable GNN online deployment and serving. Due to a lack of test node labels and unknown potential training-test graph data distribution shifts, conventional model evaluation encounters limitations in calculating performance metrics (e.g., test error) and measuring graph data-level discrepancies, particularly when the training graph used for developing GNNs remains unobserved during test time. In this paper, we study a new research problem, online GNN evaluation, which aims to provide valuable insights into the well-trained GNNs' ability to effectively generalize to real-world unlabeled graphs under the test-time graph distribution shifts. Concretely, we develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models. Through a novel GNN re-training strategy with a parameter-free optimality criterion, the proposed LeBeD comprehensively integrates learning behavior discrepancies from both node prediction and structure reconstruction perspectives. This enables the effective evaluation of the well-trained GNNs' ability to capture test node semantics and structural representations, making it an expressive metric for estimating the generalization error in online GNN evaluation. Extensive experiments on real-world test graphs under diverse graph distribution shifts verify the effectiveness of the proposed method, revealing its strong correlation with ground-truth test errors on various well-trained GNN models.
https://openreview.net/pdf/3b5dbcc5108726d3892b0ca05c69f9fa4eca4529.pdf
Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
https://openreview.net/forum?id=TjhUtloBZU
https://openreview.net/forum?id=TjhUtloBZU
Hao Chen,Jindong Wang,Ankit Shah,Ran Tao,Hongxin Wei,Xing Xie,Masashi Sugiyama,Bhiksha Raj
ICLR 2024,Spotlight
Pre-training on large-scale datasets and then fine-tuning on downstream tasks have become a standard practice in deep learning. However, pre-training data often contain label noise that may adversely affect the generalization of the model. This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks. More specifically, through extensive experiments of supervised pre-training models on synthetic noisy ImageNet-1K and YFCC15M datasets, we demonstrate that while slight noise in pre-training can benefit in-domain (ID) transfer performance, where the training and testing data share the same distribution, it always deteriorates out-of-domain (OOD) performance, where training and testing data distributions are different. We empirically verify that the reason behind this is that noise in pre-training shapes the feature space differently. We then propose a lightweight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization on both ID and OOD tasks, considering one may not be able to fully fine-tune or even access the pre-trained models. We conduct practical experiments on popular vision and language models that are pre-trained on noisy data for evaluation of our approach. Our analysis and results show the importance of this interesting and novel research direction, which we term Noisy Model Learning.
https://openreview.net/pdf/ae1d6d7106b28a7cb9cacfd2f2b5dd34c10f0ce0.pdf
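To make the black-box tuning idea above concrete, here is a minimal sketch of learning a light-weight affine map on frozen pre-trained features; the module name, feature sizes, and the absence of NMTune's additional regularization terms are all simplifications, not the paper's implementation.

```python
# Minimal sketch of black-box feature-space tuning in the spirit of NMTune.
# Hypothetical module/variable names; the paper's exact objective differs.
import torch
import torch.nn as nn

class AffineTune(nn.Module):
    """Learn a light-weight affine map on frozen pre-trained features."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))   # element-wise scaling
        self.shift = nn.Parameter(torch.zeros(dim))  # element-wise shift
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        z = feats * self.scale + self.shift  # affine transform of the feature space
        return self.head(z)

# Usage: features come from a frozen (possibly API-only) pre-trained model.
feats = torch.randn(8, 512)          # stand-in for extracted features
logits = AffineTune(512, 10)(feats)
```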
WildChat: 1M ChatGPT Interaction Logs in the Wild
https://openreview.net/forum?id=Bl8u7ZRlbM
https://openreview.net/forum?id=Bl8u7ZRlbM
Wenting Zhao,Xiang Ren,Jack Hessel,Claire Cardie,Yejin Choi,Yuntian Deng
ICLR 2024,Spotlight
Chatbots such as GPT-4 and ChatGPT are now serving millions of users. Despite their widespread use, there remains a lack of public datasets showcasing how these tools are used by a population of users in practice. To bridge this gap, we offered online users free access to ChatGPT in exchange for their affirmative, consensual opt-in to anonymously collect their chat transcripts and request headers. From this, we compiled WildChat, a corpus of 1 million user-ChatGPT conversations, which consists of over 2.5 million interaction turns. We compare WildChat with other popular user-chatbot interaction datasets, and find that our dataset offers the most diverse user prompts, contains the largest number of languages, and presents the richest variety of potentially toxic use-cases for researchers to study. In addition to timestamped chat transcripts, we enrich the dataset with demographic data, including state, country, and hashed IP addresses, alongside request headers. This augmentation allows for more detailed analysis of user behaviors across different geographical regions and temporal dimensions. Finally, because it captures a broad range of use cases, we demonstrate the dataset’s potential utility in fine-tuning instruction-following models. WildChat is released at https://wildchat.allen.ai under AI2 ImpACT Licenses.
https://openreview.net/pdf/40986fdeaa994d9dc8bbbe1a2c320eff8eff10a7.pdf
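A hedged sketch of how one might load the corpus for analysis, assuming it is mirrored on the Hugging Face Hub; the dataset ID and field names below are assumptions to verify against the release page at https://wildchat.allen.ai.

```python
# Assumed dataset ID and schema; the release may be gated behind the
# AI2 ImpACT license, so check the release page before relying on this.
from datasets import load_dataset

ds = load_dataset("allenai/WildChat", split="train")
print(ds[0]["conversation"])  # field name is an assumption; inspect ds.features
```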
Learning Hierarchical Image Segmentation For Recognition and By Recognition
https://openreview.net/forum?id=IRcv4yFX6z
https://openreview.net/forum?id=IRcv4yFX6z
Tsung-Wei Ke,Sangwoo Mo,Stella X. Yu
ICLR 2024,Spotlight
Large vision and language models learned directly through image-text associations often lack detailed visual substantiation, whereas image segmentation tasks are treated separately from recognition and learned with supervision, without interconnections. Our key observation is that, while an image can be recognized in multiple ways, each has a consistent part-and-whole visual organization. Segmentation thus should be treated not as an end task to be mastered through supervised learning, but as an internal process that evolves with and supports the ultimate goal of recognition. We propose to integrate a hierarchical segmenter into the recognition process, and {\it train} and {\it adapt} the entire model solely on image-level recognition objectives. We learn hierarchical segmentation {\it for free} alongside recognition, automatically uncovering part-to-whole relationships that not only underpin but also enhance recognition. Enhancing the Vision Transformer (ViT) with adaptive segment tokens and graph pooling, our model surpasses ViT in unsupervised part-whole discovery, semantic segmentation, image classification, and efficiency. Notably, our model (trained on {\it unlabeled} 1M ImageNet images) outperforms SAM (trained on 11M images and 1 billion masks) by absolute 8\% in mIoU on PartImageNet object segmentation.
https://openreview.net/pdf/9442c745c4c7ad1fb08e97f6a98ae46f4593aea4.pdf
Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
https://openreview.net/forum?id=plmBsXHxgR
https://openreview.net/forum?id=plmBsXHxgR
Erfan Shayegani,Yue Dong,Nael Abu-Ghazaleh
ICLR 2024,Spotlight
We introduce new jailbreak attacks on vision language models (VLMs), which use aligned LLMs and are resilient to text-only jailbreak attacks. Specifically, we develop cross-modality attacks on alignment where we pair adversarial images going through the vision encoder with textual prompts to break the alignment of the language model. Our attacks employ a novel compositional strategy that combines an image, adversarially targeted towards toxic embeddings, with generic prompts to accomplish the jailbreak. Thus, the LLM draws the context to answer the generic prompt from the adversarial image. The generation of benign-appearing adversarial images leverages a novel embedding-space-based methodology, operating with no access to the LLM model. Instead, the attacks require access only to the vision encoder and utilize one of our four embedding space targeting strategies. By not requiring access to the LLM, the attacks lower the entry barrier for attackers, particularly when vision encoders such as CLIP are embedded in closed-source LLMs. The attacks achieve a high success rate across different VLMs, highlighting the risk of cross-modality alignment vulnerabilities, and the need for new alignment approaches for multi-modal models.
https://openreview.net/pdf/73245653c0cb13379877051e65dbc93ef4aa85cd.pdf
DreamFlow: High-quality text-to-3D generation by Approximating Probability Flow
https://openreview.net/forum?id=GURqUuTebY
https://openreview.net/forum?id=GURqUuTebY
Kyungmin Lee,Kihyuk Sohn,Jinwoo Shin
ICLR 2024,Spotlight
Recent progress in text-to-3D generation has been achieved through the utilization of score distillation methods: they make use of the pre-trained text-to-image (T2I) diffusion models by distilling via the diffusion model training objective. However, such an approach inevitably results in the use of random timesteps at each update, which increases the variance of the gradient and ultimately prolongs the optimization process. In this paper, we propose to enhance the text-to-3D optimization by leveraging the T2I diffusion prior in the generative sampling process with a predetermined timestep schedule. To this end, we interpret text-to-3D optimization as a multi-view image-to-image translation problem, and propose a solution by approximating the probability flow. By leveraging the proposed novel optimization algorithm, we design DreamFlow, a practical three-stage coarse-to-fine text-to-3D optimization framework that enables fast generation of high-quality and high-resolution (i.e., 1024×1024) 3D contents. For example, we demonstrate that DreamFlow is 5 times faster than the existing state-of-the-art text-to-3D method, while producing more photorealistic 3D contents.
https://openreview.net/pdf/65b059da26adcac6aded2561eda017af0181ec6d.pdf
Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns
https://openreview.net/forum?id=XVhm3X8Fum
https://openreview.net/forum?id=XVhm3X8Fum
Brian DuSell,David Chiang
ICLR 2024,Spotlight
Attention, specifically scaled dot-product attention, has proven effective for natural language, but it does not have a mechanism for handling hierarchical patterns of arbitrary nesting depth, which limits its ability to recognize certain syntactic structures. To address this shortcoming, we propose stack attention: an attention operator that incorporates stacks, inspired by their theoretical connections to context-free languages (CFLs). We show that stack attention is analogous to standard attention, but with a latent model of syntax that requires no syntactic supervision. We propose two variants: one related to deterministic pushdown automata (PDAs) and one based on nondeterministic PDAs, which allows transformers to recognize arbitrary CFLs. We show that transformers with stack attention are very effective at learning CFLs that standard transformers struggle on, achieving strong results on a CFL with theoretically maximal parsing difficulty. We also show that stack attention is more effective at natural language modeling under a constrained parameter budget, and we include results on machine translation.
https://openreview.net/pdf/1725d2a5bfd546b5acdabab5eb5d281caf92d1e4.pdf
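For intuition, here is a toy differentiable (superposition-style) stack update of the kind such attention operators build on; the shapes, the three-way action softmax, and reading slot 0 as the top are illustrative choices, not the paper's exact operator.

```python
# Toy differentiable stack: each step is a convex combination of push/pop/no-op.
import torch
import torch.nn.functional as F

def stack_step(stack, action_logits, push_vec):
    """One step of a superposition stack. stack: (B, T, d), slot 0 is the top;
    action_logits: (B, 3) over (push, pop, no-op); push_vec: (B, d)."""
    p_push, p_pop, p_noop = F.softmax(action_logits, dim=-1).unbind(-1)
    pushed = torch.cat([push_vec.unsqueeze(1), stack[:, :-1]], dim=1)
    popped = torch.cat([stack[:, 1:], torch.zeros_like(stack[:, :1])], dim=1)
    return (p_push[:, None, None] * pushed
            + p_pop[:, None, None] * popped
            + p_noop[:, None, None] * stack)

B, T, d = 2, 8, 16
stack = torch.zeros(B, T, d)
stack = stack_step(stack, torch.randn(B, 3), torch.randn(B, d))
reading = stack[:, 0]  # differentiable "top" of the stack, usable as context
```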
SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents
https://openreview.net/forum?id=mM7VurbA4r
https://openreview.net/forum?id=mM7VurbA4r
Xuhui Zhou,Hao Zhu,Leena Mathur,Ruohong Zhang,Haofei Yu,Zhengyang Qi,Louis-Philippe Morency,Yonatan Bisk,Daniel Fried,Graham Neubig,Maarten Sap
ICLR 2024,Spotlight
*Humans are social beings*; we pursue social goals in our daily interactions, which is a crucial aspect of social intelligence. Yet, AI systems' abilities in this realm remain elusive. We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and evaluate their social intelligence. In our environment, agents role-play and *interact* under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals. We simulate the role-play interaction between LLM-based agents and humans within this task space and evaluate their performance with a holistic evaluation framework called SOTOPIA-Eval. With SOTOPIA, we find significant differences between these models in terms of their social intelligence, and we identify a subset of SOTOPIA scenarios, SOTOPIA-hard, that is generally challenging for all models. We find that on this subset, GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills. These findings demonstrate SOTOPIA's promise as a general platform for research on evaluating and improving social intelligence in artificial agents.
https://openreview.net/pdf/6aece1f9088fc415196df5830f1ca62e6dbf37d3.pdf
Privileged Sensing Scaffolds Reinforcement Learning
https://openreview.net/forum?id=EpVe8jAjdx
https://openreview.net/forum?id=EpVe8jAjdx
Edward S. Hu,James Springer,Oleh Rybkin,Dinesh Jayaraman
ICLR 2024,Spotlight
We need to look at our shoelaces as we first learn to tie them, but having mastered this skill, we can do it from touch alone. We call this phenomenon “sensory scaffolding”: observation streams that are not needed by a master might yet aid a novice learner. We consider such sensory scaffolding setups for training artificial agents. For example, a robot arm may need to be deployed with just a low-cost, robust, general-purpose camera; yet its performance may improve by having privileged training-time-only access to informative albeit expensive and unwieldy motion capture rigs or fragile tactile sensors. For these settings, we propose “Scaffolder”, a reinforcement learning approach which effectively exploits privileged sensing in critics, world models, reward estimators, and other such auxiliary components that are only used at training time, to improve the target policy. For evaluating sensory scaffolding agents, we design a new “S3” suite of ten diverse simulated robotic tasks that explore a wide range of practical sensor setups. Agents must use privileged camera sensing to train blind hurdlers, privileged active visual perception to help robot arms overcome visual occlusions, privileged touch sensors to train robot hands, and more. Scaffolder easily outperforms relevant prior baselines and frequently performs comparably even to policies that have test-time access to the privileged sensors. Website: https://penn-pal-lab.github.io/scaffolder/
https://openreview.net/pdf/8ab7ec56b6e56ac45ce2b68e95b277283dbb8377.pdf
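The core asymmetry can be sketched as an asymmetric actor-critic, where only the critic consumes the privileged stream; the dimensions and networks below are placeholders, and Scaffolder also scaffolds world models and reward estimators, which this toy omits.

```python
# Sketch of the asymmetry: the critic sees privileged training-time sensors,
# while the policy sees only the sensors available at deployment.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 4))
critic = nn.Sequential(nn.Linear(32 + 128, 64), nn.Tanh(), nn.Linear(64, 1))

obs = torch.randn(16, 32)    # cheap deployment-time observation
priv = torch.randn(16, 128)  # privileged observation (e.g., mocap, touch)
action = policy(obs)                            # deployable policy
value = critic(torch.cat([obs, priv], dim=-1))  # training-only critic
```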
Learning to Act without Actions
https://openreview.net/forum?id=rvUq3cxpDF
https://openreview.net/forum?id=rvUq3cxpDF
Dominik Schmidt,Minqi Jiang
ICLR 2024,Spotlight
Pre-training large models on vast amounts of web data has proven to be an effective approach for obtaining powerful, general models in domains such as language and vision. However, this paradigm has not yet taken hold in reinforcement learning. This is because videos, the most abundant form of embodied behavioral data on the web, lack the action labels required by existing methods for imitating behavior from demonstrations. We introduce **Latent Action Policies** (LAPO), a method for recovering latent action information—and thereby latent-action policies, world models, and inverse dynamics models—purely from videos. LAPO is the first method able to recover the structure of the true action space just from observed dynamics, even in challenging procedurally-generated environments. LAPO enables training latent-action policies that can be rapidly fine-tuned into expert-level policies, either offline using a small action-labeled dataset, or online with rewards. LAPO takes a first step towards pre-training powerful, generalist policies and world models on the vast amounts of videos readily available on the web. Our code is available here: [https://github.com/schmidtdominik/LAPO](https://github.com/schmidtdominik/LAPO).
https://openreview.net/pdf/3828f51b4c06dfddf96ff09f99344482461b30d4.pdf
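A toy sketch of the two models that recover latent actions from raw observations, an inverse dynamics model inferring a latent action from consecutive observations and a forward model checking it; the linear layers and sizes are stand-ins for the paper's vector-quantized, convolutional versions.

```python
# Toy sketch of LAPO's latent-action learning loop (hypothetical sizes).
import torch
import torch.nn as nn

obs_dim, z_dim = 64, 8
idm = nn.Linear(2 * obs_dim, z_dim)        # inverse dynamics: (o_t, o_t+1) -> z_t
fdm = nn.Linear(obs_dim + z_dim, obs_dim)  # forward dynamics: (o_t, z_t) -> o_t+1

o_t, o_next = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
z = idm(torch.cat([o_t, o_next], dim=-1))  # latent action inferred from video
pred_next = fdm(torch.cat([o_t, z], dim=-1))
loss = ((pred_next - o_next) ** 2).mean()  # trains both models jointly
```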
Error Norm Truncation: Robust Training in the Presence of Data Noise for Text Generation Models
https://openreview.net/forum?id=zMvMwNvs4R
https://openreview.net/forum?id=zMvMwNvs4R
Tianjian Li,Haoran Xu,Philipp Koehn,Daniel Khashabi,Kenton Murray
ICLR 2024,Spotlight
Text generation models are notoriously vulnerable to errors in the training data. As massive amounts of web-crawled data become more commonplace, how can we enhance the robustness of models trained on a massive amount of noisy web-crawled text? In our work, we propose Error Norm Truncation (ENT), a robust enhancement method to the standard training objective that truncates noisy data. Compared to methods that only use the negative log-likelihood loss to estimate data quality, our method provides a more accurate estimation by considering the distribution of non-target tokens, which is often overlooked by previous work. Through comprehensive experiments across language modeling, machine translation, and text summarization, we show that equipping text generation models with ENT improves generation quality over standard training and previous soft and hard truncation methods. Furthermore, we show that our method improves the robustness of models against two of the most detrimental types of noise in machine translation, resulting in an increase of more than 2 BLEU points over the MLE baseline when up to 50\% of noise is added to the data.
https://openreview.net/pdf/1f26c396130ee811d490f099f908c0b8c99a3382.pdf
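A minimal sketch of the truncation rule: score each token by the L2 distance between the predicted distribution and the one-hot target, which accounts for the non-target token mass, and drop high-error tokens. The hard threshold value is an illustrative choice.

```python
# Sketch of Error Norm Truncation (hard-truncation variant for illustration).
import torch
import torch.nn.functional as F

def ent_loss(logits, targets, threshold=1.2):
    """Drop tokens whose predicted distribution is far (in L2) from the
    one-hot target; the error norm lies in [0, sqrt(2)]."""
    probs = F.softmax(logits, dim=-1)                       # (N, V)
    onehot = F.one_hot(targets, probs.size(-1)).float()
    err_norm = (probs - onehot).norm(dim=-1)
    keep = (err_norm <= threshold).float()
    nll = F.cross_entropy(logits, targets, reduction="none")
    return (keep * nll).sum() / keep.sum().clamp(min=1.0)

loss = ent_loss(torch.randn(10, 100), torch.randint(0, 100, (10,)))
```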
Massively Scalable Inverse Reinforcement Learning in Google Maps
https://openreview.net/forum?id=z3L59iGALM
https://openreview.net/forum?id=z3L59iGALM
Matt Barnes,Matthew Abueg,Oliver F. Lange,Matt Deeds,Jason Trader,Denali Molitor,Markus Wulfmeier,Shawn O'Banion
ICLR 2024,Spotlight
Inverse reinforcement learning (IRL) offers a powerful and general framework for learning humans' latent preferences in route recommendation, yet no approach has successfully addressed planetary-scale problems with hundreds of millions of states and demonstration trajectories. In this paper, we introduce scaling techniques based on graph compression, spatial parallelization, and improved initialization conditions inspired by a connection to eigenvector algorithms. We revisit classic IRL methods in the routing context, and make the key observation that there exists a trade-off between the use of cheap, deterministic planners and expensive yet robust stochastic policies. This insight is leveraged in Receding Horizon Inverse Planning (RHIP), a new generalization of classic IRL algorithms that provides fine-grained control over performance trade-offs via its planning horizon. Our contributions culminate in a policy that achieves a 16-24% improvement in route quality at a global scale, and to the best of our knowledge, represents the largest published study of IRL algorithms in a real-world setting to date. We conclude by conducting an ablation study of key components, presenting negative results from alternative eigenvalue solvers, and identifying opportunities to further improve scalability via IRL-specific batching strategies.
https://openreview.net/pdf/4d44d3413d097944d3e6328c02e01436c56a3fac.pdf
Thin-Shell Object Manipulations With Differentiable Physics Simulations
https://openreview.net/forum?id=KsUh8MMFKQ
https://openreview.net/forum?id=KsUh8MMFKQ
Yian Wang,Juntian Zheng,Zhehuan Chen,Zhou Xian,Gu Zhang,Chao Liu,Chuang Gan
ICLR 2024,Spotlight
In this work, we aim to teach robots to manipulate various thin-shell materials. Prior works studying thin-shell object manipulation mostly rely on heuristic policies or learn policies from real-world video demonstrations, and only focus on limited material types and tasks (e.g., cloth unfolding). However, these approaches face significant challenges when extended to a wider variety of thin-shell materials and a diverse range of tasks. On the other hand, while virtual simulations are shown to be effective in diverse robot skill learning and evaluation, prior thin-shell simulation environments only support a subset of thin-shell materials, which also limits their supported range of tasks. To fill in this gap, we introduce ThinShellLab - a fully differentiable simulation platform tailored for robotic interactions with diverse thin-shell materials possessing varying material properties, enabling flexible thin-shell manipulation skill learning and evaluation. Building on top of our developed simulation engine, we design a diverse set of manipulation tasks centered around different thin-shell objects. Our experiments suggest that manipulating thin-shell objects presents several unique challenges: 1) thin-shell manipulation relies heavily on frictional forces due to the objects' co-dimensional nature, 2) the materials being manipulated are highly sensitive to minimal variations in interaction actions, and 3) the constant and frequent alteration in contact pairs makes trajectory optimization methods susceptible to local optima, and neither standard reinforcement learning algorithms nor trajectory optimization methods (either gradient-based or gradient-free) are able to solve the tasks alone. To overcome these challenges, we present an optimization scheme that couples sampling-based trajectory optimization and gradient-based optimization, boosting both learning efficiency and converged performance across various proposed tasks. In addition, the differentiable nature of our platform facilitates a smooth sim-to-real transition. By tuning simulation parameters with a minimal set of real-world data, we demonstrate successful deployment of the learned skills to real-robot settings. ThinShellLab will be publicly available. Video demonstration and more information can be found on the project website https://vis-www.cs.umass.edu/ThinShellLab/.
https://openreview.net/pdf/edbd3be2c1ca4369cdf41d2d892af284a0c21cc3.pdf
Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks
https://openreview.net/forum?id=7erlRDoaV8
https://openreview.net/forum?id=7erlRDoaV8
Vaidehi Patil,Peter Hase,Mohit Bansal
ICLR 2024,Spotlight
Pretrained language models sometimes possess knowledge that we do not wish them to, including memorized personal information and knowledge that could be used to harm people. They can also output toxic or harmful text. To mitigate these safety and informational issues, we propose an attack-and-defense framework for studying the task of deleting sensitive information directly from model weights. We study direct edits to model weights because (1) this approach should guarantee that particular deleted information is never extracted by future prompt attacks, and (2) it should protect against whitebox attacks, which is necessary for making claims about safety/privacy in a setting where publicly available model weights could be used to elicit sensitive information. Our threat model assumes that an attack succeeds if the answer to a sensitive question is located among a set of B generated candidates, based on scenarios where the information would be insecure if the answer is among B candidates. Experimentally, we show that even state-of-the-art model editing methods such as ROME struggle to truly delete factual information from models like GPT-J, as our whitebox and blackbox attacks can recover “deleted” information from an edited model 38% of the time. These attacks leverage two key observations: (1) that traces of deleted information can be found in intermediate model hidden states, and (2) that applying an editing method for one question may not delete information across rephrased versions of the question. Finally, we provide new defense methods that protect against some extraction attacks, but we do not find a single universally effective defense method. Our results suggest that truly deleting sensitive information is a tractable but difficult problem, since even relatively low attack success rates have potentially severe implications for the deployment of language models in a world where individuals enjoy ownership of their personal data, a right to privacy, and safety from harmful model outputs.
https://openreview.net/pdf/58461c89ea8f999b82924f883aa34b627914dc42.pdf
Learning to Reject Meets Long-tail Learning
https://openreview.net/forum?id=ta26LtNq2r
https://openreview.net/forum?id=ta26LtNq2r
Harikrishna Narasimhan,Aditya Krishna Menon,Wittawat Jitkrittum,Neha Gupta,Sanjiv Kumar
ICLR 2024,Spotlight
Learning to reject (L2R) is a classical problem where one seeks a classifier capable of abstaining on low-confidence samples. Most prior work on L2R has focused on minimizing the standard misclassification error. However, in many real-world applications, the label distribution is highly imbalanced, necessitating alternate evaluation metrics such as the balanced error or the worst-group error that enforce equitable performance across both the head and tail classes. In this paper, we establish that traditional L2R methods can be grossly sub-optimal for such metrics, and show that this is due to an intricate dependence in the objective between the label costs and the rejector. We then derive the form of the Bayes-optimal classifier and rejector for the balanced error, propose a novel plug-in approach to mimic this solution, and extend our results to general evaluation metrics. Through experiments on benchmark image classification tasks, we show that our approach yields better trade-offs in both the balanced and worst-group error compared to L2R baselines.
https://openreview.net/pdf/2d19da352871db6a1e7954f4651e9b44594126b8.pdf
On the Foundations of Shortcut Learning
https://openreview.net/forum?id=Tj3xLVuE9f
https://openreview.net/forum?id=Tj3xLVuE9f
Katherine Hermann,Hossein Mobahi,Thomas FEL,Michael Curtis Mozer
ICLR 2024,Spotlight
Deep-learning models can extract a rich assortment of features from data. Which features a model uses depends not only on *predictivity*---how reliably a feature indicates training-set labels---but also on *availability*---how easily the feature can be extracted from inputs. The literature on shortcut learning has noted examples in which models privilege one feature over another, for example texture over shape and image backgrounds over foreground objects. Here, we test hypotheses about which input properties are more available to a model, and systematically study how predictivity and availability interact to shape models' feature use. We construct a minimal, explicit generative framework for synthesizing classification datasets with two latent features that vary in predictivity and in factors we hypothesize to relate to availability, and we quantify a model's shortcut bias---its over-reliance on the shortcut (more available, less predictive) feature at the expense of the core (less available, more predictive) feature. We find that linear models are relatively unbiased, but introducing a single hidden layer with ReLU or Tanh units yields a bias. Our empirical findings are consistent with a theoretical account based on Neural Tangent Kernels. Finally, we study how models used in practice trade off predictivity and availability in naturalistic datasets, discovering availability manipulations which increase models' degree of shortcut bias. Taken together, these findings suggest that the propensity to learn shortcut features is a fundamental characteristic of deep nonlinear architectures warranting systematic study given its role in shaping how models solve tasks.
https://openreview.net/pdf/3f47b29f0e35691e7047d9fbfa0e4c47ea966e49.pdf
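A toy version of such a generative framework: two latent binary features each agree with the label at their own predictivity, with availability proxied here by feature scale (one of several proxies one could hypothesize; the paper studies availability factors systematically).

```python
# Toy two-feature dataset with controllable predictivity and (scale-proxied)
# availability, in the spirit of the paper's generative framework.
import numpy as np

def make_dataset(n, p_core=0.95, p_shortcut=0.85, shortcut_scale=3.0, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n) * 2 - 1                     # labels in {-1, +1}
    core = y * np.where(rng.random(n) < p_core, 1, -1)     # more predictive
    short = y * np.where(rng.random(n) < p_shortcut, 1, -1)  # less predictive
    X = np.stack([core + rng.normal(0, 0.5, n),
                  shortcut_scale * short + rng.normal(0, 0.5, n)], axis=1)
    return X, y

X, y = make_dataset(10_000)
```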
Synaptic Weight Distributions Depend on the Geometry of Plasticity
https://openreview.net/forum?id=x5txICnnjC
https://openreview.net/forum?id=x5txICnnjC
Roman Pogodin,Jonathan Cornford,Arna Ghosh,Gauthier Gidel,Guillaume Lajoie,Blake Aaron Richards
ICLR 2024,Spotlight
A growing literature in computational neuroscience leverages gradient descent and learning algorithms that approximate it to study synaptic plasticity in the brain. However, the vast majority of this work ignores a critical underlying assumption: the choice of distance for synaptic changes - i.e. the geometry of synaptic plasticity. Gradient descent assumes that the distance is Euclidean, but many other distances are possible, and there is no reason that biology necessarily uses Euclidean geometry. Here, using the theoretical tools provided by mirror descent, we show that the distribution of synaptic weights will depend on the geometry of synaptic plasticity. We use these results to show that experimentally-observed log-normal weight distributions found in several brain areas are not consistent with standard gradient descent (i.e. a Euclidean geometry), but rather with non-Euclidean distances. Finally, we show that it should be possible to experimentally test for different synaptic geometries by comparing synaptic weight distributions before and after learning. Overall, our work shows that the current paradigm in theoretical work on synaptic plasticity that assumes Euclidean synaptic geometry may be misguided and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain.
https://openreview.net/pdf/bfba522bee31428aed87f14e95fb1c48baff7080.pdf
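The geometry dependence can be illustrated by contrasting additive (Euclidean) and multiplicative (exponentiated-gradient, a mirror descent) updates on positive weights; the gradients below are random stand-ins, and multiplicative updates act on log-weights, which is one route to the log-normal-like distributions discussed above.

```python
# Contrast of Euclidean gradient descent vs. a non-Euclidean (multiplicative)
# update; only the geometry differs, yet the weight distributions diverge.
import numpy as np

rng = np.random.default_rng(0)
w_gd = rng.uniform(0.5, 1.5, 10_000)
w_eg = w_gd.copy()
for _ in range(500):
    g = rng.normal(0, 0.1, w_gd.size)   # stand-in stochastic gradients
    w_gd -= 0.01 * g                    # Euclidean geometry: additive update
    w_eg *= np.exp(-0.01 * g)           # mirror descent: multiplicative update

# np.log(w_eg) stays approximately Gaussian, so w_eg is roughly log-normal,
# while w_gd stays roughly symmetric around its initialization.
```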
Graph Metanetworks for Processing Diverse Neural Architectures
https://openreview.net/forum?id=ijK5hyxs0n
https://openreview.net/forum?id=ijK5hyxs0n
Derek Lim,Haggai Maron,Marc T. Law,Jonathan Lorraine,James Lucas
ICLR 2024,Spotlight
Neural networks efficiently encode learned information within their parameters. Consequently, many tasks can be unified by treating neural networks themselves as input data. When doing so, recent studies demonstrated the importance of accounting for the symmetries and geometry of parameter spaces. However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging. In this work, we overcome these challenges by building new metanetworks --- neural networks that take weights from other neural networks as input. Put simply, we carefully build graphs representing the input neural networks and process the graphs using graph neural networks. Our approach, Graph Metanetworks (GMNs), generalizes to neural architectures where competing methods struggle, such as multi-head attention layers, normalization layers, convolutional layers, ResNet blocks, and group-equivariant linear layers. We prove that GMNs are expressive and equivariant to parameter permutation symmetries that leave the input neural network functions unchanged. We validate the effectiveness of our method on several metanetwork tasks over diverse neural network architectures.
https://openreview.net/pdf/461000be3cc4ed15947307020c46aafa982c40ae.pdf
Dropout Enhanced Bilevel Training
https://openreview.net/forum?id=06lrITXVAx
https://openreview.net/forum?id=06lrITXVAx
Peiran Yu,Junyi Li,Heng Huang
ICLR 2024,Spotlight
Bilevel optimization problems appear in many widely used machine learning tasks. Bilevel optimization models are sensitive to small changes, and bilevel training tasks typically involve limited datasets. Therefore, overfitting is a common challenge in bilevel training tasks. This paper considers the use of dropout to address this problem. We propose a bilevel optimization model that depends on the distribution of dropout masks. We investigate how the dropout rate affects the hypergradient of this model. We propose a dropout bilevel method to solve the dropout bilevel optimization model. Subsequently, we analyze the resulting dropout bilevel method from an optimization perspective. Analyzing the optimization properties of methods with dropout is essential because it provides convergence guarantees, yet this direction has seen limited investigation. We provide the complexity of the resulting dropout bilevel method in terms of reaching an $\epsilon$-stationary point of the proposed stochastic bilevel model. Empirically, we demonstrate that overfitting occurs in data cleaning problems, and the method proposed in this work mitigates this issue.
https://openreview.net/pdf/09304d5bf3e31448450004ee461830870db26085.pdf
Privacy Amplification for Matrix Mechanisms
https://openreview.net/forum?id=xUzWmFdglP
https://openreview.net/forum?id=xUzWmFdglP
Christopher A. Choquette-Choo,Arun Ganesh,Thomas Steinke,Abhradeep Guha Thakurta
ICLR 2024,Spotlight
Privacy amplification exploits randomness in data selection to provide tighter differential privacy (DP) guarantees. This analysis is key to DP-SGD's success in machine learning (ML), but is not readily applicable to the newer state-of-the-art (SOTA) algorithms. This is because these algorithms, known as DP-FTRL, use the matrix mechanism to add correlated noise instead of independent noise as in DP-SGD. In this paper, we propose "MMCC'' (matrix mechanism conditional composition), the first algorithm to analyze privacy amplification via sampling for any generic matrix mechanism. MMCC is nearly tight in that it approaches a lower bound as $\epsilon\to0$. To analyze correlated outputs in MMCC, we prove that they can be analyzed as if they were independent, by conditioning them on prior outputs. Our "conditional composition theorem'' has broad utility: we use it to show that the noise added to binary-tree-DP-FTRL can asymptotically match the noise added to DP-SGD with amplification. Our algorithm also has practical empirical utility. We show that amplification leads to significant improvement in the privacy/utility trade-offs for DP-FTRL style algorithms for standard benchmark tasks.
https://openreview.net/pdf/1f4107c113add2f5ec1249dac4d4dfa313bfb5c5.pdf
Bandits Meet Mechanism Design to Combat Clickbait in Online Recommendation
https://openreview.net/forum?id=lsxeNvYqCj
https://openreview.net/forum?id=lsxeNvYqCj
Thomas Kleine Buening,Aadirupa Saha,Christos Dimitrakakis,Haifeng Xu
ICLR 2024,Spotlight
We study a strategic variant of the multi-armed bandit problem, which we coin the strategic click-bandit. This model is motivated by applications in online recommendation where the choice of recommended items depends on both the click-through rates and the post-click rewards. Like in classical bandits, rewards follow a fixed unknown distribution. However, we assume that the click-rate of each arm is chosen strategically by the arm (e.g., a host on Airbnb) in order to maximize the number of times it gets clicked. The algorithm designer does not know the post-click rewards nor the arms' actions (i.e., strategically chosen click-rates) in advance, and must learn both values over time. To solve this problem, we design an incentive-aware learning algorithm, UCB-S, which achieves two goals simultaneously: (a) incentivizing desirable arm behavior under uncertainty; (b) minimizing regret by learning unknown parameters. We approximately characterize all Nash equilibria of the arms under UCB-S and show a $\tilde{\mathcal{O}} (\sqrt{KT})$ regret bound uniformly in every equilibrium. We also show that incentive-unaware algorithms generally fail to achieve low regret in the strategic click-bandit. Finally, we support our theoretical results by simulations of strategic arm behavior which confirm the effectiveness and robustness of our proposed incentive design.
https://openreview.net/pdf/8dcfb207122a45da36a445b1322405bc5aafef25.pdf
Towards Principled Representation Learning from Videos for Reinforcement Learning
https://openreview.net/forum?id=3mnWvUZIXt
https://openreview.net/forum?id=3mnWvUZIXt
Dipendra Misra,Akanksha Saran,Tengyang Xie,Alex Lamb,John Langford
ICLR 2024,Spotlight
We study pre-training representations for decision-making using video data, which is abundantly available for tasks such as game agents and software testing. Even though significant empirical advances have been made on this problem, a theoretical understanding remains absent. We initiate the theoretical investigation into principled approaches for representation learning and focus on learning the latent state representations of the underlying MDP using video data. We study two types of settings: one where there is iid noise in the observation, and a more challenging setting where there is also the presence of exogenous noise, which is non-iid noise that is temporally correlated, such as the motion of people or cars in the background. We study three commonly used approaches: autoencoding, temporal contrastive learning, and forward modeling. We prove upper bounds for temporal contrastive learning and forward modeling in the presence of only iid noise. We show that these approaches can learn the latent state and use it to do efficient downstream RL with polynomial sample complexity. When exogenous noise is also present, we establish a lower bound result showing that the sample complexity of learning from video data can be exponentially worse than learning from action-labeled trajectory data. This partially explains why reinforcement learning with video pre-training is hard. We evaluate these representational learning methods in two visual domains, yielding results that are consistent with our theoretical findings.
https://openreview.net/pdf/030d9ee1692e81df35ac143b32eb5beb1c384730.pdf
Optimal Sample Complexity of Contrastive Learning
https://openreview.net/forum?id=NU9AYHJvYe
https://openreview.net/forum?id=NU9AYHJvYe
Noga Alon,Dmitrii Avdiukhin,Dor Elboim,Orr Fischer,Grigory Yaroslavtsev
ICLR 2024,Spotlight
Contrastive learning is a highly successful technique for learning representations of data from labeled tuples, specifying the distance relations within the tuple. We study the sample complexity of contrastive learning, i.e. the minimum number of labeled tuples sufficient for getting high generalization accuracy. We give tight bounds on the sample complexity in a variety of settings, focusing on arbitrary distance functions, $\ell_p$-distances, and tree metrics. Our main result is an (almost) optimal bound on the sample complexity of learning $\ell_p$-distances for integer $p$. For any $p \ge 1$, we show that $\tilde \Theta(nd)$ labeled tuples are necessary and sufficient for learning $d$-dimensional representations of $n$-point datasets. Our results hold for an arbitrary distribution of the input samples and are based on giving the corresponding bounds on the Vapnik-Chervonenkis/Natarajan dimension of the associated problems. We further show that the theoretical bounds on sample complexity obtained via VC/Natarajan dimension can have strong predictive power for experimental results, in contrast with the folklore belief about a substantial gap between the statistical learning theory and the practice of deep learning.
https://openreview.net/pdf/00a6d5623a59e634ef01b7ebdd71bd1b30b22500.pdf
Post-hoc bias scoring is optimal for fair classification
https://openreview.net/forum?id=FM5xfcaR2Y
https://openreview.net/forum?id=FM5xfcaR2Y
Wenlong Chen,Yegor Klochkov,Yang Liu
ICLR 2024,Spotlight
We consider a binary classification problem under group fairness constraints, which can be one of Demographic Parity (DP), Equalized Opportunity (EOp), or Equalized Odds (EO). We propose an explicit characterization of the Bayes optimal classifier under the fairness constraints, which turns out to be a simple modification rule of the unconstrained classifier. Namely, we introduce a novel instance-level measure of bias, which we call bias score, and the modification rule is a simple linear rule on top of a finite number of bias scores. Based on this characterization, we develop a post-hoc approach that allows us to adapt to fairness constraints while maintaining high accuracy. In the case of DP and EOp constraints, the modification rule is thresholding a single bias score, while in the case of EO constraints we are required to fit a linear modification rule with 2 parameters. The method can also be applied for composite group-fairness criteria, such as ones involving several sensitive attributes. We achieve competitive or better performance compared to both in-processing and post-processing methods across three datasets: Adult, COMPAS, and CelebA. Unlike most post-processing methods, we do not require access to sensitive attributes during the inference time.
https://openreview.net/pdf/76bd92c39da18e17fb34a06d39a4c5bb7e7573ba.pdf
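An illustrative instance-level rule in the spirit of bias-score thresholding for Demographic Parity, using a predicted group-membership probability so no sensitive attribute is needed at inference; the exact linear form and the bias proxy below are assumptions for intuition, not the paper's derived Bayes-optimal rule.

```python
# Toy post-hoc decision rule: shift the acceptance score by a term that is
# linear in a per-instance bias proxy, then threshold.
import numpy as np

def dp_decision(eta, group_prob, lam, thr=0.5):
    """eta: P(Y=1|x) from any trained classifier; group_prob: P(A=1|x) from an
    auxiliary predictor; lam trades accuracy for parity (illustrative form)."""
    bias = group_prob - group_prob.mean()   # crude per-instance bias proxy
    return (eta - lam * bias >= thr).astype(int)

eta = np.random.rand(1000)
group_prob = np.random.rand(1000)
preds = dp_decision(eta, group_prob, lam=0.4)
```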
Sharpness-Aware Data Poisoning Attack
https://openreview.net/forum?id=bxITGFPVWh
https://openreview.net/forum?id=bxITGFPVWh
Pengfei He,Han Xu,Jie Ren,Yingqian Cui,Shenglai Zeng,Hui Liu,Charu C. Aggarwal,Jiliang Tang
ICLR 2024,Spotlight
Recent research has highlighted the vulnerability of Deep Neural Networks (DNNs) against data poisoning attacks. These attacks aim to inject poisoning samples into the models' training dataset so that the trained models fail at inference time. While previous studies have executed different types of attacks, one major challenge that greatly limits their effectiveness is the uncertainty of the re-training process after the injection of poisoning samples. This includes uncertainty in the training initialization, algorithm, and model architecture. To address this challenge, we propose a new strategy called **Sharpness-Aware Data Poisoning Attack (SAPA)**. In particular, it leverages the concept of DNNs' loss landscape sharpness to optimize the poisoning effect on the (approximately) worst re-trained model. Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances various types of poisoning attacks against various types of re-training uncertainty.
https://openreview.net/pdf/c5b9990eda79f25ade2e1cff4e9d53d63490c7fc.pdf
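The sharpness-aware ingredient can be sketched as a SAM-style step that evaluates the objective at a weight-perturbed (approximately worst-case re-trained) model; the function below is a simplification and omits the outer optimization over the poison samples themselves.

```python
# SAM-style inner step: ascend to a nearby sharper point in weight space,
# evaluate the objective there, then restore the weights.
import torch

def sharpness_aware_loss(model, loss_fn, data, rho=0.05):
    loss = loss_fn(model, data)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / (norm + 1e-12))   # perturb toward worst case
    loss_sharp = loss_fn(model, data)          # objective at perturbed weights
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / (norm + 1e-12))   # restore original weights
    return loss_sharp

model = torch.nn.Linear(4, 2)
data = (torch.randn(8, 4), torch.randint(0, 2, (8,)))
loss_fn = lambda m, d: torch.nn.functional.cross_entropy(m(d[0]), d[1])
print(sharpness_aware_loss(model, loss_fn, data))
```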
Pre-training with Random Orthogonal Projection Image Modeling
https://openreview.net/forum?id=z4Hcegjzph
https://openreview.net/forum?id=z4Hcegjzph
Maryam Haghighat,Peyman Moghadam,Shaheer Mohamed,Piotr Koniusz
ICLR 2024,Spotlight
Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual pre-training without the use of labels. MIM applies random crops to input images, processes them with an encoder, and then recovers the masked inputs with a decoder, which encourages the network to capture and learn structural information about objects and scenes. The intermediate feature representations obtained from MIM are suitable for fine-tuning on downstream tasks. In this paper, we propose an Image Modeling framework based on random orthogonal projection instead of binary masking as in MIM. Our proposed Random Orthogonal Projection Image Modeling (ROPIM) reduces token information spatially under a guaranteed bound on the noise variance, and can be seen as masking the entire spatial image area with locally varying masking degrees. Since ROPIM uses a random subspace for the projection that realizes the masking step, the readily available complement of the subspace can be used during unmasking to promote recovery of removed information. We show that using random orthogonal projection leads to superior performance compared to crop-based masking. We demonstrate state-of-the-art results on several popular benchmarks.
https://openreview.net/pdf/10fbc610db66b05362ea4434c03c92835ba7f338.pdf
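A sketch of masking by random orthogonal projection: token features are projected onto a random low-dimensional subspace, and the readily available complement projector identifies what recovery should restore. Where exactly this sits in ROPIM's pipeline, and the choice of subspace dimension, are not captured here.

```python
# Random orthogonal projection as a soft alternative to binary patch masking.
import torch

def rop_mask(tokens, k):
    """Project token features (B, N, d) onto a random k-dim subspace; return
    the masked tokens and the complement projector (I - QQ^T)."""
    B, N, d = tokens.shape
    q, _ = torch.linalg.qr(torch.randn(d, k))  # random orthonormal basis (d, k)
    proj = q @ q.T                             # rank-k projector
    return tokens @ proj, torch.eye(d) - proj

x = torch.randn(2, 196, 768)                   # e.g., ViT patch tokens
x_masked, complement = rop_mask(x, k=192)
```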
Lagrangian Flow Networks for Conservation Laws
https://openreview.net/forum?id=Nshk5YpdWE
https://openreview.net/forum?id=Nshk5YpdWE
Fabricio Arend Torres,Marcello Massimo Negri,Marco Inversi,Jonathan Aellen,Volker Roth
ICLR 2024,Spotlight
We introduce Lagrangian Flow Networks (LFlows) for modeling fluid densities and velocities continuously in space and time. By construction, the proposed LFlows satisfy the continuity equation, a PDE describing mass conservation in its differential form. Our model is based on the insight that solutions to the continuity equation can be expressed as time-dependent density transformations via differentiable and invertible maps. This follows from classical theory of the existence and uniqueness of Lagrangian flows for smooth vector fields. Hence, we model fluid densities by transforming a base density with parameterized diffeomorphisms conditioned on time. The key benefit compared to methods relying on numerical ODE solvers or PINNs is that the analytic expression of the velocity is always consistent with changes in density. Furthermore, we require neither expensive numerical solvers, nor additional penalties to enforce the PDE. LFlows show higher predictive accuracy in density modeling tasks compared to competing models in 2D and 3D, while being computationally efficient. As a real-world application, we model bird migration based on sparse weather radar measurements.
https://openreview.net/pdf/53607a671ffb6900fcf4870e6bd3c5866146e9b4.pdf
Linearity of Relation Decoding in Transformer Language Models
https://openreview.net/forum?id=w7LU2s14kE
https://openreview.net/forum?id=w7LU2s14kE
Evan Hernandez,Arnab Sen Sharma,Tal Haklay,Kevin Meng,Martin Wattenberg,Jacob Andreas,Yonatan Belinkov,David Bau
ICLR 2024,Spotlight
Much of the knowledge encoded in transformer language models (LMs) may be expressed in terms of relations: relations between words and their synonyms, entities and their attributes, etc. We show that, for a subset of relations, this computation is well-approximated by a single linear transformation on the subject representation. Linear relation representations may be obtained by constructing a first-order approximation to the LM from a single prompt, and they exist for a variety of factual, commonsense, and linguistic relations. However, we also identify many cases in which LM predictions capture relational knowledge accurately, but this knowledge is not linearly encoded in their representations. Our results thus reveal a simple, interpretable, but heterogeneously deployed knowledge representation strategy in transformer LMs.
https://openreview.net/pdf/548eac40ba455c0509185e199cc8f49f2f96523c.pdf
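The claim can be probed with a toy linear relation operator: fit an affine map from subject representations to object representations, here by least squares on synthetic vectors; the paper instead derives the map from a first-order approximation to the LM computed from a single prompt.

```python
# Fit an affine relation operator o ≈ W s + b on toy representation pairs.
import torch

d = 64
S = torch.randn(100, d)                       # subject representations
W_true, b_true = torch.randn(d, d), torch.randn(d)
O = S @ W_true.T + b_true + 0.01 * torch.randn(100, d)  # object representations

S1 = torch.cat([S, torch.ones(100, 1)], dim=1)  # append bias column
sol = torch.linalg.lstsq(S1, O).solution        # (d+1, d): stacked [W^T; b]
W_hat, b_hat = sol[:-1].T, sol[-1]
```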
Subtractive Mixture Models via Squaring: Representation and Learning
https://openreview.net/forum?id=xIHi5nxu9P
https://openreview.net/forum?id=xIHi5nxu9P
Lorenzo Loconte,Aleksanteri Mikulus Sladek,Stefan Mengel,Martin Trapp,Arno Solin,Nicolas Gillis,Antonio Vergari
ICLR 2024,Spotlight
Mixture models are traditionally represented and learned by adding several distributions as components. Allowing mixtures to subtract probability mass or density can drastically reduce the number of components needed to model complex distributions. However, learning such subtractive mixtures while ensuring they still encode a non-negative function is challenging. We investigate how to learn and perform inference on deep subtractive mixtures by squaring them. We do this in the framework of probabilistic circuits, which enable us to represent tensorized mixtures and generalize several other subtractive models. We theoretically prove that the class of squared circuits allowing subtractions can be exponentially more expressive than traditional additive mixtures; and, we empirically show this increased expressiveness on a series of real-world distribution estimation tasks.
https://openreview.net/pdf/3c3d0732ef99262e1fa4d9fc10ce89f35d07da28.pdf
On the Provable Advantage of Unsupervised Pretraining
https://openreview.net/forum?id=rmXXKxQpOR
https://openreview.net/forum?id=rmXXKxQpOR
Jiawei Ge,Shange Tang,Jianqing Fan,Chi Jin
ICLR 2024,Spotlight
Unsupervised pretraining, which learns a useful representation using a large amount of unlabeled data to facilitate the learning of downstream tasks, is a critical component of modern large-scale machine learning systems. Despite its tremendous empirical success, the rigorous theoretical understanding of why unsupervised pretraining generally helps remains rather limited---most existing results are restricted to particular methods or approaches for unsupervised pretraining with specialized structural assumptions. This paper studies a generic framework, where the unsupervised representation learning task is specified by an abstract class of latent variable models $\Phi$ and the downstream task is specified by a class of prediction functions $\Psi$. We consider a natural approach of using Maximum Likelihood Estimation (MLE) for unsupervised pretraining and Empirical Risk Minimization (ERM) for learning downstream tasks. We prove that, under a mild ``informative'' condition, our algorithm achieves an excess risk of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_\Phi/m} + \sqrt{\mathcal{C}_\Psi/n})$ for downstream tasks, where $\mathcal{C}_\Phi, \mathcal{C}_\Psi$ are complexity measures of function classes $\Phi, \Psi$, and $m, n$ are the numbers of unlabeled and labeled data respectively. Compared to the baseline of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_{\Phi \circ \Psi}/n})$ achieved by performing supervised learning using only the labeled data, our result rigorously shows the benefit of unsupervised pretraining when $m \gg n$ and $\mathcal{C}_{\Phi\circ \Psi} > \mathcal{C}_\Psi$. This paper further shows that our generic framework covers a wide range of approaches for unsupervised pretraining, including factor models, Gaussian mixture models, and contrastive learning.
https://openreview.net/pdf/0acbd62d42fa6cae47b05447238cec98212499f9.pdf
TorchRL: A data-driven decision-making library for PyTorch
https://openreview.net/forum?id=QxItoEAVMb
https://openreview.net/forum?id=QxItoEAVMb
Albert Bou,Matteo Bettini,Sebastian Dittert,Vikash Kumar,Shagun Sodhani,Xiaomeng Yang,Gianni De Fabritiis,Vincent Moens
ICLR 2024,Spotlight
PyTorch has ascended as a premier machine learning framework, yet it lacks a native and comprehensive library for decision and control tasks suitable for large development teams dealing with complex real-world data and environments. To address this issue, we propose TorchRL, a generalist control library for PyTorch that provides well-integrated, yet standalone components. We introduce a new and flexible PyTorch primitive, the TensorDict, which facilitates streamlined algorithm development across the many branches of Reinforcement Learning (RL) and control. We provide a detailed description of the building blocks and an extensive overview of the library across domains and tasks. Finally, we experimentally demonstrate its reliability and flexibility, and show comparative benchmarks to demonstrate its computational efficiency. TorchRL fosters long-term support and is publicly available on GitHub for greater reproducibility and collaboration within the research community.
https://openreview.net/pdf/74ac7ade66fbb7c4ade0e5391457625d3599377c.pdf
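A small sketch of the TensorDict primitive described above, using the tensordict package that TorchRL builds on; the entry names are arbitrary examples.

```python
# TensorDict carries batched, nested tensors through RL pipelines under a
# single batch-size contract, so components can be swapped without re-plumbing.
import torch
from tensordict import TensorDict  # standalone package, a TorchRL dependency

td = TensorDict(
    {"observation": torch.randn(32, 4), "action": torch.randn(32, 2)},
    batch_size=[32],
)
sub = td[:8]                        # slicing applies to every entry at once
td["reward"] = torch.zeros(32, 1)   # entries can be added on the fly
```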
Towards Robust Offline Reinforcement Learning under Diverse Data Corruption
https://openreview.net/forum?id=5hAMmCU0bK
https://openreview.net/forum?id=5hAMmCU0bK
Rui Yang,Han Zhong,Jiawei Xu,Amy Zhang,Chongjie Zhang,Lei Han,Tong Zhang
ICLR 2024,Spotlight
Offline reinforcement learning (RL) presents a promising approach for learning effective policies from offline datasets without the need for costly or unsafe interactions with the environment. However, datasets collected by humans in real-world environments are often noisy and may even be maliciously corrupted, which can significantly degrade the performance of offline RL. In this work, we first investigate the performance of current offline RL algorithms under comprehensive data corruption, including states, actions, rewards, and dynamics. Our extensive experiments reveal that implicit Q-learning (IQL) demonstrates remarkable resilience to data corruption among various offline RL algorithms. Furthermore, we conduct both empirical and theoretical analyses to understand IQL's robust performance, identifying its supervised policy learning scheme as the key factor. Despite its relative robustness, IQL still suffers from heavy-tailed targets of Q functions under dynamics corruption. To tackle this challenge, we draw inspiration from robust statistics to employ the Huber loss to handle the heavy-tailedness and utilize quantile estimators to balance penalization for corrupted data and learning stability. By incorporating these simple yet effective modifications into IQL, we propose a more robust offline RL approach named Robust IQL (RIQL). Extensive experiments demonstrate that RIQL exhibits highly robust performance when subjected to diverse data corruption scenarios.
https://openreview.net/pdf/60415c1f68bdc23484417721aa0069e22923b3b7.pdf
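The robust-statistics ingredient can be sketched as a Huber loss on TD targets, which blunts the heavy-tailed targets that arise under dynamics corruption; RIQL's quantile aggregation over Q-ensembles is omitted here.

```python
# Huber loss on TD errors: quadratic near zero, linear in the tails, so
# outlier targets contribute bounded gradients.
import torch
import torch.nn.functional as F

def robust_td_loss(q_pred, td_target, delta=1.0):
    return F.huber_loss(q_pred, td_target, delta=delta)

loss = robust_td_loss(torch.randn(64), torch.randn(64) * 10)
```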
Variational Bayesian Last Layers
https://openreview.net/forum?id=Sx7BIiPzys
https://openreview.net/forum?id=Sx7BIiPzys
James Harrison,John Willes,Jasper Snoek
ICLR 2024,Spotlight
We introduce a deterministic variational formulation for training Bayesian last layer neural networks. This yields a sampling-free, single-pass model and loss that effectively improves uncertainty estimation. Our variational Bayesian last layer (VBLL) can be trained and evaluated with only quadratic complexity in the last-layer width, and is thus (nearly) computationally free to add to standard architectures. We experimentally investigate VBLLs, and show that they improve predictive accuracy, calibration, and out-of-distribution detection over baselines across both regression and classification. Finally, we investigate combining VBLL layers with variational Bayesian feature learning, yielding a lower variance collapsed variational inference method for Bayesian neural networks.
https://openreview.net/pdf/60aaa131dccd0ce0a263698149f1661e9ffe3a5e.pdf
EQA-MX: Embodied Question Answering using Multimodal Expression
https://openreview.net/forum?id=7gUrYE50Rb
https://openreview.net/forum?id=7gUrYE50Rb
Md Mofijul Islam,Alexi Gladstone,Riashat Islam,Tariq Iqbal
ICLR 2024,Spotlight
Humans predominantly use verbal utterances and nonverbal gestures (e.g., eye gaze and pointing gestures) in their natural interactions. For instance, pointing gestures and verbal information are often required to comprehend questions such as "what object is that?" Thus, this question-answering (QA) task involves complex reasoning over multimodal expressions (verbal utterances and nonverbal gestures). However, prior works have explored QA tasks in non-embodied settings, where questions solely contain verbal utterances from a single verbal and visual perspective. In this paper, we introduce 8 novel embodied question answering (EQA) tasks to develop learning models to comprehend embodied questions with multimodal expressions. We have developed a novel large-scale dataset, EQA-MX, with over 8 million diverse embodied QA data samples involving multimodal expressions from multiple visual and verbal perspectives. To learn salient multimodal representations from discrete verbal embeddings and continuous wrapping of multiview visual representations, we propose a vector-quantization (VQ) based multimodal representation learning model, VQ-Fusion, for the EQA tasks. Our extensive experimental results suggest that VQ-Fusion can improve the performance of existing state-of-the-art visual-language models by up to 13% across EQA tasks.
https://openreview.net/pdf/f2829b2f4bb32e56e54abc38d0fe382ae50c7361.pdf