title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval | https://openreview.net/forum?id=GN921JHCRw | https://openreview.net/forum?id=GN921JHCRw | Parth Sarthi,Salman Abdullah,Aditi Tuli,Shubh Khanna,Anna Goldie,Christopher D Manning | ICLR 2024,Poster | Retrieval-augmented language models can better adapt to changes in world state and incorporate long-tail knowledge. However, most existing methods retrieve only short contiguous chunks from a retrieval corpus, limiting holistic understanding of the overall document context. We introduce the novel approach of recursively embedding, clustering, and summarizing chunks of text, constructing a tree with differing levels of summarization from the bottom up. At inference time, our RAPTOR model retrieves from this tree, integrating information across lengthy documents at different levels of abstraction. Controlled experiments show that retrieval with recursive summaries offers significant improvements over traditional retrieval-augmented LMs on several tasks. On question-answering tasks that involve complex, multi-step reasoning, we show state-of-the-art results; for example, by coupling RAPTOR retrieval with the use of GPT-4, we can improve the best performance on the QuALITY benchmark by 20\% in absolute accuracy. | https://openreview.net/pdf/1414c23eb8af2e30daa25a4f53471f69091f48d9.pdf |
Fair and Efficient Contribution Valuation for Vertical Federated Learning | https://openreview.net/forum?id=sLQb8q0sUi | https://openreview.net/forum?id=sLQb8q0sUi | Zhenan Fan,Huang Fang,Xinglu Wang,Zirui Zhou,Jian Pei,Michael Friedlander,Yong Zhang | ICLR 2024,Poster | Federated learning is an emerging technology for training machine learning models across decentralized data sources without sharing data. Vertical federated learning, also known as feature-based federated learning, applies to scenarios where data sources have the same sample IDs but different feature sets. To ensure fairness among data owners, it is critical to objectively assess the contributions from different data sources and compensate the corresponding data owners accordingly. The Shapley value is a provably fair contribution valuation metric originating from cooperative game theory. However, its straightforward computation requires extensively retraining a model on each potential combination of data sources, leading to prohibitively high communication and computation overheads due to multiple rounds of federated learning. To tackle this challenge, we propose a contribution valuation metric called vertical federated Shapley value (VerFedSV) based on the classic Shapley value. We show that VerFedSV not only satisfies many desirable properties of fairness but is also efficient to compute. Moreover, VerFedSV can be adapted to both synchronous and asynchronous vertical federated learning algorithms. Both theoretical analysis and extensive experimental results demonstrate the fairness, efficiency, adaptability, and effectiveness of VerFedSV. | https://openreview.net/pdf/6a7f95c7baec41005e195a24ebc51afc7ef5acbf.pdf |
In-Context Learning through the Bayesian Prism | https://openreview.net/forum?id=HX5ujdsSon | https://openreview.net/forum?id=HX5ujdsSon | Madhur Panwar,Kabir Ahuja,Navin Goyal | ICLR 2024,Poster | In-context learning (ICL) is one of the surprising and useful features of large language models and a subject of intense research. Recently, stylized meta-learning-like ICL setups have been devised that train transformers on sequences of input-output pairs $(x, f(x))$. The function $f$ comes from a function class and generalization is checked by evaluating on sequences generated from unseen functions from the same class. One of the main discoveries in this line of research has been that for several function classes, such as linear regression, transformers successfully generalize to new functions in the class. However, the inductive biases of these models resulting in this behavior are not clearly understood. A model with unlimited training data and compute is a Bayesian predictor: it learns the pretraining distribution. In this paper we empirically examine how far this Bayesian perspective can help us understand ICL. To this end, we generalize the previous meta-ICL setup to a hierarchical meta-ICL setup which involves unions of multiple task families. We instantiate this setup on a diverse range of linear and nonlinear function families and find that transformers can do ICL in this setting as well. Where Bayesian inference is tractable, we find evidence that high-capacity transformers mimic the Bayesian predictor. The Bayesian perspective provides insights into the inductive bias of ICL and how transformers perform a particular task when they are trained on multiple tasks. We also find that transformers can learn to generalize to new function classes that were not seen during pretraining. This involves deviation from the Bayesian predictor. We examine these deviations in more depth, offering new insights and hypotheses. | https://openreview.net/pdf/fc9aa29ea37339217577f61679622246ebfce078.pdf |
RingAttention with Blockwise Transformers for Near-Infinite Context | https://openreview.net/forum?id=WsRHpHH4s0 | https://openreview.net/forum?id=WsRHpHH4s0 | Hao Liu,Matei Zaharia,Pieter Abbeel | ICLR 2024,Poster | Transformers have emerged as the architecture of choice for many state-of-the-art AI models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands imposed by Transformers limit their ability to handle long sequences, thereby posing challenges in utilizing videos, actions, and other long-form sequences and modalities in complex environments. We present a novel approach, Blockwise RingAttention, which leverages blockwise computation of self-attention and feedforward to distribute long sequences across multiple devices while fully overlapping the communication of key-value blocks with the computation of blockwise attention. Our approach enables training and inference of sequences that are up to device count times longer than those achievable by prior memory-efficient Transformers, without resorting to approximations or incurring additional communication and computation overheads. Extensive experiments on language modeling and reinforcement learning tasks demonstrate the effectiveness of our approach in enabling context sizes of millions of tokens and in improving performance. | https://openreview.net/pdf/46002918e58387fc0091aa342ec23ebe66fd93e4.pdf |
Chain of Hindsight aligns Language Models with Feedback | https://openreview.net/forum?id=6xfe4IVcOu | https://openreview.net/forum?id=6xfe4IVcOu | Hao Liu,Carmelo Sferrazza,Pieter Abbeel | ICLR 2024,Poster | Learning from human preferences is important for language models to match human needs and to align with human and social values. Prior works have achieved remarkable successes by learning from human feedback to understand and follow instructions. Nonetheless, these methods are either founded on hand-picked model generations that are favored by human annotators, rendering them inefficient in terms of data utilization and challenging to apply in general, or they depend on reinforcement learning, which often suffers from imperfect reward functions and relies on extremely challenging optimizations. In this work, we propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity. Our idea is inspired by how humans learn from extensive feedback presented in the form of language. We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model, allowing us to take advantage of the language comprehension capabilities of language models. We condition the model on a sequence of model generations paired with feedback. By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors. Applying our method to large language models, we observed that Chain of Hindsight significantly surpasses previous methods in aligning language models with human preferences. We report significant improvements on summarization and dialogue benchmarks, with our approach markedly preferred in human evaluations. | https://openreview.net/pdf/6dc18121c96221c694b6c583053368af3aa30ace.pdf |
GraphChef: Decision-Tree Recipes to Explain Graph Neural Networks | https://openreview.net/forum?id=IjMUGuUmBI | https://openreview.net/forum?id=IjMUGuUmBI | Peter Müller,Lukas Faber,Karolis Martinkus,Roger Wattenhofer | ICLR 2024,Poster | We propose a new self-explainable Graph Neural Network (GNN) model: GraphChef. GraphChef integrates decision trees into the GNN message passing framework. Given a dataset, GraphChef returns a set of rules (a recipe) that explains each class in the dataset, unlike existing GNNs and explanation methods that reason on individual graphs. Thanks to the decision trees, GraphChef recipes are human understandable. We also present a new pruning method to produce small and easy-to-digest trees. Experiments demonstrate that GraphChef reaches accuracy comparable to non-self-explainable GNNs and that the produced decision trees are indeed small. We further validate the correctness of the discovered recipes on datasets where explanation ground truth is available: Reddit-Binary, MUTAG, BA-2Motifs, BA-Shapes, Tree-Cycle, and Tree-Grid. | https://openreview.net/pdf/0e5d8259f045fd1291bc9817e8008716c5c83cb3.pdf |
Safe Collaborative Filtering | https://openreview.net/forum?id=yarUvgEXq3 | https://openreview.net/forum?id=yarUvgEXq3 | Riku Togashi,Tatsushi Oka,Naoto Ohsaka,Tetsuro Morimura | ICLR 2024,Poster | Excellent tail performance is crucial for modern machine learning tasks, such as algorithmic fairness, class imbalance, and risk-sensitive decision making, as it ensures the effective handling of challenging samples within a dataset. Tail performance is also a vital determinant of success for personalized recommender systems to reduce the risk of losing users with low satisfaction. This study introduces a "safe" collaborative filtering method that prioritizes recommendation quality for less-satisfied users rather than focusing on the average performance. Our approach minimizes the conditional value at risk (CVaR), which represents the average risk over the tails of users' loss. To overcome computational challenges for web-scale recommender systems, we develop a robust yet practical algorithm that extends the most scalable method, implicit alternating least squares (iALS). Empirical evaluation on real-world datasets demonstrates the excellent tail performance of our approach while maintaining competitive computational efficiency. | https://openreview.net/pdf/51efd839f7a6eb9312e011822450ded36a856126.pdf |
On Representation Complexity of Model-based and Model-free Reinforcement Learning | https://openreview.net/forum?id=3K3s9qxSn7 | https://openreview.net/forum?id=3K3s9qxSn7 | Hanlin Zhu,Baihe Huang,Stuart Russell | ICLR 2024,Poster | We study the representation complexity of model-based and model-free reinforcement learning (RL) in the context of circuit complexity. We prove theoretically that there exists a broad class of MDPs such that their underlying transition and reward functions can be represented by constant-depth circuits with polynomial size, while the optimal $Q$-function suffers exponential circuit complexity in constant-depth circuits. By drawing attention to the approximation errors and building connections to complexity theory, our theory provides unique insights into why model-based algorithms usually enjoy better sample complexity than model-free algorithms from a novel representation complexity perspective: in some cases, the ground-truth rule (model) of the environment is simple to represent, while other quantities, such as the $Q$-function, appear complex. We empirically corroborate our theory by comparing the approximation error of the transition kernel, reward function, and optimal $Q$-function in various Mujoco environments, which demonstrates that the approximation errors of the transition kernel and reward function are consistently lower than those of the optimal $Q$-function. To the best of our knowledge, this work is the first to study the circuit complexity of RL, which also provides a rigorous framework for future research. | https://openreview.net/pdf/94c3b9655119f9778dfab41fe3cb7661151ae6a3.pdf |
Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning | https://openreview.net/forum?id=sKPzAXoylB | https://openreview.net/forum?id=sKPzAXoylB | Mohamed Elsayed,A. Rupam Mahmood | ICLR 2024,Poster | Deep representation learning methods struggle with continual learning, suffering from both catastrophic forgetting of useful units and loss of plasticity, often due to rigid and unuseful units. While many methods address these two issues separately, only a few currently deal with both simultaneously. In this paper, we introduce Utility-based Perturbed Gradient Descent (UPGD) as a novel approach for the continual learning of representations. UPGD combines gradient updates with perturbations, where it applies smaller modifications to more useful units, protecting them from forgetting, and larger modifications to less useful units, rejuvenating their plasticity. We use a challenging streaming learning setup where continual learning problems have hundreds of non-stationarities and unknown task boundaries. We show that many existing methods suffer from at least one of the issues, predominantly manifested by their decreasing accuracy over tasks. On the other hand, UPGD continues to improve performance and surpasses or is competitive with all methods in all problems. Finally, in extended reinforcement learning experiments with PPO, we show that while Adam exhibits a performance drop after initial learning, UPGD avoids it by addressing both continual learning issues. | https://openreview.net/pdf/7eb81f1c1c4fea7fa434ebe26bbf3145d56b032f.pdf |
A Good Learner can Teach Better: Teacher-Student Collaborative Knowledge Distillation | https://openreview.net/forum?id=Ixi4j6LtdX | https://openreview.net/forum?id=Ixi4j6LtdX | Ayan Sengupta,Shantanu Dixit,Md Shad Akhtar,Tanmoy Chakraborty | ICLR 2024,Poster | Knowledge distillation (KD) is a technique used to transfer knowledge from a larger ''teacher'' model into a smaller ''student'' model. Recent advancements in meta-learning-based knowledge distillation (MetaKD) emphasize that the fine-tuning of teacher models should be aware of the student's need to achieve better knowledge distillation. However, existing MetaKD methods often lack incentives for the teacher model to improve itself. In this study, we introduce MPDistil, a meta-policy distillation technique that utilizes novel optimization strategies to foster both *collaboration* and *competition* during the fine-tuning of the teacher model in the meta-learning step. Additionally, we propose a curriculum learning framework for the student model in a competitive setup, in which the student model aims to outperform the teacher model by self-training on various tasks. Exhaustive experiments on SuperGLUE and GLUE benchmarks demonstrate the efficacy of MPDistil compared to $20$ conventional KD and advanced MetaKD baselines, showing significant performance enhancements in the student model -- e.g., a distilled 6-layer BERT model outperforms a 12-layer BERT model on five out of six SuperGLUE tasks. Furthermore, MPDistil, when applied to a large language teacher model (DeBERTa-v2-xxlarge), significantly narrows the performance gap of its smaller student counterpart (DeBERTa-12) by just $4.6$% on SuperGLUE. We further demonstrate how higher rewards and customized training curricula strengthen the student model and enhance generalizability. | https://openreview.net/pdf/79548908914d91f19b250084cc53384846d7ddbb.pdf |
Diagnosing Transformers: Illuminating Feature Spaces for Clinical Decision-Making | https://openreview.net/forum?id=k581sTMyPt | https://openreview.net/forum?id=k581sTMyPt | Aliyah R. Hsu,Yeshwanth Cherapanamjeri,Briton Park,Tristan Naumann,Anobel Odisho,Bin Yu | ICLR 2024,Poster | Pre-trained transformers are often fine-tuned to aid clinical decision-making using limited clinical notes. Model interpretability is crucial, especially in high-stakes domains like medicine, to establish trust and ensure safety, which requires human engagement. We introduce SUFO, a systematic framework that enhances interpretability of fine-tuned transformer feature spaces. SUFO utilizes a range of analytic and visualization techniques, including Supervised probing, Unsupervised similarity analysis, Feature dynamics, and Outlier analysis to address key questions about model trust and interpretability (e.g. model suitability for a task, feature space evolution during fine-tuning, and interpretation of fine-tuned features and failure modes). We conduct a case study investigating the impact of pre-training data where we focus on real-world pathology classification tasks, and validate our findings on MedNLI. We evaluate five 110M-sized pre-trained transformer models, categorized into general-domain (BERT, TNLR), mixed-domain (BioBERT, Clinical BioBERT), and domain-specific (PubMedBERT) groups. Our SUFO analyses reveal that: (1) while PubMedBERT, the domain-specific model, contains valuable information for fine-tuning, it can overfit to minority classes when class imbalances exist. In contrast, mixed-domain models exhibit greater resistance to overfitting, suggesting potential improvements in domain-specific model robustness; (2) in-domain pre-training accelerates feature disambiguation during fine-tuning; and (3) feature spaces undergo significant sparsification during this process, enabling clinicians to identify common outlier modes among fine-tuned models as demonstrated in this paper. These findings showcase the utility of SUFO in enhancing trust and safety when using transformers in medicine, and we believe SUFO can aid practitioners in evaluating fine-tuned language models (LMs) for other applications in medicine and in more critical domains. | https://openreview.net/pdf/4396dc3858433d144c7809bad60e3be5f5a5ebae.pdf |
Mechanistically analyzing the effects of fine-tuning on procedurally defined tasks | https://openreview.net/forum?id=A0HKeKl4Nl | https://openreview.net/forum?id=A0HKeKl4Nl | Samyak Jain,Robert Kirk,Ekdeep Singh Lubana,Robert P. Dick,Hidenori Tanaka,Tim Rocktäschel,Edward Grefenstette,David Krueger | ICLR 2024,Poster | Fine-tuning large pre-trained models has become the de facto strategy for developing both task-specific and general-purpose machine learning systems, including developing models that are safe to deploy. Despite its clear importance, there has been minimal work that explains how fine-tuning alters the underlying capabilities learned by a model during pretraining: does fine-tuning yield entirely novel capabilities or does it just modulate existing ones? We address this question empirically in synthetic, controlled settings where we can use mechanistic interpretability tools (e.g., network pruning and probing) to understand how the model's underlying capabilities are changing. We perform an extensive analysis of the effects of fine-tuning in these settings, and show that: (i) fine-tuning rarely alters the underlying model capabilities; (ii) a minimal transformation, which we call a `wrapper', is typically learned on top of the underlying model capabilities, creating the illusion that they have been modified; and (iii) further fine-tuning on a task where such ``wrapped capabilities'' are relevant leads to sample-efficient revival of the capability, i.e., the model begins reusing these capabilities after only a few gradient steps. This indicates that practitioners can unintentionally remove a model's safety wrapper merely by fine-tuning it on, e.g., a superficially unrelated downstream task. We additionally perform analysis on language models trained on the TinyStories dataset to support our claims in a more realistic setup. | https://openreview.net/pdf/b46dce52717c9ba5bb6dc047b34dd04064101c1d.pdf |
RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems | https://openreview.net/forum?id=pPjZIOuQuF | https://openreview.net/forum?id=pPjZIOuQuF | Tianyang Liu,Canwen Xu,Julian McAuley | ICLR 2024,Poster | Large Language Models (LLMs) have greatly advanced code auto-completion systems, with a potential for substantial productivity enhancements for developers. However, current benchmarks mainly focus on single-file tasks, leaving an assessment gap for more complex, real-world, multi-file programming scenarios. To fill this gap, we introduce RepoBench, a new benchmark specifically designed for evaluating repository-level code auto-completion systems. RepoBench consists of three interconnected evaluation tasks: RepoBench-R (Retrieval), RepoBench-C (Code Completion), and RepoBench-P (Pipeline). Each task respectively measures the system's ability to retrieve the most relevant code snippets from other files as cross-file context, predict the next line of code with cross-file and in-file context, and handle complex tasks that require a combination of both retrieval and next-line prediction. RepoBench aims to facilitate a more complete comparison of performance and to encourage continuous improvement in auto-completion systems. RepoBench is actively maintained with the latest code, serving as a live benchmark publicly available at https://github.com/Leolty/repobench. | https://openreview.net/pdf/904b94b516d638671ae5c0877f71de8c576853cb3.pdf |
Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective | https://openreview.net/forum?id=mIEHIcHGOo | https://openreview.net/forum?id=mIEHIcHGOo | Ming Zhong,Chenxin An,Weizhu Chen,Jiawei Han,Pengcheng He | ICLR 2024,Poster | Large Language Models (LLMs) inherently encode a wealth of knowledge within their parameters through pre-training on extensive corpora. While prior research has delved into operations on these parameters to manipulate the underlying implicit knowledge — encompassing detection, editing, and merging — there remains an ambiguous understanding regarding their transferability across models with varying scales. In this paper, we seek to empirically investigate knowledge transfer from larger to smaller models through a parametric perspective. To achieve this, we employ sensitivity-based techniques to extract and align knowledge-specific parameters between different LLMs. Moreover, the LoRA module is used as the intermediary mechanism for injecting the extracted knowledge into smaller models. Evaluations across four benchmarks validate the efficacy of our proposed method. Our findings highlight the critical factors contributing to the process of parametric knowledge transfer, underscoring the transferability of model parameters across LLMs of different scales. Project website: https://maszhongming.github.io/ParaKnowTransfer. | https://openreview.net/pdf/403d80fb4f79d7b3e0026bbe47d1dbef35d9b3a4.pdf |
Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning | https://openreview.net/forum?id=RXFVcynVe1 | https://openreview.net/forum?id=RXFVcynVe1 | Xiaoxin He,Xavier Bresson,Thomas Laurent,Adam Perold,Yann LeCun,Bryan Hooi | ICLR 2024,Poster | Representation learning on text-attributed graphs (TAGs) has become a critical research problem in recent years. A typical example of a TAG is a paper citation graph, where the text of each paper serves as node attributes. Initial graph neural network (GNN) pipelines handled these text attributes by transforming them into shallow or hand-crafted features, such as skip-gram or bag-of-words features. Recent efforts have focused on enhancing these pipelines with language models (LMs), which typically demand intricate designs and substantial computational resources. With the advent of powerful large language models (LLMs) such as GPT or Llama2, which demonstrate an ability to reason and to utilize general knowledge, there is a growing need for techniques which combine the textual modelling abilities of LLMs with the structural learning capabilities of GNNs. Hence, in this work, we focus on leveraging LLMs to capture textual information as features, which can be used to boost GNN performance on downstream tasks. A key innovation is our use of \emph{explanations as features}: we prompt an LLM to perform zero-shot classification, request textual explanations for its decision-making process, and design an \emph{LLM-to-LM interpreter} to translate these explanations into informative features for downstream GNNs. Our experiments demonstrate that our method achieves state-of-the-art results on well-established TAG datasets, including \texttt{Cora}, \texttt{PubMed}, \texttt{ogbn-arxiv}, as well as our newly introduced dataset, \texttt{tape-arxiv23}. Furthermore, our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on \texttt{ogbn-arxiv}. Lastly, we believe the versatility of the proposed method extends beyond TAGs and holds the potential to enhance other tasks involving graph-text data~\footnote{Our codes and datasets are available at: \url{https://github.com/XiaoxinHe/TAPE}}. | https://openreview.net/pdf/0db04867c257dc081f5a8f03268da344deb07417.pdf |
SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs | https://openreview.net/forum?id=w4DW6qkRmt | https://openreview.net/forum?id=w4DW6qkRmt | Jaehyung Kim,Jaehyun Nam,Sangwoo Mo,Jongjin Park,Sang-Woo Lee,Minjoon Seo,Jung-Woo Ha,Jinwoo Shin | ICLR 2024,Poster | Large language models (LLMs) have made significant advancements in various natural language processing tasks, including question answering (QA) tasks. While incorporating new information with the retrieval of relevant passages is a promising way to improve QA with LLMs, the existing methods often require additional fine-tuning which becomes infeasible with recent LLMs. Augmenting retrieved passages via prompting has the potential to address this limitation, but this direction remains underexplored. To this end, we design a simple yet effective framework to enhance open-domain QA (ODQA) with LLMs, based on the summarized retrieval (SuRe). SuRe helps LLMs predict more accurate answers for a given question, which are well-supported by the summarized retrieval that could be viewed as an explicit rationale extracted from the retrieved passages. Specifically, SuRe first constructs summaries of the retrieved passages for each of the multiple answer candidates. Then, SuRe confirms the most plausible answer from the candidate set by evaluating the validity and ranking of the generated summaries. Experimental results on diverse ODQA benchmarks demonstrate the superiority of SuRe, with improvements of up to 4.6\% in exact match (EM) and 4.0\% in F1 score over standard prompting approaches. SuRe can also be integrated with a broad range of retrieval methods and LLMs. Finally, the generated summaries from SuRe show additional advantages to measure the importance of retrieved passages and serve as more preferred rationales by models and humans. | https://openreview.net/pdf/9192639fe8e3dbb64d5431c85984894b9e1b089d.pdf |
Retrieval meets Long Context Large Language Models | https://openreview.net/forum?id=xw5nxFWMlo | https://openreview.net/forum?id=xw5nxFWMlo | Peng Xu,Wei Ping,Xianchao Wu,Lawrence McAfee,Chen Zhu,Zihan Liu,Sandeep Subramanian,Evelina Bakhturina,Mohammad Shoeybi,Bryan Catanzaro | ICLR 2024,Poster | Extending the context window of large language models (LLMs) has recently become popular, while the solution of augmenting LLMs with retrieval has existed for years. The natural questions are: i) Retrieval-augmentation versus long context window, which one is better for downstream tasks? ii) Can both methods be combined to get the best of both worlds? In this work, we answer these questions by studying both solutions using two state-of-the-art pretrained LLMs, i.e., a proprietary 43B GPT and Llama2-70B. Perhaps surprisingly, we find that an LLM with a 4K context window using simple retrieval-augmentation at generation can achieve performance comparable to a finetuned LLM with a 16K context window via positional interpolation on long context tasks, while taking much less computation. More importantly, we demonstrate that retrieval can significantly improve the performance of LLMs regardless of their extended context window sizes. Our best model, retrieval-augmented Llama2-70B with a 32K context window, outperforms GPT-3.5-turbo-16k and Davinci003 in terms of average score on nine long context tasks including question answering, query-based summarization, and in-context few-shot learning tasks. It also outperforms its non-retrieval Llama2-70B-32k baseline by a margin, while being much faster at generation. Our study provides general insights on the choice of retrieval-augmentation versus long context extension of LLMs for practitioners. | https://openreview.net/pdf/4cd5150fe82d2f05e1ac91ccde87cca2e5f6d8e2.pdf |
Neural Spectral Methods: Self-supervised learning in the spectral domain | https://openreview.net/forum?id=2DbVeuoa6a | https://openreview.net/forum?id=2DbVeuoa6a | Yiheng Du,Nithin Chalapathi,Aditi S. Krishnapriyan | ICLR 2024,Poster | We present Neural Spectral Methods, a technique to solve parametric Partial Differential Equations (PDEs), grounded in classical spectral methods. Our method uses orthogonal bases to learn PDE solutions as mappings between spectral coefficients, instantiating a spectral-based neural operator. In contrast to current machine learning approaches which enforce PDE constraints by minimizing the numerical quadrature of the residuals in the spatiotemporal domain, we leverage Parseval's identity and introduce a new training strategy through a spectral loss. Our spectral loss enables more efficient differentiation through the neural network, and substantially reduces training complexity. At inference time, the computational cost of our method remains constant, regardless of the spatiotemporal resolution of the domain. Our experimental results demonstrate that our method significantly outperforms previous machine learning approaches in terms of speed and accuracy by one to two orders of magnitude on multiple different problems, including reaction-diffusion, and forced and unforced Navier-Stokes equations. When compared to numerical solvers of the same accuracy, our method demonstrates a $10\times$ increase in performance speed. Our source code is publicly available at https://github.com/ASK-Berkeley/Neural-Spectral-Methods. | https://openreview.net/pdf/9aa74e5bf7d501d1a636aee71ec751a621b15eee.pdf |
Kosmos-G: Generating Images in Context with Multimodal Large Language Models | https://openreview.net/forum?id=he6mX9LTyE | https://openreview.net/forum?id=he6mX9LTyE | Xichen Pan,Li Dong,Shaohan Huang,Zhiliang Peng,Wenhu Chen,Furu Wei | ICLR 2024,Poster | Recent advancements in subject-driven image generation have made significant strides. However, current methods still fall short in diverse application scenarios, as they require test-time tuning and cannot accept interleaved multi-image and text input. These limitations keep them far from the ultimate goal of "image as a foreign language in image generation." This paper presents Kosmos-G, a model that leverages the advanced multimodal perception capabilities of Multimodal Large Language Models (MLLMs) to tackle the aforementioned challenge. Our approach aligns the output space of MLLM with CLIP using the textual modality as an anchor and performs compositional instruction tuning on curated data. Kosmos-G demonstrates an impressive capability of zero-shot subject-driven generation with interleaved multi-image and text input. Notably, the score distillation instruction tuning requires no modifications to the image decoder. This allows for a seamless substitution of CLIP and effortless integration with a myriad of U-Net techniques ranging from fine-grained controls to personalized image decoder variants. We posit Kosmos-G as an initial attempt towards the goal of "image as a foreign language in image generation." | https://openreview.net/pdf/8ffe0cc3c3fb4e1f945894b836d9a97b0dbaf9b5.pdf |
Assessing Uncertainty in Similarity Scoring: Performance & Fairness in Face Recognition | https://openreview.net/forum?id=lAhQCHuANV | https://openreview.net/forum?id=lAhQCHuANV | Jean-Rémy Conti,Stephan Clémençon | ICLR 2024,Poster | The ROC curve is the major tool for assessing not only the performance but also the fairness properties of a similarity scoring function. In order to draw reliable conclusions based on empirical ROC analysis, accurately evaluating the uncertainty level related to statistical versions of the ROC curves of interest is absolutely necessary, especially for applications with considerable societal impact such as Face Recognition. In this article, we prove asymptotic guarantees for empirical ROC curves of similarity functions as well as for by-product metrics useful to assess fairness. We also explain that, because the false acceptance/rejection rates are of the form of U-statistics in the case of similarity scoring, the naive bootstrap approach may jeopardize the assessment procedure. A dedicated recentering technique must be used instead. Beyond the theoretical analysis carried out, various experiments using real face image datasets provide strong empirical evidence of the practical relevance of the methods promoted here, when applied to several ROC-based measures such as popular fairness metrics. | https://openreview.net/pdf/2aa88e74521ae92488dd1b2c23c8c9f5996dc778.pdf |
LitCab: Lightweight Language Model Calibration over Short- and Long-form Responses | https://openreview.net/forum?id=jH67LHVOIO | https://openreview.net/forum?id=jH67LHVOIO | Xin Liu,Muhammad Khalifa,Lu Wang | ICLR 2024,Poster | A model is considered well-calibrated when its probability estimate aligns with the actual likelihood of the output being correct. Calibrating language models (LMs) is crucial, as it plays a vital role in detecting and mitigating hallucinations of LMs as well as building more trustworthy models. However, standard calibration techniques may not be suited for LM calibration. For instance, post-processing methods such as temperature scaling do not reorder the candidate generations. On the other hand, training-based methods require fine-tuning the entire model, which is impractical for LMs of large scale. We present LitCab, a lightweight calibration mechanism consisting of a single linear layer that takes the input text representation and predicts a bias term, which is then added to the LM output logits. LitCab improves model calibration by only adding < 2% of the original model parameters. For evaluation, we construct CaT, a benchmark consisting of eight text generation tasks, covering responses ranging from short phrases to paragraphs. We test LitCab with Llama2-7B, where it improves calibration across all tasks, reducing the average ECE score by as much as 30%. We further conduct a comprehensive evaluation with multiple popular open-sourced LMs from the GPT and LLaMA families, yielding the following key findings: (i) Larger models within the same family exhibit better calibration on short generation tasks, but not necessarily on longer ones. (ii) GPT-family models show superior calibration compared to LLaMA, Llama2, and Vicuna models, despite having much fewer parameters. (iii) Fine-tuning a pretrained model (e.g., LLaMA) with samples of limited purpose (e.g., conversations) may lead to worse calibration, highlighting the importance of fine-tuning setups for calibrating LMs. | https://openreview.net/pdf/060aede7175d70a3fe37974ccb7cb976fcfa6486.pdf |
Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources | https://openreview.net/forum?id=cPgh4gWZlz | https://openreview.net/forum?id=cPgh4gWZlz | Xingxuan Li,Ruochen Zhao,Yew Ken Chia,Bosheng Ding,Shafiq Joty,Soujanya Poria,Lidong Bing | ICLR 2024,Poster | We present chain-of-knowledge (CoK), a novel framework that augments large language models (LLMs) by dynamically incorporating grounding information from heterogeneous sources. It results in more factual rationales and reduced hallucination in generation. Specifically, CoK consists of three stages: reasoning preparation, dynamic knowledge adapting, and answer consolidation. Given a knowledge-intensive question, CoK first prepares several preliminary rationales and answers while identifying the relevant knowledge domains. If there is no majority consensus among the answers from samples, CoK corrects the rationales step by step by adapting knowledge from the identified domains. These corrected rationales can plausibly serve as a better foundation for the final answer consolidation. Unlike prior studies that primarily use unstructured data, CoK also leverages structured knowledge sources such as Wikidata and tables that provide more reliable factual information. To access both unstructured and structured knowledge sources in the dynamic knowledge adapting stage, we propose an adaptive query generator that allows the generation of queries for various types of query languages, including SPARQL, SQL, and natural sentences. Moreover, to minimize error propagation between rationales, CoK corrects the rationales progressively using preceding corrected rationales to generate and correct subsequent rationales. Extensive experiments show that CoK consistently improves the performance of LLMs on knowledge-intensive tasks across different domains. | https://openreview.net/pdf/99bf7907ce66cffb067fbb21e933967471cbfdb7.pdf |
Policy Rehearsing: Training Generalizable Policies for Reinforcement Learning | https://openreview.net/forum?id=m3xVPaZp6Z | https://openreview.net/forum?id=m3xVPaZp6Z | Chengxing Jia,Chenxiao Gao,Hao Yin,Fuxiang Zhang,Xiong-Hui Chen,Tian Xu,Lei Yuan,Zongzhang Zhang,Zhi-Hua Zhou,Yang Yu | ICLR 2024,Poster | Human beings can make adaptive decisions in a preparatory manner, i.e., by making preparations in advance, which offers significant advantages in scenarios where both online and offline experiences are expensive and limited. Meanwhile, current reinforcement learning methods commonly rely on numerous environment interactions but hardly obtain generalizable policies. In this paper, we introduce the idea of \textit{rehearsal} into policy optimization, where the agent plans for all possible outcomes in mind and acts adaptively according to actual responses from the environment. To effectively rehearse, we propose ReDM, an algorithm that generates a diverse and eligible set of dynamics models and then rehearse the policy via adaptive training on the generated model set. Rehearsal enables the policy to make decision plans for various hypothetical dynamics and to naturally generalize to previously unseen environments. Our experimental results demonstrate that ReDM is capable of learning a valid policy solely through rehearsal, even with \emph{zero} interaction data. We further extend ReDM to scenarios where limited or mismatched interaction data is available, and our experimental results reveal that ReDM produces high-performing policies compared to other offline RL baselines. | https://openreview.net/pdf/b983c88da15dde7d91c48aee3b97aa22087d7cc0.pdf |
Energy-based Automated Model Evaluation | https://openreview.net/forum?id=CHGcP6lVWd | https://openreview.net/forum?id=CHGcP6lVWd | Ru Peng,Heming Zou,Haobo Wang,Yawen Zeng,Zenan Huang,Junbo Zhao | ICLR 2024,Poster | The conventional evaluation protocols on machine learning models rely heavily on a labeled, i.i.d-assumed testing dataset, which is not often present in real-world applications. The Automated Model Evaluation (AutoEval) offers an alternative to this traditional workflow, by forming a proximal prediction pipeline of the testing performance without the presence of ground-truth labels. Despite its recent successes, the AutoEval frameworks still suffer from an overconfidence issue and substantial storage and computational costs. In that regard, we propose a novel measure --- Meta-Distribution Energy (MDE) --- that allows the AutoEval framework to be both more efficient and effective. The core of the MDE is to establish a meta-distribution statistic on the information (energy) associated with individual samples and then offer a smoother representation enabled by energy-based learning. We further provide our theoretical insights by connecting the MDE with the classification loss. We provide extensive experiments across modalities, datasets and different architectural backbones to validate MDE's effectiveness, together with its superiority compared with prior approaches. We also prove MDE's versatility by showing its seamless integration with large-scale models, and easy adaptation to learning scenarios with noisy or imbalanced labels. | https://openreview.net/pdf/b67cd952981636bb89569bc035666d70b30d02bc.pdf |
Deceptive Fairness Attacks on Graphs via Meta Learning | https://openreview.net/forum?id=iS5ADHNg2A | https://openreview.net/forum?id=iS5ADHNg2A | Jian Kang,Yinglong Xia,Ross Maciejewski,Jiebo Luo,Hanghang Tong | ICLR 2024,Poster | We study deceptive fairness attacks on graphs to answer the following question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively? We answer this question via a bi-level optimization problem and propose a meta learning-based framework named FATE. FATE is broadly applicable with respect to various fairness definitions and graph learning models, as well as arbitrary choices of manipulation operations. We further instantiate FATE to attack statistical parity or individual fairness on graph neural networks. We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification. The experimental results demonstrate that FATE could amplify the bias of graph neural networks with or without fairness consideration while maintaining the utility on the downstream task. We hope this paper provides insights into the adversarial robustness of fair graph learning and can shed light on designing robust and fair graph learning in future studies. | https://openreview.net/pdf/ec8c9c4fa87c072aa9815fe7601e5b37a8b938b2.pdf |
What Matters to You? Towards Visual Representation Alignment for Robot Learning | https://openreview.net/forum?id=CTlUHIKF71 | https://openreview.net/forum?id=CTlUHIKF71 | Thomas Tian,Chenfeng Xu,Masayoshi Tomizuka,Jitendra Malik,Andrea Bajcsy | ICLR 2024,Poster | When operating in service of people, robots need to optimize rewards aligned with end-user preferences. Since robots will rely on raw perceptual inputs, their rewards will inevitably use visual representations. Recently there has been excitement in using representations from pre-trained visual models, but key to making these work in robotics is fine-tuning, which is typically done via proxy tasks like dynamics prediction or enforcing temporal cycle-consistency. However, all these proxy tasks bypass the human’s input on what matters to them, exacerbating spurious correlations and ultimately leading to behaviors that are misaligned with user preferences. In this work, we propose that robots should leverage human feedback to align their visual representations with the end-user and disentangle what matters for the task. We propose Representation-Aligned Preference-based Learning (RAPL), a method for solving the visual representation alignment problem and visual reward learning problem through the lens of preference-based learning and optimal transport. Across experiments in X MAGICAL and in robotic manipulation, we find that RAPL’s reward consistently generates preferred robot behaviors with high sample efficiency, and shows strong zero-shot generalization when the visual representation is learned from a different embodiment than the robot’s. | https://openreview.net/pdf/b089a2d0ee33f551f8ee252854a9a7630830fe59.pdf |
FedDA: Faster Adaptive Gradient Methods for Federated Constrained Optimization | https://openreview.net/forum?id=kjn99xFUF3 | https://openreview.net/forum?id=kjn99xFUF3 | Junyi Li,Feihu Huang,Heng Huang | ICLR 2024,Poster | Federated learning (FL) is an emerging learning paradigm where a set of distributed clients learns a task under the coordination of a server. The FedAvg algorithm is one of the most widely used methods in FL. In FedAvg, the learning rate is a constant rather than changing adaptively. Adaptive gradient methods have demonstrated superior performance over constant learning rate schedules in non-distributed settings, and they have recently been adapted to FL. However, the majority of these methods are designed for unconstrained settings. Meanwhile, many crucial FL applications, like disease diagnosis and biomarker identification, often rely on constrained formulations such as Lasso and group Lasso. It remains an open question as to whether adaptive gradient methods can be effectively applied to FL problems with constraints. In this work, we introduce \textbf{FedDA}, a novel adaptive gradient framework for FL. This framework utilizes a restarted dual averaging technique and is compatible with a range of gradient estimation methods and adaptive learning rate schedules. Specifically, an instantiation of our framework FedDA-MVR achieves sample complexity $\tilde{O}(K^{-1}\epsilon^{-1.5})$ and communication complexity $\tilde{O}(K^{-0.25}\epsilon^{-1.25})$ for finding an $\epsilon$-stationary point in the constrained setting, where $K$ is the number of clients. We conduct experiments over both constrained and unconstrained tasks to confirm the effectiveness of our approach. | https://openreview.net/pdf/db0f03c6b9016274933d2407dddcaef05789c2f2.pdf |
Extending Power of Nature from Binary to Real-Valued Graph Learning in Real World | https://openreview.net/forum?id=qT7DXUmX7j | https://openreview.net/forum?id=qT7DXUmX7j | Chunshu Wu,Ruibing Song,Chuan Liu,Yunan Yang,Ang Li,Michael Huang,Tong Geng | ICLR 2024,Poster | Nature performs complex computations constantly at clearly lower cost and higher performance than digital computers. It is crucial to understand how to harness the unique computational power of nature in Machine Learning (ML). In the past decade, besides the development of Neural Networks (NNs), the community has also relentlessly explored nature-powered ML paradigms. Although most of them are still predominantly theoretical, a new practical paradigm enabled by the recent advent of CMOS-compatible room-temperature nature-based computers has emerged. By harnessing a dynamical system's intrinsic behavior of chasing the lowest energy state, this paradigm can solve some simple binary problems delivering considerable speedup and energy savings compared with NNs, while maintaining comparable accuracy. Regrettably, its values to the real world are highly constrained by its binary nature. A clear pathway to its extension to real-valued problems remains elusive. This paper aims to unleash this pathway by proposing a novel end-to-end Nature-Powered Graph Learning (NP-GL) framework. Specifically, through a three-dimensional co-design, NP-GL can leverage the spontaneous energy decrease in nature to efficiently solve real-valued graph learning problems. Experimental results across 4 real-world applications with 6 datasets demonstrate that NP-GL delivers, on average, $6.97\times 10^3$ speedup and $10^5$ energy consumption reduction with comparable or even higher accuracy than Graph Neural Networks (GNNs). | https://openreview.net/pdf/c3a5eccec09e9fea31f8e2a25c42986e31463191.pdf |
Meta-VBO: Utilizing Prior Tasks in Optimizing Risk Measures with Gaussian Processes | https://openreview.net/forum?id=ElykcDu5YK | https://openreview.net/forum?id=ElykcDu5YK | Quoc Phong Nguyen,Bryan Kian Hsiang Low,Patrick Jaillet | ICLR 2024,Poster | Research on optimizing the risk measure of a blackbox function using Gaussian processes, especially Bayesian optimization (BO) of risk measures, has become increasingly important due to the inevitable presence of uncontrollable variables in real-world applications. Nevertheless, existing works on BO of risk measures start the optimization from scratch for every new task without considering the results of prior tasks. In contrast, its vanilla BO counterpart has received a thorough investigation on utilizing prior tasks to speed up the current task through the body of works on meta-BO which, however, have not considered risk measures. To bridge this gap, this paper presents the first algorithm for meta-BO of risk measures (i.e., value-at-risk (VaR) and the conditional VaR), namely meta-VBO, by introducing a novel adjustment to the upper confidence bound acquisition function. Our proposed algorithm exhibits two desirable properties: (i) invariance to scaling and vertical shifting of the blackbox function and (ii) robustness to harmful prior tasks. We provide a theoretical performance guarantee for our algorithm and empirically demonstrate its performance using several synthetic function benchmarks and real-world objective functions. | https://openreview.net/pdf/438a0598bfd4e1457d158d87209039d77cfd4c53.pdf |
Data Debugging with Shapley Importance over Machine Learning Pipelines | https://openreview.net/forum?id=qxGXjWxabq | https://openreview.net/forum?id=qxGXjWxabq | Bojan Karlaš,David Dao,Matteo Interlandi,Sebastian Schelter,Wentao Wu,Ce Zhang | ICLR 2024,Poster | When a machine learning (ML) model exhibits poor quality (e.g., poor accuracy or fairness), the problem can often be traced back to errors in the training data. Being able to discover the data examples that are the most likely culprits is a fundamental concern that has received a lot of attention recently. One prominent way to measure "data importance" with respect to model quality is the Shapley value. Unfortunately, existing methods only focus on the ML model in isolation, without considering the broader ML pipeline for data preparation and feature extraction, which appears in the majority of real-world ML code. This presents a major limitation to applying existing methods in practical settings. In this paper, we propose Datascope, a method for efficiently computing Shapley-based data importance over ML pipelines. We introduce several approximations that lead to dramatic improvements in terms of computational speed. Finally, our experimental evaluation demonstrates that our methods are capable of data error discovery that is as effective as existing Monte Carlo baselines, and in some cases even outperform them. We release our code as an open-source data debugging library available at https://github.com/easeml/datascope. | https://openreview.net/pdf/102cae20a16e15e1c544b446d7ec05ad7b8f036a.pdf |
Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning | https://openreview.net/forum?id=4kLVvIh8cp | https://openreview.net/forum?id=4kLVvIh8cp | Qiwei Di,Heyang Zhao,Jiafan He,Quanquan Gu | ICLR 2024,Poster | Offline reinforcement learning (RL), where the agent aims to learn the optimal policy based on the data collected by a behavior policy, has attracted increasing attention in recent years. While offline RL with linear function approximation has been extensively studied with optimal results achieved under certain assumptions, many works have shifted their interest to offline RL with non-linear function approximation. However, few works on offline RL with non-linear function approximation provide instance-dependent regret guarantees. In this paper, we propose an oracle-efficient algorithm, dubbed Pessimistic Nonlinear Least-Squares Value Iteration (PNLSVI), for offline RL with non-linear function approximation. Our algorithmic design comprises three innovative components: (1) a variance-based weighted regression scheme that can be applied to a wide range of function classes, (2) a subroutine for variance estimation, and (3) a planning phase that utilizes a pessimistic value iteration approach. Our algorithm enjoys a regret bound that has a tight dependency on the function class complexity and achieves minimax optimal instance-dependent regret when specialized to linear function approximation. Our work extends the previous instance-dependent results within simpler function classes, such as linear and differentiable functions, to a more general framework. To the best of our knowledge, this is the first statistically optimal algorithm for nonlinear offline RL. | https://openreview.net/pdf/eccb6a13ad39f4a83f32bd2e74f2359e20086e81.pdf |
SKILL-MIX: a Flexible and Expandable Family of Evaluations for AI Models | https://openreview.net/forum?id=Jf5gplvglq | https://openreview.net/forum?id=Jf5gplvglq | Dingli Yu,Simran Kaur,Arushi Gupta,Jonah Brown-Cohen,Anirudh Goyal,Sanjeev Arora | ICLR 2024,Poster | With LLMs shifting their role from statistical modeling of language to serving as general-purpose AI agents, how should LLM evaluations change? Arguably, a key ability of an AI agent is to flexibly combine, as needed, the basic skills it has learned. The capability to combine skills plays an important role in (human) pedagogy and also in a paper on emergence phenomena (Arora & Goyal, 2023). This work introduces SKILL-MIX, a new evaluation to measure the ability to combine skills. Using a list of $N$ skills, the evaluator repeatedly picks random subsets of $k$ skills and asks the LLM to produce text combining that subset of skills. Since the number of subsets grows like $N^k$, for even modest $k$ this evaluation will, with high probability, require the LLM to produce text significantly different from any text in the training set. The paper develops a methodology for (a) designing and administering such an evaluation, and (b) automatic grading (plus spot-checking by humans) of the results using GPT-4 as well as the open LLaMA-2 70B model. Administering a version of SKILL-MIX to popular chatbots gave results that, while generally in line with prior expectations, contained surprises. Sizeable differences exist among model capabilities that are not captured by their ranking on popular LLM leaderboards ("cramming for the leaderboard"). Furthermore, simple probability calculations indicate that GPT-4's reasonable performance on $k=5$ is suggestive of going beyond "stochastic parrot" behavior (Bender et al., 2021), i.e., it combines skills in ways that it had not seen during training. We sketch how the methodology can lead to a SKILL-MIX based eco-system of open evaluations for AI capabilities of future models. We maintain a leaderboard of SKILL-MIX at [https://skill-mix.github.io](https://skill-mix.github.io). | https://openreview.net/pdf/2f6d42f9e2ebcdc95ea494a075e5cd3fc7e5e119.pdf |
A Quadratic Synchronization Rule for Distributed Deep Learning | https://openreview.net/forum?id=yroyhkhWS6 | https://openreview.net/forum?id=yroyhkhWS6 | Xinran Gu,Kaifeng Lyu,Sanjeev Arora,Jingzhao Zhang,Longbo Huang | ICLR 2024,Poster | In distributed deep learning with data parallelism, synchronizing gradients at each training step can cause a huge communication overhead, especially when many nodes work together to train large models. Local gradient methods, such as Local SGD, address this issue by allowing workers to compute locally for $H$ steps without synchronizing with others, hence reducing communication frequency. While $H$ has been viewed as a hyperparameter to trade optimization efficiency for communication cost, recent research indicates that setting a proper $H$ value can lead to generalization improvement. Yet, selecting a proper $H$ is elusive. This work proposes a theory-grounded method for determining $H$, named the Quadratic Synchronization Rule (QSR), which recommends dynamically setting $H$ in proportion to $\frac{1}{\eta^2}$ as the learning rate $\eta$ decays over time. Extensive ImageNet experiments on ResNet and ViT show that local gradient methods with QSR consistently improve the test accuracy over other synchronization strategies. Compared to the standard data parallel training, QSR enables Local AdamW to cut the training time on 16 or 64 GPUs down from 26.7 to 20.2 hours or from 8.6 to 5.5 hours and, at the same time, achieves 1.16% or 0.84% higher top-1 validation accuracy. | https://openreview.net/pdf/d7d0725ff6973ff72e22785c8490bab873333e1c.pdf |
ArchLock: Locking DNN Transferability at the Architecture Level with a Zero-Cost Binary Predictor | https://openreview.net/forum?id=e2YOVTenU9 | https://openreview.net/forum?id=e2YOVTenU9 | Tong Zhou,Shaolei Ren,Xiaolin Xu | ICLR 2024,Poster | Deep neural network (DNN) models, despite their impressive performance, are vulnerable to exploitation by attackers who attempt to transfer them to other tasks for their own benefit. Current defense strategies mainly address this vulnerability at the model parameter level, leaving the potential of architectural-level defense largely unexplored. This paper, for the first time, addresses the issue of model protection by reducing transferability at the architecture level. Specifically, we present a novel neural architecture search (NAS)-enabled algorithm that employs zero-cost proxies and evolutionary search, to explore model architectures with low transferability. Our method, namely ArchLock, aims to achieve high performance on the source task, while degrading the performance on potential target tasks, i.e., locking the transferability of a DNN model. To achieve efficient cross-task search without accurately knowing the training data owned by the attackers, we utilize zero-cost proxies to speed up architecture evaluation and simulate potential target task embeddings to assist cross-task search with a binary performance predictor. Extensive experiments on NAS-Bench-201 and TransNAS-Bench-101 demonstrate that ArchLock reduces transferability by up to 30% and 50%, respectively, with negligible performance degradation on source tasks (<2%). The code is available at https://github.com/Tongzhou0101/ArchLock. | https://openreview.net/pdf/c801da252c8a18ba94f5b374ffd9515a915c541d.pdf |
Leftover Lunch: Advantage-based Offline Reinforcement Learning for Language Models | https://openreview.net/forum?id=ZDGKPbF0VQ | https://openreview.net/forum?id=ZDGKPbF0VQ | Ashutosh Baheti,Ximing Lu,Faeze Brahman,Ronan Le Bras,Maarten Sap,Mark Riedl | ICLR 2024,Poster | Reinforcement Learning with Human Feedback (RLHF) is the most prominent method for Language Model (LM) alignment. However, RLHF is an unstable and data-hungry process that continually requires new high-quality LM-generated data for finetuning. We introduce Advantage-Leftover Lunch RL (A-LoL), a new class of offline policy gradient algorithms that enable RL training on any pre-existing data. By treating the entire LM output sequence as a single action, A-LoL allows incorporating sequence-level classifiers or human-designed scoring functions as
rewards. Subsequently, by using the LM’s value estimate, A-LoL trains only on positive-advantage (leftover) data points, making it resilient to noise. Overall, A-LoL is an easy-to-implement, sample-efficient, and stable LM training recipe.
We demonstrate the effectiveness of A-LoL and its variants with a set of four different language generation tasks. We compare against both online RL (PPO) and recent preference-based (DPO, PRO) and reward-based (GOLD) offline RL baselines. On the commonly-used RLHF benchmark, Helpful and Harmless Assistant (HHA), LMs trained with A-LoL methods achieve the highest diversity while also being rated more safe and helpful than the baselines according to humans. Additionally, in the remaining three tasks, A-LoL could optimize multiple distinct reward functions even when using noisy or suboptimal training data. | https://openreview.net/pdf/12ee80d1f10cff1b285c1198d93c7a1190f0dac3.pdf |
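A minimal sketch of the advantage-filtering idea on sequence-level data: compute advantage = reward minus value per sequence, keep only the positive-advantage "leftover" points, and maximize advantage-weighted likelihood. This omits the paper's importance-weighting variants and value-model training; the function and tensor names are ours:

```python
import torch

def a_lol_loss(seq_logprobs, rewards, values):
    """Advantage-Leftover-Lunch style loss on a batch of full sequences.

    seq_logprobs: (B,) summed log-probs of each output sequence under the policy LM.
    rewards:      (B,) sequence-level scores (classifier or human-designed).
    values:       (B,) the LM's value estimates for each input.
    Only positive-advantage (leftover) examples contribute; a simplified
    sketch of the idea, not the paper's full training recipe."""
    advantages = rewards - values
    mask = (advantages > 0).float()
    # maximize advantage-weighted likelihood on the surviving data points
    return -(mask * advantages.detach() * seq_logprobs).sum() / mask.sum().clamp(min=1)

loss = a_lol_loss(seq_logprobs=torch.tensor([-12.3, -8.1, -20.5]),
                  rewards=torch.tensor([0.9, 0.2, 0.7]),
                  values=torch.tensor([0.5, 0.6, 0.4]))
print(loss)  # only the two positive-advantage sequences contribute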
RECOMP: Improving Retrieval-Augmented LMs with Context Compression and Selective Augmentation | https://openreview.net/forum?id=mlJLVigNHp | https://openreview.net/forum?id=mlJLVigNHp | Fangyuan Xu,Weijia Shi,Eunsol Choi | ICLR 2024,Poster | Retrieval-augmented language models improve language models (LMs) by retrieving documents and prepending them in-context.
However, these documents, often spanning hundreds of words, make inference substantially less efficient. We propose compressing the retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also relieves the burden on LMs of identifying relevant information in long retrieved documents. We present two compressors -- an extractive compressor which selects useful sentences from retrieved documents and an abstractive compressor which generates a summary by synthesizing information from multiple documents. Both are trained to achieve a performance gain in LMs when we prepend the generated summary from the compressor to the LMs' input, while minimizing the summary length. When retrieved documents are irrelevant to the input or offer no additional information to the LM, our compressors output an empty string, enabling selective augmentation. We evaluate our approach on the language modeling task and open domain question answering task. We achieve a compression rate of as low as 6% with minimal loss in performance for both tasks, significantly outperforming the off-the-shelf summarization models. We show that our compressors trained for one LM can transfer to other LMs on the language modeling task and provide a summary largely faithful to the retrieved documents. | https://openreview.net/pdf/43938c98697c89512480fceb61ff554001727889.pdf |
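A toy stand-in for the extractive compressor: rank retrieved sentences against the query and return an empty string when nothing clears a relevance threshold, mirroring the selective-augmentation behavior. We substitute TF-IDF cosine similarity for the paper's trained compressor, so this is illustrative only:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_compress(question, sentences, top_n=2, min_sim=0.1):
    """Rank candidate sentences by similarity to the question; keep the top
    few, or return "" if nothing is relevant (selective augmentation)."""
    vecs = TfidfVectorizer().fit_transform([question] + sentences)
    sims = cosine_similarity(vecs[0], vecs[1:]).ravel()
    ranked = sorted(zip(sentences, sims), key=lambda t: -t[1])[:top_n]
    return " ".join(s for s, sim in ranked if sim >= min_sim)

docs = ["The Eiffel Tower was completed in 1889.",
        "Paris hosts the Olympics in 2024.",
        "Gustave Eiffel's company designed the tower."]
print(extractive_compress("When was the Eiffel Tower built?", docs))
```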
Gen-Z: Generative Zero-Shot Text Classification with Contextualized Label Descriptions | https://openreview.net/forum?id=rkplYfqUr0 | https://openreview.net/forum?id=rkplYfqUr0 | Sachin Kumar,Chan Young Park,Yulia Tsvetkov | ICLR 2024,Poster | Language model (LM) prompting—a popular paradigm for solving NLP tasks—has been shown to be susceptible to miscalibration and brittleness to slight prompt variations, caused by its discriminative prompting approach, i.e., predicting the label given the input. To address these issues, we propose Gen-Z—a generative prompting framework for zero-shot text classification. Gen-Z is generative, as it measures the LM likelihood of input text, conditioned on natural language descriptions of labels. The framework is multivariate, as label descriptions allow us to seamlessly integrate additional contextual information about the labels to improve task performance. On various standard classification benchmarks, with six open-source LM families, we show that zero-shot classification with simple contextualization of the data source of the evaluation set consistently outperforms both zero-shot and few-shot baselines while improving robustness to prompt variations. Further, our approach enables personalizing classification in a zero-shot manner by incorporating author, subject, or reader information in the label descriptions. | https://openreview.net/pdf/2c0276dcc674aef018e1899f39fed7767c226540.pdf |
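The generative scoring rule is simple to sketch: pick the label whose natural-language description, used as context, assigns the highest likelihood to the input text. Below is a hedged sketch with GPT-2 standing in for the open LMs evaluated in the paper; the label descriptions are invented examples:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any open causal LM works for this sketch
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def conditional_logprob(text: str, context: str) -> float:
    """log p(text | context) under the LM, summed over the text tokens."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    txt_ids = tok(text, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, txt_ids], dim=1)
    logprobs = lm(ids).logits[0, :-1].log_softmax(-1)  # next-token predictions
    targets = ids[0, 1:]
    start = ctx_ids.shape[1] - 1  # first position that predicts a text token
    return logprobs[start:].gather(1, targets[start:, None]).sum().item()

# Gen-Z style: score the *input* under each contextualized label description.
x = "This movie was a complete waste of time."
labels = {
    "negative": "The following movie review expresses a negative opinion:",
    "positive": "The following movie review expresses a positive opinion:",
}
print(max(labels, key=lambda y: conditional_logprob(x, labels[y])))
```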
In-Context Learning Dynamics with Random Binary Sequences | https://openreview.net/forum?id=62K7mALO2q | https://openreview.net/forum?id=62K7mALO2q | Eric J Bigelow,Ekdeep Singh Lubana,Robert P. Dick,Hidenori Tanaka,Tomer Ullman | ICLR 2024,Poster | Large language models (LLMs) trained on huge text datasets demonstrate intriguing capabilities, achieving state-of-the-art performance on tasks they were not explicitly trained for. The precise nature of LLM capabilities is often mysterious, and different prompts can elicit different capabilities through in-context learning. We propose a framework that enables us to analyze in-context learning dynamics to understand latent concepts underlying LLMs’ behavioral patterns. This provides a more nuanced understanding than success-or-failure evaluation benchmarks, but does not require observing internal activations as a mechanistic interpretation of circuits would. Inspired by the cognitive science of human randomness perception, we use random binary sequences as context and study dynamics of in-context learning by manipulating properties of context data, such as sequence length. In the latest GPT-3.5+ models, we find emergent abilities to generate seemingly random numbers and learn basic formal languages, with striking in-context learning dynamics where model outputs transition sharply from seemingly random behaviors to deterministic repetition. | https://openreview.net/pdf/b3b2da0b2a8f41b25ce90b041f1fad321dd98835.pdf |
Dichotomy of Early and Late Phase Implicit Biases Can Provably Induce Grokking | https://openreview.net/forum?id=XsHqr9dEGH | https://openreview.net/forum?id=XsHqr9dEGH | Kaifeng Lyu,Jikai Jin,Zhiyuan Li,Simon Shaolei Du,Jason D. Lee,Wei Hu | ICLR 2024,Poster | Recent work by Power et al. (2022) highlighted a surprising "grokking" phenomenon in learning arithmetic tasks: a neural net first "memorizes" the training set, resulting in perfect training accuracy but near-random test accuracy, and after training for sufficiently longer, it suddenly transitions to perfect test accuracy. This paper studies the grokking phenomenon in theoretical setups and shows that it can be induced by a dichotomy of early and late phase implicit biases. Specifically, when training homogeneous neural nets with large initialization and small weight decay on both classification and regression tasks, we prove that the training process gets trapped at a solution corresponding to a kernel predictor for a long time, and then a very sharp transition to min-norm/max-margin predictors occurs, leading to a dramatic change in test accuracy. | https://openreview.net/pdf/a44b7ffbd53a80fbd4d969f2e39aa59edf5c8012.pdf |
Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference | https://openreview.net/forum?id=52fz5sUAy2 | https://openreview.net/forum?id=52fz5sUAy2 | Haoxuan Li,Chunyuan Zheng,Sihao Ding,Peng Wu,Zhi Geng,Fuli Feng,Xiangnan He | ICLR 2024,Poster | Selection bias in recommender systems arises from the recommendation process of system filtering and the interactive process of user selection. Many previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model, but ignore the fact that potential outcomes for a given user-item pair may vary with the treatments assigned to other user-item pairs, a phenomenon termed the neighborhood effect. To fill the gap, this paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference, and introduces a treatment representation to capture the neighborhood effect. On this basis, we propose a novel ideal loss that can be used to deal with selection bias in the presence of the neighborhood effect. We further develop two new estimators for estimating the proposed ideal loss. We theoretically establish the connection between the proposed and previous debiasing methods ignoring the neighborhood effect, showing that the proposed methods can achieve unbiased learning when both selection bias and neighborhood effects are present, while the existing methods are biased. Extensive semi-synthetic and real-world experiments are conducted to demonstrate the effectiveness of the proposed methods. | https://openreview.net/pdf/9205f9cf9861ea57ce78d3007b54e1bcec2e5df6.pdf |
PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization | https://openreview.net/forum?id=22pyNMuIoa | https://openreview.net/forum?id=22pyNMuIoa | Xinyuan Wang,Chenxi Li,Zhen Wang,Fan Bai,Haotian Luo,Jiayou Zhang,Nebojsa Jojic,Eric Xing,Zhiting Hu | ICLR 2024,Poster | Expert-level prompts, carefully engineered by human experts who have a deep understanding of both large language models (LLMs) and domain knowledge, are the future of prompting and pivotal to harnessing the full power of advanced LLMs. Discovering such prompts with an automated process remains a sought-after and unresolved challenge. Existing prompt optimization techniques, though automated through iterative sampling, often fall short in injecting domain knowledge and exploring the vast prompt space for complex expert-level prompts efficiently. To address this pressing need and achieve expert-level prompting, we introduce PromptAgent, which autonomously discovers prompts equivalent in quality to those handcrafted by experts. At its core, PromptAgent views prompt optimization as a strategic planning problem and employs a principled planning algorithm (rooted in Monte Carlo Tree Search) to strategically explore the vast expert-level prompt space. PromptAgent interacts with the LLM in a human-like trial-and-error manner during the planning, and injects expert-level knowledge by reflecting on model errors and generating insightful error feedback. This novel formulation allows it to iteratively evaluate intermediate prompts, refine them based on errors, simulate future rewards, and search for high-reward paths leading to expert-level prompts. We apply PromptAgent to 12 tasks spanning three practical domains: BIG-Bench Hard (BBH), domain-expert, and general NLU tasks, showing PromptAgent consistently outperforms strong prompting and prompt optimization baselines by great margins. Our qualitative analysis further emphasizes PromptAgent's capability to distill insightful errors into expert-level prompts. | https://openreview.net/pdf/d4dbdee0d105cd3020bffdc2f56d99c33429d49c.pdf |
Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning | https://openreview.net/forum?id=JnRStoIuTe | https://openreview.net/forum?id=JnRStoIuTe | Patrik Okanovic,Roger Waleffe,Vasilis Mageirakos,Konstantinos Nikolakakis,Amin Karbasi,Dionysios Kalogerias,Nezihe Merve Gürel,Theodoros Rekatsinas | ICLR 2024,Poster | Methods for carefully selecting or generating a small set of training data to learn from, i.e., data pruning, coreset selection, and dataset distillation, have been shown to be effective in reducing the ever-increasing cost of training neural networks. Behind this success are rigorously designed, yet expensive, strategies for identifying the most informative training examples out of large datasets. In this work, we revisit these methods to understand if the additional computational costs associated with such strategies are justified from the perspective of time-to-accuracy, which has become a critical efficiency measure of deep neural network training over large datasets. Surprisingly, we find that many of the recently proposed methods underperform what we call Repeated Sampling of Random Subsets (RSRS or RS2), a powerful yet overlooked extension of the standard random baseline that learns from repeatedly sampled data throughout training instead of a fixed random subset. We test RS2 against thirty-two state-of-the-art data pruning and distillation methods across four datasets including ImageNet. Our results demonstrate that RS2 significantly reduces time-to-accuracy, particularly in practical regimes where accuracy, but not runtime, is similar to that of training on the full dataset. For example, when training ResNet-18 on ImageNet, with 10\% of the dataset sampled each epoch, RS2 reaches an accuracy of 66\% versus 69\% when training with the full dataset. The best competing method achieves only 55\% while training 1.6$\times$ slower than RS2. Beyond the above meta-study, we discuss the theoretical properties of RS2 such as its convergence rate and generalization error. Our primary goal is to highlight that future works aiming to minimize total training cost via subset selection need to 1) consider the total computation cost (including preparing the subset) and 2) aim to outperform a simple extension of random sampling (i.e., RS2). | https://openreview.net/pdf/d7e4c9daefbd1d5624c0875538fb1fb95b9a2ce9.pdf |
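RS2 itself is a few lines of code: instead of committing to one pruned subset, draw a fresh random subset of the training set every epoch. A sketch with PyTorch (the dataset, batch size, and training loop are placeholders):

```python
import torch
from torch.utils.data import DataLoader, Subset

def rs2_loader(dataset, frac, epoch, batch_size=256):
    """Repeated Sampling of Random Subsets: draw a *fresh* random subset
    every epoch (seeded per epoch) instead of fixing one pruned set."""
    g = torch.Generator().manual_seed(epoch)  # resample each epoch
    n_keep = int(frac * len(dataset))
    idx = torch.randperm(len(dataset), generator=g)[:n_keep]
    return DataLoader(Subset(dataset, idx.tolist()),
                      batch_size=batch_size, shuffle=True)

# for epoch in range(num_epochs):
#     for batch in rs2_loader(train_set, frac=0.10, epoch=epoch):
#         train_step(batch)
```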
Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs | https://openreview.net/forum?id=kGteeZ18Ir | https://openreview.net/forum?id=kGteeZ18Ir | Shashank Gupta,Vaishnavi Shrivastava,Ameet Deshpande,Ashwin Kalyan,Peter Clark,Ashish Sabharwal,Tushar Khot | ICLR 2024,Poster | Recent works have showcased the ability of large-scale language models (LLMs) to embody diverse personas in their responses, exemplified by prompts like ‘_You are Yoda. Explain the Theory of Relativity._’ While this ability allows personalization of LLMs and enables human behavior simulation, its effect on LLMs’ capabilities remains unclear. To fill this gap, we present the first extensive study of the unintended side-effects of persona assignment on the ability of LLMs to perform _basic reasoning tasks_. Our study covers 24 reasoning datasets (spanning mathematics, law, medicine, morals, and more), 4 LLMs (2 versions of ChatGPT-3.5, GPT-4-Turbo, and Llama-2-70b-chat), and 19 diverse personas (e.g., ‘an Asian person’) spanning 5 socio-demographic groups: race, gender, religion, disability, and political affiliation. Our experiments unveil that LLMs harbor deep rooted bias against various socio-demographics underneath a veneer of fairness. While they overtly reject stereotypes when explicitly asked (‘_Are Black people less skilled at mathematics?_’), they manifest stereotypical and often erroneous presumptions when prompted to answer questions while adopting a persona. These can be observed as abstentions in the model’s response, e.g., ‘_As a Black person, I am unable to answer this question as it requires math knowledge_’, and generally result in a substantial drop in performance on reasoning tasks. Our experiments with ChatGPT-3.5 show that this bias is _ubiquitous_—80% of our personas demonstrate bias; it is _significant_—some datasets show performance drops of 70%+; and can be especially _harmful for certain groups_—some personas suffer statistically significant drops on 80%+ of the datasets. Overall, all four LLMs exhibit persona-induced bias to varying extents, with GPT-4-Turbo showing the least but still a problematic amount of bias (evident in 42% of the personas). Further analysis shows that these persona-induced errors can be hard-to-discern as they do not always manifest as explicit abstentions, and can also be hard-to-avoid—we find de-biasing prompts to have minimal to no effect. Our findings serve as a cautionary tale that the practice of assigning personas to LLMs—a trend on the rise—can surface their deep-rooted biases and have unforeseeable and detrimental side-effects. | https://openreview.net/pdf/80ad8992d5c1096ee5f775cfb3ce54c4de41a376.pdf |
Enhancing Instance-Level Image Classification with Set-Level Labels | https://openreview.net/forum?id=AZW3qlCGTe | https://openreview.net/forum?id=AZW3qlCGTe | Renyu Zhang,Aly A Khan,Yuxin Chen,Robert L. Grossman | ICLR 2024,Poster | Instance-level image classification tasks have traditionally relied on single-instance labels to train models, e.g., few-shot learning and transfer learning. However, set-level coarse-grained labels that capture relationships among instances can provide richer information in real-world scenarios. In this paper, we present a novel approach to enhance instance-level image classification by leveraging set-level labels. We provide a theoretical analysis of the proposed method, including recognition conditions for fast excess risk rate, shedding light on the theoretical foundations of our approach. We conducted experiments on two distinct categories of datasets: natural image datasets and histopathology image datasets. Our experimental results demonstrate the effectiveness of our approach, showcasing improved classification performance compared to traditional single-instance label-based methods. Notably, our algorithm achieves a 13\% improvement in classification accuracy compared to the strongest baseline on the histopathology image classification benchmarks. Importantly, our experimental findings align with the theoretical analysis, reinforcing the robustness and reliability of our proposed method. This work bridges the gap between instance-level and set-level image classification, offering a promising avenue for advancing the capabilities of image classification models with set-level coarse-grained labels. | https://openreview.net/pdf/c61a439433658edb2a1f929a97ec263da9c31fea.pdf |
Pushing Boundaries: Mixup's Influence on Neural Collapse | https://openreview.net/forum?id=jTSKkcbEsj | https://openreview.net/forum?id=jTSKkcbEsj | Quinn LeBlanc Fisher,Haoming Meng,Vardan Papyan | ICLR 2024,Poster | Mixup is a data augmentation strategy that employs convex combinations of training instances and their respective labels to improve the robustness and calibration of deep neural networks. Despite its widespread adoption, the nuanced mechanisms that underpin its success are not entirely understood. The observed phenomenon of Neural Collapse, where the last-layer activations and classifier of deep networks converge to a simplex equiangular tight frame (ETF), provides a compelling motivation to explore whether mixup induces alternative geometric configurations and whether those could explain its success. In this study, we delve into the last-layer activations of training data for deep networks subjected to mixup, aiming to uncover insights into its operational efficacy. Our investigation, spanning various architectures and dataset pairs, reveals that mixup's last-layer activations predominantly converge to a distinctive configuration different than one might expect. In this configuration, activations from mixed-up examples of identical classes align with the classifier, while those from different classes delineate channels along the decision boundary. These findings are unexpected, as mixed-up features are not simple convex combinations of feature class means (as one might get, for example, by training mixup with the mean squared error loss). By analyzing this distinctive geometric configuration, we elucidate the mechanisms by which mixup enhances model calibration. To further validate our empirical observations, we conduct a theoretical analysis under the assumption of an unconstrained features model, utilizing the mixup loss. Through this, we characterize and derive the optimal last-layer features under the assumption that the classifier forms a simplex ETF. | https://openreview.net/pdf/fc3d9e6d2cf3f29c440ba403f9ad95a4ef697601.pdf |
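For reference, the mixup operation under study is the standard convex combination of inputs and one-hot labels; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, num_classes, alpha=1.0):
    """Standard mixup: convex combinations of training instances and their
    one-hot labels, with mixing weight lam ~ Beta(alpha, alpha)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y1 = F.one_hot(y, num_classes).float()
    y_mix = lam * y1 + (1 - lam) * y1[perm]
    return x_mix, y_mix

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_mix, y_mix = mixup_batch(x, y, num_classes=10)
print(x_mix.shape, y_mix.shape)  # [8, 3, 32, 32], [8, 10]
```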
sRGB Real Noise Modeling via Noise-Aware Sampling with Normalizing Flows | https://openreview.net/forum?id=2XBBumBGeP | https://openreview.net/forum?id=2XBBumBGeP | Dongjin Kim,Donggoo Jung,Sungyong Baik,Tae Hyun Kim | ICLR 2024,Poster | Noise poses a widespread challenge in signal processing, particularly when it comes to denoising images. Although convolutional neural networks (CNNs) have exhibited remarkable success in this field, they are predicated upon the assumption that noise follows established distributions, which restricts their practicality when dealing with real-world noise. To overcome this limitation, several efforts have been taken to collect noisy image datasets from the real world. Generative methods, employing techniques such as generative adversarial networks (GANs) and normalizing flows (NFs), have emerged as a solution for generating realistic noisy images. Recent works model noise using camera metadata; however, they require that metadata even during the sampling phase. In contrast, in this work, we aim to estimate the underlying camera settings, enabling us to improve noise modeling and generate diverse noise distributions. To this end, we introduce a new NF framework that allows us to both classify noise based on camera settings and generate various noisy images. In experiments, our model demonstrates exceptional noise quality and leading denoising performance on benchmark datasets. | https://openreview.net/pdf/4e4c68a8b09ae4ef7e4d0ff1f101a225642f3723.pdf |
Uncertainty-aware Graph-based Hyperspectral Image Classification | https://openreview.net/forum?id=8dN7gApKm3 | https://openreview.net/forum?id=8dN7gApKm3 | Linlin Yu,Yifei Lou,Feng Chen | ICLR 2024,Poster | Hyperspectral imaging (HSI) technology captures spectral information across a broad wavelength range, providing richer pixel features compared to traditional color images with only three channels. Although pixel classification in HSI has been extensively studied, especially using graph convolution neural networks (GCNs), quantifying epistemic and aleatoric uncertainties associated with the HSI classification (HSIC) results remains an unexplored area. These two uncertainties are effective for out-of-distribution (OOD) and misclassification detection, respectively. In this paper, we adapt two advanced uncertainty quantification models, evidential GCNs (EGCN) and graph posterior networks (GPN), designed for node classifications in graphs, into the realm of HSIC. We first reveal theoretically that a popular uncertainty cross-entropy (UCE) loss function is insufficient to produce good epistemic uncertainty when learning EGCNs. To mitigate the limitations, we propose two regularization terms. One leverages the inherent property of HSI data where each feature vector is a linear combination of the spectra signatures of the confounding materials, while the other is the total variation (TV) regularization to enforce the spatial smoothness of the evidence with edge-preserving. We demonstrate the effectiveness of the proposed regularization terms on both EGCN and GPN on three real-world HSIC datasets for OOD and misclassification detection tasks. The code is available at GitHub. | https://openreview.net/pdf/5cb7dbbaad37d6d8ad2e4be0826caf667a69732a.pdf |
Generative Adversarial Equilibrium Solvers | https://openreview.net/forum?id=TlyiaPXaVN | https://openreview.net/forum?id=TlyiaPXaVN | Denizalp Goktas,David C. Parkes,Ian Gemp,Luke Marris,Georgios Piliouras,Romuald Elie,Guy Lever,Andrea Tacchetti | ICLR 2024,Poster | We introduce the use of generative adversarial learning to compute equilibria in general game-theoretic settings, specifically the generalized Nash equilibrium (GNE) in pseudo-games, and its specific instantiation as the competitive equilibrium (CE) in Arrow-Debreu competitive economies. Pseudo-games are a generalization of games in which players' actions affect not only the payoffs of other players but also their feasible action spaces. Although the computation of GNE and CE is intractable in the worst-case, i.e., PPAD-hard, in practice, many applications only require solutions with high accuracy in expectation over a distribution of problem instances. We introduce Generative Adversarial Equilibrium Solvers (GAES): a family of generative adversarial neural networks that can learn GNE and CE from only a sample of problem instances. We provide computational and sample complexity bounds for Lipschitz-smooth function approximators in a large class of concave pseudo-games, and apply the framework to finding Nash equilibria in normal-form games, CE in Arrow-Debreu competitive economies, and GNE in an environmental economic model of the Kyoto mechanism. | https://openreview.net/pdf/9b540423b50ab612d08d9f12ea387e4bb85a7477.pdf |
Graph Transformers on EHRs: Better Representation Improves Downstream Performance | https://openreview.net/forum?id=pe0Vdv7rsL | https://openreview.net/forum?id=pe0Vdv7rsL | Raphael Poulain,Rahmatollah Beheshti | ICLR 2024,Poster | Following the success of transformer-based methods across various machine learning applications, their adoption for healthcare predictive tasks using electronic health records (EHRs) has also expanded extensively. Similarly, graph-based methods have been shown to be very effective in capturing inherent graph-type relationships in EHRs, leading to improved downstream performance. Although integrating these two families of approaches seems like a natural next step, in practice, creating such a design is challenging and has not been done. This is partly due to known EHR problems, such as high sparsity, making extracting meaningful temporal representations of medical visits challenging. In this study, we propose GT-BEHRT, a new approach that leverages temporal visit embeddings extracted from a graph transformer and uses a BERT-based model to obtain more robust patient representations, especially on longer EHR sequences. The graph-based approach allows GT-BEHRT to implicitly capture the intrinsic graphical relationships between medical observations, while the BERT model extracts the temporal relationships between visits, loosely mimicking the clinicians' decision-making process. As part of our method, we also present a two-step pre-training strategy for learning better graphical and temporal representations. Our proposed method achieves state-of-the-art performance in a variety of standard medical predictive tasks, demonstrating the versatility of our approach. | https://openreview.net/pdf/cc3e4cf0f1122fe0c256b5e62246f05026012d2a.pdf |
On the Scalability and Memory Efficiency of Semidefinite Programs for Lipschitz Constant Estimation of Neural Networks | https://openreview.net/forum?id=dwzLn78jq7 | https://openreview.net/forum?id=dwzLn78jq7 | Zi Wang,Bin Hu,Aaron J Havens,Alexandre Araujo,Yang Zheng,Yudong Chen,Somesh Jha | ICLR 2024,Poster | Lipschitz constant estimation plays an important role in understanding generalization, robustness, and fairness in deep learning. Unlike naive bounds based on the network weight norm product, semidefinite programs (SDPs) have shown great promise in providing less conservative Lipschitz bounds with polynomial-time complexity guarantees. However, due to the memory consumption and running speed, standard SDP algorithms cannot scale to modern neural network architectures. In this paper, we transform the SDPs for Lipschitz constant estimation into an eigenvalue optimization problem, which aligns with the modern large-scale optimization paradigms based on first-order methods. This is amenable to autodiff frameworks such as PyTorch and TensorFlow, requiring significantly less memory than standard SDP algorithms. The transformation also allows us to leverage various existing numerical techniques for eigenvalue optimization, opening the way for further memory improvement and computational speedup. The essential technique of our eigenvalue-problem transformation is to introduce redundant quadratic constraints and then utilize both Lagrangian and Shor's SDP relaxations under a certain trace constraint. Notably, our numerical study successfully scales the SDP-based Lipschitz constant estimation to address large neural networks on ImageNet. Our numerical examples on CIFAR10 and ImageNet demonstrate that our technique is more scalable than existing approaches. Our code is available at https://github.com/z1w/LipDiff. | https://openreview.net/pdf/333a8145de80d282eac48f6722bab292ed03b563.pdf |
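For contrast with the SDP-based bounds, the "naive bound based on the network weight norm product" mentioned above is just the product of per-layer spectral norms (valid for 1-Lipschitz activations such as ReLU); a sketch:

```python
import torch

def naive_lipschitz_bound(weights):
    """Product of per-layer spectral norms: the loose baseline that the
    SDP-based estimates in the paper improve on, for networks whose
    activations are 1-Lipschitz."""
    bound = 1.0
    for W in weights:
        bound *= torch.linalg.matrix_norm(W, ord=2).item()
    return bound

layers = [torch.randn(64, 32), torch.randn(64, 64), torch.randn(10, 64)]
print(naive_lipschitz_bound(layers))
```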
Large Language Models as Automated Aligners for benchmarking Vision-Language Models | https://openreview.net/forum?id=kZEXgtMNNo | https://openreview.net/forum?id=kZEXgtMNNo | Yuanfeng Ji,Chongjian GE,Weikai Kong,Enze Xie,Zhengying Liu,Zhenguo Li,Ping Luo | ICLR 2024,Poster | With the advancements in Large Language Models (LLMs), Vision-Language Models (VLMs) have reached a new level of sophistication, showing notable competence in executing intricate cognition and reasoning tasks. However, existing evaluation benchmarks, primarily relying on rigid, hand-crafted datasets to measure task-specific performance, face significant limitations in assessing the alignment of these increasingly anthropomorphic models with human intelligence. In this work, we address the limitations via Auto-Bench, which delves into exploring LLMs as proficient aligners, measuring the alignment between VLMs and human intelligence and value through automatic data curation and assessment. Specifically, for data curation, Auto-Bench utilizes LLMs (e.g., GPT-4) to automatically generate a vast set of question-answer-reasoning triplets via prompting on visual symbolic representations (e.g., captions, object locations, instance relationships, and etc. The curated data closely matches human intent, owing to the extensive world knowledge embedded in LLMs. Through this pipeline, a total of 28.5K human-verified and 3,504K unfiltered question-answer-reasoning triplets have been curated, covering 4 primary abilities and 16 sub-abilities. We subsequently engage LLMs like GPT-3.5 to serve as judges, implementing the quantitative and qualitative automated assessments to facilitate a comprehensive evaluation of VLMs. Our validation results reveal that LLMs are proficient in both evaluation data curation and model assessment, achieving an average agreement rate of 85%. We envision Auto-Bench as a flexible, scalable, and comprehensive benchmark for evaluating the evolving sophisticated VLMs. | https://openreview.net/pdf/fec2f0e416c0a90d47240e5522b34b70940223f4.pdf |
CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech | https://openreview.net/forum?id=ofzeypWosV | https://openreview.net/forum?id=ofzeypWosV | Jaehyeon Kim,Keon Lee,Seungjun Chung,Jaewoong Cho | ICLR 2024,Poster | With the emergence of neural audio codecs, which encode multiple streams of discrete tokens from audio, large language models have recently gained attention as a promising approach for zero-shot Text-to-Speech (TTS) synthesis. Despite the ongoing rush towards scaling paradigms, audio tokenization ironically amplifies the scalability challenge, stemming from its long sequence length and the complexity of modelling the multiple sequences. To mitigate these issues, we present CLaM-TTS that employs a probabilistic residual vector quantization to (1) achieve superior compression in the token length, and (2) allow a language model to generate multiple tokens at once, thereby eliminating the need for cascaded modeling to handle the number of token streams. Our experimental results demonstrate that CLaM-TTS is better than or comparable to state-of-the-art neural codec-based TTS models regarding naturalness, intelligibility, speaker similarity, and inference speed. In addition, we examine the impact of the pretraining extent of the language models and their text tokenization strategies on performances. | https://openreview.net/pdf/4aa68c6552ace824647a0f32a7f3b5ff97a6cd58.pdf |
Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels | https://openreview.net/forum?id=4VgBjsOC8k | https://openreview.net/forum?id=4VgBjsOC8k | Zahra Babaiee,Peyman Kiasari,Daniela Rus,Radu Grosu | ICLR 2024,Poster | Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures that surpass the performance of classical CNNs by a considerable margin in both scalability and accuracy. This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers. Through an extensive analysis of millions of trained filters, with different sizes and from various models, we employed unsupervised clustering with autoencoders to categorize these filters. Astonishingly, the patterns converged into a few main clusters, each resembling difference of Gaussian (DoG) functions and their first and second-order derivatives. Notably, we classify over 95\% and 90\% of the filters from state-of-the-art ConvNeXtV2 and ConvNeXt models, respectively. This finding is not merely a technological curiosity; it echoes the foundational models neuroscientists have long proposed for the vision systems of mammals. Our results thus deepen our understanding of the emergent properties of trained DS-CNNs and provide a bridge between artificial and biological visual processing systems. More broadly, they pave the way for more interpretable and biologically-inspired neural network designs in the future. | https://openreview.net/pdf/3c890643bdfe86705d7ed24e33e7edf242f989d7.pdf |
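The difference-of-Gaussians pattern that trained depthwise kernels reportedly converge to is easy to generate for visual comparison; a sketch (kernel size and scales are arbitrary choices):

```python
import numpy as np

def dog_kernel(size=7, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians filter: the cluster prototype the paper reports
    trained depthwise kernels resemble (with its 1st/2nd derivatives)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    def gaussian(s):
        return np.exp(-(xx ** 2 + yy ** 2) / (2 * s ** 2)) / (2 * np.pi * s ** 2)
    return gaussian(sigma1) - gaussian(sigma2)

print(dog_kernel().round(3))
```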
UNR-Explainer: Counterfactual Explanations for Unsupervised Node Representation Learning Models | https://openreview.net/forum?id=0j9ZDzMPqr | https://openreview.net/forum?id=0j9ZDzMPqr | Hyunju Kang,Geonhee Han,Hogun Park | ICLR 2024,Poster | Node representation learning, such as Graph Neural Networks (GNNs), has become one of the important learning methods in machine learning, and the demand for reliable explanation generation is growing. Despite extensive research on explanation generation for supervised node representation learning, explaining unsupervised models has been less explored. To address this gap, we propose a method for generating counterfactual (CF) explanations in unsupervised node representation learning, aiming to identify the most important subgraphs that cause a significant change in the $k$-nearest neighbors of a node of interest in the learned embedding space upon perturbation. The $k$-nearest neighbor-based CF explanation method provides simple, yet pivotal, information for understanding unsupervised downstream tasks, such as top-$k$ link prediction and clustering. Furthermore, we introduce a Monte Carlo Tree Search (MCTS)-based explainability method for generating expressive CF explanations for **U**nsupervised **N**ode **R**epresentation learning methods, which we call **UNR-Explainer**. The proposed method demonstrates improved performance on six datasets for both unsupervised GraphSAGE and DGI. | https://openreview.net/pdf/790d3e0525600daa0b02aecf21fda646b3197859.pdf |
Are Bert Family Good Instruction Followers? A Study on Their Potential And Limitations | https://openreview.net/forum?id=x8VNtpCu1I | https://openreview.net/forum?id=x8VNtpCu1I | yisheng xiao,Juntao Li,Zechen Sun,Zechang Li,Qingrong Xia,Xinyu Duan,Zhefeng Wang,Min Zhang | ICLR 2024,Poster | Language modeling at scale has proven very effective and brought unprecedented success to natural language models. Many typical representatives, especially decoder-only models, e.g., BLOOM and LLaMA, and encoder-decoder models, e.g., Flan-T5 and AlexaTM, have exhibited incredible instruction-following capabilities while keeping strong task completion ability. These large language models can achieve superior performance in various tasks and even yield emergent capabilities, e.g., reasoning and universal generalization. Though the above two paradigms are mainstream and well explored, the potential of the BERT family, which are encoder-only models and have long been among the most representative pre-trained models, also deserves attention, or at least discussion. In this work, we adopt XLM-R to explore the effectiveness of the BERT family for instruction following and zero-shot learning. We first design a simple yet effective strategy to utilize the encoder-only models for generation tasks and then conduct multi-task instruction tuning. Experimental results demonstrate that our fine-tuned model, Instruct-XMLR, outperforms Bloomz on all evaluation tasks and achieves comparable performance with mT0 on most tasks. Surprisingly, Instruct-XMLR also possesses strong task and language generalization abilities, indicating that Instruct-XMLR can also serve as a good instruction follower and zero-shot learner. Moreover, Instruct-XMLR can accelerate decoding due to its non-autoregressive generation manner, achieving around a 3x speedup compared with current autoregressive large language models. Although our experiments also revealed several limitations, such as a performance decline on long-generation tasks and shortcomings in length prediction, Instruct-XMLR can still become a good member of the family of current large language models. | https://openreview.net/pdf/e58804ca5c30798461a4aa73b0cc89f9836c6880.pdf |
Exploring the Promise and Limits of Real-Time Recurrent Learning | https://openreview.net/forum?id=V2cBKtdC3a | https://openreview.net/forum?id=V2cBKtdC3a | Kazuki Irie,Anand Gopalakrishnan,Jürgen Schmidhuber | ICLR 2024,Poster | Real-time recurrent learning (RTRL) for sequence-processing recurrent neural networks (RNNs) offers certain conceptual advantages over backpropagation through time (BPTT). RTRL requires neither caching past activations nor truncating context, and enables online learning. However, RTRL's time and space complexity make it impractical. To overcome this problem, most recent work on RTRL focuses on approximation theories, while experiments are often limited to diagnostic settings. Here we explore the practical promise of RTRL in more realistic settings. We study actor-critic methods that combine RTRL and policy gradients, and test them in several subsets of DMLab-30, ProcGen, and Atari-2600 environments. On DMLab memory tasks, our system trained on fewer than 1.2B environmental frames is competitive with or outperforms well-known IMPALA and R2D2 baselines trained on 10B frames. To scale to such challenging tasks, we focus on certain well-known neural architectures with element-wise recurrence, allowing for tractable RTRL without approximation. Importantly, we also discuss rarely addressed limitations of RTRL in real-world applications, such as its complexity in the multi-layer case. | https://openreview.net/pdf/9107e97b85399a8a37e9379bb2cdb2ef3e226b56.pdf |
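The tractability hinge is element-wise recurrence: when the recurrence is diagonal, the RTRL sensitivity is a vector rather than an unwieldy tensor and can be carried forward online, with no truncation and no cached activations. A sketch for the linear element-wise recurrence $h_t = a \odot h_{t-1} + x_t$; this illustrates the principle only, not the paper's actor-critic system:

```python
import numpy as np

def rtrl_elementwise(a, xs, grads_h):
    """Exact RTRL for h_t = a * h_{t-1} + x_t (element-wise recurrence).

    Because the recurrence is diagonal, the sensitivity dh_t/da is a vector
    updated online: s_t = a * s_{t-1} + h_{t-1}. The per-step loss gradient
    g_t = dL_t/dh_t arrives online and is accumulated immediately."""
    h = np.zeros_like(a)
    s = np.zeros_like(a)       # running sensitivity dh/da
    grad_a = np.zeros_like(a)
    for x, g in zip(xs, grads_h):
        s = a * s + h          # update sensitivity using h_{t-1}
        h = a * h + x          # advance the hidden state
        grad_a += g * s        # accumulate the gradient w.r.t. a
    return grad_a

d, T = 4, 10
a = np.random.uniform(0.5, 0.9, size=d)
xs = [np.random.randn(d) for _ in range(T)]
grads = [np.random.randn(d) for _ in range(T)]
print(rtrl_elementwise(a, xs, grads))
```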
TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting | https://openreview.net/forum?id=YH5w12OUuU | https://openreview.net/forum?id=YH5w12OUuU | Defu Cao,Furong Jia,Sercan O Arik,Tomas Pfister,Yixiang Zheng,Wen Ye,Yan Liu | ICLR 2024,Poster | The past decade has witnessed significant advances in time series modeling with deep learning. While achieving state-of-the-art results, the best-performing architectures vary highly across applications and domains. Meanwhile, for natural language processing, the Generative Pre-trained Transformer (GPT) has demonstrated impressive performance via training one general-purpose model across various textual datasets. It is intriguing to explore whether GPT-type architectures can be effective for time series, capturing the intrinsic dynamic attributes and leading to significant accuracy improvements. In this paper, we propose a novel framework, TEMPO, that can effectively learn time series representations. We focus on utilizing two essential inductive biases of the time series task for pre-trained models: (i) decomposition of the complex interaction between trend, seasonal and residual components; and (ii) introducing the design of prompts to facilitate distribution adaptation in different types of time series. TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains. Our experiments demonstrate the superior performance of TEMPO over state-of-the-art methods in the zero-shot setting on a number of time series benchmark datasets. This performance gain is observed not only in scenarios involving previously unseen datasets but also in scenarios with multi-modal inputs. This compelling finding highlights TEMPO's potential to constitute a foundational model-building framework. | https://openreview.net/pdf/aa8beab0b2913d71a83f4ec71411d87d5c2409b4.pdf |
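The first inductive bias, trend/seasonal/residual decomposition, can be illustrated with a classical additive decomposition. TEMPO's actual decomposition and prompt design are more elaborate; this sketch only shows the kind of structure fed to the backbone:

```python
import numpy as np

def decompose(series, period):
    """Classical additive decomposition into trend + seasonal + residual.
    Uses a centered moving average for the trend (edge effects ignored)."""
    trend = np.convolve(series, np.ones(period) / period, mode="same")
    detrended = series - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, len(series) // period + 1)[: len(series)]
    residual = series - trend - seasonal
    return trend, seasonal, residual

t = np.arange(120)
y = 0.05 * t + np.sin(2 * np.pi * t / 12) + 0.1 * np.random.randn(120)
trend, seasonal, residual = decompose(y, period=12)
print(residual.std())  # small if trend and seasonality were captured
```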
Scaling physics-informed hard constraints with mixture-of-experts | https://openreview.net/forum?id=u3dX2CEIZb | https://openreview.net/forum?id=u3dX2CEIZb | Nithin Chalapathi,Yiheng Du,Aditi S. Krishnapriyan | ICLR 2024,Poster | Imposing known physical constraints, such as conservation laws, during neural network training introduces an inductive bias that can improve accuracy, reliability, convergence, and data efficiency for modeling physical dynamics. While such constraints can be softly imposed via loss function penalties, recent advancements in differentiable physics and optimization improve performance by incorporating PDE-constrained optimization as individual layers in neural networks. This enables a stricter adherence to physical constraints. However, imposing hard constraints significantly increases computational and memory costs, especially for complex dynamical systems. This is because it requires solving an optimization problem over a large number of points in a mesh, representing spatial and temporal discretizations, which greatly increases the complexity of the constraint. To address this challenge, we develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE), which can be used with any neural network architecture. Our approach imposes the constraint over smaller decomposed domains, each of which is solved by an ``expert'' through differentiable optimization. During training, each expert independently performs a localized backpropagation step by leveraging the implicit function theorem; the independence of each expert allows for parallelization across multiple GPUs. Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting for predicting the dynamics of challenging non-linear systems. We also improve training stability and require significantly less computation time during both training and inference stages. | https://openreview.net/pdf/0dbb1a4e1eb20fc5d0e7c94834773579d30e5b4b.pdf |
Structural Fairness-aware Active Learning for Graph Neural Networks | https://openreview.net/forum?id=bvjcMvMn7B | https://openreview.net/forum?id=bvjcMvMn7B | Haoyu Han,Xiaorui Liu,Li Ma,MohamadAli Torkamani,Hui Liu,Jiliang Tang,Makoto Yamada | ICLR 2024,Poster | Graph Neural Networks (GNNs) have seen significant achievements in semi-supervised node classification. Yet, their efficacy often hinges on access to high-quality labeled node samples, which may not always be available in real-world scenarios. While active learning is commonly employed across various domains to pinpoint and label high-quality samples based on data features, graph data present unique challenges due to their intrinsic structures that render nodes non-i.i.d. Furthermore, biases emerge from the positioning of labeled nodes; for instance, nodes closer to the labeled counterparts often yield better performance. To better leverage graph structure and mitigate structural bias in active learning, we present a unified optimization framework (SCARCE), which is also easily incorporated with node features. Extensive experiments demonstrate that the proposed method not only improves GNN performance but also paves the way for fairer results. | https://openreview.net/pdf/1e818b0f851160468dacdfb012e8de08fabf1511.pdf |
Neural-Symbolic Recursive Machine for Systematic Generalization | https://openreview.net/forum?id=FWJAmwE0xH | https://openreview.net/forum?id=FWJAmwE0xH | Qing Li,Yixin Zhu,Yitao Liang,Ying Nian Wu,Song-Chun Zhu,Siyuan Huang | ICLR 2024,Poster | Current learning models often struggle with human-like systematic generalization, particularly in learning compositional rules from limited data and extrapolating them to novel combinations. We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS), allowing for the emergence of combinatorial syntax and semantics directly from training data. The NSR employs a modular design that integrates neural perception, syntactic parsing, and semantic reasoning. These components are synergistically trained through a novel deduction-abduction algorithm. Our findings demonstrate that NSR’s design, imbued with the inductive biases of equivariance and compositionality, grants it the expressiveness to adeptly handle diverse sequence-to-sequence tasks and achieve unparalleled systematic generalization. We evaluate NSR’s efficacy across four challenging benchmarks designed to probe systematic generalization capabilities: SCAN for semantic parsing, PCFG for string manipulation, HINT for arithmetic reasoning, and a compositional machine translation task. The results affirm NSR’s superiority over contemporary neural and hybrid models in terms of generalization and transferability. | https://openreview.net/pdf/440caa44d54237dd2cb02bb7f536e9fdfceadfba.pdf |
Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation | https://openreview.net/forum?id=ITq4ZRUT4a | https://openreview.net/forum?id=ITq4ZRUT4a | Jaemin Cho,Yushi Hu,Jason Michael Baldridge,Roopal Garg,Peter Anderson,Ranjay Krishna,Mohit Bansal,Jordi Pont-Tuset,Su Wang | ICLR 2024,Poster | Evaluating text-to-image models is notoriously difficult. A strong recent approach for assessing text-image faithfulness is based on QG/A (question generation and answering), which uses pre-trained foundational models to automatically generate a set of questions and answers from the prompt, and output images are scored based on whether these answers extracted with a visual question answering model are consistent with the prompt-based answers. This kind of evaluation is naturally dependent on the quality of the underlying QG and VQA models. We identify and address several reliability challenges in existing QG/A work: (a) QG questions should respect the prompt (avoiding hallucinations, duplications, and omissions) and (b) VQA answers should be consistent (not asserting that there is no motorcycle in an image while also claiming the motorcycle is blue). We address these issues with Davidsonian Scene Graph (DSG), an empirically grounded evaluation framework inspired by formal semantics, which is adaptable to any QG/A frameworks. DSG produces atomic and unique questions organized in dependency graphs, which (i) ensure appropriate semantic coverage and (ii) sidestep inconsistent answers. With extensive experimentation and human evaluation on a range of model configurations (LLM, VQA, and T2I), we empirically demonstrate that DSG addresses the challenges noted above. Finally, we present DSG-1k, an open-sourced evaluation benchmark that includes 1,060 prompts, covering a wide range of fine-grained semantic categories with a balanced distribution. We release the DSG-1k prompts and the corresponding DSG questions. | https://openreview.net/pdf/713250bdf458c9b54444d2bf78bfd594a06adead.pdf |
Chain of Thought Empowers Transformers to Solve Inherently Serial Problems | https://openreview.net/forum?id=3EWTEy9MTM | https://openreview.net/forum?id=3EWTEy9MTM | Zhiyuan Li,Hong Liu,Denny Zhou,Tengyu Ma | ICLR 2024,Poster | Generating a sequence of intermediate steps, \emph{a.k.a.}, a chain of thought (CoT), is a highly effective method to improve the accuracy of large language models (LLMs) on arithmetics and symbolic reasoning tasks. However, the mechanism behind CoT remains unclear.
This work provides a theoretical understanding of the power of CoT for decoder-only transformers through the lens of expressiveness. Conceptually, CoT empowers the model with the ability to perform inherently serial computation, which is otherwise lacking in transformers, especially when depth is low. Given input length $n$, previous works have shown that constant-depth transformers with finite precision and $\mathsf{poly}(n)$ embedding size can only solve problems in $\mathsf{TC}^0$ without CoT. We first show an even tighter expressiveness upper bound for constant-depth transformers with constant-bit precision, which can only solve problems in $\mathsf{AC}^0$, a proper subset of $ \mathsf{TC}^0$. However, with $T$ steps of CoT, constant-depth transformers using constant-bit precision and $O(\log n)$ embedding size can solve any problem solvable by boolean circuits of size $T$. Empirically, enabling CoT dramatically improves the accuracy for tasks that are hard for parallel computation, including the composition of permutation groups, iterated squaring, and circuit value problems, especially for low-depth transformers. | https://openreview.net/pdf/2c3e913d1014164603f487f70dace6570bb0a1d0.pdf |
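Permutation composition, one of the tasks named above, makes the serial structure concrete: each intermediate product depends on the previous one, which is exactly what CoT tokens let a constant-depth transformer carry forward. A small demo emitting the running product as "CoT" steps:

```python
def compose_with_cot(perms):
    """Compose a list of permutations left-to-right, emitting the running
    product after each step. Each step depends on the previous result: the
    inherently serial structure that motivates chain-of-thought tokens."""
    acc = list(range(len(perms[0])))  # identity; acc[i] = image of i
    steps = []
    for p in perms:
        acc = [p[acc[i]] for i in range(len(acc))]
        steps.append(tuple(acc))
    return steps

perms = [(1, 2, 0), (2, 0, 1), (0, 2, 1)]
for i, state in enumerate(compose_with_cot(perms), 1):
    print(f"after step {i}: {state}")
```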
Tractable MCMC for Private Learning with Pure and Gaussian Differential Privacy | https://openreview.net/forum?id=pmweVpJ229 | https://openreview.net/forum?id=pmweVpJ229 | Yingyu Lin,Yian Ma,Yu-Xiang Wang,Rachel Emily Redberg,Zhiqi Bu | ICLR 2024,Poster | Posterior sampling, i.e., the exponential mechanism to sample from the posterior distribution, provides $\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAmple Perturbation (abbr. ASAP) algorithm which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in $W_\infty$ distance. We show that by combining our new techniques with a localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates in the DP-ERM problem with strongly convex and smooth losses. | https://openreview.net/pdf/98377763ad27f9f0dab3a807c831a7d2b1e123ef.pdf |
Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms | https://openreview.net/forum?id=RsJwmWvE6Q | https://openreview.net/forum?id=RsJwmWvE6Q | Yi Li,Honghao Lin,David Woodruff | ICLR 2024,Poster | We study the problem of residual error estimation for matrix and vector norms using a linear sketch. Such estimates can be used, for example, to quickly assess how useful a more expensive low-rank approximation computation will be. The matrix case concerns the Frobenius norm and the task is to approximate the $k$-residual $\|A - A_k\|_F$ of the input matrix $A$ within a $(1+\epsilon)$-factor, where $A_k$ is the optimal rank-$k$ approximation. We provide a tight bound of $\Theta(k^2/\epsilon^4)$ on the size of bilinear sketches, which have the form of a matrix product $SAT$. This improves the previous $O(k^2/\epsilon^6)$ upper bound in (Andoni et al. SODA 2013) and gives the first non-trivial lower bound, to the best of our knowledge.
In our algorithm, our sketching matrices $S$ and $T$ can both be sparse matrices, allowing for a very fast update time.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
For the vector case, we consider the $\ell_p$-norm for $p>2$, where the task is to approximate the $k$-residual $\|x - x_k\|_p$ up to a constant factor, where $x_k$ is the optimal $k$-sparse approximation to $x$. Such vector norms are frequently studied in the data stream literature and are useful for finding frequent items or so-called heavy hitters. We establish an upper bound of $O(k^{2/p}n^{1-2/p}\operatorname{poly}(\log n))$ for constant $\epsilon$ on the dimension of a linear sketch for this problem. Our algorithm can be extended to the $\ell_p$ sparse recovery problem with the same sketching dimension, which seems to be the first such bound for $p > 2$. We also show an $\Omega(k^{2/p}n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor. | https://openreview.net/pdf/7f94d35697799a150e50b5657014861583975c50.pdf |
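A toy illustration of the bilinear-sketch estimator: compress $A$ to $SAT$ and read off the tail singular mass beyond rank $k$. Dense Gaussian $S, T$ and the scaling constants here are illustrative simplifications; the paper's point is that sparse $S, T$ suffice for fast updates and that $\Theta(k^2/\epsilon^4)$ sketch size is optimal:

```python
import numpy as np

def sketched_k_residual(A, k, m, rng):
    """Toy bilinear-sketch estimate of ||A - A_k||_F via the tail singular
    values of S A T. Not the paper's tuned estimator: constants and the
    choice of dense Gaussian S, T are for illustration only."""
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    T = rng.standard_normal((d, m)) / np.sqrt(m)
    sv = np.linalg.svd(S @ A @ T, compute_uv=False)
    return np.sqrt((sv[k:] ** 2).sum())

rng = np.random.default_rng(0)
signal = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 300))
A = signal + 0.1 * rng.standard_normal((500, 300))  # rank-5 signal + noise
exact = np.sqrt((np.linalg.svd(A, compute_uv=False)[5:] ** 2).sum())
print(exact, sketched_k_residual(A, k=5, m=60, rng=rng))
```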
Reverse Diffusion Monte Carlo | https://openreview.net/forum?id=kIPEyMSdFV | https://openreview.net/forum?id=kIPEyMSdFV | Xunpeng Huang,Hanze Dong,Yifan HAO,Yian Ma,Tong Zhang | ICLR 2024,Poster | We propose a Monte Carlo sampler from the reverse diffusion process. Unlike the practice of diffusion models, where the intermediary updates---the score functions---are learned with a neural network, we transform the score matching problem into a mean estimation one.
By estimating the means of the regularized posterior distributions, we derive a novel Monte Carlo sampling algorithm called reverse diffusion Monte Carlo (rdMC), which is distinct from the Markov chain Monte Carlo (MCMC) methods. We determine the sample size from the error tolerance and the properties of the posterior distribution to yield an algorithm that can approximately sample the target distribution with any desired accuracy. Additionally, we demonstrate and prove under suitable conditions that sampling with rdMC can be significantly faster than that with MCMC. For multi-modal target distributions such as those in Gaussian mixture models, rdMC greatly improves over the Langevin-style MCMC sampling methods both theoretically and in practice. The proposed rdMC method offers a new perspective and solution beyond classical MCMC algorithms for the challenging complex distributions. | https://openreview.net/pdf/5e05dd8867eb805ba66920ee894e0234bbfd718d.pdf |
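A one-dimensional sketch of the rdMC idea: replace the learned score with a Monte Carlo estimate of the regularized posterior mean (by Tweedie's formula, score $= (\mathbb{E}[x_0 \mid x_t] - x_t)/\sigma_t^2$), then run a coarse reverse-diffusion discretization. The schedule and sample sizes below are arbitrary choices for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-density of an equal-weight Gaussian mixture at +/-3."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def posterior_mean(x, sigma, n=4096):
    """Mean of the regularized posterior p(x0 | x) ~ pi(x0) N(x0; x, sigma^2),
    via self-normalized importance sampling with proposal N(x, sigma^2).
    This Monte Carlo estimate replaces the learned score network."""
    z = x + sigma * rng.standard_normal(n)
    logw = log_target(z)
    w = np.exp(logw - logw.max())
    return float((w * z).sum() / w.sum())

# Coarse VE-style reverse-diffusion discretization driven by the estimate.
sigmas = np.linspace(3.0, 0.05, 60)
x = sigmas[0] * rng.standard_normal()
for s_hi, s_lo in zip(sigmas[:-1], sigmas[1:]):
    dt = s_hi ** 2 - s_lo ** 2
    score = (posterior_mean(x, s_hi) - x) / s_hi ** 2
    x = x + dt * score + np.sqrt(dt) * rng.standard_normal()
print(x)  # lands near one of the modes at +/-3
```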
Counting Graph Substructures with Graph Neural Networks | https://openreview.net/forum?id=qaJxPhkYtD | https://openreview.net/forum?id=qaJxPhkYtD | Charilaos Kanatsoulis,Alejandro Ribeiro | ICLR 2024,Poster | Graph Neural Networks (GNNs) are powerful representation learning tools that have achieved remarkable performance in various downstream tasks. However, there are still open questions regarding their ability to count and list substructures, which play a crucial role in biological and social networks. In this work, we fill this gap and characterize the representation and generalization power of GNNs in terms of their ability to produce powerful representations that count substructures. In particular, we study the message-passing operations of GNNs with random node input in a novel fashion, and show how they can produce equivariant representations that are associated with high-order statistical moments. Using these representations, we prove that GNNs can learn how to count cycles, cliques, quasi-cliques, and the number of connected components in a graph. We also provide new insights into the generalization capacity of GNNs. Our analysis is constructive and enables the design of a generic GNN architecture that shows remarkable performance in four distinct tasks: cycle detection, cycle counting, graph classification, and molecular property prediction. | https://openreview.net/pdf/b832a3a871aa5f5adcfb1053797c2fbd754232a2.pdf |
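For orientation on the counting task itself, the 3-cycle count of a simple undirected graph has a closed form, $\operatorname{tr}(A^3)/6$, a classical baseline against which learned counters can be checked (this is not the paper's GNN construction with random node inputs):

```python
import numpy as np

def count_triangles(A):
    """Closed-form 3-cycle count for a simple undirected graph: each triangle
    contributes 6 closed walks of length 3, so trace(A^3) / 6 counts them."""
    return int(round(np.trace(A @ A @ A) / 6))

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]])
print(count_triangles(A))  # 2 triangles: (0,1,2) and (1,2,3)
```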
Are Models Biased on Text without Gender-related Language? | https://openreview.net/forum?id=w1JanwReU6 | https://openreview.net/forum?id=w1JanwReU6 | Catarina G Belém,Preethi Seshadri,Yasaman Razeghi,Sameer Singh | ICLR 2024,Poster | Gender bias research has been pivotal in revealing undesirable behaviors in large language models, exposing serious gender stereotypes associated with occupations and emotions. A key observation in prior work is that models reinforce stereotypes as a consequence of the gendered correlations that are present in the training data. In this paper, we focus on bias where the effect from training data is unclear, and instead address the question: *Do language models still exhibit gender bias in non-stereotypical settings?* To do so, we introduce **UnStereoEval (USE)**, a novel framework tailored for investigating gender bias in stereotype-free scenarios. USE defines a sentence-level score based on pretraining data statistics to determine if a sentence contains minimal word-gender associations. To systematically benchmark the fairness of popular language models in stereotype-free scenarios, we utilize USE to automatically generate benchmarks without any gender-related language. By leveraging USE's sentence-level score, we also repurpose prior gender bias benchmarks (Winobias and Winogender) for non-stereotypical evaluation. Surprisingly, we find low fairness across all 28 tested models. Concretely, models demonstrate fair behavior in only 9%-41% of stereotype-free sentences, suggesting that bias does not solely stem from the presence of gender-related words. These results raise important questions about where underlying model biases come from and highlight the need for more systematic and comprehensive bias evaluation. We release the full dataset and code at [ucinlp.github.io/unstereo-eval](https://ucinlp.github.io/unstereo-eval). | https://openreview.net/pdf/bd1813ac5b333e7445f4c1a4ac8d3680ace9c572.pdf
PlaSma: Procedural Knowledge Models for Language-based Planning and Re-Planning | https://openreview.net/forum?id=dFcXJgnrGB | https://openreview.net/forum?id=dFcXJgnrGB | Faeze Brahman,Chandra Bhagavatula,Valentina Pyatkin,Jena D. Hwang,Xiang Lorraine Li,Hirona Jacqueline Arai,Soumya Sanyal,Keisuke Sakaguchi,Xiang Ren,Yejin Choi | ICLR 2024,Poster | Procedural planning, which entails decomposing a high-level goal into a sequence of temporally ordered steps, is an important yet intricate task for machines. It involves integrating common-sense knowledge to reason about complex and often contextualized situations, e.g. ``scheduling a doctor's appointment without a phone''. While current approaches show encouraging results using large language models (LLMs), they are hindered by drawbacks such as costly API calls and reproducibility issues. In this paper, we advocate planning using smaller language models. We present PlaSma, a novel two-pronged approach to endow small language models with procedural knowledge and (constrained) language-based planning capabilities. More concretely, we develop *symbolic procedural knowledge distillation* to enhance the commonsense knowledge in small language models and an *inference-time algorithm* to facilitate more structured and accurate reasoning. In addition, we introduce a new related task, *Replanning*, that requires a revision of a plan to cope with a constrained situation. In both the planning and replanning settings, we show that orders-of-magnitude smaller models (770M-11B parameters) can compete and often surpass their larger teacher models' capabilities. Finally, we showcase successful application of PlaSma in an embodied environment, VirtualHome. | https://openreview.net/pdf/b88eaa0cc84120348881ebfc5dd4fe00210337df.pdf |
From Molecules to Materials: Pre-training Large Generalizable Models for Atomic Property Prediction | https://openreview.net/forum?id=PfPnugdxup | https://openreview.net/forum?id=PfPnugdxup | Nima Shoghi,Adeesh Kolluru,John R. Kitchin,Zachary Ward Ulissi,C. Lawrence Zitnick,Brandon M Wood | ICLR 2024,Poster | Foundation models have been transformational in machine learning fields such as natural language processing and computer vision. Similar success in atomic property prediction has been limited due to the challenges of training effective models across multiple chemical domains. To address this, we introduce Joint Multi-domain Pre-training (JMP), a supervised pre-training strategy that simultaneously trains on multiple datasets from different chemical domains, treating each dataset as a unique pre-training task within a multi-task framework. Our combined training dataset consists of $\sim$120M systems from OC20, OC22, ANI-1x, and Transition-1x. We evaluate performance and generalization by fine-tuning over a diverse set of downstream tasks and datasets including: QM9, rMD17, MatBench, QMOF, SPICE, and MD22. JMP demonstrates an average improvement of 59% over training from scratch and matches or sets state-of-the-art on 34 out of 40 tasks. Our work highlights the potential of pre-training strategies that utilize diverse data to advance property prediction across chemical domains, especially for low-data tasks. | https://openreview.net/pdf/7ae9e4f5f396605dfda891057be51e0ec4e42fdc.pdf |
Towards Foundational Models for Molecular Learning on Large-Scale Multi-Task Datasets | https://openreview.net/forum?id=Zc2aIcucwc | https://openreview.net/forum?id=Zc2aIcucwc | Dominique Beaini,Shenyang Huang,Joao Alex Cunha,Zhiyi Li,Gabriela Moisescu-Pareja,Oleksandr Dymov,Samuel Maddrell-Mander,Callum McLean,Frederik Wenkel,Luis Müller,Jama Hussein Mohamud,Ali Parviz,Michael Craig,Michał Koziarski,Jiarui Lu,Zhaocheng Zhu,Cristian Gabellini,Kerstin Klaser,Josef Dean,Cas Wognum,Maciej Sypetkowski,Guillaume Rabusseau,Reihaneh Rabbany,Jian Tang,Christopher Morris,Mirco Ravanelli,Guy Wolf,Prudencio Tossou,Hadrien Mary,Therence Bois,Andrew W Fitzgibbon,Blazej Banaszewski,Chad Martin,Dominic Masters | ICLR 2024,Poster | Recently, pre-trained foundation models have enabled significant advancements in multiple fields. In molecular machine learning, however, where datasets are often hand-curated, and hence typically small, the lack of datasets with labeled features, and codebases to manage those datasets, has hindered the development of foundation models. In this work, we present seven novel datasets categorized by size into three distinct categories: ToyMix, LargeMix and UltraLarge. These datasets push the boundaries in both the scale and the diversity of supervised labels for molecular learning. They cover nearly 100 million molecules and over 3000 sparsely defined tasks, totaling more than 13 billion individual labels of both quantum and biological nature. In comparison, our datasets contain 300 times more data points than the widely used OGB-LSC PCQM4Mv2 dataset, and 13 times more than the quantum-only QM1B dataset. In addition, to support the development of foundational models based on our proposed datasets, we present the Graphium graph machine learning library, which simplifies the process of building and training molecular machine learning models for multi-task and multi-level molecular datasets. Finally, we present a range of baseline results as a starting point of multi-task and multi-level training on these datasets. Empirically, we observe that performance on low-resource biological datasets shows improvement when also training on large amounts of quantum data. This indicates that there may be potential in multi-task and multi-level training of a foundation model and fine-tuning it to resource-constrained downstream tasks. The Graphium library is publicly available on Github and the dataset links are available in Part 1 and Part 2. | https://openreview.net/pdf/070fcd7e5f031fc5d671ef14723f848bdc7a540b.pdf
Independent-Set Design of Experiments for Estimating Treatment and Spillover Effects under Network Interference | https://openreview.net/forum?id=w50MQ9Vfty | https://openreview.net/forum?id=w50MQ9Vfty | Chencheng Cai,Xu Zhang,Edoardo Airoldi | ICLR 2024,Poster | Interference is ubiquitous when conducting causal experiments over networks. Except for certain network structures, causal inference on the network in the presence of interference is difficult due to the entanglement between the treatment assignments and the interference levels. In this article, we conduct causal inference under interference on an observed, sparse, but connected network, and we propose a novel design of experiments based on an independent set. Compared to conventional designs, the independent-set design focuses on an independent subset of data and controls their interference exposures through the assignments to the rest (auxiliary set). We provide a lower bound on the size of the independent set from a greedy algorithm and justify the theoretical performance of estimators under the proposed design. Our approach is capable of estimating both spillover effects and treatment effects. We justify its superiority over conventional methods and illustrate the empirical performance through simulations. | https://openreview.net/pdf/4a5c46e185cf5147131948b3a81949edc248583b.pdf |
FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores | https://openreview.net/forum?id=gPKTTAfYBp | https://openreview.net/forum?id=gPKTTAfYBp | Daniel Y Fu,Hermann Kumbong,Eric Nguyen,Christopher Re | ICLR 2024,Poster | Convolution models with long filters have demonstrated state-of-the-art reasoning abilities in many long-sequence tasks but lag behind the most optimized Transformers in wall-clock time.
A major bottleneck is the Fast Fourier Transform (FFT)---which allows long convolutions to run in $O(N\log N)$ time in sequence length $N$ but has poor hardware utilization.
In this paper, we study how to optimize the FFT convolution.
We find two key bottlenecks: the FFT does not effectively use specialized matrix multiply units, and it incurs expensive I/O between layers of the memory hierarchy.
In response, we propose FlashFFTConv.
FlashFFTConv uses a matrix decomposition that computes the FFT using matrix multiply units and enables kernel fusion for long sequences, reducing I/O.
We also present two sparse convolution algorithms---1) partial convolutions and 2) frequency-sparse convolutions---which can be implemented simply by skipping blocks in the matrix decomposition, enabling further opportunities for memory and compute savings.
FlashFFTConv speeds up exact FFT convolutions by up to 8.7$\times$ over PyTorch and achieves up to 4.4$\times$ speedup end-to-end.
Given the same compute budget, FlashFFTConv allows Hyena-GPT-s to achieve 2.3 points better perplexity and M2-BERT-base to achieve 3.3 points higher GLUE score---matching models with twice the parameter count.
FlashFFTConv also achieves 96.1% accuracy on Path-512, a high-resolution vision task where no model had previously achieved better than 50%.
Furthermore, partial convolutions enable longer-sequence models---yielding the first DNA model that can process the longest human genes (2.3M base pairs)---and frequency-sparse convolutions speed up pretrained models while maintaining or improving model quality. | https://openreview.net/pdf/c77f3c8f339aae3682e72c37d33ce5bf90cd1134.pdf |
Transformer-VQ: Linear-Time Transformers via Vector Quantization | https://openreview.net/forum?id=oDdzXQzP2F | https://openreview.net/forum?id=oDdzXQzP2F | Lucas Dax Lingle | ICLR 2024,Poster | We introduce Transformer-VQ, a decoder-only transformer computing softmax-based dense self-attention in linear time. Transformer-VQ's efficient attention is enabled by vector-quantized keys and a novel caching mechanism.
In our large-scale experiments, Transformer-VQ is shown highly competitive in quality, obtaining 0.99 bpb on Enwik8, 26.6 ppl on PG-19, and 3.16 bpb on ImageNet64. In addition, the optimized implementation of Transformer-VQ is over 3x faster than a comparable quadratic-time transformer at sequence length 8k, is over 12x faster at 32k, and can scale to 131k with similar throughput. Code available: \url{https://github.com/transformer-vq/transformer_vq} | https://openreview.net/pdf/9dfab016ded80c8754b4868d9dc2a054a8f347b6.pdf |
The Hedgehog & the Porcupine: Expressive Linear Attentions with Softmax Mimicry | https://openreview.net/forum?id=4g02l2N2Nx | https://openreview.net/forum?id=4g02l2N2Nx | Michael Zhang,Kush Bhatia,Hermann Kumbong,Christopher Re | ICLR 2024,Poster | Linear attentions have shown promise for improving Transformer efficiency, reducing attention's quadratic complexity to linear in sequence length. This holds exciting promise for (1) training linear Transformers from scratch, (2) finetuned-conversion of task-specific Transformers into linear versions that recover task performance, and (3) pretrained-conversion of Transformers, such as language models, into linear versions readily finetunable on downstream tasks. However, linear attentions often underperform compared to standard softmax attention. To close this performance gap, we study the behaviors of softmax and linear attentions in various train-from-scratch and finetuned-conversion settings. We find prior linear attentions lack key properties of softmax attention tied to good performance: low-entropy (or spiky) weights and dot-product monotonicity. We further observe surprisingly simple feature maps that retain these properties match softmax performance, but are inefficient to compute in linear attention. We thus propose Hedgehog, a learnable linear attention that retains the spiky and monotonic properties of softmax attention while maintaining linear complexity. Hedgehog uses simple, trainable MLPs to produce attention weights mimicking softmax attention. Experiments show Hedgehog recovers over 99\% of standard Transformer performance in train-from-scratch and finetuned-conversion settings, outperforming prior linear attentions by up to 6 perplexity points on WikiText-103 when training causal GPT models from scratch, and up to 8.7 GLUE score points when converting finetuned bidirectional BERT models. Hedgehog also enables pretrained-conversion. Converting a pretrained GPT-2 into a linear attention variant achieves state-of-the-art 16.7 perplexity on WikiText-103 for 125M subquadratic decoder models. We finally turn a pretrained Llama-2 7B into a viable linear attention Llama. With low-rank adaptation, Hedgehog-Llama-2 7B achieves 28.1 higher ROUGE-1 points over the base standard attention model, where prior linear attentions lead to 16.5 point drops. | https://openreview.net/pdf/253a4c0c2132cbb269cad956934997223cc2c5c0.pdf
Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers | https://openreview.net/forum?id=XNa6r6ZjoB | https://openreview.net/forum?id=XNa6r6ZjoB | Awni Altabaa,Taylor Whittington Webb,Jonathan D. Cohen,John Lafferty | ICLR 2024,Poster | An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the *Abstractor*. At the core of the Abstractor is a variant of attention called *relational cross-attention*. The approach is motivated by an architectural inductive bias for relational learning that disentangles relational information from object-level features. This enables explicit relational reasoning, supporting abstraction and generalization from limited data. The Abstractor is first evaluated on simple discriminative relational tasks and compared to existing relational architectures. Next, the Abstractor is evaluated on purely relational sequence-to-sequence tasks, where dramatic improvements are seen in sample efficiency compared to standard Transformers. Finally, Abstractors are evaluated on a collection of tasks based on mathematical problem solving, where consistent improvements in performance and sample efficiency are observed. | https://openreview.net/pdf/24529d393a107ced3db4542a44c29da2edba8d83.pdf |
Doubly Robust Instance-Reweighted Adversarial Training | https://openreview.net/forum?id=OF5x1dzWSS | https://openreview.net/forum?id=OF5x1dzWSS | Daouda Sow,Sen Lin,Zhangyang Wang,Yingbin Liang | ICLR 2024,Poster | Assigning importance weights to adversarial data has achieved great success in training adversarially robust networks under limited model capacity. However, existing instance-reweighted adversarial training (AT) methods heavily depend on heuristics and/or geometric interpretations to determine those importance weights, making these algorithms lack rigorous theoretical justification/guarantee. Moreover, recent research has shown that adversarial training suffers from a severe non-uniform robust performance across the training distribution, e.g., data points belonging to some classes can be much more vulnerable to adversarial attacks than others. To address both issues, in this paper, we propose a novel doubly-robust instance reweighted AT framework, which allows us to obtain the importance weights by exploring distributionally robust optimization (DRO) techniques, and at the same time boosts the robustness on the most vulnerable examples. In particular, our importance weights are obtained by optimizing the KL-divergence regularized loss function, which allows us to devise new algorithms with a theoretical convergence guarantee.
Experiments on standard classification datasets demonstrate that our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance, and at the same time improves the robustness against attacks on the weakest data points. Codes can be found in the Supplement. | https://openreview.net/pdf/c8cb1e0f5f66335caae7b2b8cebef4a72602b4ad.pdf |
Training Diffusion Models with Reinforcement Learning | https://openreview.net/forum?id=YCWjhGrJFD | https://openreview.net/forum?id=YCWjhGrJFD | Kevin Black,Michael Janner,Yilun Du,Ilya Kostrikov,Sergey Levine | ICLR 2024,Poster | Diffusion models are a class of flexible generative models trained with an approximation to the log-likelihood objective. However, most use cases of diffusion models are not concerned with likelihoods, but instead with downstream objectives such as human-perceived image quality or drug effectiveness. In this paper, we investigate reinforcement learning methods for directly optimizing diffusion models for such objectives. We describe how posing denoising as a multi-step decision-making problem enables a class of policy gradient algorithms, which we refer to as denoising diffusion policy optimization (DDPO), that are more effective than alternative reward-weighted likelihood approaches. Empirically, DDPO can adapt text-to-image diffusion models to objectives that are difficult to express via prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Finally, we show that DDPO can improve prompt-image alignment using feedback from a vision-language model without the need for additional data collection or human annotation. The project’s website can be found at http://rl-diffusion.github.io. | https://openreview.net/pdf/39611b93653580f659f3d4d491f00250c4874376.pdf
Finite-Time Analysis of On-Policy Heterogeneous Federated Reinforcement Learning | https://openreview.net/forum?id=D2eOVqPX9g | https://openreview.net/forum?id=D2eOVqPX9g | Chenyu Zhang,Han Wang,Aritra Mitra,James Anderson | ICLR 2024,Poster | Federated reinforcement learning (FRL) has emerged as a promising paradigm for reducing the sample complexity of reinforcement learning tasks by exploiting information from different agents. However, when each agent interacts with a potentially different environment, little to nothing is known theoretically about the non-asymptotic performance of FRL algorithms. The lack of such results can be attributed to various technical challenges and their intricate interplay: Markovian sampling, linear function approximation, multiple local updates to save communication, heterogeneity in the reward functions and transition kernels of the agents' MDPs, and continuous state-action spaces. Moreover, in the on-policy setting, the behavior policies vary with time, further complicating the analysis. In response, we introduce FedSARSA, a novel federated on-policy reinforcement learning scheme, equipped with linear function approximation, to address these challenges and provide a comprehensive finite-time error analysis. Notably, we establish that FedSARSA converges to a policy that is near-optimal for all agents, with the extent of near-optimality proportional to the level of heterogeneity. Furthermore, we prove that FedSARSA leverages agent collaboration to enable linear speedups as the number of agents increases, which holds for both fixed and adaptive step-size configurations. | https://openreview.net/pdf/f97ca95cc9cf273902a1eba91646203707a42223.pdf |
Federated Q-Learning: Linear Regret Speedup with Low Communication Cost | https://openreview.net/forum?id=fe6ANBxcKM | https://openreview.net/forum?id=fe6ANBxcKM | Zhong Zheng,Fengyu Gao,Lingzhou Xue,Jing Yang | ICLR 2024,Poster | In this paper, we consider federated reinforcement learning for tabular episodic Markov Decision Processes (MDP) where, under the coordination of a central server, multiple agents collaboratively explore the environment and learn an optimal policy without sharing their raw data. While linear speedup in the number of agents has been achieved for some metrics, such as convergence rate and sample complexity, in similar settings, it is unclear whether it is possible to design a *model-free* algorithm to achieve linear *regret* speedup with low communication cost. We propose two federated Q-Learning algorithms termed as FedQ-Hoeffding and FedQ-Bernstein, respectively, and show that the corresponding total regrets achieve a linear speedup compared with their single-agent counterparts, while the communication cost scales logarithmically in the total number of time steps $T$. Those results rely on an event-triggered synchronization mechanism between the agents and the server, a novel step size selection when the server aggregates the local estimates of the state-action values to form the global estimates, and a set of new concentration inequalities to bound the sum of non-martingale differences. This is the first work showing that linear regret speedup and logarithmic communication cost can be achieved by model-free algorithms in federated reinforcement learning. | https://openreview.net/pdf/6ae0807140d8835f40da63717a9baa1749faec87.pdf |
The Trickle-down Impact of Reward Inconsistency on RLHF | https://openreview.net/forum?id=MeHmwCDifc | https://openreview.net/forum?id=MeHmwCDifc | Lingfeng Shen,Sihao Chen,Linfeng Song,Lifeng Jin,Baolin Peng,Haitao Mi,Daniel Khashabi,Dong Yu | ICLR 2024,Poster | Standard practice within Reinforcement Learning from Human Feedback (RLHF) involves optimizing against a Reward Model (RM), which itself is trained to reflect human preferences for desirable generations. A notable subject that is understudied is the (in-)consistency of RMs --- whether they can recognize semantic changes to different prompts and appropriately adapt their reward assignments --- and their impact on the downstream RLHF model.
In this paper, we visit a series of research questions relevant to RM inconsistency:
(1) How can we measure the consistency of reward models?
(2) How consistent are the existing RMs and how can we improve them?
(3) In what ways does reward inconsistency influence the chatbots resulting from RLHF model training?
We propose **Contrast Instruction** -- a benchmarking strategy for the consistency of RMs.
Each example in **Contrast Instruction** features a pair of lexically similar instructions with different ground-truth responses. A consistent RM is expected to rank the corresponding instruction and response higher than other combinations. We observe that current RMs trained with the standard ranking objective fail miserably on **Contrast Instruction** compared to average humans. To show that RM consistency can be improved efficiently without using extra training budget, we propose two techniques, **ConvexDA** and **RewardFusion**, which enhance reward consistency through extrapolation during the RM training and inference stages, respectively.
We show that RLHF models trained with a more consistent RM yield more useful responses, suggesting that reward inconsistency exhibits a trickle-down effect on the downstream RLHF process. | https://openreview.net/pdf/a8dad7978440d43316bc0727f7c324cbffe5e4c0.pdf
Efficient Modulation for Vision Networks | https://openreview.net/forum?id=ip5LHJs6QX | https://openreview.net/forum?id=ip5LHJs6QX | Xu Ma,Xiyang Dai,Jianwei Yang,Bin Xiao,Yinpeng Chen,Yun Fu,Lu Yuan | ICLR 2024,Poster | In this work, we present efficient modulation, a novel design for efficient vision networks. We revisit the modulation mechanism, which operates input through convolutional context modeling and feature projection layers, and fuses features via element-wise multiplication and an MLP block. We demonstrate that the abstracted modulation mechanism is particularly well suited for efficient networks and further tailor the modulation design by proposing the efficient modulation (EfficientMod) block, which is considered the essential building block for our networks. Benefiting from the prominent representational ability of the modulation mechanism and the efficiency of the efficient modulation design, our network can accomplish better accuracy-efficiency trade-offs and set new state-of-the-art performance for efficient networks. When integrating the EfficientMod block with the vanilla self-attention block, we obtain the hybrid architecture and further improve the performance without sacrificing the efficiency. We carry out comprehensive experiments to verify EfficientMod’s performance. With fewer parameters, our EfficientMod-s performs 0.6 top-1 accuracy better than the prior state-of-the-art approach EfficientFormerV2-s2 without any training tricks and is 25% faster on GPU. Additionally, our method presents a notable improvement in downstream tasks, outperforming EfficientFormerV2-s by 3.6 mIoU on the ADE20K benchmark. Code and checkpoints are available at https://github.com/ma-xu/EfficientMod. | https://openreview.net/pdf/60b84aa807789b9bf4b5e8f2ff637e481c3cd2c8.pdf
Pre-training LiDAR-based 3D Object Detectors through Colorization | https://openreview.net/forum?id=fB1iiH9xo7 | https://openreview.net/forum?id=fB1iiH9xo7 | Tai-Yu Pan,Chenyang Ma,Tianle Chen,Cheng Perng Phoo,Katie Z Luo,Yurong You,Mark Campbell,Kilian Q Weinberger,Bharath Hariharan,Wei-Lun Chao | ICLR 2024,Poster | Accurate 3D object detection and understanding for self-driving cars heavily relies on LiDAR point clouds, necessitating large amounts of labeled data to train. In this work, we introduce an innovative pre-training approach, Grounded Point Colorization (GPC), to bridge the gap between data and labels by teaching the model to colorize LiDAR point clouds, equipping it with valuable semantic cues. To tackle challenges arising from color variations and selection bias, we incorporate color as "context" by providing ground-truth colors as hints during colorization.
Experimental results on the KITTI and Waymo datasets demonstrate GPC's remarkable effectiveness. Even with limited labeled data, GPC significantly improves fine-tuning performance; notably, on just 20% of the KITTI dataset, GPC outperforms training from scratch with the entire dataset.
In sum, we introduce a fresh perspective on pre-training for 3D object detection, aligning the objective with the model's intended role and ultimately advancing the accuracy and efficiency of 3D object detection for autonomous vehicles. | https://openreview.net/pdf/bcbf057d60c81f10beb0dc381109a0616f19d030.pdf |
An Emulator for Fine-tuning Large Language Models using Small Language Models | https://openreview.net/forum?id=Eo7kv0sllr | https://openreview.net/forum?id=Eo7kv0sllr | Eric Mitchell,Rafael Rafailov,Archit Sharma,Chelsea Finn,Christopher D Manning | ICLR 2024,Poster | Widely used language models (LMs) are typically built by scaling up a two-stage training pipeline: a pre-training stage that uses a very large, diverse dataset of text and a fine-tuning (sometimes, 'alignment') stage that uses targeted examples or other specifications of desired behaviors. While it has been hypothesized that knowledge and skills come from pre-training, and fine-tuning mostly filters this knowledge and skillset, this intuition has not been extensively tested. To aid in doing so, we introduce a novel technique for decoupling the knowledge and skills gained in these two stages, enabling a direct answer to the question, *What would happen if we combined the knowledge learned by a large model during pre-training with the knowledge learned by a small model during fine-tuning (or vice versa)?* Using an RL-based framework derived from recent developments in learning from human preferences, we introduce *emulated fine-tuning (EFT)*, a principled and practical method for sampling from a distribution that approximates (or 'emulates') the result of pre-training and fine-tuning at different scales. Our experiments with EFT show that scaling up fine-tuning tends to improve helpfulness, while scaling up pre-training tends to improve factuality. Beyond decoupling scale, we show that EFT enables test-time adjustment of competing behavioral traits like helpfulness and harmlessness without additional training. Finally, a special case of emulated fine-tuning, which we call LM *up-scaling*, avoids resource-intensive fine-tuning of large pre-trained models by ensembling them with small fine-tuned models, essentially emulating the result of fine-tuning the large pre-trained model. Up-scaling consistently improves helpfulness and factuality of instruction-following models in the Llama, Llama-2, and Falcon families, without additional hyperparameters or training. For reference implementation, see [https://github.com/eric-mitchell/emulated-fine-tuning](https://github.com/eric-mitchell/emulated-fine-tuning). | https://openreview.net/pdf/b0342e1462535a7e1a40bac079cefdfd493a9912.pdf |
Toward Student-oriented Teacher Network Training for Knowledge Distillation | https://openreview.net/forum?id=wsWGcw6qKD | https://openreview.net/forum?id=wsWGcw6qKD | Chengyu Dong,Liyuan Liu,Jingbo Shang | ICLR 2024,Poster | How to conduct teacher training for knowledge distillation is still an open problem. It has been widely observed that a best-performing teacher does not necessarily yield the best-performing student, suggesting a fundamental discrepancy between the current teacher training practice and the ideal teacher training strategy. To fill this gap, we explore the feasibility of training a teacher that is oriented toward student performance with empirical risk minimization (ERM). Our analyses are inspired by the recent findings that the effectiveness of knowledge distillation hinges on the teacher’s capability to approximate the true label distribution of training inputs. We theoretically establish that ERM minimizer can approximate the true label distribution of training data as long as the feature extractor of the learner network is Lipschitz continuous and is robust to feature transformations. In light of our theory, we propose a teacher training method SoTeacher which incorporates Lipschitz regularization and consistency regularization into ERM. Experiments on benchmark datasets using various knowledge distillation algorithms and teacher-student pairs confirm that SoTeacher can improve student accuracy consistently. | https://openreview.net/pdf/ce44a21c493bb4865d221811484ede3d170750a4.pdf |
Language Models Represent Space and Time | https://openreview.net/forum?id=jE8xbmvFin | https://openreview.net/forum?id=jE8xbmvFin | Wes Gurnee,Max Tegmark | ICLR 2024,Poster | The capabilities of large language models (LLMs) have sparked debate over whether such systems just learn an enormous collection of superficial statistics or a set of more coherent and grounded representations that reflect the real world. We find evidence for the latter by analyzing the learned representations of three spatial datasets (world, US, NYC places) and three temporal datasets (historical figures, artworks, news headlines) in the Llama-2 family of models. We discover that LLMs learn linear representations of space and time across multiple scales. These representations are robust to prompting variations and unified across different entity types (e.g. cities and landmarks). In addition, we identify individual "space neurons" and "time neurons" that reliably encode spatial and temporal coordinates. While further investigation is needed, our results suggest modern LLMs learn rich spatiotemporal representations of the real world and possess basic ingredients of a world model. | https://openreview.net/pdf/c50ebbefe4d77434016a143c31d247f30948dd6c.pdf |
Causal Modelling Agents: Causal Graph Discovery through Synergising Metadata- and Data-driven Reasoning | https://openreview.net/forum?id=pAoqRlTBtY | https://openreview.net/forum?id=pAoqRlTBtY | Ahmed Abdulaal,adamos hadjivasiliou,Nina Montana-Brown,Tiantian He,Ayodeji Ijishakin,Ivana Drobnjak,Daniel C. Castro,Daniel C. Alexander | ICLR 2024,Poster | Scientific discovery hinges on the effective integration of metadata, which refers to a set of 'cognitive' operations such as determining what information is relevant for inquiry, and data, which encompasses physical operations such as observation and experimentation. This paper introduces the Causal Modelling Agent (CMA), a novel framework that synergizes the metadata-based reasoning capabilities of Large Language Models (LLMs) with the data-driven modelling of Deep Structural Causal Models (DSCMs) for the task of causal discovery. We evaluate the CMA's performance on a number of benchmarks, as well as on the real-world task of modelling the clinical and radiological phenotype of Alzheimer's Disease (AD). Our experimental results indicate that the CMA can outperform previous data-driven or metadata-driven approaches to causal discovery. In our real-world application, we use the CMA to derive new insights into the causal relationships among biomarkers of AD. | https://openreview.net/pdf/62fc3766e10c6f5fa2f2a9b44b46098519f89596.pdf |
Fast-ELECTRA for Efficient Pre-training | https://openreview.net/forum?id=8OBuqbLb8h | https://openreview.net/forum?id=8OBuqbLb8h | Chengyu Dong,Liyuan Liu,Hao Cheng,Jingbo Shang,Jianfeng Gao,Xiaodong Liu | ICLR 2024,Poster | ELECTRA pre-trains language models by detecting tokens in a sequence that have been replaced by an auxiliary model. Although ELECTRA offers a significant boost in efficiency, its potential is constrained by the training cost brought by the auxiliary model. Notably, this model, which is jointly trained with the main model, only serves to assist the training of the main model and is discarded post-training. This results in a substantial amount of training cost being expended in vain. To mitigate this issue, we propose Fast-ELECTRA, which leverages an existing language model as the auxiliary model. To construct a learning curriculum for the main model, we smooth its output distribution via temperature scaling following a descending schedule. Our approach rivals the performance of state-of-the-art ELECTRA-style pre-training methods, while significantly eliminating the computation and memory cost brought by the joint training of the auxiliary model. Our method also reduces the sensitivity to hyper-parameters and enhances the pre-training stability. | https://openreview.net/pdf/1d4b13edd818d04501c0e1edf4751b54a5858f09.pdf |
Maximum Entropy Model Correction in Reinforcement Learning | https://openreview.net/forum?id=kNpSUN0uCc | https://openreview.net/forum?id=kNpSUN0uCc | Amin Rakhsha,Mete Kemertas,Mohammad Ghavamzadeh,Amir-massoud Farahmand | ICLR 2024,Poster | We propose and theoretically analyze an approach for planning with an approximate model in reinforcement learning that can reduce the adverse impact of model error. If the model is accurate enough, it accelerates the convergence to the true value function too. One of its key components is the MaxEnt Model Correction (MoCo) procedure that corrects the model’s next-state distributions based on a Maximum Entropy density estimation formulation. Based on MoCo, we introduce the Model Correcting Value Iteration (MoCoVI) algorithm, and its sampled-based variant MoCoDyna. We show that MoCoVI and MoCoDyna’s convergence can be much faster than the conventional model-free algorithms. Unlike traditional model-based algorithms, MoCoVI and MoCoDyna effectively utilize an approximate model and still converge to the correct value function. | https://openreview.net/pdf/1f91adc5c8d10f07321994671b62ab5b8ced10cb.pdf |
SpaCE: The Spatial Confounding Environment | https://openreview.net/forum?id=D9rJdtmIG6 | https://openreview.net/forum?id=D9rJdtmIG6 | Mauricio Tec,Ana Trisovic,Michelle Audirac,Sophie Mirabai Woodward,Jie Kate Hu,Naeem Khoshnevis,Francesca Dominici | ICLR 2024,Poster | Spatial confounding poses a significant challenge in scientific studies involving spatial data, where unobserved spatial variables can influence both treatment and outcome, possibly leading to spurious associations. To address this problem, we introduce SpaCE: The Spatial Confounding Environment, the first toolkit to provide realistic benchmark datasets and tools for systematically evaluating causal inference methods designed to alleviate spatial confounding. Each dataset includes training data, true counterfactuals, a spatial graph with coordinates, and smoothness and confounding scores characterizing the effect of a missing spatial confounder. It also includes realistic semi-synthetic outcomes and counterfactuals, generated using state-of-the-art machine learning ensembles, following best practices for causal inference benchmarks. The datasets cover real treatment and covariates from diverse domains, including climate, health and social sciences. SpaCE facilitates an automated end-to-end pipeline, simplifying data loading, experimental setup, and evaluating machine learning and causal inference models. The SpaCE project provides several dozens of datasets of diverse sizes and spatial complexity. It is publicly available as a Python package, encouraging community feedback and contributions. | https://openreview.net/pdf/eeea8a3c3f7c7a89a04d83b01ce69520da74f097.pdf |
Language Model Detectors Are Easily Optimized Against | https://openreview.net/forum?id=4eJDMjYZZG | https://openreview.net/forum?id=4eJDMjYZZG | Charlotte Nicks,Eric Mitchell,Rafael Rafailov,Archit Sharma,Christopher D Manning,Chelsea Finn,Stefano Ermon | ICLR 2024,Poster | The fluency and general applicability of large language models (LLMs) have motivated significant interest in detecting whether a piece of text was written by a language model. While both academic and commercial detectors have been deployed in some settings, particularly education, other research has highlighted the fragility of these systems. In this paper, we demonstrate a data-efficient attack that fine-tunes language models to confuse existing detectors, leveraging recent developments in reinforcement learning of language models. We use the `human-ness' score (often just a log probability) of various open-source and commercial detectors as a reward function for reinforcement learning, subject to a KL-divergence constraint that the resulting model does not differ significantly from the original. For a 7B parameter Llama-2 model, fine-tuning for under a day reduces the AUROC of the OpenAI RoBERTa-Large detector from 0.84 to 0.63, while perplexity on OpenWebText increases from 8.7 to only 9.0; with a larger perplexity budget, we can drive AUROC to 0.30 (worse than random). Similar to traditional adversarial attacks, we find that this increase in 'detector evasion' generalizes to other detectors not used during training. In light of our empirical results, we advise against continued reliance on LLM-generated text detectors. Models, datasets, and selected experiment code will be released at https://github.com/charlottttee/llm-detector-evasion. | https://openreview.net/pdf/9a04cc3f1effc953fdd1e29092804ea28ce0eb7f.pdf
Zero-Shot Robotic Manipulation with Pre-Trained Image-Editing Diffusion Models | https://openreview.net/forum?id=c0chJTSbci | https://openreview.net/forum?id=c0chJTSbci | Kevin Black,Mitsuhiko Nakamoto,Pranav Atreya,Homer Rich Walke,Chelsea Finn,Aviral Kumar,Sergey Levine | ICLR 2024,Poster | If generalist robots are to operate in truly unstructured environments, they need to be able to recognize and reason about novel objects and scenarios. Such objects and scenarios might not be present in the robot’s own training data. We propose SuSIE, a method that leverages an image-editing diffusion model to act as a high-level planner by proposing intermediate subgoals that a low-level controller can accomplish. Specifically, we finetune InstructPix2Pix on video data, consisting of both human videos and robot rollouts, such that it outputs hypothetical future “subgoal” observations given the robot’s current observation and a language command. We also use the robot data to train a low-level goal-conditioned policy to act as the aforementioned low-level controller. We find that the high-level subgoal predictions can utilize Internet scale pretraining and visual understanding to guide the low-level goal-conditioned policy, achieving significantly better generalization and precision than conventional language-conditioned policies. We achieve state-of-the-art results on the CALVIN benchmark, and also demonstrate robust generalization on real-world manipulation tasks, beating strong baselines that have access to privileged information or that utilize orders of magnitude more compute and training data. The project website can be found at http://rail-berkeley.github.io/susie. | https://openreview.net/pdf/c0ae16a0a57aa4ec4f933f90e44a5e9f250f076e.pdf |
Simple Hierarchical Planning with Diffusion | https://openreview.net/forum?id=kXHEBK9uAY | https://openreview.net/forum?id=kXHEBK9uAY | Chang Chen,Fei Deng,Kenji Kawaguchi,Caglar Gulcehre,Sungjin Ahn | ICLR 2024,Poster | Diffusion-based generative methods have proven effective in modeling trajectories with offline datasets. However, they often face computational challenges and can falter in generalization, especially in capturing temporal abstractions for long-horizon tasks. To overcome this, we introduce the Hierarchical Diffuser, a simple, fast, yet effective planning method combining the advantages of hierarchical and diffusion-based planning. Our model adopts a “jumpy” planning strategy at the high level, which allows it to have a larger receptive field but at a lower computational cost—a crucial factor for diffusion-based planning methods, as we have empirically verified. Additionally, the jumpy sub-goals guide our low-level planner, facilitating a fine-tuning stage and further improving our approach’s effectiveness. We conducted empirical evaluations on standard offline reinforcement learning benchmarks, demonstrating our method’s superior performance and efficiency in terms of training and planning speed compared to the non-hierarchical Diffuser as well as other hierarchical planning methods. Moreover, we explore our model’s generalization capability, particularly on how our method improves generalization capabilities on compositional out-of-distribution tasks. | https://openreview.net/pdf/cd376c92489e21ca9086764bc0ac0d95877b8ad5.pdf |
Stochastic Gradient Descent for Gaussian Processes Done Right | https://openreview.net/forum?id=fj2E5OcLFn | https://openreview.net/forum?id=fj2E5OcLFn | Jihao Andreas Lin,Shreyas Padhy,Javier Antoran,Austin Tripp,Alexander Terenin,Csaba Szepesvari,José Miguel Hernández-Lobato,David Janz | ICLR 2024,Poster | As is well known, both sampling from the posterior and computing the mean of the posterior in Gaussian process regression reduces to solving a large linear system of equations. We study the use of stochastic gradient descent for solving this linear system, and show that when done right---by which we mean using specific insights from the optimisation and kernel communities---stochastic gradient descent is highly effective. To that end, we introduce a particularly simple stochastic dual descent algorithm, explain its design in an intuitive manner and illustrate the design choices through a series of ablation studies. Further experiments demonstrate that our new method is highly competitive. In particular, our evaluations on the UCI regression tasks and on Bayesian optimisation set our approach apart from preconditioned conjugate gradients and variational Gaussian process approximations. Moreover, our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction. | https://openreview.net/pdf/1a24ae39cc44caaeb65f4c46067a7b5c53a0ed95.pdf |
GAFormer: Enhancing Timeseries Transformers Through Group-Aware Embeddings | https://openreview.net/forum?id=c56TWtYp0W | https://openreview.net/forum?id=c56TWtYp0W | Jingyun Xiao,Ran Liu,Eva L Dyer | ICLR 2024,Poster | Analyzing multivariate time series is important in many domains. However, it has been difficult to learn robust and generalizable representations within multivariate datasets due to complex inter-channel relationships and dynamic shifts. In this paper, we introduce a novel approach for learning spatiotemporal structure and using it to improve the application of transformers to timeseries datasets. Our framework learns a set of group tokens, and builds an instance-specific group embedding (GE) layer that assigns input tokens to a small number of group tokens to incorporate structure into learning. We then introduce a novel architecture, Group-Aware transFormer (GAFormer), which incorporates both spatial and temporal group embeddings to achieve state-of-the-art performance on a number of time-series classification and regression tasks. In evaluations on a number of diverse timeseries datasets, we show that GE on its own can provide a nice enhancement to a number of backbones, and that by coupling spatial and temporal group embeddings, the GAFormer can outperform the existing baselines. Finally, we show how our approach discerns latent structures in data even without information about the spatial ordering of channels, and yields a more interpretable decomposition of spatial and temporal structure underlying complex multivariate datasets. | https://openreview.net/pdf/c6fe3e477e52b832a4b26eaa7a2d211c301b44b7.pdf |
Why is SAM Robust to Label Noise? | https://openreview.net/forum?id=3aZCPl3ZvR | https://openreview.net/forum?id=3aZCPl3ZvR | Christina Baek,J Zico Kolter,Aditi Raghunathan | ICLR 2024,Poster | Sharpness-Aware Minimization (SAM) is most known for achieving state-of-the-art performance on natural image and language tasks. However, its most pronounced improvements (of tens of percent) are in the presence of label noise. Understanding SAM's label noise robustness requires a departure from characterizing the robustness of minima lying in ``flatter'' regions of the loss landscape. In particular, the peak performance under label noise occurs with early stopping, far before the loss converges. We decompose SAM's robustness into two effects: one induced by changes to the logit term and the other induced by changes to the network Jacobian. The first can be observed in linear logistic regression where SAM provably up-weights the gradient contribution from clean examples. Although this explicit up-weighting is also observable in neural networks, when we intervene and modify SAM to remove this effect, surprisingly, we see no visible degradation in performance. We infer that SAM's effect in deeper networks is instead explained entirely by the effect SAM has on the network Jacobian. We theoretically derive the implicit regularization induced by this Jacobian effect in two-layer linear networks. Motivated by our analysis, we see that cheaper alternatives to SAM that explicitly induce these regularization effects largely recover the benefits in deep networks trained on real-world datasets. | https://openreview.net/pdf/71206b659a568dfd25c11ddc958dfbf262274392.pdf
Revisiting the Last-Iterate Convergence of Stochastic Gradient Methods | https://openreview.net/forum?id=xxaEhwC1I4 | https://openreview.net/forum?id=xxaEhwC1I4 | Zijian Liu,Zhengyuan Zhou | ICLR 2024,Poster | In the past several years, the last-iterate convergence of the Stochastic Gradient Descent (SGD) algorithm has attracted considerable interest due to its good performance in practice but limited theoretical understanding. For Lipschitz convex functions, different works have established the optimal $O(\log(1/\delta)\log T/\sqrt{T})$ or $O(\sqrt{\log(1/\delta)/T})$ high-probability convergence rates for the final iterate, where $T$ is the time horizon and $\delta$ is the failure probability. However, to prove these bounds, all the existing works are either limited to compact domains or require almost surely bounded noise. It is natural to ask whether the last iterate of SGD can still guarantee the optimal convergence rate without these two restrictive assumptions. Besides this important question, many theoretical problems still lack an answer. For example, compared with the last-iterate convergence of SGD for non-smooth problems, only a few results for smooth optimization have been developed. Additionally, the existing results are all limited to a non-composite objective and the standard Euclidean norm. It still remains unclear whether the last-iterate convergence can be provably extended to wider composite optimization and non-Euclidean norms. In this work, to address the issues mentioned above, we revisit the last-iterate convergence of stochastic gradient methods and provide the first unified way to prove the convergence rates both in expectation and in high probability to accommodate general domains, composite objectives, non-Euclidean norms, Lipschitz conditions, smoothness, and (strong) convexity simultaneously. | https://openreview.net/pdf/49e36604c5405004e38defe39ca3ff6ecf070ca6.pdf
CNN Kernels Can Be the Best Shapelets | https://openreview.net/forum?id=O8ouVV8PjF | https://openreview.net/forum?id=O8ouVV8PjF | Eric Qu,Yansen Wang,Xufang Luo,Wenqiang He,Kan Ren,Dongsheng Li | ICLR 2024,Poster | Shapelets and CNN are two typical approaches to model time series. Shapelets aim at finding a set of sub-sequences that extract feature-based interpretable shapes, but may suffer from accuracy and efficiency issues. CNN performs well by encoding sequences with a series of hidden representations, but lacks interpretability. In this paper, we demonstrate that shapelets are essentially equivalent to a specific type of CNN kernel with a squared norm and pooling. Based on this finding, we propose ShapeConv, an interpretable CNN layer with its kernel serving as shapelets to conduct time-series modeling tasks in both supervised and unsupervised settings. By incorporating shaping regularization, we enforce the similarity for maximum interpretability. We also find human knowledge can be easily injected into ShapeConv by adjusting its initialization, and model performance is boosted with it. Experiments show that ShapeConv can achieve state-of-the-art performance on time-series benchmarks without sacrificing interpretability and controllability. | https://openreview.net/pdf/69b308cc2c2320f9051d94361939bb8848074ab0.pdf
Fine-Tuning Language Models for Factuality | https://openreview.net/forum?id=WPZ2yPag4K | https://openreview.net/forum?id=WPZ2yPag4K | Katherine Tian,Eric Mitchell,Huaxiu Yao,Christopher D Manning,Chelsea Finn | ICLR 2024,Poster | The fluency and creativity of large pre-trained language models (LLMs) have led to their widespread use, sometimes even as a replacement for traditional search engines. Yet language models are prone to making convincing but factually inaccurate claims, often referred to as `hallucinations.' These errors can inadvertently spread misinformation or harmfully perpetuate misconceptions. Further, manual fact-checking of model responses is a time-consuming process, making human factuality labels expensive to acquire. In this work, we fine-tune language models to be more factual, without human labeling and targeting more open-ended generation settings than past work. We leverage two key recent innovations in NLP to do so. First, several recent works have proposed methods for judging the factuality of open-ended text by measuring consistency with an external knowledge base or simply a large model's confidence scores. Second, the Direct Preference Optimization algorithm enables straightforward fine-tuning of language models on objectives other than supervised imitation, using a preference ranking over possible model responses. We show that learning from automatically generated factuality preference rankings, generated either through existing retrieval systems or our novel retrieval-free approach, significantly improves the factuality (percent of generated claims that are correct) of Llama-2 on held-out topics compared with RLHF or decoding strategies targeted at factuality. At 7B scale, compared to Llama-2-Chat, we observe 53% and 50% reduction in factual error rate when generating biographies and answering medical questions, respectively. A reference implementation can be found at https://github.com/kttian/llm_factuality_tuning. | https://openreview.net/pdf/f90a225c9859565a8e1ed01840ea046b406c7d4f.pdf |
Soft Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity | https://openreview.net/forum?id=dEz3ge8QSo | https://openreview.net/forum?id=dEz3ge8QSo | Runyu Zhang,Yang Hu,Na Li | ICLR 2024,Poster | Robust Markov Decision Processes (MDPs) and risk-sensitive MDPs are both powerful tools for making decisions in the presence of uncertainties. Previous efforts have aimed to establish their connections, revealing equivalences in specific formulations. This paper introduces a new formulation for risk-sensitive MDPs, which assesses risk in a slightly different manner compared to the classical Markov risk measure [Ruszczyński 2010], and establishes its equivalence with a class of soft robust MDP (RMDP) problems, including the standard RMDP as a special case. Leveraging this equivalence, we further derive the policy gradient theorem for both problems, proving gradient domination and global convergence of the exact policy gradient method under the tabular setting with direct parameterization. This forms a sharp contrast to the Markov risk measure, known to be potentially non-gradient-dominant [Huang et al. 2021]. We also propose a sample-based offline learning algorithm, namely the robust fitted-Z iteration (RFZI), for a specific soft RMDP problem with a KL-divergence regularization term (or equivalently the risk-sensitive MDP with an entropy risk measure). We showcase its streamlined design and less stringent assumptions due to the equivalence and analyze its sample complexity. | https://openreview.net/pdf/317b11f5d5ce4a86be220bbd6715b66f4a55103a.pdf
Tensor Programs VI: Feature Learning in Infinite Depth Neural Networks | https://openreview.net/forum?id=17pVDnpwwl | https://openreview.net/forum?id=17pVDnpwwl | Greg Yang,Dingli Yu,Chen Zhu,Soufiane Hayou | ICLR 2024,Poster | Empirical studies have consistently demonstrated that increasing the size of neural networks often yields superior performance in practical applications. However, there is a lack of consensus regarding the appropriate scaling strategy, particularly when it comes to increasing the depth of neural networks. In practice, excessively large depths can lead to model performance degradation. In this paper, we introduce Depth-$\mu$P, a principled approach for depth scaling, allowing for the training of arbitrarily deep architectures while maximizing feature learning and diversity among nearby layers. Our method involves dividing the contribution of each residual block and the parameter update by the square root of the depth. Through the use of Tensor Programs, we rigorously establish the existence of a limit for infinitely deep neural networks under the proposed scaling scheme. This scaling strategy ensures more stable training for deep neural networks and guarantees the transferability of hyperparameters from shallow to deep models. To substantiate the efficacy of our scaling method, we conduct empirical validation on neural networks with depths up to $2^{10}$. | https://openreview.net/pdf/a2d69b77708b87f741baac0303581cfb7924d0b7.pdf |