title | url | detail_url | authors | tags | abstract | pdf
---|---|---|---|---|---|---
Protein-Ligand Interaction Prior for Binding-aware 3D Molecule Diffusion Models | https://openreview.net/forum?id=qH9nrMNTIW | https://openreview.net/forum?id=qH9nrMNTIW | Zhilin Huang,Ling Yang,Xiangxin Zhou,Zhilong Zhang,Wentao Zhang,Xiawu Zheng,Jie Chen,Yu Wang,Bin CUI,Wenming Yang | ICLR 2024,Poster | Generating 3D ligand molecules that bind to specific protein targets via diffusion models has shown great promise for structure-based drug design. The key idea is to disrupt molecules into noise through a fixed forward process and learn the reverse process to generate molecules from noise in a denoising manner. However, existing diffusion models primarily focus on incorporating protein-ligand interaction information solely in the reverse process, and neglect the interactions in the forward process. This inconsistency between the forward and reverse processes may impair the binding affinity of generated molecules towards the target protein. In this paper, we propose a novel Interaction Prior-guided Diffusion model (IPDiff) for protein-specific 3D molecular generation by introducing geometric protein-ligand interactions into both the diffusion and sampling processes. Specifically, we begin by pretraining a protein-ligand interaction prior network (IPNet) using binding affinity signals as supervision. Subsequently, we leverage the pretrained prior network to (1) integrate interactions between the target protein and the molecular ligand into the forward process to adapt the molecule diffusion trajectories (prior-shifting), and (2) enhance the binding-aware molecule sampling process (prior-conditioning). Empirical studies on the CrossDocked2020 dataset show that IPDiff can generate molecules with more realistic 3D structures and state-of-the-art binding affinities towards the protein targets, with an Avg. Vina Score of up to -6.42, while maintaining proper molecular properties. https://github.com/YangLing0818/IPDiff | https://openreview.net/pdf/267395374ebd7267af8fd7a2fc29d6b14fd3902d.pdf |
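As context for the fixed forward (noising) process this abstract builds on, here is a minimal sketch of a standard DDPM forward sample in Python. The linear beta schedule and variable names are illustrative assumptions, not IPDiff's code; IPDiff's prior-shifting adapts this otherwise fixed trajectory using its pretrained interaction prior.

```python
import torch

def ddpm_forward_sample(x0: torch.Tensor, t: int, alpha_bar: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    abar_t = alpha_bar[t]
    noise = torch.randn_like(x0)
    return abar_t.sqrt() * x0 + (1.0 - abar_t).sqrt() * noise

# Linear beta schedule over T = 1000 steps (a common, assumed choice).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

x0 = torch.randn(32, 3)  # e.g., 32 atoms with 3D coordinates
xt = ddpm_forward_sample(x0, t=500, alpha_bar=alpha_bar)
```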
Think-on-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph | https://openreview.net/forum?id=nnVO1PvbTv | https://openreview.net/forum?id=nnVO1PvbTv | Jiashuo Sun,Chengjin Xu,Lumingyuan Tang,Saizhuo Wang,Chen Lin,Yeyun Gong,Lionel Ni,Heung-Yeung Shum,Jian Guo | ICLR 2024,Poster | Although large language models (LLMs) have achieved significant success in various tasks, they often struggle with hallucination problems, especially in scenarios requiring deep and responsible reasoning. These issues could be partially addressed by introducing external knowledge graphs (KGs) into LLM reasoning. In this paper, we propose a new LLM-KG integrating paradigm ``$\hbox{LLM}\otimes\hbox{KG}$'' which treats the LLM as an agent that interactively explores related entities and relations on KGs and performs reasoning based on the retrieved knowledge. We further implement this paradigm by introducing a new approach called Think-on-Graph (ToG), in which the LLM agent iteratively executes beam search on the KG, discovers the most promising reasoning paths, and returns the most likely reasoning results. We use a number of well-designed experiments to examine and illustrate the following advantages of ToG: 1) compared with LLMs, ToG has better deep reasoning power; 2) ToG has the ability of knowledge traceability and knowledge correctability by leveraging LLMs' reasoning and expert feedback; 3) ToG provides a flexible plug-and-play framework for different LLMs, KGs and prompting strategies without any additional training cost; 4) the performance of ToG with small LLMs can exceed that of large LLMs such as GPT-4 in certain scenarios, which reduces the cost of LLM deployment and application. As a training-free method with lower computational cost and better generality, ToG achieves overall SOTA on 6 out of 9 datasets where most previous SOTAs rely on additional training. | https://openreview.net/pdf/0757798b24b660d5ff9f6542c94db905d04fc77f.pdf |
Self-Supervised Heterogeneous Graph Learning: a Homophily and Heterogeneity View | https://openreview.net/forum?id=3FJOKjooIj | https://openreview.net/forum?id=3FJOKjooIj | Yujie Mo,Feiping Nie,Ping Hu,Heng Tao Shen,Zheng Zhang,Xinchao Wang,Xiaofeng Zhu | ICLR 2024,Poster | Self-supervised heterogeneous graph learning has achieved promising results in various real applications, but it still suffers from the following issues: (i) meta-paths can be employed to capture the homophily in the heterogeneous graph, but meta-paths are human-defined, requiring substantial expert knowledge and computational costs; and (ii) the heterogeneity in the heterogeneous graph is usually underutilized, leading to the loss of task-related information. To solve these issues, this paper proposes to capture both homophily and heterogeneity in the heterogeneous graph without pre-defined meta-paths. Specifically, we propose to learn a self-expressive matrix to capture the homophily from the subspace and nearby neighbors. Meanwhile, we propose to capture the heterogeneity by aggregating the information of nodes from different types. We further design a consistency loss and a specificity loss, respectively, to extract the consistent information between homophily and heterogeneity and to preserve their specific task-related information. We theoretically analyze that the learned homophilous representations exhibit the grouping effect to capture the homophily, and considering both homophily and heterogeneity introduces more task-related information. Extensive experimental results verify the superiority of the proposed method on different downstream tasks. | https://openreview.net/pdf/6d0439ac7fd7931cb8efaf52238625708ec73d24.pdf |
CoBIT: A Contrastive Bi-directional Image-Text Generation Model | https://openreview.net/forum?id=8ISRqgtjPc | https://openreview.net/forum?id=8ISRqgtjPc | Haoxuan You,Mandy Guo,Zhecan Wang,Kai-Wei Chang,Jason Michael Baldridge,Jiahui Yu | ICLR 2024,Poster | The field of Vision-and-Language (VL) has witnessed a proliferation of pretrained foundation models. Current techniques typically employ only one type of training objective, whether it's (1) contrastive objectives (like CLIP), (2) image-to-text generative objectives (like PaLI), or (3) text-to-image generative objectives (like Parti). However, all three objectives are mutually relevant and are all based on image-text pairs. Intuitively, the first two objectives can be considered complementary projections between the two modalities; contrastive learning preserves global alignment, while generation facilitates fine-grained understanding. Inspired by this, we present a Contrastive Bi-directional Image-Text generation model (CoBIT) to unify, for the first time, the three pre-training objectives in one framework. Specifically, CoBIT employs a novel unicoder-decoder structure consisting of an image unicoder, a text unicoder, and a cross-modal decoder. The image/text unicoders can switch between encoding and decoding in different tasks, enabling flexibility and shared knowledge that benefits both image-to-text and text-to-image generation. CoBIT achieves superior performance in image understanding, image-text understanding (Retrieval, Captioning, VQA, SNLI-VE), and text-based content creation, particularly in zero-shot scenarios. | https://openreview.net/pdf/ac1a12fdacceedb87b909b69525d28df715407e7.pdf |
Protein Multimer Structure Prediction via Prompt Learning | https://openreview.net/forum?id=OHpvivXrQr | https://openreview.net/forum?id=OHpvivXrQr | Ziqi Gao,Xiangguo Sun,Zijing Liu,Yu Li,Hong Cheng,Jia Li | ICLR 2024,Poster | Understanding the 3D structures of protein multimers is crucial, as they play a vital role in regulating various cellular processes. It has been empirically confirmed that multimer structure prediction (MSP) can be well handled in a step-wise assembly fashion using provided dimer structures and predicted protein-protein interactions (PPIs). However, due to the biological gap in the formation of dimers and larger multimers, directly applying PPI prediction techniques often generalizes poorly to the MSP task. To address this challenge, we aim to extend the PPI knowledge to multimers of different scales (i.e., chain numbers). Specifically, we propose PromptMSP, a pre-training and Prompt tuning framework for Multimer Structure Prediction. First, we tailor the source and target tasks for effective PPI knowledge learning and efficient inference, respectively. We design PPI-inspired prompt learning to narrow the gap between the two task formats and generalize the PPI knowledge to multimers of different scales. We provide a meta-learning strategy to learn a reliable initialization of the prompt model, enabling our prompting framework to effectively adapt to limited data for large-scale multimers. Empirically, we achieve both significant accuracy (RMSD and TM-Score) and efficiency improvements compared to advanced MSP models. | https://openreview.net/pdf/6df9f0082c5d94cbbab85b3e0c61db3a72a689b7.pdf |
Domain-Agnostic Molecular Generation with Chemical Feedback | https://openreview.net/forum?id=9rPyHyjfwP | https://openreview.net/forum?id=9rPyHyjfwP | Yin Fang,Ningyu Zhang,Zhuo Chen,Lingbing Guo,Xiaohui Fan,Huajun Chen | ICLR 2024,Poster | The generation of molecules with desired properties has become increasingly popular, revolutionizing the way scientists design molecular structures and providing valuable support for chemical and drug design. However, despite the potential of language models in molecule generation, they face challenges such as generating syntactically or chemically flawed molecules, having narrow domain focus, and struggling to create diverse and feasible molecules due to limited annotated data or external molecular databases.
To tackle these challenges, we introduce MolGen, a pre-trained molecular language model tailored specifically for molecule generation. Through the reconstruction of over 100 million molecular SELFIES, MolGen internalizes structural and grammatical insights. This is further enhanced by domain-agnostic molecular prefix tuning, fostering robust knowledge transfer across diverse domains. Importantly, our chemical feedback paradigm steers the model away from "molecular hallucinations", ensuring alignment between the model's estimated probabilities and real-world chemical preferences. Extensive experiments on well-known benchmarks underscore MolGen's optimization capabilities in properties such as penalized logP, QED, and molecular docking. Additional analyses confirm its proficiency in accurately capturing molecule distributions, discerning intricate structural patterns, and efficiently exploring the chemical space (https://github.com/zjunlp/MolGen). | https://openreview.net/pdf/9fb09309ed88ae5e1812530b4a6bf6bdb41c590e.pdf |
LLM-grounded Video Diffusion Models | https://openreview.net/forum?id=exKHibougU | https://openreview.net/forum?id=exKHibougU | Long Lian,Baifeng Shi,Adam Yala,Trevor Darrell,Boyi Li | ICLR 2024,Poster | Text-conditioned diffusion models have emerged as a promising tool for neural video generation. However, current models still struggle with intricate spatiotemporal prompts and often generate restricted or incorrect motion. To address these limitations, we introduce LLM-grounded Video Diffusion (LVD). Instead of directly generating videos from the text inputs, LVD first leverages a large language model (LLM) to generate dynamic scene layouts based on the text inputs and subsequently uses the generated layouts to guide a diffusion model for video generation. We show that LLMs are able to understand complex spatiotemporal dynamics from text alone and generate layouts that align closely with both the prompts and the object motion patterns typically observed in the real world. We then propose to guide video diffusion models with these layouts by adjusting the attention maps. Our approach is training-free and can be integrated into any video diffusion model that admits classifier guidance. Our results demonstrate that LVD significantly outperforms its base video diffusion model and several strong baseline methods in faithfully generating videos with the desired attributes and motion patterns. | https://openreview.net/pdf/5518c693c6a71d24a512e5bca0230e3853cb6c9c.pdf |
Periodicity Decoupling Framework for Long-term Series Forecasting | https://openreview.net/forum?id=dp27P5HBBt | https://openreview.net/forum?id=dp27P5HBBt | Tao Dai,Beiliang Wu,Peiyuan Liu,Naiqi Li,Jigang Bao,Yong Jiang,Shu-Tao Xia | ICLR 2024,Poster | Convolutional neural network (CNN)-based and Transformer-based methods have recently made significant strides in time series forecasting, excelling at modeling local temporal variations or capturing long-term dependencies. However, real-world time series usually contain intricate temporal patterns, making them challenging for existing methods, which mainly model temporal variations directly from the 1D time series. Based on the intrinsic periodicity of time series, we propose a novel Periodicity Decoupling Framework (PDF) to capture 2D temporal variations of decoupled series for long-term series forecasting. Our PDF mainly consists of three components: multi-periodic decoupling block (MDB), dual variations modeling block (DVMB), and variations aggregation block (VAB). Unlike previous methods that model 1D temporal variations, our PDF mainly models 2D temporal variations, decoupled from 1D time series by MDB. After that, DVMB attempts to further capture short-term and long-term variations, followed by VAB to make final predictions. Extensive experimental results across seven real-world long-term time series datasets demonstrate the superiority of our method over other state-of-the-art methods, in terms of both forecasting performance and computational efficiency. Code is available at https://github.com/Hank0626/PDF. | https://openreview.net/pdf/930c90425bd54115335d9bda13d4a63c60a432f7.pdf |
Imitation Learning from Observation with Automatic Discount Scheduling | https://openreview.net/forum?id=pPJTQYOpNI | https://openreview.net/forum?id=pPJTQYOpNI | Yuyang Liu,Weijun Dong,Yingdong Hu,Chuan Wen,Zhao-Heng Yin,Chongjie Zhang,Yang Gao | ICLR 2024,Poster | Humans often acquire new skills through observation and imitation. For robotic agents, learning from the plethora of unlabeled video demonstration data available on the Internet necessitates imitating the expert without access to its action, presenting a challenge known as Imitation Learning from Observation (ILfO). A common approach to tackle ILfO problems is to convert them into inverse reinforcement learning problems, utilizing a proxy reward computed from the agent's and the expert's observations. Nonetheless, we identify that tasks characterized by a progress dependency property pose significant challenges for such approaches; in these tasks, the agent needs to initially learn the expert's preceding behaviors before mastering the subsequent ones. Our investigation reveals that the main cause is that the reward signals assigned to later steps hinder the learning of initial behaviors. To address this challenge, we present a novel ILfO framework that enables the agent to master earlier behaviors before advancing to later ones. We introduce an Automatic Discount Scheduling (ADS) mechanism that adaptively alters the discount factor in reinforcement learning during the training phase, prioritizing earlier rewards initially and gradually engaging later rewards only when the earlier behaviors have been mastered. Our experiments, conducted on nine Meta-World tasks, demonstrate that our method significantly outperforms state-of-the-art methods across all tasks, including those that are unsolvable by them. Our code is available at https://il-ads.github.io. | https://openreview.net/pdf/f73926c464d89bfaaa3eea9f126267d2979abe23.pdf |
iGraphMix: Input Graph Mixup Method for Node Classification | https://openreview.net/forum?id=a2ljjXeDcE | https://openreview.net/forum?id=a2ljjXeDcE | Jongwon Jeong,Hoyeop Lee,Hyui Geon Yoon,Beomyoung Lee,Junhee Heo,Geonsoo Kim,Kim Jin Seon | ICLR 2024,Poster | Recently, Input Mixup, which augments virtual samples by interpolating input features and corresponding labels, has become one of the promising methods to alleviate over-fitting across various domains, including image classification and natural language processing, because of its ability to generate a variety of virtual samples and its ease of use and versatility. However, designing Input Mixup for node classification is still challenging due to the irregularity issue, i.e., each node has a different number of neighboring nodes as input, and the alignment issue, i.e., how to align and interpolate two sets of neighboring nodes is not well-defined when two nodes are interpolated. To address these issues, this paper proposes a novel Mixup method, called iGraphMix, tailored to node classification. Our method generates virtual nodes and their edges by interpolating input features and labels and attaching sampled neighboring nodes. The virtual graphs generated by iGraphMix serve as inputs for graph neural network (GNN) training, thereby facilitating its easy application to various GNNs and enabling effective combination with other augmentation methods. We mathematically prove that training GNNs with iGraphMix leads to better generalization performance compared to training without augmentation, and our experiments support the theoretical findings. | https://openreview.net/pdf/20bbcc0771871990708b0c30a5f73c039af0205b.pdf |
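To make the interpolation step concrete, below is a minimal sketch of generating one iGraphMix-style virtual node. The Beta-distributed mixing coefficient is standard Input Mixup; the exact neighbor-sampling rule is an assumption inferred from the abstract's "attaching sampled neighboring nodes".

```python
import numpy as np

def virtual_node(x_i, x_j, y_i, y_j, neigh_i, neigh_j, alpha=1.0, rng=None):
    """Interpolate two nodes' features and one-hot labels, then attach
    neighbors sampled from both endpoints (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)               # Mixup mixing coefficient
    x_mix = lam * x_i + (1.0 - lam) * x_j      # interpolate input features
    y_mix = lam * y_i + (1.0 - lam) * y_j      # interpolate labels
    # Assumed rule: keep node i's edges w.p. lam, node j's w.p. 1 - lam.
    edges = [n for n in neigh_i if rng.random() < lam]
    edges += [n for n in neigh_j if rng.random() < 1.0 - lam]
    return x_mix, y_mix, edges
```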
Noise Map Guidance: Inversion with Spatial Context for Real Image Editing | https://openreview.net/forum?id=mhgm0IXtHw | https://openreview.net/forum?id=mhgm0IXtHw | Hansam Cho,Jonghyun Lee,Seoung Bum Kim,Tae-Hyun Oh,Yonghyun Jeong | ICLR 2024,Poster | Text-guided diffusion models have become a popular tool in image synthesis, known for producing high-quality and diverse images. However, their application to editing real images often encounters hurdles, primarily because the text condition deteriorates the reconstruction quality and subsequently affects editing fidelity. Null-text Inversion (NTI) has made strides in this area, but it fails to capture spatial context and requires computationally intensive per-timestep optimization. Addressing these challenges, we present Noise Map Guidance (NMG), an inversion method rich in spatial context, tailored for real-image editing. Significantly, NMG achieves this without necessitating optimization, yet preserves the editing quality. Our empirical investigations highlight NMG's adaptability across various editing techniques and its robustness to variants of DDIM inversion. | https://openreview.net/pdf/fb11ff531234fde98b816c3842cd03693d07c9c5.pdf |
Label-Focused Inductive Bias over Latent Object Features in Visual Classification | https://openreview.net/forum?id=cH3oufN8Pl | https://openreview.net/forum?id=cH3oufN8Pl | Ilmin Kang,HyounYoung Bae,Kangil Kim | ICLR 2024,Poster | Most neural networks for classification primarily learn features differentiated by input-domain related information such as visual similarity of objects in an image. While this focus is natural, it can inadvertently introduce an inductive bias that conflicts with unseen relations in an implicit output-domain determined by human labeling based on world knowledge. Such conflicts can limit the generalization of models when the input-domain focused bias dominates inference.
To overcome this limitation without external resources, we introduce Output-Domain focused Biasing (ODB) training strategy that constructs inductive biases on features differentiated by only output labels. It has four steps: 1) it learns intermediate latent object features in an unsupervised manner; 2) it decouples their visual dependencies by assigning new independent embedding parameters; 3) it captures structured features optimized for the original classification task; and 4) it integrates the structured features with the original visual features for the final prediction.
We implement ODB on a vision transformer architecture and achieve significant improvements on image classification benchmarks. This paper offers a straightforward and effective method for obtaining and utilizing an output-domain focused inductive bias for classification tasks that map between two different domains. | https://openreview.net/pdf/a4274f2a5c2e31176a8060bf6de5f150008a831d.pdf |
Simple Minimax Optimal Byzantine Robust Algorithm for Nonconvex Objectives with Uniform Gradient Heterogeneity | https://openreview.net/forum?id=1ii8idH4tH | https://openreview.net/forum?id=1ii8idH4tH | Tomoya Murata,Kenta Niwa,Takumi Fukami,Iifan Tyou | ICLR 2024,Poster | In this study, we consider nonconvex federated learning problems with the existence of Byzantine workers. We propose a new simple Byzantine robust algorithm called Momentum Screening. The algorithm is adaptive to the Byzantine fraction, i.e., none of its hyperparameters depend on the number of Byzantine workers. We show that our method achieves the best optimization error of $O(\delta^2\zeta_\mathrm{max}^2)$ for nonconvex smooth local objectives satisfying the $\zeta_\mathrm{max}$-uniform gradient heterogeneity condition under a $\delta$-Byzantine fraction, which can be better than the best known error rate of $O(\delta\zeta_\mathrm{mean}^2)$ for local objectives satisfying the $\zeta_\mathrm{mean}$-mean heterogeneity condition when $\delta \leq (\zeta_\mathrm{max}/\zeta_\mathrm{mean})^2$. Furthermore, we derive an algorithm-independent lower bound for local objectives satisfying the $\zeta_\mathrm{max}$-uniform gradient heterogeneity condition and show the minimax optimality of our proposed method on this class. In numerical experiments, we validate the superiority of our method over the existing robust aggregation algorithms and verify our theoretical results. | https://openreview.net/pdf/599f4ac37f2888c6a15d70854919cd2e36c676f7.pdf |
TopoMLP: A Simple yet Strong Pipeline for Driving Topology Reasoning | https://openreview.net/forum?id=0gTW5JUFTW | https://openreview.net/forum?id=0gTW5JUFTW | Dongming Wu,Jiahao Chang,Fan Jia,Yingfei Liu,Tiancai Wang,Jianbing Shen | ICLR 2024,Poster | Topology reasoning aims to comprehensively understand road scenes and present drivable routes in autonomous driving. It requires detecting road centerlines (lanes) and traffic elements, and further reasoning about their topology relationships, i.e., lane-lane topology and lane-traffic topology. In this work, we first show that the topology score relies heavily on detection performance on lanes and traffic elements. Therefore, we introduce a powerful 3D lane detector and an improved 2D traffic element detector to extend the upper limit of topology performance. Further, we propose TopoMLP, a simple yet high-performance pipeline for driving topology reasoning. Based on the impressive detection performance, we develop two simple MLP-based heads for topology generation. TopoMLP achieves state-of-the-art performance on the OpenLane-V2 dataset, i.e., 41.2\% OLS with a ResNet-50 backbone. It is also the first-place solution in the 1st OpenLane Topology in Autonomous Driving Challenge. We hope such a simple and strong pipeline can provide new insights to the community. Code is at https://github.com/wudongming97/TopoMLP. | https://openreview.net/pdf/f076c0a599aa4e6e280d89038847290b746308de.pdf |
Personalize Segment Anything Model with One Shot | https://openreview.net/forum?id=6Gzkhoc6YS | https://openreview.net/forum?id=6Gzkhoc6YS | Renrui Zhang,Zhengkai Jiang,Ziyu Guo,Shilin Yan,Junting Pan,Hao Dong,Yu Qiao,Peng Gao,Hongsheng Li | ICLR 2024,Poster | Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful promptable framework, revolutionizing the segmentation field. Despite its generality, customizing SAM for specific visual concepts without manual prompting is under-explored, e.g., automatically segmenting your pet dog in numerous images. In this paper, we introduce a training-free Personalization approach for SAM, termed PerSAM. Given only one-shot data, i.e., a single image with a reference mask, we first obtain a positive-negative location prior for the target concept in new images. Then, aided by target visual semantics, we empower SAM for personalized object segmentation via two proposed techniques: target-guided attention and target-semantic prompting. In this way, we can effectively customize the general-purpose SAM for private use without any training. To further alleviate the ambiguity of segmentation scales, we present an efficient one-shot fine-tuning variant, PerSAM-F. Freezing the entire SAM, we introduce a scale-aware fine-tuning to aggregate multi-scale masks, which only tunes 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new dataset, PerSeg, for the evaluation of personalized object segmentation, and also test our methods on various one-shot image and video segmentation benchmarks. Besides, we propose to leverage PerSAM to improve DreamBooth for personalized text-to-image synthesis. By mitigating the disturbance of training-set backgrounds, our approach showcases better target appearance generation and higher fidelity to the input text prompt. Code is released at https://github.com/ZrrSkywalker/Personalize-SAM. | https://openreview.net/pdf/0ca6383f3b6c6645b9d9985b76a8460efa1c2d94.pdf |
Integrating Planning and Deep Reinforcement Learning via Automatic Induction of Task Substructures | https://openreview.net/forum?id=PR6RMsxuW7 | https://openreview.net/forum?id=PR6RMsxuW7 | Jung-Chun Liu,Chi-Hsien Chang,Shao-Hua Sun,Tian-Li Yu | ICLR 2024,Poster | Despite recent advancements, deep reinforcement learning (DRL) still struggles at learning sparse-reward goal-directed tasks. Classical planning excels at addressing hierarchical tasks by employing symbolic knowledge, yet most of the methods rely on assumptions about pre-defined subtasks. To bridge the best of both worlds, we propose a framework that integrates DRL with classical planning by automatically inducing task structures and substructures from a few demonstrations. Specifically, genetic programming is used for substructure induction where the program model reflects prior domain knowledge of effect rules. We compare the proposed framework to state-of-the-art DRL algorithms, imitation learning methods, and an exploration approach in various domains. Experimental results show that our proposed framework outperforms all the abovementioned algorithms in terms of sample efficiency and task performance. Moreover, our framework achieves strong generalization performance by effectively inducing new rules and composing task structures. Ablation studies justify the design of our induction module and the proposed genetic programming procedure. | https://openreview.net/pdf/59a01780dc7397738db2d446142aadd04a5008c7.pdf |
PoSE: Efficient Context Window Extension of LLMs via Positional Skip-wise Training | https://openreview.net/forum?id=3Z1gxuAQrA | https://openreview.net/forum?id=3Z1gxuAQrA | Dawei Zhu,Nan Yang,Liang Wang,Yifan Song,Wenhao Wu,Furu Wei,Sujian Li | ICLR 2024,Poster | Large Language Models (LLMs) are trained with a pre-defined context length, restricting their use in scenarios requiring long inputs. Previous efforts for adapting LLMs to a longer length usually require fine-tuning at the target length (full-length fine-tuning), incurring intensive training costs. To decouple training length from target length for efficient context window extension, we propose Positional Skip-wisE (PoSE) training, which smartly simulates long inputs using a fixed context window. This is achieved by first dividing the original context window into several chunks, then designing distinct skipping bias terms to manipulate the position indices of each chunk. These bias terms and the lengths of each chunk are altered for every training example, allowing the model to adapt to all positions within the target length. Experimental results show that PoSE greatly reduces memory and time overhead compared with full-length fine-tuning, with minimal impact on performance. Leveraging this advantage, we have successfully extended the LLaMA model to 128k tokens using a 2k training context window. Furthermore, we empirically confirm that PoSE is compatible with all RoPE-based LLMs and position interpolation strategies. Notably, our method can potentially support infinite length, limited only by memory usage at inference. With ongoing progress in efficient inference, we believe PoSE can further scale the context window beyond 128k. | https://openreview.net/pdf/b35b193d0b69b49c3015a69125a45dcbac89d191.pdf |
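The core trick, manipulating position indices so a short window covers the target range, can be sketched as follows. The two-chunk split and uniform sampling of the skip are simplifying assumptions; the paper alters chunk lengths and bias terms per example.

```python
import random

def pose_position_ids(train_window: int, target_len: int) -> list:
    """Illustrative PoSE-style position ids: split a short training window
    into two chunks and insert a random skip bias between them so that the
    ids span positions within [0, target_len)."""
    assert target_len >= train_window >= 2
    cut = random.randrange(1, train_window)                    # chunk boundary
    skip = random.randrange(0, target_len - train_window + 1)  # skipping bias
    ids = list(range(cut))                                     # chunk 1: 0..cut-1
    ids += [cut + skip + i for i in range(train_window - cut)]  # chunk 2, shifted
    return ids

# Example: a 2k training window simulating positions inside a 128k target.
ids = pose_position_ids(2048, 131072)
assert ids[-1] < 131072
```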
LoTa-Bench: Benchmarking Language-oriented Task Planners for Embodied Agents | https://openreview.net/forum?id=ADSxCpCu9s | https://openreview.net/forum?id=ADSxCpCu9s | Jae-Woo Choi,Youngwoo Yoon,Hyobin Ong,Jaehong Kim,Minsu Jang | ICLR 2024,Poster | Large language models (LLMs) have recently received considerable attention as alternative solutions for task planning. However, comparing the performance of language-oriented task planners becomes difficult, and there exists a dearth of detailed exploration regarding the effects of various factors such as pre-trained model selection and prompt construction. To address this, we propose a benchmark system for automatically quantifying performance of task planning for home-service embodied agents. Task planners are tested on two pairs of datasets and simulators: 1) ALFRED and AI2-THOR, 2) an extension of Watch-And-Help and VirtualHome. Using the proposed benchmark system, we perform extensive experiments with LLMs and prompts, and explore several enhancements of the baseline planner. We expect that the proposed benchmark tool would accelerate the development of language-oriented task planners. | https://openreview.net/pdf/35abce446ca9b6e7a9136f7c38556084c63538ec.pdf |
Progressive3D: Progressively Local Editing for Text-to-3D Content Creation with Complex Semantic Prompts | https://openreview.net/forum?id=O072Rc8uUy | https://openreview.net/forum?id=O072Rc8uUy | Xinhua Cheng,Tianyu Yang,Jianan Wang,Yu Li,Lei Zhang,Jian Zhang,Li Yuan | ICLR 2024,Poster | Recent text-to-3D generation methods achieve impressive 3D content creation capacity thanks to the advances in image diffusion models and optimizing strategies. However, current methods struggle to generate correct 3D content for semantically complex prompts, i.e., prompts describing multiple interacting objects bound with different attributes. In this work, we propose a general framework named Progressive3D, which decomposes the entire generation into a series of locally progressive editing steps to create precise 3D content for complex prompts, and we constrain the content change to only occur in regions determined by user-defined region prompts in each editing step. Furthermore, we propose an overlapped semantic component suppression technique to encourage the optimization process to focus more on the semantic differences between prompts. Extensive experiments demonstrate that the proposed Progressive3D framework generates precise 3D content for prompts with complex semantics through progressive editing steps and is general for various text-to-3D methods driven by different 3D representations. | https://openreview.net/pdf/2a1b5b232b77d3ebf1e1f3372960b6116ef32843.pdf |
The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning | https://openreview.net/forum?id=wxJ0eXwwda | https://openreview.net/forum?id=wxJ0eXwwda | Bill Yuchen Lin,Abhilasha Ravichander,Ximing Lu,Nouha Dziri,Melanie Sclar,Khyathi Chandu,Chandra Bhagavatula,Yejin Choi | ICLR 2024,Poster | Alignment tuning has become the de facto standard practice for enabling base large language models (LLMs) to serve as open-domain AI assistants. The alignment tuning process typically involves instruction learning through supervised fine-tuning (SFT) and preference tuning via reinforcement learning from human feedback (RLHF). A recent study, LIMA (Zhou et al., 2023), shows that using merely 1K examples for SFT can achieve significant alignment performance as well, suggesting that the effect of alignment tuning might be "superficial." This raises questions about how exactly the alignment tuning transforms a base LLM.
We analyze the effect of alignment tuning by examining the token distribution shift between base LLMs and their aligned counterparts (e.g., Llama-2 and Llama-2-chat). Our findings reveal that base LLMs and their alignment-tuned versions perform nearly identically in decoding on the majority of token positions (i.e., they share the top-ranked tokens). Most distribution shifts occur with stylistic tokens (e.g., discourse markers, safety disclaimers). This direct evidence strongly supports the hypothesis that alignment tuning primarily learns to adopt the language style of AI assistants, and that the knowledge required for answering user queries predominantly comes from the base LLMs themselves.
Based on these findings, we rethink the alignment of LLMs by posing the research question: how effectively can we align base LLMs without SFT or RLHF? To address this, we introduce a simple, tuning-free alignment method, URIAL (Untuned LLMs with Restyled In-context Alignment). URIAL achieves effective alignment purely through in-context learning (ICL) with base LLMs, requiring as few as three constant stylistic examples and a system prompt. We conduct a fine-grained and interpretable evaluation on a diverse set of examples, named just-eval-instruct. Results demonstrate that base LLMs with URIAL can match or even surpass the performance of LLMs aligned with SFT (Mistral-7b-Instruct) or SFT+RLHF (Llama-2-70b-chat). We show that the gap between tuning-free and tuning-based alignment methods can be significantly reduced through strategic prompting and ICL. Our findings on the superficial nature of alignment tuning and results with URIAL suggest that deeper analysis and theoretical understanding of alignment is crucial to future LLM research. | https://openreview.net/pdf/37119b72ac2c8ae4c42bb771d1227515479982f1.pdf |
Towards Best Practices of Activation Patching in Language Models: Metrics and Methods | https://openreview.net/forum?id=Hf17y6u9BC | https://openreview.net/forum?id=Hf17y6u9BC | Fred Zhang,Neel Nanda | ICLR 2024,Poster | Mechanistic interpretability seeks to understand the internal mechanisms of machine learning models, where localization—identifying the important model components—is a key step. Activation patching, also known as causal tracing or interchange intervention, is a standard technique for this task (Vig et al., 2020), but the literature contains many variants with little consensus on the choice of hyperparameters or methodology. In this work, we systematically examine the impact of methodological details in activation patching, including evaluation metrics and corruption methods. In several settings of localization and circuit discovery in language models, we find that varying these hyperparameters could lead to disparate interpretability results. Backed by empirical observations, we give conceptual arguments for why certain metrics or methods may be preferred. Finally, we provide recommendations for the best practices of activation patching going forwards. | https://openreview.net/pdf/d5329b0a5ef68fd1770649fba6e69e0ce8709b3c.pdf |
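For readers unfamiliar with the technique being benchmarked, here is a generic activation-patching sketch using PyTorch forward hooks; `model`, `layer`, and `metric` are placeholders, not an API from the paper.

```python
import torch

def activation_patch(model, layer, clean_inputs, corrupt_inputs, metric):
    """Run the model on a corrupted input while overwriting one layer's
    activation with its value from the clean run, then score the effect
    (e.g., with a logit-difference metric). Illustrative sketch."""
    cache = {}

    def save_hook(module, inputs, output):
        cache["clean"] = output.detach()

    def patch_hook(module, inputs, output):
        return cache["clean"]  # replace the corrupted activation

    handle = layer.register_forward_hook(save_hook)
    with torch.no_grad():
        model(clean_inputs)                  # cache the clean activation
    handle.remove()

    handle = layer.register_forward_hook(patch_hook)
    with torch.no_grad():
        patched_out = model(corrupt_inputs)  # corrupted run, patched layer
    handle.remove()

    return metric(patched_out)
```

The paper's point is that the choice of `metric` (e.g., probability vs. logit difference) and of how `corrupt_inputs` are produced (the corruption method) can change which components look important.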
On the Analysis of GAN-based Image-to-Image Translation with Gaussian Noise Injection | https://openreview.net/forum?id=sLregLuXpn | https://openreview.net/forum?id=sLregLuXpn | Chaohua Shi,Kexin Huang,Lu GAN,Hongqing Liu,Mingrui Zhu,Nannan Wang,Xinbo Gao | ICLR 2024,Poster | Image-to-image (I2I) translation is vital in computer vision tasks like style transfer and domain adaptation. While recent advances in GAN have enabled high-quality sample generation, real-world challenges such as noise and distortion remain significant obstacles. Although Gaussian noise injection during training has been utilized, its theoretical underpinnings have been unclear. This work provides a robust theoretical framework elucidating the role of Gaussian noise injection in I2I translation models. We address critical questions on the influence of noise variance on distribution divergence, resilience to unseen noise types, and optimal noise intensity selection. Our contributions include connecting $f$-divergence and score matching, unveiling insights into the impact of Gaussian noise on aligning probability distributions, and demonstrating generalized robustness implications. We also explore choosing an optimal training noise level for consistent performance in noisy environments. Extensive experiments validate our theoretical findings, showing substantial improvements over various I2I baseline models in noisy settings. Our research rigorously grounds Gaussian noise injection for I2I translation, offering a sophisticated theoretical understanding beyond heuristic applications. | https://openreview.net/pdf/8b31700bf32cc08125d6b5c8ab240b2dd0902c35.pdf |
FedHyper: A Universal and Robust Learning Rate Scheduler for Federated Learning with Hypergradient Descent | https://openreview.net/forum?id=Kl9CqKf7h6 | https://openreview.net/forum?id=Kl9CqKf7h6 | Ziyao Wang,Jianyu Wang,Ang Li | ICLR 2024,Poster | The theoretical landscape of federated learning (FL) is evolving rapidly, but its practical application encounters a series of intricate challenges, and hyperparameter optimization is one of these critical challenges. Amongst the diverse adjustments in hyperparameters, the adaptation of the learning rate emerges as a crucial component, holding the promise of significantly enhancing the efficacy of FL systems. In response to this critical need, this paper presents FedHyper, a novel hypergradient-based learning rate adaptation algorithm specifically designed for FL. FedHyper serves as a universal learning rate scheduler that can adapt both global and local rates as the training progresses. In addition, FedHyper not only showcases unparalleled robustness to a spectrum of initial learning rate configurations but also significantly alleviates the necessity for laborious empirical learning rate adjustments. We provide a comprehensive theoretical analysis of FedHyper's convergence rate and conduct extensive experiments on vision and language benchmark datasets. The results demonstrate that FedHyper consistently converges 1.1-3× faster than FedAvg and the competing baselines while achieving superior final accuracy. Moreover, FedHyper improves accuracy by up to 15% compared to FedAvg under suboptimal initial learning rate settings. | https://openreview.net/pdf/e80eaeea7d5cc7b1a57d9a87253a3d2301739f79.pdf |
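As background, the classic single-node hypergradient-descent rule that such schedulers build on updates the learning rate using the dot product of successive gradients. The sketch below shows that classic rule (Baydin et al.-style); it is not FedHyper's exact federated update.

```python
import numpy as np

def sgd_with_hypergradient(grad_fn, theta, lr=0.05, hyper_lr=1e-4, steps=200):
    """SGD where the learning rate itself is adapted by gradient descent:
    d(loss)/d(lr) = -<g_t, g_{t-1}>, so lr += hyper_lr * <g_t, g_{t-1}>."""
    prev_grad = np.zeros_like(theta)
    for _ in range(steps):
        grad = grad_fn(theta)
        lr += hyper_lr * float(np.dot(grad, prev_grad))  # hypergradient step
        theta = theta - lr * grad
        prev_grad = grad
    return theta, lr

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
theta, lr = sgd_with_hypergradient(lambda x: x, theta=np.ones(5))
```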
FreeReg: Image-to-Point Cloud Registration Leveraging Pretrained Diffusion Models and Monocular Depth Estimators | https://openreview.net/forum?id=BPb5AhT2Vf | https://openreview.net/forum?id=BPb5AhT2Vf | Haiping Wang,Yuan Liu,Bing WANG,YUJING SUN,Zhen Dong,Wenping Wang,Bisheng Yang | ICLR 2024,Poster | Matching cross-modality features between images and point clouds is a fundamental problem for image-to-point cloud registration. However, due to the modality difference between images and points, it is difficult for existing metric learning methods to learn robust and discriminative cross-modality features for feature matching. Instead of applying metric learning on cross-modality data, we propose to first unify the modality between images and point clouds using pretrained large-scale models, and then establish robust correspondence within the same modality. We show that the intermediate features, called diffusion features, extracted by depth-to-image diffusion models are semantically consistent between images and point clouds, which enables the building of coarse but robust cross-modality correspondences. We further extract geometric features on depth maps produced by the monocular depth estimator. By matching such geometric features, we significantly improve the accuracy of the coarse correspondences produced by diffusion features. Extensive experiments demonstrate that without any task-specific training, direct utilization of both features produces accurate image-to-point cloud registration. On three public indoor and outdoor benchmarks, the proposed method achieves, on average, a 20.6 percent improvement in Inlier Ratio, a $3.0\times$ higher Inlier Number, and a 48.6 percent improvement in Registration Recall over existing state-of-the-art methods. The code and additional results are available at \url{https://whu-usi3dv.github.io/FreeReg/}. | https://openreview.net/pdf/64f13fc1be90ea1123c4999b6fdcda3f1cfb687d.pdf |
Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models | https://openreview.net/forum?id=3bq3jsvcQ1 | https://openreview.net/forum?id=3bq3jsvcQ1 | Huaixiu Steven Zheng,Swaroop Mishra,Xinyun Chen,Heng-Tze Cheng,Ed H. Chi,Quoc V Le,Denny Zhou | ICLR 2024,Poster | We present STEP-BACK PROMPTING, a simple prompting technique that enables LLMs to do abstractions to derive high-level concepts and first principles from instances containing specific details. Using the concepts and principles to guide reasoning, LLMs significantly improve their abilities in following a correct reasoning path towards the solution. We conduct experiments of STEP-BACK PROMPTING with PaLM-2L, GPT-4 and Llama2-70B models, and observe substantial performance gains on various challenging reasoning-intensive tasks including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, STEP-BACK PROMPTING improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7% and 11% respectively, TimeQA by 27%, and MuSiQue by 7%. | https://openreview.net/pdf/9cc4563c5081574fc9befa21d0a01fb79ff68635.pdf |
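An illustrative two-stage prompt in the spirit of STEP-BACK PROMPTING follows; the exact wording is an assumption, and the paper's prompts may differ.

```python
question = ("What happens to the pressure of an ideal gas if the temperature "
            "is doubled and the volume is increased by a factor of 8?")

# Stage 1: ask the step-back (abstraction) question.
step_back_prompt = (
    f"Here is a question: {question}\n"
    "Before answering, take a step back: what physics principle or "
    "first principle is this question an instance of?"
)

# Stage 2: ground the final reasoning in the retrieved principle
# (e.g., the ideal gas law, PV = nRT).
final_prompt = (
    "Principle: the ideal gas law, PV = nRT.\n"
    f"Using this principle, reason step by step to answer: {question}"
)
```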
ImagenHub: Standardizing the evaluation of conditional image generation models | https://openreview.net/forum?id=OuV9ZrkQlc | https://openreview.net/forum?id=OuV9ZrkQlc | Max Ku,Tianle Li,Kai Zhang,Yujie Lu,Xingyu Fu,Wenwen Zhuang,Wenhu Chen | ICLR 2024,Poster | Recently, a myriad of conditional image generation and editing models have been developed to serve different downstream tasks, including text-to-image generation, text-guided image editing, subject-driven image generation, control-guided image generation, etc. However, we observe huge inconsistencies in experimental conditions (datasets, inference, and evaluation metrics) that render fair comparisons difficult.
This paper proposes ImagenHub, a one-stop library to standardize the inference and evaluation of all the conditional image generation models. First, we define seven prominent tasks and curate high-quality evaluation datasets for them. Second, we build a unified inference pipeline to ensure fair comparison. Third, we design two human evaluation scores, i.e., Semantic Consistency and Perceptual Quality, along with comprehensive guidelines to evaluate generated images. We train expert raters to evaluate the model outputs based on the proposed metrics. Our human evaluation achieves high inter-worker agreement, with a Krippendorff's alpha above 0.4 for 76\% of the models. We comprehensively evaluated a total of around 30 models and observed three key takeaways: (1) the existing models' performance is generally unsatisfactory except for Text-guided Image Generation and Subject-driven Image Generation, with 74\% of models achieving an overall score lower than 0.5; (2) we examined the claims from published papers and found 83\% of them hold, with a few exceptions; (3) none of the existing automatic metrics has a Spearman's correlation higher than 0.2 except for subject-driven image generation. Moving forward, we will continue our efforts to evaluate newly published models and update our leaderboard to keep track of the progress in conditional image generation. | https://openreview.net/pdf/b328895c899d4ae6d0a4d4ae9cf892d1a9393bb4.pdf |
UC-NERF: Neural Radiance Field for Under-Calibrated Multi-View Cameras in Autonomous Driving | https://openreview.net/forum?id=bLKcCe7hYh | https://openreview.net/forum?id=bLKcCe7hYh | Kai Cheng,Xiaoxiao Long,Wei Yin,Jin Wang,Zhiqiang Wu,Yuexin Ma,Kaixuan Wang,Xiaozhi Chen,Xuejin Chen | ICLR 2024,Poster | Multi-camera setups find widespread use across various applications, such as autonomous driving, as they greatly expand sensing capabilities.
Despite the fast development of neural radiance field (NeRF) techniques and their wide application in both indoor and outdoor scenes, applying NeRF to multi-camera systems remains very challenging. This is primarily due to the inherent under-calibration issues in multi-camera setups, including inconsistent imaging effects stemming from separately calibrated image signal processing units in diverse cameras, and system errors arising from mechanical vibrations during driving that affect relative camera poses.
In this paper, we present UC-NeRF, a novel method tailored for novel view synthesis in under-calibrated multi-view camera systems.
First, we propose a layer-based color correction to rectify the color inconsistency across different image regions. Second, we propose virtual warping to generate more viewpoint-diverse but color-consistent virtual views for color correction and 3D recovery. Finally, we design a spatiotemporally constrained pose refinement for more robust and accurate pose calibration in multi-camera systems.
Our method not only achieves state-of-the-art performance of novel view synthesis in multi-camera setups, but also effectively facilitates depth estimation in large-scale outdoor scenes with the synthesized novel views. | https://openreview.net/pdf/daf90636fc5f113bc9a6cf634e72fb87a7138d72.pdf |
Adapting Large Language Models via Reading Comprehension | https://openreview.net/forum?id=y886UXPEZ0 | https://openreview.net/forum?id=y886UXPEZ0 | Daixuan Cheng,Shaohan Huang,Furu Wei | ICLR 2024,Poster | We explore how continued pre-training on domain-specific corpora influences large language models, revealing that training on the raw corpora endows the model with domain knowledge, but drastically hurts its prompting ability for question answering. Taking inspiration from human learning via reading comprehension--practice after reading improves the ability to answer questions based on the learned knowledge--we propose a simple method for transforming raw corpora into reading comprehension texts. Each raw text is enriched with a series of tasks related to its content. Our method, highly scalable and applicable to any pre-training corpora, consistently enhances performance across various tasks in three different domains: biomedicine, finance, and law. Notably, our 7B language model achieves competitive performance with domain-specific models of much larger scales, such as BloombergGPT-50B. Furthermore, we demonstrate that domain-specific reading comprehension texts can improve the model's performance even on general benchmarks, showing the potential to develop a general model across even more domains. Our model, code, and data are available at https://github.com/microsoft/LMOps. | https://openreview.net/pdf/b260d1757002a203666ccc45a89aa564014467ac.pdf |
DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models | https://openreview.net/forum?id=f8S3aLm0Vp | https://openreview.net/forum?id=f8S3aLm0Vp | Zhenting Wang,Chen Chen,Lingjuan Lyu,Dimitris N. Metaxas,Shiqing Ma | ICLR 2024,Poster | Recent text-to-image diffusion models have shown surprising performance in generating high-quality images. However, concerns have arisen regarding unauthorized data usage during the training or fine-tuning process. One example is when a model trainer collects a set of images created by a particular artist and attempts to train a model capable of generating similar images without obtaining permission and giving credit to the artist. To address this issue, we propose a method for detecting such unauthorized data usage by planting injected memorization into the text-to-image diffusion models trained on the protected dataset. Specifically, we modify the protected images by adding unique contents to these images using stealthy image warping functions that are nearly imperceptible to humans but can be captured and memorized by diffusion models. By analyzing whether the model has memorized the injected content (i.e., whether the generated images are processed by the injected post-processing function), we can detect models that illegally utilized the unauthorized data. Experiments on Stable Diffusion and VQ Diffusion with different model training or fine-tuning methods (i.e., LoRA, DreamBooth, and standard training) demonstrate the effectiveness of our proposed method in detecting unauthorized data usages. Code: https://github.com/ZhentingWang/DIAGNOSIS. | https://openreview.net/pdf/7cb4071c7046d49bdd5f4d4ef6642e728fbd31f3.pdf |
LEMON: Lossless model expansion | https://openreview.net/forum?id=3Vw7DQqq7U | https://openreview.net/forum?id=3Vw7DQqq7U | Yite Wang,Jiahao Su,Hanlin Lu,Cong Xie,Tianyi Liu,Jianbo Yuan,Haibin Lin,Ruoyu Sun,Hongxia Yang | ICLR 2024,Poster | Scaling of deep neural networks, especially Transformers, is pivotal for their surging performance and has further led to the emergence of sophisticated reasoning capabilities in foundation models.
Such scaling generally requires training large models from scratch with random initialization, failing to leverage the knowledge acquired by their smaller counterparts, which are already resource-intensive to obtain.
To tackle this inefficiency, we present $\textbf{L}$ossl$\textbf{E}$ss $\textbf{MO}$del Expansio$\textbf{N}$ (LEMON), a recipe to initialize scaled models using the weights of their smaller but pre-trained counterparts. This is followed by model training with an optimized learning rate scheduler tailored explicitly for the scaled models, substantially reducing the training time compared to training from scratch.
Notably, LEMON is versatile, ensuring compatibility with various network structures, including models like Vision Transformers and BERT.
Our empirical results demonstrate that LEMON reduces computational costs by 56.7\% for Vision Transformers and 33.2\% for BERT when compared to training from scratch. | https://openreview.net/pdf/ccabcaf9ab001571d5020c28ba4c174fc6a91b48.pdf |
A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation | https://openreview.net/forum?id=Js5PJPHDyY | https://openreview.net/forum?id=Js5PJPHDyY | Zhengbo Wang,Jian Liang,Lijun Sheng,Ran He,Zilei Wang,Tieniu Tan | ICLR 2024,Poster | Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods, such as prompt learning and adapters, to enhance CLIP's performance in downstream tasks.
However, these methods still require additional training time and computation, which is undesirable for devices with limited resources.
In this paper, we revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
Typically, GDA assumes that features of each class follow Gaussian distributions with identical covariance.
By leveraging Bayes' formula, the classifier can be expressed in terms of the class means and covariance, which can be estimated from the data without the need for training.
To integrate knowledge from both visual and textual modalities, we ensemble it with the original zero-shot classifier within CLIP.
Extensive results on 17 datasets validate that our method surpasses or achieves results comparable to state-of-the-art methods on few-shot classification, imbalanced learning, and out-of-distribution generalization.
In addition, we extend our method to base-to-new generalization and unsupervised learning, once again demonstrating its superiority over competing approaches.
Our code is publicly available at https://github.com/mrflogs/ICLR24. | https://openreview.net/pdf/c80e6662529f0880f759810cfcd99df91821e220.pdf |
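The training-free classifier this abstract describes follows from standard GDA algebra: with a shared covariance, Bayes' rule yields linear class scores $w_c = \Sigma^{-1}\mu_c$ and $b_c = \log\pi_c - \frac{1}{2}\mu_c^\top\Sigma^{-1}\mu_c$. Below is a minimal sketch; the shrinkage term is an assumption for numerical stability, and the ensembling with CLIP's zero-shot classifier described in the abstract is omitted.

```python
import numpy as np

def fit_gda(features: np.ndarray, labels: np.ndarray, eps: float = 1e-4):
    """Estimate a shared-covariance Gaussian classifier without training.
    Returns W (C, d) and b (C,) so that logits = features @ W.T + b."""
    classes = np.unique(labels)
    n, d = features.shape
    mus, priors = [], []
    scatter = np.zeros((d, d))
    for c in classes:
        xc = features[labels == c]
        mu = xc.mean(axis=0)
        mus.append(mu)
        priors.append(len(xc) / n)
        scatter += (xc - mu).T @ (xc - mu)
    sigma_inv = np.linalg.inv(scatter / n + eps * np.eye(d))  # pooled covariance + shrinkage
    W = np.stack([sigma_inv @ mu for mu in mus])
    b = np.array([np.log(p) - 0.5 * mu @ sigma_inv @ mu
                  for p, mu in zip(priors, mus)])
    return W, b
```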
MiniLLM: Knowledge Distillation of Large Language Models | https://openreview.net/forum?id=5h0qf7IBZZ | https://openreview.net/forum?id=5h0qf7IBZZ | Yuxian Gu,Li Dong,Furu Wei,Minlie Huang | ICLR 2024,Poster | Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods are primarily applied to white-box classification models or to training small models to imitate black-box model APIs like ChatGPT. How to effectively distill the knowledge of white-box LLMs into small models is still under-explored, which becomes more important with the prosperity of open-source LLMs. In this work, we propose a KD approach that distills LLMs into smaller language models. We first replace the forward Kullback-Leibler divergence (KLD) objective in the standard KD approaches with reverse KLD, which is more suitable for KD on generative language models, to prevent the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. The student models are named MiniLLM. Extensive experiments in the instruction-following setting show that MiniLLM generates more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance than the baselines. Our method is scalable across different model families with 120M to 13B parameters. Our code, data, and model checkpoints can be found at https://github.com/microsoft/LMOps/tree/main/minillm. | https://openreview.net/pdf/ca03ee8216d0c10da88c9530b721b8ee62366772.pdf |
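The objective swap at the heart of MiniLLM can be written down directly; the per-token sketch below computes reverse KLD, KL(q_student || p_teacher), though the paper derives a dedicated optimization approach rather than minimizing this form naively.

```python
import torch
import torch.nn.functional as F

def reverse_kl(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(q_student || p_teacher) = sum_x q(x) * (log q(x) - log p(x)).
    Mode-seeking: penalizes the student for placing mass where the teacher
    assigns low probability, unlike the mean-seeking forward KL of standard KD."""
    log_q = F.log_softmax(student_logits, dim=-1)
    log_p = F.log_softmax(teacher_logits, dim=-1)
    return (log_q.exp() * (log_q - log_p)).sum(dim=-1).mean()

# Forward KL used in standard KD, for comparison:
def forward_kl(student_logits, teacher_logits):
    log_q = F.log_softmax(student_logits, dim=-1)
    log_p = F.log_softmax(teacher_logits, dim=-1)
    return (log_p.exp() * (log_p - log_q)).sum(dim=-1).mean()
```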
Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation | https://openreview.net/forum?id=Vja3ecieXY | https://openreview.net/forum?id=Vja3ecieXY | Kai Huang,Hanyun Yin,Heng Huang,Wei Gao | ICLR 2024,Poster | Fine-tuning is essential to adapting pre-trained large language models to downstream applications. With the increasing popularity of LLM-enabled applications, fine-tuning has been performed intensively worldwide, incurring tremendous computing costs that correspond to a big carbon footprint and environmental impact. Mitigating this environmental impact directly corresponds to reducing the fine-tuning FLOPs. Existing fine-tuning schemes focus on either saving memory or reducing the overhead of computing weight updates, but cannot achieve sufficient FLOPs reduction because they ignore the training cost of backpropagation. To address this limitation, in this paper we present GreenTrainer, a new technique that minimizes the FLOPs of LLM fine-tuning via adaptive backpropagation, which adaptively selects the most appropriate set of LLM tensors for fine-tuning based on their importance and backpropagation cost in training. Experimental results show that GreenTrainer can save up to 64\% of training FLOPs compared to full fine-tuning, without any noticeable accuracy loss. Compared to existing schemes such as Prefix Tuning and LoRA, GreenTrainer can achieve up to a 4\% improvement in model accuracy, with on-par FLOPs reduction. | https://openreview.net/pdf/f15896bdbd51bc5b71ea6af37c57f34e138bf098.pdf |
The importance of feature preprocessing for differentially private linear optimization | https://openreview.net/forum?id=XlTDBZFXWp | https://openreview.net/forum?id=XlTDBZFXWp | Ziteng Sun,Ananda Theertha Suresh,Aditya Krishna Menon | ICLR 2024,Poster | Training machine learning models with differential privacy (DP) has received increasing interest in recent years. One of the most popular algorithms for training differentially private models is differentially private stochastic gradient descent (DPSGD) and its variants, where at each step gradients are clipped and combined with some noise. Given the increasing usage of DPSGD, we ask the question: is DPSGD alone sufficient to find a good minimizer for every dataset under privacy constraints?
As a first step towards answering this question, we show that even for the simple case of linear classification, unlike non-private optimization, (private) feature preprocessing is vital for differentially private optimization. In detail, we first show theoretically that there exists an example where without feature preprocessing, DPSGD incurs a privacy error proportional to the maximum norm of features over all samples. We then propose an algorithm called *DPSGD-F*, which combines DPSGD with feature preprocessing and prove that for classification tasks, it incurs a privacy error proportional to the diameter of the features $\max_{x, x' \in D} \|x - x'\|_2$. We then demonstrate the practicality of our algorithm on image classification benchmarks. | https://openreview.net/pdf/1f28ea93c2a38dc3a191f5adae1f0ec862c1f825.pdf |
Pathformer: Multi-scale Transformers with Adaptive Pathways for Time Series Forecasting | https://openreview.net/forum?id=lJkOCMP2aW | https://openreview.net/forum?id=lJkOCMP2aW | Peng Chen,Yingying ZHANG,Yunyao Cheng,Yang Shu,Yihang Wang,Qingsong Wen,Bin Yang,Chenjuan Guo | ICLR 2024,Poster | Transformers for time series forecasting mainly model time series from limited or fixed scales, making it challenging to capture different characteristics spanning various scales. We propose Pathformer, a multi-scale Transformer with adaptive pathways. It integrates both temporal resolution and temporal distance for multi-scale modeling. Multi-scale division divides the time series into different temporal resolutions using patches of various sizes. Based on the division of each scale, dual attention is performed over these patches to capture global correlations and local details as temporal dependencies. We further enrich the multi-scale Transformer with adaptive pathways, which adaptively adjust the multi-scale modeling process based on the varying temporal dynamics of the input, improving the accuracy and generalization of Pathformer. Extensive experiments on eleven real-world datasets demonstrate that Pathformer not only achieves state-of-the-art performance by surpassing all current models but also exhibits stronger generalization abilities under various transfer scenarios. The code is made available at https://github.com/decisionintelligence/pathformer. | https://openreview.net/pdf/815876afae8a6953bb7abf22c29fef42fd4aa385.pdf |
Tree Cross Attention | https://openreview.net/forum?id=Vw24wtSddM | https://openreview.net/forum?id=Vw24wtSddM | Leo Feng,Frederick Tung,Hossein Hajimirsadeghi,Yoshua Bengio,Mohamed Osama Ahmed | ICLR 2024,Poster | Cross Attention is a popular method for retrieving information from a set of context tokens for making predictions. At inference time, for each prediction, Cross Attention scans the full set of $\mathcal{O}(N)$ tokens. In practice, however, often only a small subset of tokens is required for good performance. Methods such as Perceiver IO are cheap at inference as they distill the information to a smaller-sized set of latent tokens $L < N$ on which cross attention is then applied, resulting in only $\mathcal{O}(L)$ complexity. However, in practice, as the number of input tokens and the amount of information to distill increases, the number of latent tokens needed also increases significantly. In this work, we propose Tree Cross Attention (TCA) - a module based on Cross Attention that only retrieves information from a logarithmic $\mathcal{O}(\log(N))$ number of tokens for performing inference. TCA organizes the data in a tree structure and performs a tree search at inference time to retrieve the relevant tokens for prediction. Leveraging TCA, we introduce ReTreever, a flexible architecture for token-efficient inference. We show empirically that Tree Cross Attention (TCA) performs comparably to Cross Attention across various classification and uncertainty regression tasks while being significantly more token-efficient. Furthermore, we compare ReTreever against Perceiver IO, showing significant gains while using the same number of tokens for inference. | https://openreview.net/pdf/7a9c4d8060b6fe548171e868628417cb05488e7b.pdf |
LUT-GEMM: Quantized Matrix Multiplication based on LUTs for Efficient Inference in Large-Scale Generative Language Models | https://openreview.net/forum?id=gLARhFLE0F | https://openreview.net/forum?id=gLARhFLE0F | Gunho Park,Baeseong park,Minsub Kim,Sungjae Lee,Jeonghoon Kim,Beomseok Kwon,Se Jung Kwon,Byeongwook Kim,Youngjoo Lee,Dongsoo Lee | ICLR 2024,Poster | Recent advances in self-supervised learning and the Transformer architecture have significantly improved natural language processing (NLP), achieving remarkably low perplexity. However, the growing size of NLP models introduces a memory wall problem during the generation phase. To mitigate this issue, recent efforts have focused on quantizing model weights to sub-4-bit precision while preserving full precision for activations, resulting in practical speed-ups during inference on a single GPU. However, these improvements primarily stem from reduced memory movement, which necessitates a resource-intensive dequantization process rather than actual computational reduction. In this paper, we introduce LUT-GEMM, an efficient kernel for quantized matrix multiplication, which not only eliminates the resource-intensive dequantization process but also reduces computational costs compared to previous kernels for weight-only quantization. Furthermore, we propose group-wise quantization to offer a flexible trade-off between compression ratio and accuracy. The impact of LUT-GEMM is facilitated by implementing high compression ratios through low-bit quantization and efficient LUT-based operations. We show experimentally that when applied to the OPT-175B model with 3-bit quantization, LUT-GEMM substantially reduces token generation latency, achieving a remarkable 2.1x improvement on a single GPU when compared to OPTQ, which relies on the costly dequantization process. | https://openreview.net/pdf/0ca6146af47cce1c8ae34f3680408aaf0fd04ab0.pdf |
Outliers with Opposing Signals Have an Outsized Effect on Neural Network Optimization | https://openreview.net/forum?id=kIZ3S3tel6 | https://openreview.net/forum?id=kIZ3S3tel6 | Elan Rosenfeld,Andrej Risteski | ICLR 2024,Poster | We identify a new phenomenon in neural network optimization which arises from the interaction of depth and a particular heavy-tailed structure in natural data. Our result offers intuitive explanations for several previously reported observations about network training dynamics, including a conceptually new cause for progressive sharpening and the edge of stability. We further draw connections to related phenomena including grokking and simplicity bias. Experimentally, we demonstrate the significant influence of paired groups of outliers in the training data with strong *Opposing Signals*: consistent, large magnitude features which dominate the network output and provide gradients which point in opposite directions. Due to these outliers, early optimization enters a narrow valley which carefully balances the opposing groups; subsequent sharpening causes their loss to rise rapidly, oscillating between high on one group and then the other, until the overall loss spikes. We carefully study these groups' effect on the network's optimization and behavior, and we complement this with a theoretical analysis of a two-layer linear network under a simplified model. Our finding enables new qualitative predictions of training behavior which we confirm experimentally. It also provides a new lens through which to study and improve modern training practices for stochastic optimization, which we highlight via a case study of Adam versus SGD. | https://openreview.net/pdf/89d60c3f5187b21c0413fb7c9da6519cf34b9e6b.pdf |
Stable Anisotropic Regularization | https://openreview.net/forum?id=dbQH9AOVd5 | https://openreview.net/forum?id=dbQH9AOVd5 | William Rudman,Carsten Eickhoff | ICLR 2024,Poster | Given the success of Large Language Models (LLMs), there has been considerable interest in studying the properties of model activations. The literature overwhelmingly agrees that LLM representations are dominated by a few ``outlier dimensions'' with exceedingly high variance and magnitude. Several studies in Natural Language Processing (NLP) have sought to mitigate the impact of such outlier dimensions and force LLMs to be isotropic (i.e., have uniform variance across all dimensions in embedding space). Isotropy is thought to be a desirable property for LLMs that improves model performance and more closely aligns textual representations with human intuition. However, many claims regarding isotropy in NLP have been based on the average cosine similarity of embeddings, which has recently been shown to be a flawed measure of isotropy. In this paper, we propose I-STAR: IsoScore$^{\star}$-based STable Anisotropic Regularization, a novel regularization method that can be used to increase or decrease levels of isotropy in embedding space during training. I-STAR uses IsoScore$^{\star}$, the first accurate measure of isotropy that is both differentiable and stable on mini-batch computations. In contrast to several previous works, we find that \textit{decreasing} isotropy in contextualized embeddings improves performance on the majority of tasks and models considered in this paper. | https://openreview.net/pdf/80c125ab8c13f14706a641a0466296c8cc3cb8df.pdf |
Threshold-Consistent Margin Loss for Open-World Deep Metric Learning | https://openreview.net/forum?id=vE5MyzpP92 | https://openreview.net/forum?id=vE5MyzpP92 | Qin ZHANG,Linghan Xu,Jun Fang,Qingming Tang,Ying Nian Wu,Joseph Tighe,Yifan Xing | ICLR 2024,Poster | Existing losses used in deep metric learning (DML) for image retrieval often lead to highly non-uniform intra-class and inter-class representation structures across test classes and data distributions. When combined with the common practice of using a fixed threshold to declare a match, this gives rise to significant performance variations in terms of false accept rate (FAR) and false reject rate (FRR) across test classes and data distributions. We define this issue in DML as threshold inconsistency. In real-world applications, such inconsistency often complicates the threshold selection process when deploying large-scale image retrieval systems. To measure this inconsistency, we propose a novel variance-based metric called Operating-Point-Inconsistency-Score (OPIS) that quantifies the variance in the operating characteristics across classes. Using the OPIS metric, we find that achieving high accuracy levels in a DML model does not automatically guarantee threshold consistency. In fact, our investigation reveals a Pareto frontier in the high-accuracy regime, where existing methods to improve accuracy often lead to degradation in threshold consistency. To address this trade-off, we introduce the Threshold-Consistent Margin (TCM) loss, a simple yet effective regularization technique that promotes uniformity in representation structures across classes by selectively penalizing hard sample pairs. Large-scale experiments demonstrate TCM's effectiveness in enhancing threshold consistency while preserving accuracy, simplifying the threshold selection process in practical DML settings. | https://openreview.net/pdf/f40c60ca8b419e304ebb25e9a38c6438f95d501c.pdf |
Jointly Training Large Autoregressive Multimodal Models | https://openreview.net/forum?id=5jcav5RcKw | https://openreview.net/forum?id=5jcav5RcKw | Emanuele Aiello,LILI YU,Yixin Nie,Armen Aghajanyan,Barlas Oguz | ICLR 2024,Poster | In recent years, advances in the large-scale pretraining of language and text-to-image models have revolutionized the field of machine learning. Yet, integrating these two modalities into a single, robust model capable of generating seamless multimodal outputs remains a significant challenge. To address this gap, we present the Joint Autoregressive Mixture (JAM) framework, a modular approach that systematically fuses existing text and image generation models. We also introduce a specialized, data-efficient instruction-tuning strategy, tailored for mixed-modal generation tasks. Our final instruct-tuned model demonstrates unparalleled performance in generating high-quality multimodal outputs and represents the first model explicitly designed for this purpose. | https://openreview.net/pdf/3033c8bec20912cab2a0d7dc5e5262baef7533f1.pdf |
Rethinking Adversarial Policies: A Generalized Attack Formulation and Provable Defense in RL | https://openreview.net/forum?id=pDCublKPmG | https://openreview.net/forum?id=pDCublKPmG | Xiangyu Liu,Souradip Chakraborty,Yanchao Sun,Furong Huang | ICLR 2024,Poster | Most existing works focus on direct perturbations to the victim's state/action or the underlying transition dynamics to demonstrate the vulnerability of reinforcement learning agents to adversarial attacks. However, such direct manipulations may not always be realizable. In this paper, we consider a multi-agent setting where a well-trained victim agent $\nu$ is exploited by an attacker controlling another agent $\alpha$ with an \textit{adversarial policy}. Previous models do not account for the possibility that the attacker may only have partial control over $\alpha$ or that the attack may produce easily detectable ``abnormal'' behaviors. Furthermore, there is a lack of provably efficient defenses against these adversarial policies. To address these limitations, we introduce a generalized attack framework that has the flexibility to model to what extent the adversary is able to control the agent, and that allows the attacker to regulate the state distribution shift and produce stealthier adversarial policies. Moreover, we offer a provably efficient defense with polynomial convergence to the most robust victim policy through adversarial training with timescale separation. This stands in sharp contrast to supervised learning, where adversarial training typically provides only \textit{empirical} defenses. Using the Robosumo competition experiments, we show that our generalized attack formulation results in much stealthier adversarial policies while maintaining the same winning rate as baselines. Additionally, our adversarial training approach yields stable learning dynamics and less exploitable victim policies. | https://openreview.net/pdf/5135816d419bcef3370d30cab554a82466b5c74b.pdf |
On the Over-Memorization During Natural, Robust and Catastrophic Overfitting | https://openreview.net/forum?id=2V1Z0Jdmss | https://openreview.net/forum?id=2V1Z0Jdmss | Runqi Lin,Chaojian Yu,Bo Han,Tongliang Liu | ICLR 2024,Poster | Overfitting negatively impacts the generalization ability of deep neural networks (DNNs) in both natural and adversarial training. Existing methods struggle to consistently address different types of overfitting, typically designing strategies that focus separately on either natural or adversarial patterns. In this work, we adopt a unified perspective by solely focusing on natural patterns to explore different types of overfitting. Specifically, we examine the memorization effect in DNNs and reveal a shared behaviour termed over-memorization, which impairs their generalization capacity. This behaviour manifests as DNNs suddenly becoming high-confidence in predicting certain training patterns and retaining a persistent memory for them. Furthermore, when DNNs over-memorize an adversarial pattern, they tend to simultaneously exhibit high-confidence prediction for the corresponding natural pattern. These findings motivate us to holistically mitigate different types of overfitting by hindering the DNNs from over-memorizing training patterns. To this end, we propose a general framework, $\textit{Distraction Over-Memorization}$ (DOM), which explicitly prevents over-memorization by either removing or augmenting the high-confidence natural patterns. Extensive experiments demonstrate the effectiveness of our proposed method in mitigating overfitting across various training paradigms. | https://openreview.net/pdf/adad7179801e34b7075adf66bc946a460961f58c.pdf |
The Generative AI Paradox: “What It Can Create, It May Not Understand” | https://openreview.net/forum?id=CF8H8MS5P8 | https://openreview.net/forum?id=CF8H8MS5P8 | Peter West,Ximing Lu,Nouha Dziri,Faeze Brahman,Linjie Li,Jena D. Hwang,Liwei Jiang,Jillian Fisher,Abhilasha Ravichander,Khyathi Chandu,Benjamin Newman,Pang Wei Koh,Allyson Ettinger,Yejin Choi | ICLR 2024,Poster | The recent wave of generative AI has sparked unprecedented global attention, with both excitement and concern over potentially superhuman levels of artificial intelligence: models now take only seconds to produce outputs that would challenge or exceed the capabilities even of expert humans. At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans. This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make? In this work, we posit that this tension reflects a divergence in the configuration of intelligence in today's generative models relative to intelligence in humans. Specifically, we propose and test the **Generative AI Paradox** hypothesis: generative models, having been trained directly to reproduce expert-like outputs, acquire generative capabilities that are not contingent upon---and can therefore exceed---their ability to understand those same types of outputs. This contrasts with humans, for whom basic understanding almost always precedes the ability to generate expert-level outputs. We test this hypothesis through controlled experiments analyzing generation vs.~understanding in generative models, across both language and image modalities. Our results show that although models can outperform humans in generation, they consistently fall short of human capabilities in measures of understanding; they also show weaker correlation between generation and understanding performance, and greater brittleness to adversarial inputs. Our findings support the hypothesis that models' generative capability may not be contingent upon understanding capability, and call for caution in interpreting artificial intelligence by analogy to human intelligence. | https://openreview.net/pdf/841e9b85039aa5e566d2f42bcc1603e7bd8a5f4b.pdf |
Semantic Flow: Learning Semantic Fields of Dynamic Scenes from Monocular Videos | https://openreview.net/forum?id=A2mRcRyGdl | https://openreview.net/forum?id=A2mRcRyGdl | Fengrui Tian,Yueqi Duan,Angtian Wang,Jianfei Guo,Shaoyi Du | ICLR 2024,Poster | In this work, we pioneer Semantic Flow, a neural semantic representation of dynamic scenes from monocular videos. In contrast to previous NeRF methods that reconstruct dynamic scenes from the colors and volume densities of individual points, Semantic Flow learns semantics from continuous flows that contain rich 3D motion information. As there is a 2D-to-3D ambiguity problem in the viewing direction when extracting 3D flow features from 2D video frames, we consider the volume densities as opacity priors that describe the contributions of flow features to the semantics on the frames. More specifically, we first learn a flow network to predict flows in the dynamic scene, and propose a flow feature aggregation module to extract flow features from video frames. Then, we propose a flow attention module to extract motion information from flow features, which is followed by a semantic network to output semantic logits of flows. We integrate the logits with volume densities in the viewing direction to supervise the flow features with semantic labels on video frames. Experimental results show that our model is able to learn from multiple dynamic scenes and supports a series of new tasks such as instance-level scene editing, semantic completions, dynamic scene tracking and semantic adaptation on novel scenes. | https://openreview.net/pdf/e63af47539c27ef566afde3ab6666d9b52050017.pdf |
Revisiting Link Prediction: a data perspective | https://openreview.net/forum?id=8Ur2xmuw7w | https://openreview.net/forum?id=8Ur2xmuw7w | Haitao Mao,Juanhui Li,Harry Shomer,Bingheng Li,Wenqi Fan,Yao Ma,Tong Zhao,Neil Shah,Jiliang Tang | ICLR 2024,Poster | Link prediction, a fundamental task on graphs, has proven indispensable in various applications, e.g., friend recommendation, protein analysis, and drug interaction prediction. However, since datasets span a multitude of domains, they could have distinct underlying mechanisms of link formation. Evidence in existing literature underscores the absence of a universally best algorithm suitable for all datasets. In this paper, we endeavor to explore principles of link prediction across diverse datasets from a data-centric perspective. We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity. We then unearth relationships among those factors: (i) global structural proximity only shows effectiveness when local structural proximity is deficient; (ii) incompatibility exists between feature proximity and structural proximity. Such incompatibility leads to GNNs for Link Prediction (GNN4LP) consistently underperforming on edges where the feature proximity factor dominates. Inspired by these new insights from a data perspective, we offer practical guidance for GNN4LP model design and guidelines for selecting appropriate benchmark datasets for more comprehensive evaluations. | https://openreview.net/pdf/e4c47c2e95e6994d3d1097c3fcd32026b4474c21.pdf |
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding | https://openreview.net/forum?id=4L0xnS4GQM | https://openreview.net/forum?id=4L0xnS4GQM | Zilong Wang,Hao Zhang,Chun-Liang Li,Julian Martin Eisenschlos,Vincent Perot,Zifeng Wang,Lesly Miculicich,Yasuhisa Fujii,Jingbo Shang,Chen-Yu Lee,Tomas Pfister | ICLR 2024,Poster | Table-based reasoning with large language models (LLMs) is a promising direction to tackle many table understanding tasks, such as table-based question answering and fact verification. Compared with generic reasoning, table-based reasoning requires the extraction of underlying semantics from both free-form questions and semi-structured tabular data. Chain-of-Thought and its similar approaches incorporate the reasoning chain in the form of textual context, but it is still an open question how to effectively leverage tabular data in the reasoning chain. We propose the Chain-of-Table framework, where tabular data is explicitly used in the reasoning chain as a proxy for intermediate thoughts. Specifically, we guide LLMs using in-context learning to iteratively generate operations and update the table to represent a tabular reasoning chain. LLMs can therefore dynamically plan the next operation based on the results of the previous ones. This continuous evolution of the table forms a chain, showing the reasoning process for a given tabular problem. The chain carries structured information of the intermediate results, enabling more accurate and reliable predictions. Chain-of-Table achieves new state-of-the-art performance on WikiTQ, FeTaQA, and TabFact benchmarks across multiple LLM choices. | https://openreview.net/pdf/0dbe9070a64ac6e1efaba9fcc09d2c7568854c27.pdf |
Denoising Diffusion Bridge Models | https://openreview.net/forum?id=FKksTayvGo | https://openreview.net/forum?id=FKksTayvGo | Linqi Zhou,Aaron Lou,Samar Khanna,Stefano Ermon | ICLR 2024,Poster | Diffusion models are powerful generative models that map noise to data using stochastic processes. However, for many applications such as image editing, the model input comes from a distribution that is not random noise. As such, diffusion models must rely on cumbersome methods like guidance or projected sampling to incorporate this information in the generative process. In our work, we propose Denoising Diffusion Bridge Models (DDBMs), a natural alternative to this paradigm based on *diffusion bridges*, a family of processes that interpolate between two paired distributions given as endpoints. Our method learns the score of the diffusion bridge from data and maps from one endpoint distribution to the other by solving a (stochastic) differential equation based on the learned score. Our method naturally unifies several classes of generative models, such as score-based diffusion models and OT-Flow-Matching, allowing us to adapt existing design and architectural choices to our more general problem. Empirically, we apply DDBMs to challenging image datasets in both pixel and latent space. On standard image translation problems, DDBMs achieve significant improvement over baseline methods, and, when we reduce the problem to image generation by setting the source distribution to random noise, DDBMs achieve comparable FID scores to state-of-the-art methods despite being built for a more general task. | https://openreview.net/pdf/a8cdff6761d51b3418f150c062888844ed697787.pdf |
Incremental Randomized Smoothing Certification | https://openreview.net/forum?id=SdeAPV1irk | https://openreview.net/forum?id=SdeAPV1irk | Shubham Ugare,Tarun Suresh,Debangshu Banerjee,Gagandeep Singh,Sasa Misailovic | ICLR 2024,Poster | Randomized smoothing-based certification is an effective approach for obtaining robustness certificates of deep neural networks (DNNs) against adversarial attacks. This method constructs a smoothed DNN model and certifies its robustness through statistical sampling, but it is computationally expensive, especially when certifying with a large number of samples. Furthermore, when the smoothed model is modified (e.g., quantized or pruned), certification guarantees may not hold for the modified DNN, and recertifying from scratch can be prohibitively expensive. We present IRS, the first approach for incremental robustness certification for randomized smoothing. We show how to reuse the certification guarantees for the original smoothed model to certify an approximated model with very few samples. IRS significantly reduces the computational cost of certifying modified DNNs while maintaining strong robustness guarantees. We experimentally demonstrate the effectiveness of our approach, showing up to 4.1x certification speedup over applying randomized smoothing to the approximated model from scratch. | https://openreview.net/pdf/acf5117cfacbbda2331d3d5dd1e9635927e9d366.pdf |
Local Graph Clustering with Noisy Labels | https://openreview.net/forum?id=89A5c6enfc | https://openreview.net/forum?id=89A5c6enfc | Artur Back de Luca,Kimon Fountoulakis,Shenghao Yang | ICLR 2024,Poster | The growing interest in machine learning problems over graphs with additional node information such as texts, images, or labels has popularized methods that require the costly operation of processing the entire graph. Yet, little effort has been made to the development of fast local methods (i.e. without accessing the entire graph) that extract useful information from such data. To that end, we propose a study of local graph clustering using noisy node labels as a proxy for additional node information. In this setting, nodes receive initial binary labels based on cluster affiliation: 1 if they belong to the target cluster and 0 otherwise. Subsequently, a fraction of these labels is flipped. We investigate the benefits of incorporating noisy labels for local graph clustering. By constructing a weighted graph with such labels, we study the performance of a graph diffusion-based local clustering method on both the original and the weighted graphs. From a theoretical perspective, we consider recovering an unknown target cluster with a single seed node in a random graph with independent noisy node labels. We provide sufficient conditions on the label noise under which, with high probability, using diffusion in the weighted graph yields a more accurate recovery of the target cluster. This approach proves more effective than using the given labels alone or using diffusion in the label-free original graph. Empirically, we show that reliable node labels can be obtained with just a few samples from an attributed graph. Moreover, utilizing these labels via diffusion in the weighted graph leads to significantly better local clustering performance across several real-world datasets, improving F1 scores by up to 13\%. | https://openreview.net/pdf/623d80cd0fdc7fda5669e65920591cb672fdadc8.pdf |
Principled Federated Domain Adaptation: Gradient Projection and Auto-Weighting | https://openreview.net/forum?id=6J3ehSUrMU | https://openreview.net/forum?id=6J3ehSUrMU | Enyi Jiang,Yibo Jacky Zhang,Sanmi Koyejo | ICLR 2024,Poster | Federated Domain Adaptation (FDA) describes the federated learning (FL) setting where source clients and a server work collaboratively to improve the performance of a target client where limited data is available. The domain shift between the source and target domains, coupled with limited data of the target client, makes FDA a challenging problem, e.g., common techniques such as federated averaging and fine-tuning fail due to domain shift and data scarcity. To theoretically understand the problem, we introduce new metrics that characterize the FDA setting and a theoretical framework with novel theorems for analyzing the performance of server aggregation rules. Further, we propose a novel lightweight aggregation rule, Federated Gradient Projection ($\texttt{FedGP}$), which significantly improves the target performance with domain shift and data scarcity. Moreover, our theory suggests an $\textit{auto-weighting scheme}$ that finds the optimal combinations of the source and target gradients. This scheme improves both $\texttt{FedGP}$ and a simpler heuristic aggregation rule. Extensive experiments verify the theoretical insights and illustrate the effectiveness of the proposed methods in practice. | https://openreview.net/pdf/157a666195e3de8c3efd2d03adaaab6fdb22dd6f.pdf |
GraphPulse: Topological representations for temporal graph property prediction | https://openreview.net/forum?id=DZqic2sPTY | https://openreview.net/forum?id=DZqic2sPTY | Kiarash Shamsi,Farimah Poursafaei,Shenyang Huang,Bao Tran Gia Ngo,Baris Coskunuzer,Cuneyt Gurcan Akcora | ICLR 2024,Poster | Many real-world networks evolve over time, and predicting the evolution of such networks remains a challenging task. Graph Neural Networks (GNNs) have shown empirical success for learning on static graphs, but they lack the ability to effectively learn from nodes and edges with different timestamps. Consequently, the prediction of future properties in temporal graphs remains a relatively under-explored area. In this paper, we aim to bridge this gap by introducing a principled framework, named GraphPulse. The framework combines two important techniques for the analysis of temporal graphs within a Newtonian framework. First, we employ the Mapper method, a key tool in topological data analysis, to extract essential clustering information from graph nodes. Next, we harness the sequential modeling capabilities of Recurrent Neural Networks (RNNs) for temporal reasoning regarding the graph's evolution. Through extensive experimentation, we demonstrate that our model enhances the ROC-AUC metric by 10.2\% in comparison to the top-performing state-of-the-art method across various temporal networks. We provide the implementation of GraphPulse at https://github.com/kiarashamsi/GraphPulse. | https://openreview.net/pdf/0b79710d653e4abb9cdb1b11de8059b9db002668.pdf |
Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs | https://openreview.net/forum?id=xZDWO0oejD | https://openreview.net/forum?id=xZDWO0oejD | Qingru Zhang,Chandan Singh,Liyuan Liu,Xiaodong Liu,Bin Yu,Jianfeng Gao,Tuo Zhao | ICLR 2024,Poster | In human-written articles, we often leverage the subtleties of text style, such as bold and italics, to guide the attention of readers. These textual emphases are vital for the readers to grasp the conveyed information. When interacting with large language models (LLMs), we have a similar need -- steering the model to pay closer attention to user-specified information, e.g., an instruction. Existing methods, however, are constrained to process plain text and do not support such a mechanism. This motivates us to introduce PASTA -- Post-hoc Attention STeering Approach, a method that allows LLMs to read text with user-specified emphasis marks. To this end, PASTA identifies a small subset of attention heads and applies precise attention reweighting on them, directing the model attention to user-specified parts. Like prompting, PASTA is applied at inference time and does not require changing any model parameters. Experiments demonstrate that PASTA can substantially enhance an LLM's ability to follow user instructions or integrate new knowledge from user inputs, leading to a significant performance improvement on a variety of tasks, e.g., an average accuracy improvement of 22\% for LLAMA-7B. Our code is publicly available at https://github.com/QingruZhang/PASTA . | https://openreview.net/pdf/04523a36c3572f2d286402a814eb2ec77d67cc25.pdf |
PRIME: Prioritizing Interpretability in Failure Mode Extraction | https://openreview.net/forum?id=QrEHs9w5UF | https://openreview.net/forum?id=QrEHs9w5UF | Keivan Rezaei,Mehrdad Saberi,Mazda Moayeri,Soheil Feizi | ICLR 2024,Poster | In this work, we study the challenge of providing human-understandable descriptions for failure modes in trained image classification models. Existing works address this problem by first identifying clusters (or directions) of incorrectly classified samples in a latent space and then aiming to provide human-understandable text descriptions for them. We observe that, in some cases, the resulting text descriptions do not match well with the identified failure modes, partially owing to the fact that shared interpretable attributes of failure modes may not be captured using clustering in the feature space. To improve on these shortcomings, we propose a novel approach that prioritizes interpretability in this problem: we start by obtaining human-understandable concepts (tags) of images in the dataset and then analyze the model's behavior based on the presence or absence of combinations of these tags. Our method also ensures that the tags describing a failure mode form a minimal set, avoiding redundant and noisy descriptions. Through several experiments on different datasets, we show that our method successfully identifies failure modes and generates high-quality text descriptions associated with them. These results highlight the importance of prioritizing interpretability in understanding model failures. | https://openreview.net/pdf/8007bcf8f34a78fd4dd848adb5d781d99167a5f5.pdf |
On gauge freedom, conservativity and intrinsic dimensionality estimation in diffusion models | https://openreview.net/forum?id=92KV9xAMhF | https://openreview.net/forum?id=92KV9xAMhF | Christian Horvat,Jean-Pascal Pfister | ICLR 2024,Poster | Diffusion models are generative models that have recently demonstrated impressive performances in terms of sampling quality and density estimation in high dimensions. They rely on a forward continuous diffusion process and a backward continuous denoising process, which can be described by a time-dependent vector field and is used as a generative model. In the original formulation of the diffusion model, this vector field is assumed to be the score function (i.e. it is the gradient of the log-probability at a given time in the diffusion process). Curiously, on the practical side, most studies on diffusion models implement this vector field as a neural network function and do not constrain it to be the gradient of some energy function (that is, most studies do not constrain the vector field to be conservative). Even though some studies investigated empirically whether such a constraint leads to a performance gain, they reached contradictory results and failed to provide analytical results. Here, we provide three analytical results regarding the extent of the modeling freedom of this vector field. Firstly, we propose a novel decomposition of vector fields into a conservative component and an orthogonal component which satisfies a given (gauge) freedom. Secondly, from this orthogonal decomposition, we show that exact density estimation and exact sampling are achieved when the conservative component exactly equals the true score; therefore, conservativity is neither necessary nor sufficient to obtain exact density estimation and exact sampling. Finally, we show that when it comes to inferring local information of the data manifold, constraining the vector field to be conservative is desirable. | https://openreview.net/pdf/0f00bdbe115a876cbb3e04c3a5919841c882cd52.pdf |
Domain constraints improve risk prediction when outcome data is missing | https://openreview.net/forum?id=1mNFsbvo2P | https://openreview.net/forum?id=1mNFsbvo2P | Sidhika Balachandar,Nikhil Garg,Emma Pierson | ICLR 2024,Poster | Machine learning models are often trained to predict the outcome resulting from a human decision. For example, if a doctor decides to test a patient for disease, will the patient test positive? A challenge is that historical decision-making determines whether the outcome is observed: we only observe test outcomes for patients doctors historically tested. Untested patients, for whom outcomes are unobserved, may differ from tested patients along observed and unobserved dimensions. We propose a Bayesian model class which captures this setting. The purpose of the model is to accurately estimate risk for both tested and untested patients. Estimating this model is challenging due to the wide range of possibilities for untested patients. To address this, we propose two domain constraints which are plausible in health settings: a prevalence constraint, where the overall disease prevalence is known, and an expertise constraint, where the human decision-maker deviates from purely risk-based decision-making only along a constrained feature set. We show theoretically and on synthetic data that domain constraints improve parameter inference. We apply our model to a case study of cancer risk prediction, showing that the model's inferred risk predicts cancer diagnoses, its inferred testing policy captures known public health policies, and it can identify suboptimalities in test allocation. Though our case study is in healthcare, our analysis reveals a general class of domain constraints which can improve model estimation in many settings. | https://openreview.net/pdf/b86176320b4bfa73e8eee4928ee9ff15cc4181f7.pdf |
Learning Multi-Agent Communication with Contrastive Learning | https://openreview.net/forum?id=vZZ4hhniJU | https://openreview.net/forum?id=vZZ4hhniJU | Yat Long Lo,Biswa Sengupta,Jakob Nicolaus Foerster,Michael Noukhovitch | ICLR 2024,Poster | Communication is a powerful tool for coordination in multi-agent RL. But inducing an effective, common language is a difficult challenge, particularly in the decentralized setting. In this work, we introduce an alternative perspective where communicative messages sent between agents are considered as different incomplete views of the environment state. By examining the relationship between messages sent and received, we propose to learn to communicate using contrastive learning to maximize the mutual information between messages of a given trajectory. In communication-essential environments, our method outperforms previous work in both performance and learning speed. Using qualitative metrics and representation probing, we show that our method induces more symmetric communication and captures global state information from the environment. Overall, we show the power of contrastive learning and the importance of leveraging messages as encodings for effective communication. | https://openreview.net/pdf/2c9dcdc69b3da56a7ebdec7279ee8bc8087b5c39.pdf |
Closing the Gap between TD Learning and Supervised Learning - A Generalisation Point of View. | https://openreview.net/forum?id=qg5JENs0N4 | https://openreview.net/forum?id=qg5JENs0N4 | Raj Ghugare,Matthieu Geist,Glen Berseth,Benjamin Eysenbach | ICLR 2024,Poster | Some reinforcement learning (RL) algorithms have the capability of recombining pieces of previously seen experience to solve a task never seen before during training. This oft-sought property is one of the few ways in which dynamic programming based RL algorithms are considered different from supervised learning (SL) based RL algorithms. Yet, recent RL methods based on off-the-shelf SL algorithms achieve excellent results without an explicit mechanism for stitching; it remains unclear whether those methods forgo this important stitching property. This paper studies this question in the setting of goal-reaching problems. We show that the desirable stitching property corresponds to a form of generalization: after training on a distribution of (state, goal) pairs, one would like to evaluate on (state, goal) pairs not seen together in the training data. Our analysis shows that this sort of generalization is different from i.i.d. generalization. This connection between stitching and generalization reveals why we should not expect existing RL methods based on SL to perform stitching, even in the limit of large datasets and models. We experimentally validate this result on carefully constructed datasets. This connection suggests a simple remedy, the same remedy for improving generalization in supervised learning: data augmentation. We propose a naive temporal data augmentation approach and demonstrate that adding it to RL methods based on SL enables them to successfully stitch together experience, so that they succeed in navigating between states and goals unseen together during training. | https://openreview.net/pdf/a259dbdb9d4d44aa70dd1d0a024af855ec958224.pdf |
lpNTK: Better Generalisation with Less Data via Sample Interaction During Learning | https://openreview.net/forum?id=8Ju0VmvMCW | https://openreview.net/forum?id=8Ju0VmvMCW | Shangmin Guo,Yi Ren,Stefano V Albrecht,Kenny Smith | ICLR 2024,Poster | Although much research has been done on proposing new models or loss functions to improve the generalisation of artificial neural networks (ANNs), less attention has been directed to the impact of the training data on generalisation. In this work, we start by approximating the interaction between samples, i.e. how learning one sample would modify the model's prediction on other samples. Through analysing the terms involved in weight updates in supervised learning, we find that labels influence the interaction between samples. Therefore, we propose the labelled pseudo Neural Tangent Kernel (lpNTK) which takes label information into consideration when measuring the interactions between samples. We first prove that lpNTK asymptotically converges to the empirical neural tangent kernel in terms of the Frobenius norm under certain assumptions. Secondly, we illustrate how lpNTK helps to understand learning phenomena identified in previous work, specifically the learning difficulty of samples and forgetting events during learning. Moreover, we also show that using lpNTK to identify and remove poisoning training samples does not hurt the generalisation performance of ANNs. | https://openreview.net/pdf/6b2adcd8ad81d95376fb9844e20e0f10f2339d70.pdf |
Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation | https://openreview.net/forum?id=Xvfz8NHmCj | https://openreview.net/forum?id=Xvfz8NHmCj | Wenxuan Zhang,Youssef Mohamed,Bernard Ghanem,Philip Torr,Adel Bibi,Mohamed Elhoseiny | ICLR 2024,Poster | We propose and study a realistic Continual Learning (CL) setting where learning algorithms are granted a restricted computational budget per time step while training. We apply this setting to large-scale semi-supervised Continual Learning scenarios with a sparse label rate. Previous proficient CL methods perform very poorly in this challenging setting. Overfitting to the sparse labeled data and an insufficient computational budget are the two main culprits for such poor performance. Our new setting encourages learning methods to effectively and efficiently utilize the unlabeled data during training. To that end, we propose a simple but highly effective baseline, DietCL, which utilizes both unlabeled and labeled data jointly. DietCL meticulously allocates computational budget for both types of data. We validate our baseline, at scale, on several datasets, e.g., CLOC, ImageNet10K, and CGLM, under a constrained budget setup. DietCL outperforms, by a large margin, all existing supervised CL algorithms as well as more recent continual semi-supervised methods. Our extensive analysis and ablations demonstrate that DietCL is stable across a full spectrum of label sparsity and computational budgets, and under various other ablation settings. | https://openreview.net/pdf/39ce02edd44311a18d225b7c411469aaeaa36fa0.pdf |
Video Decomposition Prior: Editing Videos Layer by Layer | https://openreview.net/forum?id=nfMyERXNru | https://openreview.net/forum?id=nfMyERXNru | Gaurav Shrivastava,Ser-Nam Lim,Abhinav Shrivastava | ICLR 2024,Poster | In the evolving landscape of video editing methodologies, a majority of deep learning techniques are often reliant on extensive datasets of observed input and ground truth sequence pairs for optimal performance. Such reliance often falters when acquiring data becomes challenging, especially in tasks like video dehazing and relighting, where replicating identical motions and camera angles in both corrupted and ground truth sequences is complicated. Moreover, these conventional methodologies perform best when the test distribution closely mirrors the training distribution. Recognizing these challenges, this paper introduces a novel video decomposition prior `VDP' framework which derives inspiration from professional video editing practices. Our methodology does not mandate task-specific external data corpus collection; instead, it pivots to utilizing the motion and appearance of the input video. The VDP framework decomposes a video sequence into a set of multiple RGB layers and associated opacity levels. These layers are then manipulated individually to obtain the desired results. We address tasks such as video object segmentation, dehazing, and relighting. Moreover, we introduce a novel logarithmic video decomposition formulation for video relighting tasks, setting a new benchmark over the existing methodologies. We evaluate our approach on standard video datasets like DAVIS, REVIDE, & SDSD and show qualitative results on a diverse array of internet videos. | https://openreview.net/pdf/16042f39cfa56feff62c7750efb3c74ec2bab237.pdf |
Beyond task performance: evaluating and reducing the flaws of large multimodal models with in-context-learning | https://openreview.net/forum?id=mMaQvkMzDi | https://openreview.net/forum?id=mMaQvkMzDi | Mustafa Shukor,Alexandre Rame,Corentin Dancette,Matthieu Cord | ICLR 2024,Poster | Following the success of Large Language Models (LLMs), Large Multimodal Models (LMMs), such as the Flamingo model and its subsequent competitors, have started to emerge as natural steps towards generalist agents. However, interacting with recent LMMs reveals major limitations that are hardly captured by the current evaluation benchmarks. Indeed, task performances (e.g., VQA accuracy) alone do not provide enough clues to understand their real capabilities, limitations, and to what extent such models are aligned with human expectations. To refine our understanding of those flaws, we deviate from the current evaluation paradigm, and (1) evaluate 10 recent open-source LMMs from 3B up to 80B parameter scale, on 5 different axes: hallucinations, abstention, compositionality, explainability and instruction following. Our evaluation on these axes reveals major flaws in LMMs. While the current go-to solution to align these models is based on training, such as instruction tuning or RLHF, we rather (2) explore the training-free in-context learning (ICL) as a solution, and study how it affects these limitations. Based on our ICL study, (3) we push ICL further and propose new multimodal ICL variants such as Multitask-ICL, Chain-of-Hindsight-ICL, and Self-Correcting-ICL. Our findings are as follows: (1) Despite their success, LMMs have flaws that remain unsolved with scaling alone. (2) The effect of ICL on LMM flaws is nuanced: despite its effectiveness for improving explainability and answer abstention, ICL only slightly improves instruction following, does not improve compositional abilities, and actually even amplifies hallucinations. (3) The proposed ICL variants are promising as post-hoc approaches to efficiently tackle some of those flaws. The code is available here: https://github.com/mshukor/EvALign-ICL. | https://openreview.net/pdf/44360af9a776b75990f4167a311735f25f8a2c33.pdf |
Butterfly Effects of SGD Noise: Error Amplification in Behavior Cloning and Autoregression | https://openreview.net/forum?id=CgPs04l9TO | https://openreview.net/forum?id=CgPs04l9TO | Adam Block,Dylan J Foster,Akshay Krishnamurthy,Max Simchowitz,Cyril Zhang | ICLR 2024,Poster | This work studies training instabilities of behavior cloning with deep neural networks. We observe that minibatch SGD updates to the policy network during training result in sharp oscillations in long-horizon rewards, despite negligibly affecting the behavior cloning loss. We empirically disentangle the statistical and computational causes of these oscillations, and find them to stem from the chaotic propagation of minibatch SGD noise through unstable closed-loop dynamics. While SGD noise is benign in the single-step action prediction objective, it results in catastrophic error accumulation over long horizons, an effect we term *gradient variance amplification* (GVA). We demonstrate that many standard mitigation techniques do not alleviate GVA, but that taking an exponential moving average (EMA) of iterates is surprisingly effective at doing so. Furthermore, we illustrate the generality of the phenomenon by showing both the existence of GVA and its amelioration by EMA in autoregressive language generation. Finally, we provide theoretical vignettes both exhibiting the benefits of EMA in alleviating GVA and illustrating the extent to which classical convex models help in understanding the benefits of iterate averaging in deep learning. | https://openreview.net/pdf/f9b4d8ce050d60372c1935942226b07d5e967609.pdf |
On Stationary Point Convergence of PPO-Clip | https://openreview.net/forum?id=uznKlCpWjV | https://openreview.net/forum?id=uznKlCpWjV | Ruinan Jin,Shuai Li,Baoxiang Wang | ICLR 2024,Poster | Proximal policy optimization (PPO) has gained popularity in reinforcement learning (RL). Its PPO-Clip variant is one of the most frequently implemented algorithms and is one of the first-to-try algorithms in RL tasks. This variant uses a clipped surrogate objective function not typically found in other algorithms. Many works have demonstrated the practical performance of PPO-Clip, but the theoretical understanding of it is limited to specific settings. In this work, we provide a comprehensive analysis that shows the stationary point convergence of PPO-Clip and the convergence rate thereof. Our analysis is new and overcomes many challenges, including the non-smooth nature of the clip operator, the potentially unbounded score function, and the involvement of the ratio of two stochastic policies. Our results and techniques may offer new insights into PPO-Clip. | https://openreview.net/pdf/affa6137c1294a52762092393afbf692bb8c350f.pdf |
Automatic Functional Differentiation in JAX | https://openreview.net/forum?id=gzT61ziSCu | https://openreview.net/forum?id=gzT61ziSCu | Min Lin | ICLR 2024,Poster | We extend JAX with the capability to automatically differentiate higher-order functions (functionals and operators). By representing functions as an infinite-dimensional generalization of arrays, we seamlessly use JAX's existing primitive system to implement higher-order functions. We present a set of primitive operators that serve as foundational building blocks for constructing several key types of functionals. For every introduced primitive operator, we derive and implement both linearization and transposition rules, aligning with JAX's internal protocols for forward and reverse mode automatic differentiation. This enhancement allows for functional differentiation in the same syntax traditionally used for functions. The resulting functional gradients are themselves functions ready to be invoked in Python. We showcase this tool's efficacy and simplicity through applications where functional derivatives are indispensable. | https://openreview.net/pdf/f304e5e89c5a8226b8d319b116acda0074847079.pdf |
FedCompass: Efficient Cross-Silo Federated Learning on Heterogeneous Client Devices Using a Computing Power-Aware Scheduler | https://openreview.net/forum?id=msXxrttLOi | https://openreview.net/forum?id=msXxrttLOi | Zilinghan Li,Pranshu Chaturvedi,Shilan He,Han Chen,Gagandeep Singh,Volodymyr Kindratenko,Eliu A Huerta,Kibaek Kim,Ravi Madduri | ICLR 2024,Poster | Cross-silo federated learning offers a promising solution to collaboratively train robust and generalized AI models without compromising the privacy of local datasets, e.g., healthcare, financial, as well as scientific projects that lack a centralized data facility. Nonetheless, because of the disparity of computing resources among different clients (i.e., device heterogeneity), synchronous federated learning algorithms suffer from degraded efficiency when waiting for straggler clients. Similarly, asynchronous federated learning algorithms experience degradation in the convergence rate and final model accuracy on non-identically and independently distributed (non-IID) heterogeneous datasets due to stale local models and client drift. To address these limitations in cross-silo federated learning with heterogeneous clients and data, we propose FedCompass, an innovative semi-asynchronous federated learning algorithm with a computing power-aware scheduler on the server side, which adaptively assigns varying amounts of training tasks to different clients using the knowledge of the computing power of individual clients. FedCompass ensures that multiple locally trained models from clients are received almost simultaneously as a group for aggregation, effectively reducing the staleness of local models. At the same time, the overall training process remains asynchronous, eliminating prolonged waiting periods from straggler clients. Using diverse non-IID heterogeneous distributed datasets, we demonstrate that FedCompass achieves faster convergence and higher accuracy than other asynchronous algorithms while remaining more efficient than synchronous algorithms when performing federated learning on heterogeneous clients. The source code for FedCompass is available at https://github.com/APPFL/FedCompass. | https://openreview.net/pdf/f279f1fb93970c0390c2f0b7f0a41818dea7f5aa.pdf |
ADOPD: A Large-Scale Document Page Decomposition Dataset | https://openreview.net/forum?id=x1ptaXpOYa | https://openreview.net/forum?id=x1ptaXpOYa | Jiuxiang Gu,Xiangxi Shi,Jason Kuen,Lu Qi,Ruiyi Zhang,Anqi Liu,Ani Nenkova,Tong Sun | ICLR 2024,Poster | Research in document image understanding is hindered by limited high-quality document data. To address this, we introduce ADOPD, a comprehensive dataset for document page decomposition. ADOPD stands out with its data-driven approach for document taxonomy discovery during data collection, complemented by dense annotations. Our approach integrates large-scale pretrained models with a human-in-the-loop process to guarantee diversity and balance in the resulting data collection. Leveraging our data-driven document taxonomy, we collect and densely annotate document images, addressing four document image understanding tasks: Doc2Mask, Doc2Box, Doc2Tag, and Doc2Seq. Specifically, for each image, the annotations include human-labeled entity masks, text bounding boxes, as well as automatically generated tags and captions that have been manually cleaned. We conduct comprehensive experimental analyses to validate our data and assess the four tasks using various models. We envision ADOPD as a foundational dataset with the potential to drive future research in document understanding. | https://openreview.net/pdf/ed945735c4fdded84b19584180cb71f5e88bf428.pdf |
Provably Efficient CVaR RL in Low-rank MDPs | https://openreview.net/forum?id=9x6yrFAPnx | https://openreview.net/forum?id=9x6yrFAPnx | Yulai Zhao,Wenhao Zhan,Xiaoyan Hu,Ho-fung Leung,Farzan Farnia,Wen Sun,Jason D. Lee | ICLR 2024,Poster | We study risk-sensitive Reinforcement Learning (RL), where we aim to maximize the Conditional Value at Risk (CVaR) with a fixed risk tolerance $\tau$. Prior theoretical work studying risk-sensitive RL focuses on the tabular Markov Decision Processes (MDPs) setting. To extend CVaR RL to settings where the state space is large, function approximation must be deployed. We study CVaR RL in low-rank MDPs with nonlinear function approximation. Low-rank MDPs assume the underlying transition kernel admits a low-rank decomposition, but unlike prior linear models, low-rank MDPs do not assume the feature or state-action representation is known. We propose a novel Upper Confidence Bound (UCB) bonus-driven algorithm to carefully balance the interplay between exploration, exploitation, and representation learning in CVaR RL. We prove that our algorithm achieves a sample complexity of $\tilde{O}\left(\frac{H^7 A^2 d^4}{\tau^2 \epsilon^2}\right)$ to yield an $\epsilon$-optimal CVaR, where $H$ is the length of each episode, $A$ is the capacity of the action space, and $d$ is the dimension of representations. Computationally, we design a novel discretized Least-Squares Value Iteration (LSVI) algorithm for the CVaR objective as the planning oracle and show that we can find the near-optimal policy in polynomial running time with a Maximum Likelihood Estimation oracle. To our knowledge, this is the first provably efficient CVaR RL algorithm in low-rank MDPs. | https://openreview.net/pdf/eaf81f3cdb16317a7636b5cce56c2d8147e2ddce.pdf |
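For readers unfamiliar with the objective itself, the following minimal sketch computes the empirical CVaR that such an agent maximizes; it is illustrative only and has no relation to the paper's UCB algorithm.

```python
import numpy as np

def empirical_cvar(returns, tau):
    """Mean of the worst tau-fraction of sampled returns, i.e. the
    expected return conditioned on falling at or below the tau-quantile.
    Maximizing this value is the risk-sensitive objective in CVaR RL.
    """
    returns = np.sort(np.asarray(returns))        # ascending: worst outcomes first
    k = max(1, int(np.ceil(tau * returns.size)))  # size of the lower tail
    return returns[:k].mean()

rng = np.random.default_rng(0)
episode_returns = rng.normal(loc=1.0, scale=2.0, size=10_000)
print(empirical_cvar(episode_returns, tau=0.1))  # roughly -2.5 for N(1, 2^2)
```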
COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL | https://openreview.net/forum?id=jnFcKjtUPN | https://openreview.net/forum?id=jnFcKjtUPN | Xiyao Wang,Ruijie Zheng,Yanchao Sun,Ruonan Jia,Wichayaporn Wongkamjan,Huazhe Xu,Furong Huang | ICLR 2024,Poster | Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning and real environment exploration using the current policy for dynamics model learning. However, due to the complexity of real-world environments, the learned dynamics model is inevitably imperfect, and its prediction errors can further mislead policy learning and result in sub-optimal solutions. In this paper, we propose $\texttt{COPlanner}$, a planning-driven framework for model-based methods to address the inaccurately learned dynamics model problem with conservative model rollouts and optimistic environment exploration. $\texttt{COPlanner}$ leverages an uncertainty-aware policy-guided model predictive control (UP-MPC) component to plan for multi-step uncertainty estimation. This estimated uncertainty then serves as a penalty during model rollouts and as a bonus during real environment exploration, respectively, to choose actions. Consequently, $\texttt{COPlanner}$ can avoid model uncertain regions through conservative model rollouts, thereby alleviating the influence of model error. Simultaneously, it explores high-reward model uncertain regions to reduce model error actively through optimistic real environment exploration. $\texttt{COPlanner}$ is a plug-and-play framework that can be applied to any dyna-style model-based method. Experimental results on a series of proprioceptive and visual continuous control tasks demonstrate that both the sample efficiency and asymptotic performance of strong model-based methods are significantly improved when combined with $\texttt{COPlanner}$. | https://openreview.net/pdf/ab26bab3b3100a3584256214927a1fa490451162.pdf |
Can Transformers Capture Spatial Relations between Objects? | https://openreview.net/forum?id=HgZUcwFhjr | https://openreview.net/forum?id=HgZUcwFhjr | Chuan Wen,Dinesh Jayaraman,Yang Gao | ICLR 2024,Poster | Spatial relationships between objects represent key scene information for humans to understand and interact with the world. To study the capability of current computer vision systems to recognize physically grounded spatial relations, we start by proposing precise relation definitions that permit consistently annotating a benchmark dataset. Despite the apparent simplicity of this task relative to others in the recognition literature, we observe that existing approaches perform poorly on this benchmark. We propose new approaches exploiting the long-range attention capabilities of transformers for this task and evaluate key design principles. We identify a simple ``RelatiViT'' architecture and demonstrate that it outperforms all current approaches. To our knowledge, this is the first method to convincingly outperform naive baselines on spatial relation prediction in in-the-wild settings. The code and datasets are available in \url{https://sites.google.com/view/spatial-relation}. | https://openreview.net/pdf/1ec8df3c05a5e80570903e0023ec0369a13472f4.pdf |
Plug-and-Play Posterior Sampling under Mismatched Measurement and Prior Models | https://openreview.net/forum?id=66arKkGiFy | https://openreview.net/forum?id=66arKkGiFy | Marien Renaud,Jiaming Liu,Valentin De Bortoli,Andres Almansa,Ulugbek Kamilov | ICLR 2024,Poster | Posterior sampling has been shown to be a powerful Bayesian approach for solving imaging inverse problems. The recent plug-and-play unadjusted Langevin algorithm (PnP-ULA) has emerged as a promising method for Monte Carlo sampling and minimum mean squared error (MMSE) estimation by combining physical measurement models with deep-learning priors specified using image denoisers. However, the intricate relationship between the sampling distribution of PnP-ULA and the mismatched data-fidelity and denoiser has not been theoretically analyzed. We address this gap by proposing a posterior-$L_2$ pseudometric and using it to quantify an explicit error bound for PnP-ULA under mismatched posterior distribution. We numerically validate our theory on several inverse problems such as sampling from Gaussian mixture models and image deblurring. Our results suggest that the sensitivity of the sampling distribution of PnP-ULA to a mismatch in the measurement model and the denoiser can be precisely characterized. | https://openreview.net/pdf/9ea1be449a91b1b8d11cd4145e708cc11d5980c8.pdf |
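To make the sampler under analysis concrete, here is a minimal single-step sketch of a plug-and-play unadjusted Langevin update with a Gaussian likelihood. The callables `forward_op` and `denoiser` are assumed stand-ins, and the paper's full algorithm additionally includes a projection onto a compact set that is omitted here.

```python
import torch

def pnp_ula_step(x, y, forward_op, sigma, denoiser, eps, delta):
    """One unadjusted Langevin step combining the gradient of a Gaussian
    data-fidelity term with a denoiser residual that stands in for the
    prior score (a Tweedie-style approximation), plus injected noise.
    """
    x = x.detach().requires_grad_(True)
    log_lik = -((forward_op(x) - y) ** 2).sum() / (2 * sigma**2)
    grad_lik, = torch.autograd.grad(log_lik, x)
    prior_score = (denoiser(x) - x) / eps  # score of the smoothed prior
    z = torch.randn_like(x)
    return (x + delta * grad_lik + delta * prior_score
            + (2 * delta) ** 0.5 * z).detach()
```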
Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation | https://openreview.net/forum?id=o4CLLlIaaH | https://openreview.net/forum?id=o4CLLlIaaH | WANG Jiaxu,Ziyi Zhang,Renjing Xu | ICLR 2024,Poster | This paper introduces a novel paradigm for the generalizable neural radiance field (NeRF). Previous generic NeRFs combine multiview stereo techniques with image-based neural rendering, yielding impressive results, while suffering from three issues. First, occlusions often result in inconsistent feature matching. Second, they deliver distortions and artifacts in geometric discontinuities and locally sharp shapes due to their independent processing of sampled points and rough feature aggregation. Third, their image-based representations experience severe degradations when source views are not near enough to the target view. To address these challenges, we propose the first paradigm that constructs the generalizable neural field based on point-based rather than image-based rendering, which we call the Generalizable neural Point Field (GPF). Our approach explicitly models visibilities by geometric priors and augments them with neural features. We propose a novel nonuniform log sampling strategy to improve rendering speed and reconstruction quality. Moreover, we present a learnable kernel spatially augmented with features for feature aggregation, mitigating distortions at places with drastically varying geometries. Besides, our representation can be easily manipulated. Experiments show that our model can deliver better geometries, view consistencies, and rendering quality than all counterparts and benchmarks on three datasets in both generalization and finetuning settings, preliminarily proving the potential of the new paradigm for generalizable NeRF. | https://openreview.net/pdf/aa93ee892a0cfaedceafef0a9e81528fe6020ea9.pdf |
Navigating Text-To-Image Customization: From LyCORIS Fine-Tuning to Model Evaluation | https://openreview.net/forum?id=wfzXa8e783 | https://openreview.net/forum?id=wfzXa8e783 | SHIH-YING YEH,Yu-Guan Hsieh,Zhidong Gao,Bernard B W Yang,Giyeong Oh,Yanmin Gong | ICLR 2024,Poster | Text-to-image generative models have garnered immense attention for their ability to produce high-fidelity images from text prompts. Among these, Stable Diffusion distinguishes itself as a leading open-source model in this fast-growing field. However, the intricacies of fine-tuning these models pose multiple challenges from new methodology integration to systematic evaluation. Addressing these issues, this paper introduces LyCORIS (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion), an open-source library that offers a wide selection of fine-tuning methodologies for Stable Diffusion. Furthermore, we present a thorough framework for the systematic assessment of varied fine-tuning techniques. This framework employs a diverse suite of metrics and delves into multiple facets of fine-tuning, including hyperparameter adjustments and the evaluation with different prompt types across various concept categories. Through this comprehensive approach, our work provides essential insights into the nuanced effects of fine-tuning parameters, bridging the gap between state-of-the-art research and practical application. | https://openreview.net/pdf/2c5d7180b9a8155961acd0ac4085d47085c23cea.pdf |
Finite Scalar Quantization: VQ-VAE Made Simple | https://openreview.net/forum?id=8ishA3LxN8 | https://openreview.net/forum?id=8ishA3LxN8 | Fabian Mentzer,David Minnen,Eirikur Agustsson,Michael Tschannen | ICLR 2024,Poster | We propose to replace vector quantization (VQ) in the latent representation of VQ-VAEs with a simple scheme termed finite scalar quantization (FSQ), where we project the VAE representation down to a few dimensions (typically fewer than 10). Each dimension is quantized to a small set of fixed values, leading to an (implicit) codebook given by the product of these sets. By appropriately choosing the number of dimensions and the values each dimension can take, we obtain the same codebook size as in VQ. On top of such discrete representations, we can train the same models that have been trained on VQ-VAE representations, for example autoregressive and masked transformer models for image generation, multimodal generation, and dense prediction computer vision tasks. Concretely, we employ FSQ with MaskGIT for image generation, and with UViM for depth estimation, colorization, and panoptic segmentation. Despite the much simpler design of FSQ, we obtain competitive performance in all these tasks. We emphasize that FSQ does not suffer from codebook collapse and does not need the complex machinery employed in VQ (commitment losses, codebook reseeding, code splitting, entropy penalties, etc.) to learn expressive discrete representations. | https://openreview.net/pdf/798fe17c3dd9040e6fdbd2307fd969061ec484f4.pdf |
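The quantizer itself is only a few lines. The sketch below is a minimal rendering of the scheme with a straight-through gradient; implementations typically handle even numbers of levels with a small half-shift, which is omitted here for brevity.

```python
import torch

def fsq(z, levels):
    """Finite scalar quantization: squash each dimension into a bounded
    interval, round it to one of `levels[i]` fixed values, and pass
    gradients straight through the rounding. The implicit codebook is
    the product of the per-dimension sets, of size prod(levels).
    """
    half = (torch.tensor(levels, dtype=z.dtype) - 1) / 2
    bounded = torch.tanh(z) * half                   # each dim in [-half_i, half_i]
    quantized = torch.round(bounded)
    return bounded + (quantized - bounded).detach()  # straight-through estimator

z = torch.randn(4, 5, requires_grad=True)
codes = fsq(z, levels=[8, 8, 8, 5, 5])  # implicit codebook size 8*8*8*5*5 = 12800
```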
Interpretable Meta-Learning of Physical Systems | https://openreview.net/forum?id=nnicaG5xiH | https://openreview.net/forum?id=nnicaG5xiH | Matthieu Blanke,Marc Lelarge | ICLR 2024,Poster | Machine learning methods can be a valuable aid in the scientific process, but they need to face challenging settings where data come from inhomogeneous experimental conditions. Recent meta-learning methods have made significant progress in multi-task learning, but they rely on black-box neural networks, resulting in high computational costs and limited interpretability. We introduce CAMEL, a new meta-learning architecture capable of learning efficiently from multiple environments, with an affine structure with respect to the learning task. We prove that CAMEL can identify the physical parameters of the system, enabling interpretable learning. We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems, ranging from toy models to complex, non-analytical systems. The interpretability of our method is illustrated with original applications to parameter identification and to adaptive control and system identification. | https://openreview.net/pdf/6aa5ecddbb4062e010f6e87f67a3b7222f689dbe.pdf |
Grokking in Linear Estimators -- A Solvable Model that Groks without Understanding | https://openreview.net/forum?id=GH2LYb9XV0 | https://openreview.net/forum?id=GH2LYb9XV0 | Noam Itzhak Levi,Alon Beck,Yohai Bar-Sinai | ICLR 2024,Poster | Grokking is the intriguing phenomenon where a model learns to generalize long after it has fit the training data. We show both analytically and numerically that grokking can surprisingly occur in linear networks performing linear tasks in a simple teacher-student setup. In this setting, the full training dynamics are derived in terms of the expected training and generalization data covariance matrices. We present exact predictions on how the grokking time depends on input and output dimensionality, training sample size, regularization, and network parameter initialization. The key finding is that a late increase in generalization may not imply a transition from "memorization" to "understanding", but can simply be an artifact of the accuracy measure. We provide empirical verification for these propositions, along with preliminary results indicating that some predictions also hold for deeper networks with non-linear activations. | https://openreview.net/pdf/dd10db9cc0953c0122ae45da21993b9b60ae82bb.pdf |
Discovering Failure Modes of Text-guided Diffusion Models via Adversarial Search | https://openreview.net/forum?id=TOWdQQgMJY | https://openreview.net/forum?id=TOWdQQgMJY | Qihao Liu,Adam Kortylewski,Yutong Bai,Song Bai,Alan Yuille | ICLR 2024,Poster | Text-guided diffusion models (TDMs) are widely applied but can fail unexpectedly. Common failures include: _(i)_ natural-looking text prompts generating images with the wrong content, or _(ii)_ different random samples of the latent variables that generate vastly different, and even unrelated, outputs despite being conditioned on the same text prompt. In this work, we aim to study and understand the failure modes of TDMs in more detail. To achieve this, we propose SAGE, the first adversarial search method on TDMs that systematically explores the discrete prompt space and the high-dimensional latent space, to automatically discover undesirable behaviors and failure cases in image generation. We use image classifiers as surrogate loss functions during searching, and employ human inspections to validate the identified failures. For the first time, our method enables efficient exploration of both the discrete and intricate human language space and the challenging latent space, overcoming the gradient vanishing problem. Then, we demonstrate the effectiveness of SAGE on five widely used generative models and reveal four typical failure modes that have not been systematically studied before: (1) We find a variety of natural text prompts that generate images failing to capture the semantics of input texts. We further discuss the underlying causes and potential solutions based on the results. (2) We find regions in the latent space that lead to distorted images independent of the text prompt, suggesting that parts of the latent space are not well-structured. (3) We also find latent samples that result in natural-looking images unrelated to the text prompt, implying a possible misalignment between the latent and prompt spaces. (4) By appending a single adversarial token embedding to any input prompts, we can generate a variety of specified target objects, with minimal impact on CLIP scores, demonstrating the fragility of language representations. | https://openreview.net/pdf/2f0f8be6731080c2df71f55ad821d89cedc1acd5.pdf |
DiffAR: Denoising Diffusion Autoregressive Model for Raw Speech Waveform Generation | https://openreview.net/forum?id=GTk0AdOYLq | https://openreview.net/forum?id=GTk0AdOYLq | Roi Benita,Michael Elad,Joseph Keshet | ICLR 2024,Poster | Diffusion models have recently been shown to be relevant for high-quality speech generation. Most work has focused on generating spectrograms, which further requires a subsequent model to convert the spectrogram to a waveform (i.e., a vocoder). This work proposes a diffusion probabilistic end-to-end model for generating a raw speech waveform. The proposed model is autoregressive, generating overlapping frames sequentially, where each frame is conditioned on a portion of the previously generated one. Hence, our model can effectively synthesize an unlimited speech duration while preserving high-fidelity synthesis and temporal coherence. We implemented the proposed model for unconditional and conditional speech generation, where the latter can be driven by an input sequence of phonemes, amplitudes, and pitch values. Working on the waveform directly has some empirical advantages. Specifically, it allows the creation of local acoustic behaviors, like vocal fry, which makes the overall waveform sound more natural. Furthermore, the proposed diffusion model is stochastic and not deterministic; therefore, each inference generates a slightly different waveform variation, enabling an abundance of valid realizations. Experiments show that the proposed model generates speech with superior quality compared with other state-of-the-art neural speech generation systems. | https://openreview.net/pdf/c345adedd1923a238d6e7fd2c159a8246734f126.pdf |
Statistical Rejection Sampling Improves Preference Optimization | https://openreview.net/forum?id=xbjSwwrQOe | https://openreview.net/forum?id=xbjSwwrQOe | Tianqi Liu,Yao Zhao,Rishabh Joshi,Misha Khalman,Mohammad Saleh,Peter J Liu,Jialu Liu | ICLR 2024,Poster | Improving the alignment of language models with human preferences remains an active research challenge. Previous approaches have primarily utilized online Reinforcement Learning from Human Feedback (RLHF). Recently, offline methods such as Sequence Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have emerged as attractive alternatives, offering improvements in stability and scalability while maintaining competitive performance. SLiC refines its loss function using sequence pairs sampled from a supervised fine-tuned (SFT) policy, while DPO directly optimizes language models based on preference data, foregoing the need for a separate reward model. However, the maximum likelihood estimator (MLE) of the target optimal policy requires labeled preference pairs sampled from that policy. The absence of a reward model in DPO constrains its ability to sample preference pairs from the optimal policy. Meanwhile, SLiC can only sample preference pairs from the SFT policy. To address these limitations, we introduce a novel approach called Statistical Rejection Sampling Optimization (RSO) designed to source preference data from the target optimal policy using rejection sampling, enabling a more accurate estimation of the optimal policy. We also propose a unified framework that enhances the loss functions used in both SLiC and DPO from a preference modeling standpoint. Through extensive experiments across diverse tasks, we demonstrate that RSO consistently outperforms both SLiC and DPO as evaluated by both Large Language Models (LLMs) and human raters. | https://openreview.net/pdf/f5580e1dcb1423e7563982aec8482496ef57969f.pdf |
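As a concrete picture of the rejection sampling step described above, the sketch below accepts SFT-policy candidates with probability proportional to exp(r/beta), so the accepted set approximates draws from the reward-tilted optimal policy. A minimal sketch under that framing; the function name, the single-prompt setting, and the scoring inputs are illustrative, not the paper's API.

```python
import math, random

def reward_tilted_rejection_sample(candidates, rewards, beta, num_accept):
    """Rejection-sample responses drawn from the SFT policy so that the
    accepted set approximates the optimal policy
    pi*(y|x) proportional to pi_sft(y|x) * exp(r(x, y) / beta).
    """
    r_max = max(rewards)
    pool = list(zip(candidates, rewards))
    accepted = []
    while pool and len(accepted) < num_accept:
        y, r = pool.pop(random.randrange(len(pool)))
        if random.random() < math.exp((r - r_max) / beta):  # acceptance prob <= 1
            accepted.append(y)
    return accepted

# Responses scored by a reward model; smaller beta tilts harder to high reward.
print(reward_tilted_rejection_sample(["a", "b", "c"], [0.1, 0.9, 0.5],
                                     beta=0.3, num_accept=2))
```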
On the generalization capacity of neural networks during generic multimodal reasoning | https://openreview.net/forum?id=zyBJodMrn5 | https://openreview.net/forum?id=zyBJodMrn5 | Takuya Ito,Soham Dan,Mattia Rigotti,James Kozloski,Murray Campbell | ICLR 2024,Poster | The advent of the Transformer has led to the development of large language models (LLMs), which appear to demonstrate human-like capabilities. To assess the generality of this class of models and a variety of other base neural network architectures in multimodal domains, we evaluated and compared their capacity for multimodal generalization. We introduce a multimodal question-answer benchmark to evaluate three specific types of out-of-distribution (OOD) generalization performance: distractor generalization (generalization in the presence of distractors), systematic compositional generalization (generalization to new task permutations), and productive compositional generalization (generalization to more complex tasks with deeper dependencies). While we found that most architectures fared poorly on most forms of generalization (e.g., RNNs and standard Transformers), models that leveraged cross-attention mechanisms between input domains, such as the Perceiver, fared better. Our positive results demonstrate that for multimodal distractor and systematic generalization, cross-attention is an important mechanism to integrate multiple sources of information. On the other hand, all architectures failed in productive generalization, suggesting fundamental limitations of existing architectures for specific types of multimodal OOD generalization. These results demonstrate the strengths and limitations of specific architectural components underlying modern neural models for multimodal reasoning. Finally, we provide *Generic COG* (gCOG), a configurable benchmark with several multimodal generalization splits, for future studies to explore. | https://openreview.net/pdf/f7a706bf2f572d8c1e6ff6efe2079660ea9674b2.pdf |
The Devil is in the Object Boundary: Towards Annotation-free Instance Segmentation using Foundation Models | https://openreview.net/forum?id=4JbrdrHxYy | https://openreview.net/forum?id=4JbrdrHxYy | Cheng Shi,Sibei Yang | ICLR 2024,Poster | Foundation models, pre-trained on a large amount of data, have demonstrated impressive zero-shot capabilities in various downstream tasks. However, in object detection and instance segmentation, two fundamental computer vision tasks heavily reliant on extensive human annotations, foundation models such as SAM and DINO struggle to achieve satisfactory performance. In this study, we reveal that the devil is in the object boundary, $\textit{i.e.}$, these foundation models fail to discern boundaries between individual objects. For the first time, we find that CLIP, which has never accessed any instance-level annotations, can provide a highly beneficial and strong instance-level boundary prior in the clustering results of its particular intermediate layer. Following this surprising observation, we propose $\textbf{\textit{Zip}}$ which $\textbf{Z}$ips up CL$\textbf{ip}$ and SAM in a novel classification-first-then-discovery pipeline, enabling annotation-free, complex-scene-capable, open-vocabulary object detection and instance segmentation. Our Zip significantly boosts SAM's mask AP on the COCO dataset by 12.5\% and establishes state-of-the-art performance in various settings, including training-free, self-training, and label-efficient finetuning. Furthermore, annotation-free Zip even achieves comparable performance to the best-performing open-vocabulary object detectors that use base annotations. Code is released at https://github.com/ChengShiest/Zip-Your-CLIP | https://openreview.net/pdf/a5d1c26ea8f37dbbc39cda09f1436d2151cbc6c6.pdf |
Breaking Physical and Linguistic Borders: Multilingual Federated Prompt Tuning for Low-Resource Languages | https://openreview.net/forum?id=zzqn5G9fjn | https://openreview.net/forum?id=zzqn5G9fjn | Wanru Zhao,Yihong Chen,Royson Lee,Xinchi Qiu,Yan Gao,Hongxiang Fan,Nicholas Donald Lane | ICLR 2024,Poster | Pretrained large language models (LLMs) have emerged as a cornerstone in modern natural language processing, with their utility expanding to various applications and languages. However, the fine-tuning of multilingual LLMs, particularly for low-resource languages, is fraught with challenges stemming from data-sharing restrictions (the physical border) and from inherent linguistic differences (the linguistic border). These barriers hinder users of various languages, especially those in low-resource regions, from fully benefiting from the advantages of LLMs. To overcome these challenges, we propose the Federated Prompt Tuning Paradigm for Multilingual Scenarios, which leverages parameter-efficient fine-tuning in a manner that preserves user privacy. We have designed a comprehensive set of experiments and introduced the concept of "language distance" to highlight the several strengths of this paradigm. Even under computational constraints, our method not only bolsters data efficiency but also facilitates mutual enhancements across languages, particularly benefiting low-resource ones. Compared to traditional local cross-lingual transfer tuning methods, our approach achieves 6.9\% higher accuracy, reduces the training parameters by over 99\%, and demonstrates stronger cross-lingual generalization. Such findings underscore the potential of our approach to promote social equality, ensure user privacy, and champion linguistic diversity. | https://openreview.net/pdf/7b596a6a9b3782552c44ef4e37c727aeb8be3eb1.pdf |
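A minimal sketch of the aggregation step in such a paradigm: only the soft-prompt parameters travel, and the server averages them weighted by client data size, while the frozen multilingual LLM never leaves the clients. This illustrates federated prompt tuning in general, under assumed names, and is not the paper's exact protocol.

```python
import torch

def fedavg_soft_prompts(client_prompts, client_sizes):
    """Server-side weighted average over soft-prompt tensors only
    (each of shape [prompt_len, hidden_dim]); the backbone LLM stays
    frozen and local, which is what keeps the method parameter- and
    privacy-efficient.
    """
    total = float(sum(client_sizes))
    return sum(p * (n / total) for p, n in zip(client_prompts, client_sizes))

prompts = [torch.randn(20, 768) for _ in range(3)]  # three clients' prompts
global_prompt = fedavg_soft_prompts(prompts, client_sizes=[100, 300, 600])
```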
A Data-Driven Measure of Relative Uncertainty for Misclassification Detection | https://openreview.net/forum?id=ruGY8v10mK | https://openreview.net/forum?id=ruGY8v10mK | Eduardo Dadalto Câmara Gomes,Marco Romanelli,Georg Pichler,Pablo Piantanida | ICLR 2024,Poster | Misclassification detection is an important problem in machine learning, as it allows for the identification of instances where the model's predictions are unreliable. However, conventional uncertainty measures such as Shannon entropy do not provide an effective way to infer the real uncertainty associated with the model's predictions. In this paper, we introduce a novel data-driven measure of uncertainty relative to an observer for misclassification detection. By learning patterns in the distribution of soft-predictions, our uncertainty measure can identify misclassified samples based on the predicted class probabilities. Interestingly, according to the proposed measure, soft-predictions corresponding to misclassified instances can carry a large amount of uncertainty, even though they may have low Shannon entropy. We demonstrate empirical improvements over multiple image classification tasks, outperforming state-of-the-art misclassification detection methods. | https://openreview.net/pdf/f03c7f93f17173a9aa3bfe78e0a346b28dd150fd.pdf |
Most discriminative stimuli for functional cell type clustering | https://openreview.net/forum?id=9W6KaAcYlr | https://openreview.net/forum?id=9W6KaAcYlr | Max F Burg,Thomas Zenkel,Michaela Vystrčilová,Jonathan Oesterle,Larissa Höfling,Konstantin Friedrich Willeke,Jan Lause,Sarah Müller,Paul G. Fahey,Zhiwei Ding,Kelli Restivo,Shashwat Sridhar,Tim Gollisch,Philipp Berens,Andreas S. Tolias,Thomas Euler,Matthias Bethge,Alexander S Ecker | ICLR 2024,Poster | Identifying cell types and understanding their functional properties is crucial for unraveling the mechanisms underlying perception and cognition. In the retina, functional types can be identified by carefully selected stimuli, but this requires expert domain knowledge and biases the procedure towards previously known cell types. In the visual cortex, it is still unknown what functional types exist and how to identify them. Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed. Here we propose an optimization-based clustering approach using deep predictive models to obtain functional clusters of neurons using Most Discriminative Stimuli (MDS). Our approach alternates between stimulus optimization with cluster reassignment akin to an expectation-maximization algorithm. The algorithm recovers functional clusters in mouse retina, marmoset retina and macaque visual area V4. This demonstrates that our approach can successfully find discriminative stimuli across species, stages of the visual system and recording techniques. The resulting most discriminative stimuli can be used to assign functional cell types fast and on the fly, without the need to train complex predictive models or show a large natural scene dataset, paving the way for experiments that were previously limited by experimental time. Crucially, MDS are interpretable: they visualize the distinctive stimulus patterns that most unambiguously identify a specific type of neuron. | https://openreview.net/pdf/093e7c837637df251311a818fa0562a9f2c71d58.pdf |
Biased Temporal Convolution Graph Network for Time Series Forecasting with Missing Values | https://openreview.net/forum?id=O9nZCwdGcG | https://openreview.net/forum?id=O9nZCwdGcG | Xiaodan Chen,Xiucheng Li,Bo Liu,Zhijun Li | ICLR 2024,Poster | Multivariate time series forecasting plays an important role in various applications ranging from meteorology and traffic management to economic planning. In the past decades, many efforts have been made toward developing accurate and reliable forecasting methods under the assumption of intact input data. However, the time series data from real-world scenarios is often partially observed due to device malfunction or costly data acquisition, which can seriously impede the performance of the existing approaches. A naive employment of imputation methods unavoidably involves error accumulation and leads to suboptimal solutions. Motivated by this, we propose a Biased Temporal Convolution Graph Network that jointly captures the temporal dependencies and spatial structure. In particular, we inject bias into the two carefully developed modules, the Multi-Scale Instance PartialTCN and Biased GCN, to account for missing patterns. The experimental results show that our proposed model is able to achieve up to $9.93$\% improvements over the existing methods on five real-world benchmark datasets. Our code is available at: https://github.com/chenxiaodanhit/BiTGraph. | https://openreview.net/pdf/fac10027363bead6c32296faa4335ddd0197c9c7.pdf |
Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models | https://openreview.net/forum?id=JzG7kSpjJk | https://openreview.net/forum?id=JzG7kSpjJk | Jung Hwan Heo,Jeonghoon Kim,Beomseok Kwon,Byeongwook Kim,Se Jung Kwon,Dongsoo Lee | ICLR 2024,Poster | Large Language Models (LLMs) have recently demonstrated remarkable success across various tasks. However, efficiently serving LLMs has been a challenge due to their large memory bottleneck, specifically in small batch inference settings (e.g. mobile devices). Weight-only quantization can be a promising approach, but sub-4 bit quantization remains a challenge due to large-magnitude activation outliers. To mitigate the undesirable outlier effect, we first propose per-IC quantization, a simple yet effective method that creates quantization groups within each input channel (IC) rather than the conventional per-output channel (OC). Our method is motivated by the observation that activation outliers affect the input dimension of the weight matrix, so similarly grouping the weights in the IC direction can $\textit{isolate outliers to be within a group}$. We also find that activation outliers do not dictate quantization difficulty, and inherent weight sensitivities also exist. With per-IC quantization as a new outlier-friendly scheme, we then propose Adaptive Dimensions ($\textbf{AdaDim}$), a versatile quantization framework that can adapt to various weight sensitivity patterns. We demonstrate the effectiveness of AdaDim by augmenting prior methods such as Round-To-Nearest and GPTQ, showing significant improvements across various language modeling benchmarks for both base (up to $+4.7\%$ on MMLU) and instruction-tuned (up to $+10\%$ on HumanEval) LLMs. | https://openreview.net/pdf/f7b6e245c95384388621ca3c11e72335bb105d4b.pdf |
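The grouping idea is easy to picture in code. The sketch below is one plausible reading of per-IC grouping: each group lies inside a single input channel (a column of the weight matrix), so a column hit by an activation outlier only affects the scales of its own groups. Names, the group size, and the round-to-nearest quantizer are illustrative, and this is not the AdaDim framework itself.

```python
import torch

def quantize_per_ic(w, group_size=128, bits=4):
    """Round-to-nearest weight quantization with groups formed along
    the input-channel axis: every group is a slice of one column of w,
    isolating any outlier column's effect to its own group scales.
    Returns the dequantized weights for inspection.
    """
    oc, ic = w.shape
    qmax = 2 ** (bits - 1) - 1
    cols = w.t().reshape(ic, oc // group_size, group_size)  # groups within each IC
    scale = cols.abs().amax(dim=-1, keepdim=True) / qmax    # one scale per group
    q = torch.clamp(torch.round(cols / scale), -qmax - 1, qmax)
    return (q * scale).reshape(ic, oc).t().contiguous()

w = torch.randn(512, 1024)   # [out_channels, in_channels]
w_q = quantize_per_ic(w)     # 4-bit dequantized weights, grouped along IC
```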
DecompOpt: Controllable and Decomposed Diffusion Models for Structure-based Molecular Optimization | https://openreview.net/forum?id=Y3BbxvAQS9 | https://openreview.net/forum?id=Y3BbxvAQS9 | Xiangxin Zhou,Xiwei Cheng,Yuwei Yang,Yu Bao,Liang Wang,Quanquan Gu | ICLR 2024,Poster | Recently, 3D generative models have shown promising performance in structure-based drug design by learning to generate ligands given target binding sites. However, only modeling the target-ligand distribution can hardly fulfill one of the main goals in drug discovery -- designing novel ligands with desired properties, e.g., high binding affinity, easy synthesizability, etc. This challenge becomes particularly pronounced when the target-ligand pairs used for training do not align with these desired properties. Moreover, most existing methods aim at solving the de novo design task, while many generative scenarios requiring flexible controllability, such as R-group optimization and scaffold hopping, have received little attention. In this work, we propose DecompOpt, a structure-based molecular optimization method based on a controllable and decomposed diffusion model. DecompOpt presents a new generation paradigm which combines optimization with conditional diffusion models to achieve desired properties while adhering to the molecular grammar. Additionally, DecompOpt offers a unified framework covering both de novo design and controllable generation. To achieve this, ligands are decomposed into substructures, which allows fine-grained control and local optimization. Experiments show that DecompOpt can efficiently generate molecules with better properties than strong de novo baselines, and demonstrates great potential in controllable generation tasks. | https://openreview.net/pdf/60fd35737daa96c8deecf4ee743c1bcf661a39f9.pdf |
Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals | https://openreview.net/forum?id=UMfcdRIotC | https://openreview.net/forum?id=UMfcdRIotC | Yair Ori Gat,Nitay Calderon,Amir Feder,Alexander Chapanin,Amit Sharma,Roi Reichart | ICLR 2024,Poster | Causal explanations of the predictions of NLP systems are essential to ensure safety and establish trust. Yet, existing methods often fall short of explaining model predictions effectively or efficiently and are often model-specific. In this paper, we address model-agnostic explanations, proposing two approaches for counterfactual (CF) approximation. The first approach is CF generation, where a large language model (LLM) is prompted to change a specific text concept while keeping confounding concepts unchanged. While this approach is demonstrated to be very effective, applying LLM at inference-time is costly. We hence present a second approach based on matching, and propose a method that is guided by an LLM at training-time and learns a dedicated embedding space. This space is faithful to a given causal graph and effectively serves to identify matches that approximate CFs. After showing theoretically that approximating CFs is required in order to construct faithful explanations, we benchmark our approaches and explain several models, including LLMs with billions of parameters. Our empirical results demonstrate the excellent performance of CF generation models as model-agnostic explainers. Moreover, our matching approach, which requires far less test-time resources, also provides effective explanations, surpassing many baselines. We also find that Top-K techniques universally improve every tested method. Finally, we showcase the potential of LLMs in constructing new benchmarks for model explanation and subsequently validate our conclusions. Our work illuminates new pathways for efficient and accurate approaches to interpreting NLP systems. | https://openreview.net/pdf/b005b0a0b705b0f6459282c15ecfdd504dfb1ebd.pdf |
Separating common from salient patterns with Contrastive Representation Learning | https://openreview.net/forum?id=30N3bNAiw3 | https://openreview.net/forum?id=30N3bNAiw3 | Robin Louiset,Edouard Duchesnay,Antoine Grigis,Pietro Gori | ICLR 2024,Poster | Contrastive Analysis is a sub-field of Representation Learning that aims at separating 1) salient factors of variation - that only exist in the target dataset (i.e., diseased subjects) in contrast with 2) common factors of variation between target and background (i.e., healthy subjects) datasets. Despite their relevance, current models based on Variational Auto-Encoders have shown poor performance in learning semantically-expressive representations. On the other hand, Contrastive Representation Learning has shown tremendous performance leaps in various applications (classification, clustering, etc.). In this work, we propose to leverage the ability of Contrastive Learning to learn semantically expressive representations when performing Contrastive Analysis. Namely, we reformulate Contrastive Analysis under the lens of the InfoMax Principle and identify two Mutual Information terms to maximize and one to minimize. We decompose the two first terms into an Alignment and a Uniformity term, as commonly done in Contrastive Learning. Then, we motivate a novel Mutual Information minimization strategy to prevent information leakage between common and salient distributions. We validate our method on datasets designed to assess the pattern separation capability in Contrastive Analysis, including MNIST superimposed on CIFAR10, CelebA accessories, dSprites item superimposed on a digit grid, and three medical datasets. | https://openreview.net/pdf/2028de276ac8f4cafaaefa472135a132b1c5880e.pdf |
Self-Supervised Contrastive Learning for Long-term Forecasting | https://openreview.net/forum?id=nBCuRzjqK7 | https://openreview.net/forum?id=nBCuRzjqK7 | Junwoo Park,Daehoon Gwak,Jaegul Choo,Edward Choi | ICLR 2024,Poster | Long-term forecasting presents unique challenges due to the time and memory complexity of handling long sequences. Existing methods, which rely on sliding windows to process long sequences, struggle to effectively capture long-term variations that are only partially captured within the short window (i.e., outer-window variations). In this paper, we introduce a novel approach that overcomes this limitation by employing contrastive learning and an enhanced decomposition architecture, specifically designed to focus on long-term variations. To this end, our contrastive loss incorporates the global autocorrelation held in the whole time series, which facilitates the construction of positive and negative pairs in a self-supervised manner. When combined with our decomposition networks, our contrastive learning significantly improves long-term forecasting performance. Extensive experiments demonstrate that our approach outperforms 14 baseline models on nine well-established long-term benchmarks, especially in challenging scenarios that require significantly long outputs for forecasting. This paper not only presents a novel direction for long-term forecasting but also offers a more reliable method for effectively integrating long-term variations into time-series representation learning. | https://openreview.net/pdf/3b2989ee19ac823390821b9bb2cb41a400b30ac8.pdf |
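As a small illustration of the global-autocorrelation ingredient, the sketch below computes autocorrelation at every lag over a full series via the Wiener-Khinchin theorem. How the paper turns these values into positive and negative pairs is not reproduced here, and the normalization convention is illustrative.

```python
import torch

def global_autocorrelation(series):
    """Autocorrelation of a whole series at every lag, computed in
    O(n log n) with FFTs (zero-padded to avoid circular wrap-around).
    Output is normalized so lag 0 equals 1.
    """
    x = series - series.mean(dim=-1, keepdim=True)
    n = x.shape[-1]
    f = torch.fft.rfft(x, n=2 * n)
    acf = torch.fft.irfft(f * f.conj(), n=2 * n)[..., :n]
    return acf / acf[..., :1]

acf = global_autocorrelation(torch.sin(torch.arange(512.0) * 0.1))
# Peaks of `acf` reveal dominant periodicities of the full series.
```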
A Semantic Invariant Robust Watermark for Large Language Models | https://openreview.net/forum?id=6p8lpe4MNf | https://openreview.net/forum?id=6p8lpe4MNf | Aiwei Liu,Leyi Pan,Xuming Hu,Shiao Meng,Lijie Wen | ICLR 2024,Poster | Watermark algorithms for large language models (LLMs) have achieved extremely high accuracy in detecting text generated by LLMs. Such algorithms typically involve adding extra watermark logits to the LLM's logits at each generation step. However, prior algorithms face a trade-off between attack robustness and security robustness. This is because the watermark logits for a token are determined by a certain number of preceding tokens; a small number leads to low security robustness, while a large number results in insufficient attack robustness. In this work, we propose a semantic invariant watermarking method for LLMs that provides both attack robustness and security robustness. The watermark logits in our work are determined by the semantics of all preceding tokens. Specifically, we utilize another embedding LLM to generate semantic embeddings for all preceding tokens, and then these semantic embeddings are transformed into the watermark logits through our trained watermark model. Subsequent analyses and experiments demonstrated the attack robustness of our method in semantically invariant settings: synonym substitution and text paraphrasing settings. Finally, we also show that our watermark possesses adequate security robustness. | https://openreview.net/pdf/73727e667d0cd3d9ef9aa2c2aaf95a4dd55c4890.pdf |
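A minimal sketch of the generation-time step: watermark logits computed from a semantic embedding of the whole prefix are added to the LM logits, so paraphrases of the prefix induce approximately the same bias. Here `embedder` and `watermark_net` are assumed stand-ins, not the authors' API, and the bounded-perturbation form is an assumption for illustration.

```python
import torch

def watermarked_logits(lm_logits, prefix_text, embedder, watermark_net, delta=2.0):
    """Bias next-token logits with a perturbation that depends only on
    the semantics of the preceding tokens, which is what makes the
    watermark robust to synonym substitution and paraphrasing.
    """
    sem = embedder(prefix_text)                # semantic embedding of the prefix
    wm = watermark_net(sem)                    # vocab-sized watermark scores
    return lm_logits + delta * torch.tanh(wm)  # keep the perturbation bounded
```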
Fast Equilibrium of SGD in Generic Situations | https://openreview.net/forum?id=qgWJkDiI5p | https://openreview.net/forum?id=qgWJkDiI5p | Zhiyuan Li,Yi Wang,Zhiren Wang | ICLR 2024,Poster | Normalization layers are ubiquitous in deep learning, greatly accelerating optimization. However, they also introduce many unexpected phenomena during training, for example, the Fast Equilibrium conjecture proposed by Li et al. (2020), which states that the scale-invariant normalized network, when trained by SGD with learning rate $\eta$ and weight decay $\lambda$, mixes to an equilibrium in $\tilde{O}(1/\eta\lambda)$ steps, as opposed to the classical $e^{O(\eta^{-1})}$ mixing time. Recent works by Wang & Wang (2022); Li et al. (2022c) proved this conjecture under different sets of assumptions. This paper aims to answer the fast equilibrium conjecture in full generality by removing the non-generic assumptions of Wang & Wang (2022); Li et al. (2022c) that the minima are isolated, that the region near minima forms a unique basin, and that the set of minima is an analytic set. Our main technical contribution is to show that with probability close to 1, in exponential time trajectories will not escape the attracting basin containing their initial positions. | https://openreview.net/pdf/8a344dd66a52b8302a079e5085eb82003ceab1e9.pdf |
Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM | https://openreview.net/forum?id=izrOLJov5y | https://openreview.net/forum?id=izrOLJov5y | Eliya Nachmani,Alon Levkovitch,Roy Hirsch,Julian Salazar,Chulayuth Asawaroengchai,Soroosh Mariooryad,Ehud Rivlin,RJ Skerry-Ryan,Michelle Tadmor Ramanovich | ICLR 2024,Poster | We present Spectron, a novel approach to adapting pre-trained large language models (LLMs) to perform spoken question answering (QA) and speech continuation. By endowing the LLM with a pre-trained speech encoder, our model becomes able to take speech inputs and generate speech outputs. The entire system is trained end-to-end and operates directly on spectrograms, simplifying our architecture. Key to our approach is a training objective that jointly supervises speech recognition, text continuation, and speech synthesis using only paired speech-text data, enabling a `cross-modal' chain-of-thought within a single decoding pass. Our method surpasses existing spoken language models in speaker preservation and semantic coherence. Furthermore, the proposed model improves upon direct initialization in retaining the knowledge of the original LLM as demonstrated through spoken QA datasets. We release our audio samples and spoken QA dataset via our website. | https://openreview.net/pdf/d13a58ec6d34a9e5a63eea1003d654cf9e48e3b7.pdf |
Transport meets Variational Inference: Controlled Monte Carlo Diffusions | https://openreview.net/forum?id=PP1rudnxiW | https://openreview.net/forum?id=PP1rudnxiW | Francisco Vargas,Shreyas Padhy,Denis Blessing,Nikolas Nüsken | ICLR 2024,Poster | Connecting optimal transport and variational inference, we present a principled and systematic framework for sampling and generative modelling centred around divergences on path space. Our work culminates in the development of the Controlled Monte Carlo Diffusion sampler (CMCD) for Bayesian computation, a score-based annealing technique that crucially adapts both forward and backward dynamics in a diffusion model. On the way, we clarify the relationship between the EM-algorithm and iterative proportional fitting (IPF) for Schroedinger bridges, deriving as well a regularised objective that bypasses the iterative bottleneck of standard IPF-updates. Finally, we show that CMCD has a strong foundation in the Jarzynski and Crooks identities from statistical physics, and that it convincingly outperforms competing approaches across a wide array of experiments. | https://openreview.net/pdf/a64ae00c935b14d6dfef8b57d262a7401d169dfe.pdf |
DAFA: Distance-Aware Fair Adversarial Training | https://openreview.net/forum?id=BRdEBlwUW6 | https://openreview.net/forum?id=BRdEBlwUW6 | Hyungyu Lee,Saehyung Lee,Hyemi Jang,Junsung Park,Ho Bae,Sungroh Yoon | ICLR 2024,Poster | The disparity in accuracy between classes in standard training is amplified during adversarial training, a phenomenon termed the robust fairness problem. Existing methodologies aim to enhance robust fairness by sacrificing the model's performance on easier classes in order to improve its performance on harder ones. However, we observe that under adversarial attacks, the majority of the model's predictions for samples from the worst class are biased towards classes similar to the worst class, rather than towards the easy classes. Through theoretical and empirical analysis, we demonstrate that robust fairness deteriorates as the distance between classes decreases. Motivated by these insights, we introduce the Distance-Aware Fair Adversarial Training (DAFA) methodology, which addresses robust fairness by taking into account the similarities between classes. Specifically, our method assigns distinct adversarial margins and loss weights to each class and adjusts them to encourage a trade-off in robustness among similar classes. Experimental results across various datasets demonstrate that our method not only maintains average robust accuracy but also significantly improves the worst robust accuracy, indicating a marked improvement in robust fairness compared to existing methods. | https://openreview.net/pdf/00a90c2371f58783c93e92854c2828b446f4066f.pdf |
AffineQuant: Affine Transformation Quantization for Large Language Models | https://openreview.net/forum?id=of2rhALq8l | https://openreview.net/forum?id=of2rhALq8l | Yuexiao Ma,Huixia Li,Xiawu Zheng,Feng Ling,Xuefeng Xiao,Rui Wang,Shilei Wen,Fei Chao,Rongrong Ji | ICLR 2024,Poster | The significant resource requirements associated with Large Language Models (LLMs) have generated considerable interest in the development of techniques aimed at compressing and accelerating neural networks. Among these techniques, Post-Training Quantization (PTQ) has emerged as a subject of considerable interest due to its noteworthy compression efficiency and cost-effectiveness in the context of training. Existing PTQ methods for LLMs limit the optimization scope to scaling transformations between pre- and post-quantization weights. This constraint results in significant errors after quantization, particularly in low-bit configurations. In this paper, we advocate for direct optimization using equivalent affine transformations in PTQ (AffineQuant). This approach extends the optimization scope and thus significantly reduces quantization errors. Additionally, by employing the corresponding inverse matrix, we can ensure equivalence between the pre- and post-quantization outputs of PTQ, thereby maintaining its efficiency and generalization capabilities. To ensure the invertibility of the transformation during optimization, we further introduce a gradual mask optimization method. This method initially focuses on optimizing the diagonal elements and gradually extends to the other elements. Such an approach aligns with the Levy-Desplanques theorem, theoretically ensuring the invertibility of the transformation. As a result, significant performance improvements are evident across different LLMs on diverse datasets. Notably, these improvements are most pronounced when using very low-bit quantization, enabling the deployment of large models on edge devices. To illustrate, we attain a C4 perplexity of $15.76$ (2.26$\downarrow$ vs $18.02$ in OmniQuant) on the LLaMA2-$7$B model with W$4$A$4$ quantization without overhead. On zero-shot tasks, AffineQuant achieves an average accuracy of $58.61\%$ ($1.98\%\uparrow$ vs $56.63$ in OmniQuant) when using $4$/$4$-bit quantization for LLaMA-$30$B, setting a new state-of-the-art benchmark for PTQ in LLMs. Codes are available at: https://github.com/bytedance/AffineQuant. | https://openreview.net/pdf/95e017eecf36f3f7745315b50cb34d7b24c481ce.pdf |
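The core equivalence is easy to state in code: for y = x @ w and any invertible a, y = (x @ inv(a)) @ (a @ w), so one can quantize the transformed weight and fold inv(a) into the input side. The sketch below shows only this identity; the optimization of a and the gradual diagonal-dominance masking are not reproduced, and `quantize` is an assumed callable.

```python
import torch

def affine_equivalent_matmul(x, w, a, quantize):
    """Quantize (a @ w) while compensating with inv(a) on the input side,
    so the full-precision computation is unchanged and the quantizer sees
    a weight matrix it can represent with less error than plain scaling.
    """
    w_q = quantize(a @ w)  # low-bit weights after the affine map
    return (x @ torch.linalg.inv(a)) @ w_q

# Sanity check of the equivalence with an identity "quantizer":
x, w = torch.randn(2, 8), torch.randn(8, 4)
a = torch.eye(8) + 0.01 * torch.randn(8, 8)  # near-identity, hence invertible
y = affine_equivalent_matmul(x, w, a, quantize=lambda m: m)
assert torch.allclose(y, x @ w, atol=1e-4)
```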
Encoding Unitig-level Assembly Graphs with Heterophilous Constraints for Metagenomic Contigs Binning | https://openreview.net/forum?id=vBw8JGBJWj | https://openreview.net/forum?id=vBw8JGBJWj | Hansheng Xue,Vijini Mallawaarachchi,Lexing Xie,Vaibhav Rajan | ICLR 2024,Poster | Metagenomics studies genomic material derived from mixed microbial communities in diverse environments, holding considerable significance for both human health and environmental sustainability. Metagenomic binning refers to the clustering of genomic subsequences obtained from high-throughput DNA sequencing into distinct bins, each representing a constituent organism within the community. Mainstream binning methods primarily rely on sequence features such as composition and abundance, making them unable to effectively handle sequences shorter than 1,000 bp and inherent noise within sequences. Several binning tools have emerged, aiming to enhance binning outcomes by using the assembly graph generated by assemblers, which encodes valuable overlapping information among genomic sequences. However, existing assembly graph-based binners mainly focus on simplified contig-level assembly graphs that are recreated from the assemblers' original graphs, i.e., unitig-level assembly graphs. The simplification reduces the resolution of the connectivity information in the original graphs. In this paper, we design a novel binning tool named UnitigBin, which leverages representation learning on unitig-level assembly graphs while adhering to heterophilous constraints imposed by single-copy marker genes, ensuring that constrained contigs cannot be grouped together. Extensive experiments conducted on synthetic and real datasets demonstrate that UnitigBin significantly surpasses state-of-the-art binning tools. | https://openreview.net/pdf/5e09b5e2a56f96a1275298d6160aaeaac721aa8d.pdf |
SF(DA)$^2$: Source-free Domain Adaptation Through the Lens of Data Augmentation | https://openreview.net/forum?id=kUCgHbmO11 | https://openreview.net/forum?id=kUCgHbmO11 | Uiwon Hwang,Jonghyun Lee,Juhyeon Shin,Sungroh Yoon | ICLR 2024,Poster | In the face of the deep learning model's vulnerability to domain shift, source-free domain adaptation (SFDA) methods have been proposed to adapt models to new, unseen target domains without requiring access to source domain data. Although the potential benefits of applying data augmentation to SFDA are attractive, several challenges arise such as the dependence on prior knowledge of class-preserving transformations and the increase in memory and computational requirements. In this paper, we propose Source-free Domain Adaptation Through the Lens of Data Augmentation (SF(DA)$^2$), a novel approach that leverages the benefits of data augmentation without suffering from these challenges. We construct an augmentation graph in the feature space of the pretrained model using the neighbor relationships between target features and propose spectral neighborhood clustering to identify partitions in the prediction space. Furthermore, we propose implicit feature augmentation and feature disentanglement as regularization loss functions that effectively utilize class semantic information within the feature space. These regularizers simulate the inclusion of an unlimited number of augmented target features into the augmentation graph while minimizing computational and memory demands. Our method shows superior adaptation performance in SFDA scenarios, including 2D image and 3D point cloud datasets and a highly imbalanced dataset. | https://openreview.net/pdf/6c35c33ebcc803dfef4c29bbfad1934cb2f98442.pdf |
Mitigating the Curse of Dimensionality for Certified Robustness via Dual Randomized Smoothing | https://openreview.net/forum?id=C1sQBG6Sqp | https://openreview.net/forum?id=C1sQBG6Sqp | Song Xia,Yi Yu,Xudong Jiang,Henghui Ding | ICLR 2024,Poster | Randomized Smoothing (RS) has been proven a promising method for endowing an arbitrary image classifier with certified robustness. However, the substantial uncertainty inherent in the high-dimensional isotropic Gaussian noise imposes the curse of dimensionality on RS. Specifically, the upper bound of ${\ell_2}$ certified robustness radius provided by RS exhibits a diminishing trend with the expansion of the input dimension $d$, proportionally decreasing at a rate of $1/\sqrt{d}$. This paper explores the feasibility of providing ${\ell_2}$ certified robustness for high-dimensional input through the utilization of dual smoothing in the lower-dimensional space. The proposed Dual Randomized Smoothing (DRS) down-samples the input image into two sub-images and smooths the two sub-images in lower dimensions. Theoretically, we prove that DRS guarantees a tight ${\ell_2}$ certified robustness radius for the original input and reveal that DRS attains a superior upper bound on the ${\ell_2}$ robustness radius, which decreases proportionally at a rate of $(1/\sqrt m + 1/\sqrt n )$ with $m+n=d$. Extensive experiments demonstrate the generalizability and effectiveness of DRS, which exhibits a notable capability to integrate with established methodologies, yielding substantial improvements in both accuracy and ${\ell_2}$ certified robustness baselines of RS on the CIFAR-10 and ImageNet datasets. Code is available at https://github.com/xiasong0501/DRS. | https://openreview.net/pdf/cd7865fec3a804bbf5e7b41b2c062a7f5de9da9c.pdf |
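For context, the standard smoothing certificate that both RS and DRS build on is one line: with p_A a lower bound on the top-class probability under N(0, sigma^2 I), the certified l2 radius is sigma * Phi^{-1}(p_A) (Cohen et al.'s bound instantiated with p_B = 1 - p_A). A minimal sketch of that baseline quantity; the DRS rule for combining the two sub-image certificates is not reproduced here.

```python
from scipy.stats import norm

def rs_certified_radius(p_a, sigma):
    """Certified l2 radius of a Gaussian-smoothed classifier when the
    top class has probability at least p_a under the smoothing noise.
    Higher-dimensional inputs need larger sigma for the same per-pixel
    noise budget, which is the dimension dependence DRS targets by
    smoothing two lower-dimensional sub-images instead of the full image.
    """
    return sigma * norm.ppf(p_a)

print(rs_certified_radius(p_a=0.9, sigma=0.5))  # about 0.64
```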
In-context Exploration-Exploitation for Reinforcement Learning | https://openreview.net/forum?id=uIKZSStON3 | https://openreview.net/forum?id=uIKZSStON3 | Zhenwen Dai,Federico Tomasi,Sina Ghiassian | ICLR 2024,Poster | In-context learning is a promising approach for online policy learning of offline reinforcement learning (RL) methods, which can be achieved at inference time without gradient optimization. However, this method is hindered by significant computational costs resulting from the gathering of large training trajectory sets and the need to train large Transformer models. We address this challenge by introducing an In-context Exploration-Exploitation (ICEE) algorithm, designed to optimize the efficiency of in-context policy learning. Unlike existing models, ICEE performs an exploration-exploitation trade-off at inference time within a Transformer model, without the need for explicit Bayesian inference. Consequently, ICEE can solve Bayesian optimization problems as efficiently as Gaussian process-based methods do, but in significantly less time. Through experiments in grid world environments, we demonstrate that ICEE can learn to solve new RL tasks using only tens of episodes, marking a substantial improvement over the hundreds of episodes needed by the previous in-context learning method. | https://openreview.net/pdf/2be54fea28c2cbe6e795c9f61f8f1775cf5d6b94.pdf |