title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Retrieval-based Disentangled Representation Learning with Natural Language Supervision | https://openreview.net/forum?id=ZlQRiFmq7Y | https://openreview.net/forum?id=ZlQRiFmq7Y | Jiawei Zhou,Xiaoguang Li,Lifeng Shang,Xin Jiang,Qun Liu,Lei Chen | ICLR 2024,Spotlight | Disentangled representation learning remains challenging as the underlying factors of variation in the data do not naturally exist. The inherent complexity of real-world data makes it unfeasible to exhaustively enumerate and encapsulate all its variations within a finite set of factors. However, it is worth noting that most real-world data have linguistic equivalents, typically in the form of textual descriptions. These linguistic counterparts can represent the data and be effortlessly decomposed into distinct tokens. In light of this, we present Vocabulary Disentangled Retrieval (VDR), a retrieval-based framework that harnesses natural language as a proxy for the underlying data variation to drive disentangled representation learning. Our approach employs a bi-encoder model to represent both data and natural language in a vocabulary space, enabling the model to distinguish dimensions that capture intrinsic characteristics within data through its natural language counterpart, thus facilitating disentanglement. We extensively assess the performance of VDR across 15 retrieval benchmark datasets, covering text-to-text and cross-modal retrieval scenarios, as well as human evaluation. Our experimental results compellingly demonstrate the superiority of VDR over previous bi-encoder retrievers with comparable model size and training costs, achieving an impressive 8.7% improvement in NDCG@10 on the BEIR benchmark, a 5.3% increase on MS COCO, and a 6.0% increase on Flickr30k in terms of mean recall in the zero-shot setting. Moreover, the results from human evaluation indicate that the interpretability of our method is on par with SOTA captioning models. | https://openreview.net/pdf/806a04ba3fc6094730d982164ed4de6b3cf4f351.pdf |
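As a rough illustration of the vocabulary-space representation the VDR abstract describes, here is a minimal sketch in Python. The count-based encoder, toy vocabulary, and inner-product scoring are invented stand-ins for the paper's learned bi-encoder; only the idea that each dimension corresponds to one interpretable token is taken from the abstract.

```python
import numpy as np

# Toy sketch: represent text in a vocabulary space so that each dimension
# corresponds to one token, making the representation inspectable (a rough
# analogue of the disentangled vocabulary space the abstract describes).
VOCAB = ["cat", "dog", "sits", "runs", "mat", "park"]
IDX = {w: i for i, w in enumerate(VOCAB)}

def encode(text: str) -> np.ndarray:
    """Map text to a non-negative vocabulary-space vector (here: token counts)."""
    v = np.zeros(len(VOCAB))
    for tok in text.lower().split():
        if tok in IDX:
            v[IDX[tok]] += 1.0
    return v

def score(query: str, doc: str) -> float:
    """Retrieval score = inner product in the shared vocabulary space."""
    return float(encode(query) @ encode(doc))

print(score("cat sits", "the cat sits on the mat"))  # 2.0: overlapping dimensions
print(score("cat sits", "a dog runs in the park"))   # 0.0: disjoint dimensions
```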
On the Markov Property of Neural Algorithmic Reasoning: Analyses and Methods | https://openreview.net/forum?id=Kn7tWhuetn | https://openreview.net/forum?id=Kn7tWhuetn | Montgomery Bohde,Meng Liu,Alexandra Saxton,Shuiwang Ji | ICLR 2024,Spotlight | Neural algorithmic reasoning is an emerging research direction that endows neural networks with the ability to mimic algorithmic executions step-by-step. A common paradigm in existing designs involves the use of historical embeddings in predicting the results of future execution steps. Our observation in this work is that such historical dependence intrinsically contradicts the Markov nature of algorithmic reasoning tasks. Based on this motivation, we present our ForgetNet, which does not use historical embeddings and thus is consistent with the Markov nature of the tasks. To address challenges in training ForgetNet at early stages, we further introduce G-ForgetNet, which uses a gating mechanism to allow for the selective integration of historical embeddings. Such an enhanced capability provides valuable computational pathways during the model's early training phase. Our extensive experiments, based on the CLRS-30 algorithmic reasoning benchmark, demonstrate that both ForgetNet and G-ForgetNet achieve better generalization capability than existing methods. Furthermore, we investigate the behavior of the gating mechanism, highlighting its degree of alignment with our intuitions and its effectiveness for robust performance. Our code is publicly available at https://github.com/divelab/ForgetNet. | https://openreview.net/pdf/46ea9907175ecd6c88621bba3b5478fb9390eea8.pdf |
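A minimal sketch of the selective-integration idea behind G-ForgetNet described above, assuming a sigmoid gate computed from the current step's embedding; the paper's actual gate parameterization and its placement in the processor may differ.

```python
import torch
import torch.nn as nn

# Sketch of G-ForgetNet's gating idea: the update depends on the current step
# (Markov, as in ForgetNet), but a learned gate can selectively re-admit the
# previous step's embedding, which helps early in training.
class GatedHistory(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, current: torch.Tensor, prev_hidden: torch.Tensor) -> torch.Tensor:
        g = self.gate(current)            # per-dimension gate in (0, 1)
        return current + g * prev_hidden  # g -> 0 recovers the Markov ForgetNet

m = GatedHistory(16)
print(m(torch.randn(4, 16), torch.randn(4, 16)).shape)  # torch.Size([4, 16])
```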
TRAM: Bridging Trust Regions and Sharpness Aware Minimization | https://openreview.net/forum?id=kxebDHZ7b7 | https://openreview.net/forum?id=kxebDHZ7b7 | Tom Sherborne,Naomi Saphra,Pradeep Dasigi,Hao Peng | ICLR 2024,Spotlight | Sharpness-aware minimization (SAM) reportedly improves domain generalization by reducing loss surface curvature in the parameter space. However, generalization during _fine-tuning_ is often more dependent on the transferability of _representations_ in the function space. Trust-region methods (TR) target this goal by regularizing representation curvature to reduce catastrophic forgetting of pre-trained task-agnostic information while adopting task-specific skills. We consider unifying these strategies for low curvature in both parameter space and function space to improve out-of-domain (OOD) generalization. We propose **Trust Region Aware Minimization** (TRAM), a SAM algorithm that fine-tunes for low parameter sharpness and for smooth, informative representations preserving pre-trained structure. TRAM uses a trust region bound to inform the SAM adversarial neighborhood, introducing an awareness of function curvature within optimization for flatter minima. We empirically validate TRAM in vision (cross-dataset adaptation) and text (OOD language modeling, zero-shot cross-lingual transfer) tasks where robust domain transfer and representation generality are critical. TRAM outperforms SAM- and TR-based optimization across all tasks, notably surpassing competing methods for hard transfer between _anticorrelated_ domains. TRAM establishes a new standard in fine-tuning for domain-generalizable models with minimal additional computation over previous sharpness-aware methods. | https://openreview.net/pdf/15fa46e9fb64654d30da84732fc37543dd3a94ca.pdf |
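A toy sketch of a SAM-style update whose adversarial radius is modulated by a trust-region-like bound, to make the TRAM idea concrete. The quadratic loss, the specific bound, and the step sizes are all invented for illustration; the paper's trust-region computation over representations is more involved.

```python
import numpy as np

# Minimal SAM-style update with a trust-region-informed perturbation radius
# (a rough analogue of TRAM's idea; the actual algorithm differs).
def loss(w):            # toy quadratic loss
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

w = np.array([2.0, -1.0])
lr, base_rho = 0.1, 0.05
for step in range(100):
    g = grad(w)
    # hypothetical trust-region bound: shrink the adversarial radius when a
    # full step would induce a large change
    tr_bound = 1.0 / (1.0 + np.linalg.norm(lr * g))
    rho = base_rho * tr_bound
    # SAM: ascend to the worst-case point in the rho-ball, then descend
    w_adv = w + rho * g / (np.linalg.norm(g) + 1e-12)
    w = w - lr * grad(w_adv)
print(loss(w))  # ~0: converged to a (flat) minimum of the toy loss
```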
CAMIL: Context-Aware Multiple Instance Learning for Cancer Detection and Subtyping in Whole Slide Images | https://openreview.net/forum?id=rzBskAEmoc | https://openreview.net/forum?id=rzBskAEmoc | Olga Fourkioti,Matt De Vries,Chris Bakal | ICLR 2024,Spotlight | The visual examination of tissue biopsy sections is fundamental for cancer diagnosis, with pathologists analyzing sections at multiple magnifications to discern tumor cells and their subtypes. However, existing attention-based multiple instance learning (MIL) models used for analyzing Whole Slide Images (WSIs) in cancer diagnostics often overlook the contextual information of tumor and neighboring tiles, leading to misclassifications. To address this, we propose the Context-Aware Multiple Instance Learning (CAMIL) architecture. CAMIL incorporates neighbor-constrained attention to consider dependencies among tiles within a WSI and integrates contextual constraints as prior knowledge into the MIL model. We evaluated CAMIL on subtyping non-small cell lung cancer (TCGA-NSCLC) and detecting lymph node (CAMELYON16 and CAMELYON17) metastasis, achieving test AUCs of 97.5%, 95.9%, and 88.1%, respectively, outperforming other state-of-the-art methods. Additionally, CAMIL enhances model interpretability by identifying regions of high diagnostic value. Our code is available at https://github.com/olgarithmics/ICLR_CAMIL. | https://openreview.net/pdf/b4d6251d3b1639d170a910826e7643be5d050285.pdf |
DyST: Towards Dynamic Neural Scene Representations on Real-World Videos | https://openreview.net/forum?id=MnMWa94t12 | https://openreview.net/forum?id=MnMWa94t12 | Maximilian Seitzer,Sjoerd van Steenkiste,Thomas Kipf,Klaus Greff,Mehdi S. M. Sajjadi | ICLR 2024,Spotlight | Visual understanding of the world goes beyond the semantics and flat structure of individual images. In this work, we aim to capture both the 3D structure and dynamics of real-world scenes from monocular real-world videos. Our Dynamic Scene Transformer (DyST) model leverages recent work in neural scene representation to learn a latent decomposition of monocular real-world videos into scene content, per-view scene dynamics, and camera pose. This separation is achieved through a novel co-training scheme on monocular videos and our new synthetic dataset DySO. DyST learns tangible latent representations for dynamic scenes that enable view generation with separate control over the camera and the content of the scene. | https://openreview.net/pdf/cb1c5f7dc44ea3c18ca42146caaee182fe578c30.pdf |
Bilevel Optimization under Unbounded Smoothness: A New Algorithm and Convergence Analysis | https://openreview.net/forum?id=LqRGsGWOTX | https://openreview.net/forum?id=LqRGsGWOTX | Jie Hao,Xiaochuan Gong,Mingrui Liu | ICLR 2024,Spotlight | Bilevel optimization is an important formulation for many machine learning problems, such as meta-learning and hyperparameter optimization. Current bilevel optimization algorithms assume that the gradient of the upper-level function is Lipschitz (i.e., the upper-level function has a bounded smoothness parameter). However, recent studies reveal that certain neural networks such as recurrent neural networks (RNNs) and long short-term memory networks (LSTMs) exhibit potential unbounded smoothness, rendering conventional bilevel optimization algorithms unsuitable for these neural networks. In this paper, we design a new bilevel optimization algorithm, namely BO-REP, to address this challenge. This algorithm updates the upper-level variable using normalized momentum and incorporates two novel techniques for updating the lower-level variable: \textit{initialization refinement} and \textit{periodic updates}. Specifically, once the upper-level variable is initialized, a subroutine is invoked to obtain a refined estimate of the corresponding optimal lower-level variable, and the lower-level variable is updated only after every specific period instead of each iteration. When the upper-level problem is nonconvex and unbounded smooth, and the lower-level problem is strongly convex, we prove that our algorithm requires $\widetilde{O}(1/\epsilon^4)$ \footnote{Here $\widetilde{O}(\cdot)$ compresses logarithmic factors of $1/\epsilon$ and $1/\delta$, where $\delta\in(0,1)$ denotes the failure probability.} iterations to find an $\epsilon$-stationary point in the stochastic setting, where each iteration involves calling a stochastic gradient or Hessian-vector product oracle. Notably, this result matches the state-of-the-art complexity results under the bounded smoothness setting and without mean-squared smoothness of the stochastic gradient, up to logarithmic factors. Our proof relies on novel technical lemmas for the periodically updated lower-level variable, which are of independent interest. Our experiments on hyper-representation learning, hyperparameter optimization, and data hyper-cleaning for text classification tasks demonstrate the effectiveness of our proposed algorithm. The code is available at [https://github.com/MingruiLiu-ML-Lab/Bilevel-Optimization-under-Unbounded-Smoothness](https://github.com/MingruiLiu-ML-Lab/Bilevel-Optimization-under-Unbounded-Smoothness). | https://openreview.net/pdf/1a34c4fa191cbbf4c1a8a8ca78bf84ce2094b701.pdf |
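The abstract names three concrete ingredients: normalized momentum on the upper-level variable, an initialization-refinement phase for the lower-level variable, and periodic rather than per-iteration lower-level updates. Below is a toy sketch of that loop structure; the bilevel problem, step sizes, and hypergradient surrogate are invented for illustration and are not the paper's.

```python
import numpy as np

# Toy sketch of BO-REP's structure: initialization refinement for y, periodic
# lower-level updates every I iterations, and normalized momentum on x.
A = np.array([[1.0, 0.2], [0.0, 1.0]])

def lower_grad(x, y):            # toy lower problem with y*(x) = A x
    return y - A @ x

def upper_grad(x, y):            # toy surrogate for the hypergradient
    return (x - 1.0) + A.T @ y

x, y, m = np.zeros(2), np.zeros(2), np.zeros(2)
for _ in range(50):              # initialization refinement of y
    y -= 0.5 * lower_grad(x, y)

eta, beta, I = 0.05, 0.9, 5
for t in range(200):
    if t % I == 0:               # periodic (not per-iteration) lower update
        for _ in range(10):
            y -= 0.5 * lower_grad(x, y)
    m = beta * m + (1 - beta) * upper_grad(x, y)
    x -= eta * m / (np.linalg.norm(m) + 1e-12)   # normalized momentum step
print(x)  # settles near the toy problem's stationary point (dithers at ~eta)
```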
Bounds on Representation-Induced Confounding Bias for Treatment Effect Estimation | https://openreview.net/forum?id=d3xKPQVjSc | https://openreview.net/forum?id=d3xKPQVjSc | Valentyn Melnychuk,Dennis Frauen,Stefan Feuerriegel | ICLR 2024,Spotlight | State-of-the-art methods for conditional average treatment effect (CATE) estimation make widespread use of representation learning. Here, the idea is to reduce the variance of the low-sample CATE estimation by a (potentially constrained) low-dimensional representation. However, low-dimensional representations can lose information about the observed confounders and thus lead to bias, which typically undermines the validity of representation learning for CATE estimation. In this paper, we propose a new, representation-agnostic refutation framework for estimating bounds on the representation-induced confounding bias that comes from dimensionality reduction (or other constraints on the representations) in CATE estimation. First, we establish theoretically under which conditions CATE is non-identifiable given low-dimensional (constrained) representations. Second, as our remedy, we propose a neural refutation framework which performs partial identification of CATE or, equivalently, aims at estimating lower and upper bounds of the representation-induced confounding bias. We demonstrate the effectiveness of our bounds in a series of experiments. In sum, our refutation framework is of direct relevance in practice where the validity of CATE estimation is of importance. | https://openreview.net/pdf/d06dd3ea5318958c6924d08f905235b1512fde33.pdf |
DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines | https://openreview.net/forum?id=sY5N0zY5Od | https://openreview.net/forum?id=sY5N0zY5Od | Omar Khattab,Arnav Singhvi,Paridhi Maheshwari,Zhiyuan Zhang,Keshav Santhanam,Sri Vardhamanan A,Saiful Haq,Ashutosh Sharma,Thomas T. Joshi,Hanna Moazam,Heather Miller,Matei Zaharia,Christopher Potts | ICLR 2024,Spotlight | The ML community is rapidly exploring techniques for prompting language models (LMs) and for stacking them into pipelines that solve complex tasks. Unfortunately, existing LM pipelines are typically implemented using hard-coded “prompt templates”, i.e. lengthy strings discovered via trial and error. Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, or imperative computational graphs where LMs are invoked through declarative modules. DSPy modules are parameterized, meaning they can learn how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques. We design a compiler that will optimize any DSPy pipeline to maximize a given metric, by creating and collecting demonstrations. We conduct two case studies, showing that succinct DSPy programs can express and optimize pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops. Within minutes of compiling, DSPy can automatically produce pipelines that outperform out-of-the-box few-shot prompting as well as expert-created demonstrations for GPT-3.5 and Llama2-13b-chat. On top of that, DSPy programs compiled for relatively small LMs like 770M parameter T5 and Llama2-13b-chat are competitive with many approaches that rely on large and proprietary LMs like GPT-3.5 and on expert-written prompt chains. DSPy is available at https://github.com/stanfordnlp/dspy | https://openreview.net/pdf/41028bc2988c119c4fb5c213ab3919ceae696846.pdf |
Impact of Computation in Integral Reinforcement Learning for Continuous-Time Control | https://openreview.net/forum?id=xJEd8PkdNz | https://openreview.net/forum?id=xJEd8PkdNz | Wenhan Cao,Wei Pan | ICLR 2024,Spotlight | Integral reinforcement learning (IntRL) demands the precise computation of the utility function's integral at its policy evaluation (PEV) stage. This is achieved through quadrature rules, which are weighted sums of utility functions evaluated from state samples obtained in discrete time. Our research reveals a critical yet underexplored phenomenon: the choice of the computational method -- in this case, the quadrature rule -- can significantly impact control performance. This impact is traced back to the fact that computational errors introduced in the PEV stage can affect the policy iteration's convergence behavior, which in turn affects the learned controller. To elucidate how computation impacts control, we draw a parallel between IntRL's policy iteration and Newton's method applied to the Hamilton-Jacobi-Bellman equation. In this light, computational error in PEV manifests as an extra error term in each iteration of Newton's method, with its upper bound proportional to the computational error. Further, we demonstrate that when the utility function resides in a reproducing kernel Hilbert space (RKHS), the optimal quadrature is achievable by employing Bayesian quadrature with the RKHS-inducing kernel function. We prove the local convergence rates for IntRL using the trapezoidal rule and Bayesian quadrature with a Matérn kernel to be $O(N^{-2})$ and $O(N^{-b})$, where $N$ is the number of evenly-spaced samples and $b$ is the Matérn kernel's smoothness parameter. These theoretical findings are finally validated by two canonical control tasks. | https://openreview.net/pdf/9eca44e1414a070f87a6a21de74fc149bd37de96.pdf |
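The claimed $O(N^{-2})$ rate for the trapezoidal rule is easy to check empirically on a toy integrand (not from the paper):

```python
import numpy as np

# Empirical check of the trapezoidal rule's O(N^-2) error decay, the rate the
# abstract establishes for IntRL's policy-evaluation integral (toy integrand).
def utility(t):
    return np.exp(-t) * np.sin(3 * t)   # stand-in for the utility along a trajectory

# exact value of the integral of e^{-t} sin(3t) over [0, 2]
exact = (3 - np.exp(-2.0) * (np.sin(6.0) + 3 * np.cos(6.0))) / 10

for N in [10, 20, 40, 80]:
    t = np.linspace(0.0, 2.0, N)
    u = utility(t)
    h = t[1] - t[0]
    approx = h * (u[0] / 2 + u[1:-1].sum() + u[-1] / 2)  # composite trapezoid
    print(N, abs(approx - exact))   # error shrinks roughly 4x per doubling of N
```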
Masks, Signs, And Learning Rate Rewinding | https://openreview.net/forum?id=qODvxQ8TXW | https://openreview.net/forum?id=qODvxQ8TXW | Advait Harshal Gadhikar,Rebekka Burkholz | ICLR 2024,Spotlight | Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks. While both iterative pruning schemes couple structure and parameter learning, understanding how LRR excels in both aspects can bring us closer to the design of more flexible deep learning algorithms that can optimize diverse sets of sparse architectures. To this end, we conduct experiments that disentangle the effect of mask learning and parameter optimization and how both benefit from overparameterization. The ability of LRR to flip parameter signs early and stay robust to sign perturbations seems to make it not only more effective in mask identification but also in optimizing diverse sets of masks, including random ones. In support of this hypothesis, we prove in a simplified single hidden neuron setting that LRR succeeds in more cases than IMP, as it can escape initially problematic sign configurations. | https://openreview.net/pdf/8049c38689012fa79be944abb2bec1446b8ed012.pdf |
Gradual Domain Adaptation via Gradient Flow | https://openreview.net/forum?id=iTTZFKrlGV | https://openreview.net/forum?id=iTTZFKrlGV | Zhan Zhuang,Yu Zhang,Ying Wei | ICLR 2024,Spotlight | Domain shift degrades classification models on new data distributions. Conventional unsupervised domain adaptation (UDA) aims to learn features that bridge labeled source and unlabeled target domains. In contrast to feature learning, gradual domain adaptation (GDA) leverages extra continuous intermediate domains with pseudo-labels to boost the source classifier. However, real intermediate domains are sometimes unavailable or ineffective. In this paper, we propose $\textbf{G}$radual Domain Adaptation via $\textbf{G}$radient $\textbf{F}$low (GGF) to generate intermediate domains while preserving labels, thereby enabling a fine-tuning method for GDA. We employ the Wasserstein gradient flow in Kullback–Leibler divergence to transport samples from the source to the target domain. To simulate the dynamics, we utilize the Langevin algorithm. Since the Langevin algorithm disregards label information and introduces diffusion noise, we introduce classifier-based and sample-based potentials to avoid label switching and dramatic deviations in the sampling process. For the proposed GGF model, we analyze its generalization bound. Experiments on several benchmark datasets demonstrate the superiority of the proposed GGF method compared to state-of-the-art baselines. | https://openreview.net/pdf/ff915349976b783c6976376bdd9392b8a18f7773.pdf |
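A minimal sketch of the Langevin transport step GGF builds on: samples descend a potential while diffusion noise is injected, tracing out intermediate domains. The 1-D Gaussian potential below is a toy stand-in; GGF's classifier- and sample-based potentials are what keep labels from switching.

```python
import numpy as np

# Langevin dynamics transporting source samples toward a target distribution:
# x_{k+1} = x_k - step * grad U(x_k) + sqrt(2 * step) * noise
rng = np.random.default_rng(0)
src = rng.normal(loc=-2.0, size=(500, 1))    # source-domain samples
target_mean = 2.0

def grad_potential(x):
    return x - target_mean                    # grad of 0.5 * (x - target_mean)^2

x, step = src.copy(), 0.05
for k in range(100):                          # each k yields an intermediate domain
    x = x - step * grad_potential(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
print(src.mean(), x.mean())                   # ~-2.0 -> ~2.0: samples transported
```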
Maximum Entropy Heterogeneous-Agent Reinforcement Learning | https://openreview.net/forum?id=tmqOhBC4a5 | https://openreview.net/forum?id=tmqOhBC4a5 | Jiarong Liu,Yifan Zhong,Siyi Hu,Haobo Fu,QIANG FU,Xiaojun Chang,Yaodong Yang | ICLR 2024,Spotlight | *Multi-agent reinforcement learning* (MARL) has been shown effective for cooperative games in recent years. However, existing state-of-the-art methods face challenges related to sample complexity, training instability, and the risk of converging to a suboptimal Nash Equilibrium. In this paper, we propose a unified framework for learning \emph{stochastic} policies to resolve these issues. We embed cooperative MARL problems into probabilistic graphical models, from which we derive the maximum entropy (MaxEnt) objective for MARL. Based on the MaxEnt framework, we propose *Heterogeneous-Agent Soft Actor-Critic* (HASAC) algorithm. Theoretically, we prove the monotonic improvement and convergence to *quantal response equilibrium* (QRE) properties of HASAC. Furthermore, we generalize a unified template for MaxEnt algorithmic design named *Maximum Entropy Heterogeneous-Agent Mirror Learning* (MEHAML), which provides any induced method with the same guarantees as HASAC. We evaluate HASAC on six benchmarks: Bi-DexHands, Multi-Agent MuJoCo, StarCraft Multi-Agent Challenge, Google Research Football, Multi-Agent Particle Environment, and Light Aircraft Game. Results show that HASAC consistently outperforms strong baselines, exhibiting better sample efficiency, robustness, and sufficient exploration. | https://openreview.net/pdf/82bacc9b0a9551bf4922e43270f4c315044f70af.pdf |
Hybrid Directional Graph Neural Network for Molecules | https://openreview.net/forum?id=BBD6KXIGJL | https://openreview.net/forum?id=BBD6KXIGJL | Junyi An,Chao Qu,Zhipeng Zhou,Fenglei Cao,Xu Yinghui,Yuan Qi,Furao Shen | ICLR 2024,Spotlight | Equivariant message passing neural networks have emerged as the prevailing approach for predicting chemical properties of molecules due to their ability to leverage translation and rotation symmetries, resulting in a strong inductive bias. However, the equivariant operations in each layer can impose excessive constraints on the function form and network flexibility. To address these challenges, we introduce a novel network called the Hybrid Directional Graph Neural Network (HDGNN), which effectively combines strictly equivariant operations with learnable modules. We evaluate the performance of HDGNN on the QM9 dataset and the IS2RE dataset of OC20, demonstrating its state-of-the-art performance on several tasks and competitive performance on others. Our code is anonymously released on https://github.com/ajy112/HDGNN. | https://openreview.net/pdf/fdaba18af51e693376f79fad547fdec1e1913044.pdf |
Unbiased Watermark for Large Language Models | https://openreview.net/forum?id=uWVC5FVidc | https://openreview.net/forum?id=uWVC5FVidc | Zhengmian Hu,Lichang Chen,Xidong Wu,Yihan Wu,Hongyang Zhang,Heng Huang | ICLR 2024,Spotlight | The recent advancements in large language models (LLMs) have sparked a growing apprehension regarding their potential misuse. One approach to mitigating this risk is to incorporate watermarking techniques into LLMs, allowing for the tracking and attribution of model outputs. This study examines a crucial aspect of watermarking: how significantly watermarks impact the quality of model-generated outputs. Previous studies have suggested a trade-off between watermark strength and output quality. However, our research demonstrates that, with an appropriate implementation, it is possible to integrate watermarks without affecting the output probability distribution. We refer to this type of watermark as an unbiased watermark. This has significant implications for the use of LLMs, as it becomes impossible for users to discern whether a service provider has incorporated watermarks or not. Furthermore, the presence of watermarks does not compromise the performance of the model in downstream tasks, ensuring that the overall utility of the language model is preserved. Our findings contribute to the ongoing discussion around responsible AI development, suggesting that unbiased watermarks can serve as an effective means of tracking and attributing model outputs without sacrificing output quality. | https://openreview.net/pdf/fdb6b7b2517ce71ee9ed99a12175e4a0273d2b3f.pdf |
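One known way to make a watermark leave the per-token distribution untouched is inverse-transform sampling driven by a keyed pseudo-random number: the choice is deterministic given the key and context (hence detectable), yet exactly distributed as the model intends over the key's randomness. A toy sketch of that general idea, not necessarily the construction used in the paper:

```python
import hashlib
import numpy as np

def keyed_uniform(secret: bytes, context: tuple) -> float:
    """Pseudo-random uniform in [0, 1) derived from a secret key and the context."""
    h = hashlib.sha256(secret + repr(context).encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def sample_token(probs: np.ndarray, secret: bytes, context: tuple) -> int:
    """Inverse-transform sampling: deterministic given (key, context), yet
    exactly distributed as `probs` over the key's randomness."""
    return int(np.searchsorted(np.cumsum(probs), keyed_uniform(secret, context)))

probs = np.array([0.5, 0.3, 0.2])
counts = np.zeros(3)
for ctx in range(20000):          # varying context supplies fresh pseudo-randomness
    counts[sample_token(probs, b"secret-key", (ctx,))] += 1
print(counts / counts.sum())      # ~[0.5, 0.3, 0.2]: output distribution preserved
```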
Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control | https://openreview.net/forum?id=EriR6Ec69a | https://openreview.net/forum?id=EriR6Ec69a | Neehal Tumma,Mathias Lechner,Noel Loo,Ramin Hasani,Daniela Rus | ICLR 2024,Spotlight | Developing autonomous agents that can interact with changing environments is an open challenge in machine learning. Robustness is particularly important in these settings as agents are often fit offline on expert demonstrations but deployed online where they must generalize to the closed feedback loop within the environment. In this work, we explore the application of recurrent neural networks to tasks of this nature and understand how a parameterization of their recurrent connectivity influences robustness in closed-loop settings. Specifically, we represent the recurrent connectivity as a function of rank and sparsity and show both theoretically and empirically that modulating these two variables has desirable effects on network dynamics. The proposed low-rank, sparse connectivity induces an interpretable prior on the network that proves to be most amenable for a class of models known as closed-form continuous-time neural networks (CfCs). We find that CfCs with fewer parameters can outperform their full-rank, fully-connected counterparts in the online setting under distribution shift. This yields memory-efficient and robust agents while opening a new perspective on how we can modulate network dynamics through connectivity. | https://openreview.net/pdf/480e9c477c5c570d2bb4494763d1237fdf11f122.pdf |
CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction | https://openreview.net/forum?id=DjzvJCRsVf | https://openreview.net/forum?id=DjzvJCRsVf | Size Wu,Wenwei Zhang,Lumin Xu,Sheng Jin,Xiangtai Li,Wentao Liu,Chen Change Loy | ICLR 2024,Spotlight | Open-vocabulary dense prediction tasks including object detection and image segmentation have been advanced by the success of Contrastive Language-Image Pre-training (CLIP). CLIP models, particularly those incorporating vision transformers (ViTs), have exhibited remarkable generalization ability in zero-shot image classification. However, when transferring the vision-language alignment of CLIP from global image representation to local region representation for the open-vocabulary dense prediction tasks, CLIP ViTs suffer from the domain shift from full images to local image regions. In this paper, we embark on an in-depth analysis of the region-language alignment in CLIP models, which is essential for downstream open-vocabulary dense prediction tasks. Subsequently, we propose an approach named CLIPSelf, which adapts the image-level recognition ability of CLIP ViT to local image regions without needing any region-text pairs. CLIPSelf empowers a ViT to distill itself by aligning a region representation extracted from its dense feature map with the image-level representation of the corresponding image crop. With the enhanced CLIP ViTs, we achieve new state-of-the-art performance on open-vocabulary object detection, semantic segmentation, and panoptic segmentation across various benchmarks. Models and code are released at https://github.com/wusize/CLIPSelf. | https://openreview.net/pdf/126c5bcbf7072558944cfd391f4b42a43cdd40b1.pdf |
Large Brain Model for Learning Generic Representations with Tremendous EEG Data in BCI | https://openreview.net/forum?id=QzTpTRVtrP | https://openreview.net/forum?id=QzTpTRVtrP | Weibang Jiang,Liming Zhao,Bao-liang Lu | ICLR 2024,Spotlight | The current electroencephalogram (EEG) based deep learning models are typically designed for specific datasets and applications in brain-computer interface (BCI) settings, limiting the scale of the models and thus diminishing their perceptual capabilities and generalizability. Recently, Large Language Models (LLMs) have achieved unprecedented success in text processing, prompting us to explore the capabilities of Large EEG Models (LEMs). We hope that LEMs can break through the limitations of different task types of EEG datasets, and obtain universal perceptual capabilities of EEG signals through unsupervised pre-training. Then the models can be fine-tuned for different downstream tasks. However, compared to text data, the volume of EEG datasets is generally small and the format varies widely. For example, there can be mismatched numbers of electrodes, unequal length data samples, varied task designs, and low signal-to-noise ratio. To overcome these challenges, we propose a unified foundation model for EEG called Large Brain Model (LaBraM). LaBraM enables cross-dataset learning by segmenting the EEG signals into EEG channel patches. Vector-quantized neural spectrum prediction is used to train a semantically rich neural tokenizer that encodes continuous raw EEG channel patches into compact neural codes. We then pre-train neural Transformers by predicting the original neural codes for the masked EEG channel patches. The LaBraMs were pre-trained on about 2,500 hours of various types of EEG signals from around 20 datasets and validated on multiple different types of downstream tasks. Experiments on abnormal detection, event type classification, emotion recognition, and gait prediction show that our LaBraM outperforms all compared SOTA methods in their respective fields. Our code is available at https://github.com/935963004/LaBraM. | https://openreview.net/pdf/ce4dc6959056452394dc1ad7b8f64005d65d2165.pdf |
Towards LLM4QPE: Unsupervised Pretraining of Quantum Property Estimation and A Benchmark | https://openreview.net/forum?id=vrBVFXwAmi | https://openreview.net/forum?id=vrBVFXwAmi | Yehui Tang,Hao Xiong,Nianzu Yang,Tailong Xiao,Junchi Yan | ICLR 2024,Spotlight | Estimating the properties of quantum systems such as quantum phase has been critical in addressing the essential quantum many-body problems in physics and chemistry. Deep learning models have been recently introduced to property estimation, surpassing conventional statistical approaches. However, these methods are tailored to the specific task and quantum data at hand. Devising a more universal, task-agnostic pretraining model for quantum property estimation remains an open and attractive question. In this paper, we propose LLM4QPE, a large language model style quantum task-agnostic pretraining and finetuning paradigm that 1) performs unsupervised pretraining on diverse quantum systems with different physical conditions; 2) uses the pretrained model for supervised finetuning and delivers high performance with limited training data on downstream tasks. It mitigates the cost of quantum data collection and speeds up convergence. Extensive experiments show the promising efficacy of LLM4QPE in various tasks including classifying quantum phases of matter on the Rydberg atom model and predicting the two-body correlation function on the anisotropic Heisenberg model. | https://openreview.net/pdf/511ed0e0d3b143e5589b96afaba84da894f71df7.pdf |
GTMGC: Using Graph Transformer to Predict Molecule’s Ground-State Conformation | https://openreview.net/forum?id=F7QnIKlC1N | https://openreview.net/forum?id=F7QnIKlC1N | Guikun Xu,Yongquan Jiang,PengChuan Lei,Yan Yang,Jim Chen | ICLR 2024,Spotlight | The ground-state conformation of a molecule is often decisive for its properties. However, experimental or computational methods, such as density functional theory (DFT), are time-consuming and labor-intensive for obtaining this conformation. Deep learning (DL) based molecular representation learning (MRL) has made significant advancements in molecular modeling and has achieved remarkable results in various tasks. Consequently, it has emerged as a promising approach for directly predicting the ground-state conformation of molecules. In this regard, we introduce GTMGC, a novel network based on Graph-Transformer (GT) that seamlessly predicts the spatial configuration of molecules in a 3D space from their 2D topological architecture in an end-to-end manner. Moreover, we propose a novel self-attention mechanism called Molecule Structural Residual Self-Attention (MSRSA) for molecular structure modeling. This mechanism not only guarantees high model performance and easy implementation but also lends itself well to other molecular modeling tasks. Our method has been evaluated on the Molecule3D benchmark dataset and the QM9 dataset. Experimental results demonstrate that our approach achieves remarkable performance and outperforms current state-of-the-art methods as well as the widely used open-source software RDKit. | https://openreview.net/pdf/c141834e1d331e6055ab503795d49d4e4b8548fb.pdf |
Generalization of Scaled Deep ResNets in the Mean-Field Regime | https://openreview.net/forum?id=tMzPZTvz2H | https://openreview.net/forum?id=tMzPZTvz2H | Yihang Chen,Fanghui Liu,Yiping Lu,Grigorios Chrysos,Volkan Cevher | ICLR 2024,Spotlight | Despite the widespread empirical success of ResNet, the generalization properties of deep ResNet are rarely explored beyond the lazy training regime. In this work, we investigate scaled ResNet in the limit of infinitely deep and wide neural networks, of which the gradient flow is described by a partial differential equation in the large-neural network limit, i.e., the mean-field regime. To derive the generalization bounds under this setting, our analysis necessitates a shift from the conventional time-invariant Gram matrix employed in the lazy training regime to a time-variant, distribution-dependent version. To this end, we provide a global lower bound on the minimum eigenvalue of the Gram matrix under the mean-field regime. Besides, to track the dynamics of the Kullback-Leibler (KL) divergence, we establish the linear convergence of the empirical error and estimate the upper bound of the KL divergence over the parameter distribution. Finally, we establish uniform convergence for the generalization bound via Rademacher complexity. Our results offer new insights into the generalization ability of deep ResNet beyond the lazy training regime and contribute to advancing the understanding of the fundamental properties of deep neural networks. | https://openreview.net/pdf/72b4830ed0321f0098f96447794bfcc965134752.pdf |
ODE Discovery for Longitudinal Heterogeneous Treatment Effects Inference | https://openreview.net/forum?id=pxI5IPeWgW | https://openreview.net/forum?id=pxI5IPeWgW | Krzysztof Kacprzyk,Samuel Holt,Jeroen Berrevoets,Zhaozhi Qian,Mihaela van der Schaar | ICLR 2024,Spotlight | Inferring unbiased treatment effects has received widespread attention in the machine learning community. In recent years, our community has proposed numerous solutions in standard settings, high-dimensional treatment settings, and even longitudinal settings. While very diverse, these solutions have mostly relied on neural networks for inference and simultaneous correction of assignment bias. New approaches typically build on top of previous approaches by proposing new (or refined) architectures and learning algorithms. However, the end result—a neural-network-based inference machine—remains unchallenged. In this paper, we introduce a different type of solution in the longitudinal setting: a closed-form ordinary differential equation (ODE). While we still rely on continuous optimization to learn an ODE, the resulting inference machine is no longer a neural network. Doing so yields several advantages such as interpretability, irregular sampling, and a different set of identification assumptions. Above all, we consider the introduction of a completely new type of solution to be our most important contribution as it may spark entirely new innovations in treatment effects in general. We facilitate this by formulating our contribution as a framework that can transform any ODE discovery method into a treatment effects method. | https://openreview.net/pdf/2e710a9328ce1b12daf4fde40da8165ca071d5db.pdf |
Learning Hierarchical World Models with Adaptive Temporal Abstractions from Discrete Latent Dynamics | https://openreview.net/forum?id=TjCDNssXKU | https://openreview.net/forum?id=TjCDNssXKU | Christian Gumbsch,Noor Sajid,Georg Martius,Martin V. Butz | ICLR 2024,Spotlight | Hierarchical world models can significantly improve model-based reinforcement learning (MBRL) and planning by enabling reasoning across multiple time scales. Nonetheless, the majority of state-of-the-art MBRL methods employ flat, non-hierarchical models. We propose Temporal Hierarchies from Invariant Context Kernels (THICK), an algorithm that learns a world model hierarchy via discrete latent dynamics. The lower level of THICK updates parts of its latent state sparsely in time, forming invariant contexts. The higher level exclusively predicts situations involving context changes. Our experiments demonstrate that THICK learns categorical, interpretable, temporal abstractions on the high level, while maintaining precise low-level predictions. Furthermore, we show that the emergent hierarchical predictive model seamlessly enhances the abilities of MBRL or planning methods. We believe that THICK contributes to the further development of hierarchical agents capable of more sophisticated planning and reasoning abilities. | https://openreview.net/pdf/3e5df2ed6659f21032c8784d5836ef8147d1413a.pdf |
Prediction without Preclusion: Recourse Verification with Reachable Sets | https://openreview.net/forum?id=SCQfYpdoGE | https://openreview.net/forum?id=SCQfYpdoGE | Avni Kothari,Bogdan Kulynych,Tsui-Wei Weng,Berk Ustun | ICLR 2024,Spotlight | Machine learning models are often used to decide who receives a loan, a job interview, or a public benefit. Models in such settings use features without considering their *actionability*. As a result, they can assign predictions that are *fixed* -- meaning that individuals who are denied loans and interviews are, in fact, *precluded from access* to credit and employment. In this work, we introduce a procedure called *recourse verification* to test if a model assigns fixed predictions to its decision subjects. We propose a model-agnostic approach for verification with *reachable sets* -- i.e., the set of all points that a person can reach through their actions in feature space. We develop methods to construct reachable sets for discrete feature spaces, which can certify the responsiveness of *any model* by simply querying its predictions. We conduct a comprehensive empirical study on the infeasibility of recourse on datasets from consumer finance. Our results highlight how models can inadvertently preclude access by assigning fixed predictions and underscore the need to account for actionability in model development. | https://openreview.net/pdf/08180baa9640c55bee2805e14b90ecc715509ee3.pdf |
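Because the verification procedure only needs to query predictions over a reachable set, it is easy to sketch for a small discrete feature space. The features, actionability constraints, and classifier below are hypothetical:

```python
from itertools import product

# Sketch of recourse verification with reachable sets over discrete features.
# Hypothetical constraints: "has_degree" can only increase (0 -> 1),
# "n_accounts" can increase by at most 2, "age_group" is immutable.
def reachable_set(x):
    degrees = [d for d in (0, 1) if d >= x[0]]               # monotone feature
    accounts = [a for a in range(6) if x[1] <= a <= x[1] + 2]
    ages = [x[2]]                                            # immutable feature
    return list(product(degrees, accounts, ages))

def model(x):  # hypothetical classifier; 1 = approve
    return int(x[0] + 0.2 * x[1] + 0.1 * x[2] >= 1.0)

def verify_recourse(x):
    """Certify responsiveness by querying the model on every reachable point."""
    return any(model(xp) == 1 for xp in reachable_set(x))

x = (0, 1, 3)
print(model(x), verify_recourse(x))  # 0 True: denied now, but recourse exists
```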
ODICE: Revealing the Mystery of Distribution Correction Estimation via Orthogonal-gradient Update | https://openreview.net/forum?id=L8UNn7Llt4 | https://openreview.net/forum?id=L8UNn7Llt4 | Liyuan Mao,Haoran Xu,Weinan Zhang,Xianyuan Zhan | ICLR 2024,Spotlight | In this study, we investigate the DIstribution Correction Estimation (DICE) methods, an important line of work in offline reinforcement learning (RL) and imitation learning (IL). DICE-based methods impose a state-action-level behavior constraint, which is an ideal choice for offline learning. However, they typically perform much worse than current state-of-the-art (SOTA) methods that solely use an action-level behavior constraint. After revisiting DICE-based methods, we find there exist two gradient terms when learning the value function using true-gradient update: forward gradient (taken on the current state) and backward gradient (taken on the next state). Using the forward gradient bears a large similarity to many offline RL methods, and thus can be regarded as applying an action-level constraint. However, directly adding the backward gradient may degenerate or cancel out its effect if these two gradients have conflicting directions. To resolve this issue, we propose a simple yet effective modification that projects the backward gradient onto the normal plane of the forward gradient, resulting in an orthogonal-gradient update, a new learning rule for DICE-based methods. We conduct thorough theoretical analyses and find that the projected backward gradient brings state-level behavior regularization, which reveals the mystery of DICE-based methods: the value learning objective does try to impose state-action-level constraint, but needs to be used in a corrected way. Through toy examples and extensive experiments on complex offline RL and IL tasks, we demonstrate that DICE-based methods using orthogonal-gradient updates achieve SOTA performance and great robustness. | https://openreview.net/pdf/833ece7fade579c01692e5603d476db35ce59989.pdf |
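The orthogonal-gradient update itself is a one-line projection, as the abstract describes: remove from the backward gradient its component along the forward gradient. A sketch (how the projected term is recombined with the forward gradient, and the weight `eta`, are illustrative):

```python
import numpy as np

# Orthogonal-gradient update: project the backward gradient g_b onto the
# normal plane of the forward gradient g_f, so the two terms can no longer
# cancel each other out when their directions conflict.
def orthogonal_update(g_f: np.ndarray, g_b: np.ndarray, eta: float = 1.0):
    g_b_perp = g_b - (g_b @ g_f) / (g_f @ g_f + 1e-12) * g_f
    return g_f + eta * g_b_perp   # one way to combine the two terms

g_f = np.array([1.0, 0.0])
g_b = np.array([-1.0, 1.0])        # conflicts with g_f along the first axis
print(orthogonal_update(g_f, g_b)) # [1. 1.]: conflict removed, info retained
```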
Improving Non-Transferable Representation Learning by Harnessing Content and Style | https://openreview.net/forum?id=FYKVPOHCpE | https://openreview.net/forum?id=FYKVPOHCpE | Ziming Hong,Zhenyi Wang,Li Shen,Yu Yao,Zhuo Huang,Shiming Chen,Chuanwu Yang,Mingming Gong,Tongliang Liu | ICLR 2024,Spotlight | Non-transferable learning (NTL) aims to restrict the generalization of models toward the target domain(s). To this end, existing works learn non-transferable representations by reducing statistical dependence between the source and target domain. However, such statistical methods essentially neglect to distinguish between *styles* and *contents*, leading them to inadvertently fit (i) spurious correlation between *styles* and *labels*, and (ii) fake independence between *contents* and *labels*. Consequently, their performance will be limited when natural distribution shifts occur or malicious intervention is imposed. In this paper, we propose a novel method (dubbed H-NTL) to understand and advance the NTL problem by introducing a causal model to separately model *content* and *style* as two latent factors, based on which we disentangle and harness them as guidances for learning non-transferable representations with intrinsically causal relationships. Specifically, to avoid fitting spurious correlation and fake independence, we propose a variational inference framework to disentangle the naturally mixed *content factors* and *style factors* under our causal model. Subsequently, based on dual-path knowledge distillation, we harness the disentangled two *factors* as guidances for non-transferable representation learning: (i) we constrain the source domain representations to fit *content factors* (which are the intrinsic cause of *labels*), and (ii) we enforce that the target domain representations fit *style factors*, which can barely predict labels. As a result, the learned feature representations follow optimal untransferability toward the target domain and minimal negative influence on the source domain, thus enabling better NTL performance. Empirically, the proposed H-NTL significantly outperforms competing methods by a large margin. | https://openreview.net/pdf/4d359626d33d8cb2e10f6d1cf6728b805a2b5316.pdf |
ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis | https://openreview.net/forum?id=vpJMJerXHU | https://openreview.net/forum?id=vpJMJerXHU | Luo donghao,wang xue | ICLR 2024,Spotlight | Recently, Transformer-based and MLP-based models have emerged rapidly and won dominance in time series analysis. In contrast, convolution has been losing steam in time series tasks due to inferior performance. This paper studies the open question of how to better use convolution in time series analysis and makes an effort to bring convolution back to the arena of time series analysis. To this end, we modernize the traditional TCN and make time-series-related modifications to render it more suitable for time series tasks. As the outcome, we propose ModernTCN and successfully solve this open question via a seldom-explored path in the time series community. As a pure convolution structure, ModernTCN achieves consistent state-of-the-art performance on five mainstream time series analysis tasks while maintaining the efficiency advantage of convolution-based models, therefore providing a better balance of efficiency and performance than state-of-the-art Transformer-based and MLP-based models. Our study further reveals that, compared with previous convolution-based models, ModernTCN has much larger effective receptive fields (ERFs) and can therefore better unleash the potential of convolution in time series analysis. Code is available at this repository: https://github.com/luodhhh/ModernTCN. | https://openreview.net/pdf/c0de77eed380b4b2736dfe855ed3cf0d62f7d8c1.pdf |
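A sketch of what a "modernized" TCN block can look like, assuming the common ConvNeXt-style recipe of a depthwise large-kernel 1-D convolution for temporal mixing plus pointwise convolutions for channel mixing; kernel size, widths, and norm placement below are illustrative rather than the paper's exact design:

```python
import torch
import torch.nn as nn

# Sketch of a modernized TCN block: a depthwise large-kernel 1-D convolution
# mixes information along time, pointwise (1x1) convolutions mix channels,
# and a residual connection wraps the block.
class ModernTCNBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 51):
        super().__init__()
        self.dw = nn.Conv1d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)  # temporal mixing
        self.norm = nn.BatchNorm1d(channels)
        self.pw1 = nn.Conv1d(channels, 4 * channels, 1)                 # channel mixing
        self.pw2 = nn.Conv1d(4 * channels, channels, 1)
        self.act = nn.GELU()

    def forward(self, x):                  # x: (batch, channels, time)
        y = self.norm(self.dw(x))
        y = self.pw2(self.act(self.pw1(y)))
        return x + y                       # residual connection

x = torch.randn(8, 32, 96)                 # e.g. 32 variables, 96 time steps
print(ModernTCNBlock(32)(x).shape)         # torch.Size([8, 32, 96])
```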
Towards Robust Out-of-Distribution Generalization Bounds via Sharpness | https://openreview.net/forum?id=tPEwSYPtAC | https://openreview.net/forum?id=tPEwSYPtAC | Yingtian Zou,Kenji Kawaguchi,Yingnan Liu,Jiashuo Liu,Mong-Li Lee,Wynne Hsu | ICLR 2024,Spotlight | Generalizing to out-of-distribution (OOD) data or unseen domain, termed OOD generalization, still lacks appropriate theoretical guarantees. Canonical OOD bounds focus on different distance measurements between source and target domains but fail to consider the optimization property of the learned model. As empirically shown in recent work, the sharpness of the learned minimum influences OOD generalization. To bridge this gap between optimization and OOD generalization, we study the effect of sharpness on how a model tolerates data change in domain shift, which is usually captured by "robustness" in generalization. In this paper, we give a rigorous connection between sharpness and robustness, which gives better OOD guarantees for robust algorithms. It also provides a theoretical backing for "flat minima lead to better OOD generalization". Overall, we propose a sharpness-based OOD generalization bound by taking robustness into consideration, resulting in a tighter bound than non-robust guarantees. Our findings are supported by the experiments on a ridge regression model, as well as the experiments on deep learning classification tasks. | https://openreview.net/pdf/36904eee9458f8a5da9944cdcd92446a053dfa88.pdf |
MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding | https://openreview.net/forum?id=itGkF993gz | https://openreview.net/forum?id=itGkF993gz | Lirong Wu,Yijun Tian,Yufei Huang,Siyuan Li,Haitao Lin,Nitesh V Chawla,Stan Z. Li | ICLR 2024,Spotlight | Protein-Protein Interactions (PPIs) are fundamental in various biological processes and play a key role in life activities. The growing demand and cost of experimental PPI assays require computational methods for efficient PPI prediction. While existing methods rely heavily on protein sequence for PPI prediction, it is the protein structure that is the key to determining the interactions. To take both protein modalities into account, we define the microenvironment of an amino acid residue by its sequence and structural contexts, which describe the surrounding chemical properties and geometric features. In addition, microenvironments defined in previous work are largely based on experimentally assayed physicochemical properties, for which the "vocabulary" is usually extremely small. This makes it difficult to cover the diversity and complexity of microenvironments. In this paper, we propose Microenvironment-Aware Protein Embedding for PPI prediction (MAPE-PPI), which encodes microenvironments into chemically meaningful discrete codes via a sufficiently large microenvironment "vocabulary" (i.e., codebook). Moreover, we propose a novel pre-training strategy, namely Masked Codebook Modeling (MCM), to capture the dependencies between different microenvironments by randomly masking the codebook and reconstructing the input. With the learned microenvironment codebook, we can reuse it as an off-the-shelf tool to efficiently and effectively encode proteins of different sizes and functions for large-scale PPI prediction. Extensive experiments show that MAPE-PPI can scale to PPI prediction with millions of PPIs, with a superior trade-off between effectiveness and computational efficiency compared to state-of-the-art competitors. | https://openreview.net/pdf/72464b5e34ef4de8f928bfdd6309981dbe271cf6.pdf |
Negative Label Guided OOD Detection with Pretrained Vision-Language Models | https://openreview.net/forum?id=xUO1HXz4an | https://openreview.net/forum?id=xUO1HXz4an | Xue Jiang,Feng Liu,Zhen Fang,Hong Chen,Tongliang Liu,Feng Zheng,Bo Han | ICLR 2024,Spotlight | Out-of-distribution (OOD) detection aims at identifying samples from unknown classes, playing a crucial role in trustworthy models against errors on unexpected inputs. Extensive research has been dedicated to exploring OOD detection in the vision modality. Vision-language models (VLMs) can leverage both textual and visual information for various multi-modal applications, whereas few OOD detection methods take into account information from the text modality. In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases. We design a novel scheme for the OOD score that incorporates the negative labels. Theoretical analysis helps to understand the mechanism of negative labels. Extensive experiments demonstrate that our method NegLabel achieves state-of-the-art performance on various OOD detection benchmarks and generalizes well on multiple VLM architectures. Furthermore, our method NegLabel exhibits remarkable robustness against diverse domain shifts. The codes are available at https://github.com/tmlr-group/NegLabel. | https://openreview.net/pdf/b9ad30ff96f366ad87a0053257956ba3b2a4ece6.pdf |
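A sketch of a NegLabel-style post hoc score, assuming CLIP-style embeddings: the score is the share of softmax similarity mass that an input places on the in-distribution (ID) labels versus a large pool of negative labels. The exact score form in the paper may differ.

```python
import numpy as np

# NegLabel-style OOD score sketch with random stand-in embeddings: ID inputs
# concentrate similarity mass on ID labels; OOD inputs spread it over the
# much larger negative-label pool, lowering the score.
rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

id_labels = normalize(rng.normal(size=(10, 64)))      # stand-in text embeddings
neg_labels = normalize(rng.normal(size=(1000, 64)))   # negative-label embeddings

def neglabel_score(image_emb, temperature=0.07):
    sims = np.concatenate([id_labels, neg_labels]) @ image_emb / temperature
    p = np.exp(sims - sims.max())
    p /= p.sum()
    return p[: len(id_labels)].sum()   # high -> likely in-distribution

id_like = normalize(id_labels[3] + 0.3 * rng.normal(size=64))  # near an ID label
ood_like = normalize(rng.normal(size=64))                      # unrelated input
print(neglabel_score(id_like), neglabel_score(ood_like))
```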
OPTIMAL ROBUST MEMORIZATION WITH RELU NEURAL NETWORKS | https://openreview.net/forum?id=47hDbAMLbc | https://openreview.net/forum?id=47hDbAMLbc | Lijia Yu,Xiao-Shan Gao,Lijun Zhang | ICLR 2024,Spotlight | Memorization with neural networks concerns the expressive power of neural networks to interpolate a finite classification data set, which is closely related to the generalizability of deep learning. However, the important problem of robust memorization has not been thoroughly studied. In this paper, several basic problems about robust memorization are solved. First, we prove that it is NP-hard to compute neural networks with certain simple structures that achieve robust memorization. A network hypothesis space is called optimal robust memorization for a data set if it can achieve robust memorization for any budget less than half the separation bound of the data set. Second, we explicitly construct neural networks with O(Nn) parameters for optimal robust memorization of any data set with dimension n and size N. We also give a lower bound for the width of networks to achieve optimal robust memorization. Finally, we explicitly construct neural networks with O(Nn log n) parameters for optimal robust memorization of any binary classification data set by controlling the Lipschitz constant of the network. | https://openreview.net/pdf/c5be8a576cab367723fcf91c4b950557846a3e1a.pdf |
Neural Contractive Dynamical Systems | https://openreview.net/forum?id=iAYIRHOYy8 | https://openreview.net/forum?id=iAYIRHOYy8 | Hadi Beik Mohammadi,Søren Hauberg,Georgios Arvanitidis,Nadia Figueroa,Gerhard Neumann,Leonel Rozo | ICLR 2024,Spotlight | Stability guarantees are crucial when ensuring that a fully autonomous robot does not take undesirable or potentially harmful actions. Unfortunately, global stability guarantees are hard to provide in dynamical systems learned from data, especially when the learned dynamics are governed by neural networks. We propose a novel methodology to learn *neural contractive dynamical systems*, where our neural architecture ensures contraction, and hence, global stability. To efficiently scale the method to high-dimensional dynamical systems, we develop a variant of the variational autoencoder that learns dynamics in a low-dimensional latent representation space while retaining contractive stability after decoding. We further extend our approach to learning contractive systems on the Lie group of rotations to account for full-pose end-effector dynamic motions. The result is the first highly flexible learning architecture that provides contractive stability guarantees with the capability to perform obstacle avoidance. Empirically, we demonstrate that our approach encodes the desired dynamics more accurately than the current state-of-the-art, which provides less strong stability guarantees. | https://openreview.net/pdf/a89591eec311a0efbd01f7135555a21d2d682c1c.pdf |
Scaling Laws for Associative Memories | https://openreview.net/forum?id=Tzh6xAJSll | https://openreview.net/forum?id=Tzh6xAJSll | Vivien Cabannes,Elvis Dohmatob,Alberto Bietti | ICLR 2024,Spotlight | Learning arguably involves the discovery and memorization of abstract rules. The aim of this paper is to study associative memory mechanisms. Our model is based on high-dimensional matrices consisting of outer products of embeddings, which relates to the inner layers of transformer language models. We derive precise scaling laws with respect to sample size and parameter size, and discuss the statistical efficiency of different estimators, including optimization-based algorithms. We provide extensive numerical experiments to validate and interpret theoretical results, including fine-grained visualizations of the stored memory associations. | https://openreview.net/pdf/ba075a88abc0ad2b7f00577253a950d3264c5f2f.pdf |
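The model class studied above is concrete enough to write down in a few lines: store pairs as a sum of outer products of embeddings and recall by a matrix-vector product plus a nearest-embedding argmax (embedding distributions and sizes below are illustrative):

```python
import numpy as np

# Minimal outer-product associative memory: store input/output pairs in a
# single matrix, then recall by projecting and taking the closest output
# embedding. Recall degrades gracefully as n_pairs grows relative to d,
# which is the regime the scaling laws characterize.
rng = np.random.default_rng(0)
d, n_pairs = 256, 50
E_in = rng.normal(size=(n_pairs, d)) / np.sqrt(d)     # input embeddings
E_out = rng.normal(size=(n_pairs, d)) / np.sqrt(d)    # output embeddings

W = sum(np.outer(E_out[i], E_in[i]) for i in range(n_pairs))  # the memory matrix

def recall(i: int) -> int:
    return int(np.argmax(E_out @ (W @ E_in[i])))

print(sum(recall(i) == i for i in range(n_pairs)), "/", n_pairs)  # high recall
```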
Text2Reward: Reward Shaping with Language Models for Reinforcement Learning | https://openreview.net/forum?id=tUM39YTRxH | https://openreview.net/forum?id=tUM39YTRxH | Tianbao Xie,Siheng Zhao,Chen Henry Wu,Yitao Liu,Qian Luo,Victor Zhong,Yanchao Yang,Tao Yu | ICLR 2024,Spotlight | Designing reward functions is a longstanding challenge in reinforcement learning (RL); it requires specialized knowledge or domain data, leading to high costs for development. To address this, we introduce Text2Reward, a data-free framework that automates the generation and shaping of dense reward functions based on large language models (LLMs). Given a goal described in natural language, Text2Reward generates shaped dense reward functions as an executable program grounded in a compact representation of the environment. Unlike inverse RL and recent work that uses LLMs to write sparse reward codes or unshaped dense rewards with a constant function across timesteps, Text2Reward produces interpretable, free-form dense reward codes that cover a wide range of tasks, utilize existing packages, and allow iterative refinement with human feedback. We evaluate Text2Reward on two robotic manipulation benchmarks (ManiSkill2, MetaWorld) and two locomotion environments of MuJoCo. On 13 of the 17 manipulation tasks, policies trained with generated reward codes achieve similar or better task success rates and convergence speed than expert-written reward codes. For locomotion tasks, our method learns six novel locomotion behaviors with a success rate exceeding 94%. Furthermore, we show that the policies trained in the simulator with our method can be deployed in the real world. Finally, Text2Reward further improves the policies by refining their reward functions with human feedback. Video results are available at https://text-to-reward.github.io/ | https://openreview.net/pdf/a52e7202163a42116fae8ada42123e37f2aef287.pdf |
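A hypothetical example of the kind of executable dense reward code such a pipeline could emit for a reach-style manipulation task; the observation fields, weights, and thresholds are invented, whereas generated code would be grounded in the actual environment representation:

```python
import numpy as np

# Hypothetical Text2Reward-style output for the instruction
# "move the gripper to the target": a shaped distance term, an action
# regularizer, and a sparse success bonus, all as plain executable code.
def compute_dense_reward(obs: dict, action: np.ndarray) -> float:
    ee_pos = obs["end_effector_pos"]           # (3,) gripper position (assumed field)
    goal_pos = obs["goal_pos"]                 # (3,) target position (assumed field)
    dist = np.linalg.norm(ee_pos - goal_pos)
    reward = -dist                             # shaped term: approach the goal
    reward -= 0.01 * np.square(action).sum()   # regularizer: prefer small actions
    if dist < 0.02:                            # sparse bonus on success
        reward += 10.0
    return float(reward)

obs = {"end_effector_pos": np.array([0.10, 0.00, 0.20]),
       "goal_pos": np.array([0.10, 0.05, 0.20])}
print(compute_dense_reward(obs, np.zeros(7)))  # -0.05: close but not yet successful
```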
Towards Meta-Pruning via Optimal Transport | https://openreview.net/forum?id=sMoifbuxjB | https://openreview.net/forum?id=sMoifbuxjB | Alexander Theus,Olin Geimer,Friedrich Wicke,Thomas Hofmann,Sotiris Anagnostidis,Sidak Pal Singh | ICLR 2024,Spotlight | Structural pruning of neural networks conventionally relies on identifying and discarding less important neurons, a practice often resulting in significant accuracy loss that necessitates subsequent fine-tuning efforts. This paper introduces a novel approach named Intra-Fusion, challenging this prevailing pruning paradigm. Unlike existing methods that focus on designing meaningful neuron importance metrics, Intra-Fusion redefines the overall pruning procedure. By utilizing the concepts of model fusion and Optimal Transport, we leverage an agnostically given importance metric to arrive at a more effective sparse model representation. Notably, our approach achieves substantial accuracy recovery without the need for resource-intensive fine-tuning, making it an efficient and promising tool for neural network compression. Additionally, we explore how fusion can be added to the pruning process to significantly decrease the training time while maintaining competitive performance. We benchmark our results for various networks on commonly used datasets such as CIFAR-10, CIFAR-100, and ImageNet. More broadly, we hope that the proposed Intra-Fusion approach invigorates exploration into a fresh alternative to the predominant compression approaches. Our code is available [here](https://github.com/alexandertheus/Intra-Fusion). | https://openreview.net/pdf/07560e42af2e42df14ac71025723b0b97a0924dd.pdf |
InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation | https://openreview.net/forum?id=MLBdiWu4Fw | https://openreview.net/forum?id=MLBdiWu4Fw | Yi Wang,Yinan He,Yizhuo Li,Kunchang Li,Jiashuo Yu,Xin Ma,Xinhao Li,Guo Chen,Xinyuan Chen,Yaohui Wang,Ping Luo,Ziwei Liu,Yali Wang,Limin Wang,Yu Qiao | ICLR 2024,Spotlight | This paper introduces InternVid, a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation. InternVid contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totaling 4.1B words. Our core contribution is to develop a scalable approach to autonomously build a high-quality video-text dataset with large language models (LLM), thereby showcasing its efficacy in learning video-language representation at scale. Specifically, we utilize a multi-scale approach to generate video-related descriptions. Furthermore, we introduce ViCLIP, a video-text representation learning model based on ViT-L. Learned on InternVid via contrastive learning, this model demonstrates leading zero-shot action recognition and competitive video retrieval performance. Beyond basic video understanding tasks like recognition and retrieval, our dataset and model have broad applications. They are particularly beneficial for generating interleaved video-text data for learning a video-centric dialogue system, advancing video-to-text and text-to-video generation research. These proposed resources provide a tool for researchers and practitioners interested in multimodal video understanding and generation. | https://openreview.net/pdf/5355ce2fec3ff26dca65a969b767fd7b1102bb05.pdf |
Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks | https://openreview.net/forum?id=Gg7cXo3S8l | https://openreview.net/forum?id=Gg7cXo3S8l | Suhwan Choi,Myeongho Jeon,Yeonjung Hwang,Jeonglyul Oh,Sungjun Lim,Joonseok Lee,Myungjoo Kang | ICLR 2024,Spotlight | While backpropagation (BP) has achieved widespread success in deep learning, it faces two prominent challenges: computational inefficiency and biological implausibility. In response to these challenges, local supervision, encompassing Local Learning (LL) and Forward Learning (FL), has emerged as a promising research direction. LL employs module-wise BP to achieve competitive results yet relies on module-wise auxiliary networks, which increase memory and parameter demands. Conversely, FL updates layer weights without BP and auxiliary networks but falls short of BP’s performance. This paper proposes a simple yet effective objective within a contrastive learning framework for local supervision without auxiliary networks. Given the insight that the existing contrastive learning framework for local supervision is susceptible to task-irrelevant information without auxiliary networks, we present Dictionary Contrastive Learning (DCL) that optimizes the similarity between local features and label embeddings. Our method using static label embeddings yields substantial performance improvements in the FL scenario, outperforming state-of-the-art FL approaches. Moreover, our method using adaptive label embeddings closely approaches the performance achieved by LL while achieving superior memory and parameter efficiency. | https://openreview.net/pdf/f9734ebbb92e7bdafcdb35c2da50c63e5e5ad16d.pdf |
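A minimal sketch of a DCL-style objective for one locally supervised module, assuming pooled local features and a matrix of label embeddings (static or adaptive); the actual pooling and projection details of the method may differ.

```python
import torch
import torch.nn.functional as F

def dictionary_contrastive_loss(local_feats, labels, label_embed, tau=0.1):
    # local_feats: (B, D) pooled features from one locally supervised module
    # labels:      (B,) integer class labels
    # label_embed: (num_classes, D) static or adaptive label embeddings
    z = F.normalize(local_feats, dim=-1)
    e = F.normalize(label_embed, dim=-1)
    logits = z @ e.t() / tau  # cosine similarity to every label embedding
    # Pull each feature toward its own label embedding, push from the others.
    return F.cross_entropy(logits, labels)
```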
Bounding Box Stability against Feature Dropout Reflects Detector Generalization across Environments | https://openreview.net/forum?id=lmM4Ecm4HJ | https://openreview.net/forum?id=lmM4Ecm4HJ | Yang Yang,Wenhai Wang,Zhe Chen,Jifeng Dai,Liang Zheng | ICLR 2024,Spotlight | Bounding boxes uniquely characterize object detection, where a good detector gives accurate bounding boxes of categories of interest. However, in the real world, where test ground truths are not provided, it is non-trivial to find out whether bounding boxes are accurate, thus preventing us from assessing the detector generalization ability. In this work, we find that under feature map dropout, good detectors tend to output bounding boxes whose locations do not change much, while bounding boxes of poor detectors will undergo noticeable position changes. We compute the box stability score (BS score) to reflect this stability. Specifically, given an image, we compute a normal set of bounding boxes and a second set after feature map dropout. To obtain the BS score, we use bipartite matching to find the corresponding boxes between the two sets and compute the average Intersection over Union (IoU) across the entire test set. We find that the BS score has a strong, positive correlation with detection accuracy measured by mean average precision (mAP) under various test environments. This relationship allows us to predict the accuracy of detectors on various real-world test sets without accessing test ground truths, verified on canonical detection tasks such as vehicle detection and pedestrian detection. | https://openreview.net/pdf/5510c4a1e453a12979e2d2a9f12b836fdc0436c8.pdf |
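The per-image BS score described above is easy to sketch: match the two box sets with bipartite matching on IoU and average the matched IoUs (the dataset-level score then averages over all test images). The box format and the absence of edge-case handling here are illustrative simplifications.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def bs_score_one_image(boxes_normal, boxes_dropout):
    # Bipartite matching that maximizes total IoU between the two box sets.
    cost = -np.array([[iou(a, b) for b in boxes_dropout] for a in boxes_normal])
    rows, cols = linear_sum_assignment(cost)
    return float(np.mean([-cost[r, c] for r, c in zip(rows, cols)]))
```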
Deep Geodesic Canonical Correlation Analysis for Covariance-Based Neuroimaging Data | https://openreview.net/forum?id=PnR1MNen7u | https://openreview.net/forum?id=PnR1MNen7u | Ce Ju,Reinmar J Kobler,Liyao Tang,Cuntai Guan,Motoaki Kawanabe | ICLR 2024,Spotlight | In human neuroimaging, multi-modal imaging techniques are frequently combined to enhance our comprehension of whole-brain dynamics and improve diagnosis in clinical practice. Modalities like electroencephalography and functional magnetic resonance imaging provide distinct views of the brain dynamics due to diametral spatiotemporal sensitivities and underlying neurophysiological coupling mechanisms. These distinct views pose a considerable challenge to learning a shared representation space, especially when dealing with covariance-based data characterized by their geometric structure. To capitalize on the geometric structure, we introduce a measure called geodesic correlation, which expands traditional correlation consistency to covariance-based data on the symmetric positive definite (SPD) manifold. This measure is derived from classical canonical correlation analysis and serves to evaluate the consistency of latent representations obtained from paired views. For multi-view, self-supervised learning where one or both latent views are SPD, we propose an innovative geometric deep learning framework termed DeepGeoCCA. Its primary objective is to enhance the geodesic correlation of unlabeled, paired data, thereby generating novel representations while retaining the geometric structures. In simulations and experiments with multi-view and multi-modal human neuroimaging data, we find that DeepGeoCCA learns latent representations with high geodesic correlation for unseen data while retaining relevant information for downstream tasks. | https://openreview.net/pdf/4ccf9cac26244e14e3fd2742852e226018c0e4b8.pdf |
SWAP-NAS: Sample-Wise Activation Patterns for Ultra-fast NAS | https://openreview.net/forum?id=tveiUXU2aa | https://openreview.net/forum?id=tveiUXU2aa | Yameng Peng,Andy Song,Haytham M. Fayek,Vic Ciesielski,Xiaojun Chang | ICLR 2024,Spotlight | Training-free metrics (a.k.a. zero-cost proxies) are widely used to avoid resource-intensive neural network training, especially in Neural Architecture Search (NAS). Recent studies show that existing training-free metrics have several limitations, such as limited correlation and poor generalisation across different search spaces and tasks. Hence, we propose Sample-Wise Activation Patterns and its derivative, SWAP-Score, a novel high-performance training-free metric. It measures the expressivity of networks over a batch of input samples. The SWAP-Score is strongly correlated with ground-truth performance across various search spaces and tasks, outperforming 15 existing training-free metrics on NAS-Bench-101/201/301 and TransNAS-Bench-101. The SWAP-Score can be further enhanced by regularisation, which leads to even higher correlations in cell-based search space and enables model size control during the search. For example, Spearman’s rank correlation coefficient between regularised SWAP-Score and CIFAR-100 validation accuracies on NAS-Bench-201 networks is 0.90, significantly higher than 0.80 from the second-best metric, NWOT. When integrated with an evolutionary algorithm for NAS, our SWAP-NAS achieves competitive performance on CIFAR-10 and ImageNet in approximately 6 minutes and 9 minutes of GPU time respectively. | https://openreview.net/pdf/37b8588fc5e1d4701d0dd7f69b3af45b36b148e9.pdf |
RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches | https://openreview.net/forum?id=F1TKzG8LJO | https://openreview.net/forum?id=F1TKzG8LJO | Jiayuan Gu,Sean Kirmani,Paul Wohlhart,Yao Lu,Montserrat Gonzalez Arenas,Kanishka Rao,Wenhao Yu,Chuyuan Fu,Keerthana Gopalakrishnan,Zhuo Xu,Priya Sundaresan,Peng Xu,Hao Su,Karol Hausman,Chelsea Finn,Quan Vuong,Ted Xiao | ICLR 2024,Spotlight | Generalization remains one of the most important desiderata for robust robot learning systems. While recently proposed approaches show promise in generalization to novel objects, semantic concepts, or visual distribution shifts, generalization to new tasks remains challenging. For example, a language-conditioned policy trained on pick-and-place tasks will not be able to generalize to a folding task, even if the arm trajectory of folding is similar to pick-and-place. Our key insight is that this kind of generalization becomes feasible if we represent the task through rough trajectory sketches. We propose a policy conditioning method using such rough trajectory sketches, which we call RT-Trajectory, that is practical, easy to specify, and allows the policy to effectively perform new tasks that would otherwise be challenging to perform. We find that trajectory sketches strike a balance between being detailed enough to express low-level motion-centric guidance while being coarse enough to allow the learned policy to interpret the trajectory sketch in the context of situational visual observations. In addition, we show how trajectory sketches can provide a useful interface to communicate with robotic policies -- they can be specified through simple human inputs like drawings or videos, or through automated methods such as modern image-generating or waypoint-generating methods. We evaluate RT-Trajectory at scale on a variety of real-world robotic tasks, and find that RT-Trajectory is able to perform a wider range of tasks compared to language-conditioned and goal-conditioned policies, when provided the same training data. | https://openreview.net/pdf/99c49fe414f0c5349b9a1f94d32198a847626df5.pdf |
NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers | https://openreview.net/forum?id=Rc7dAwVL3v | https://openreview.net/forum?id=Rc7dAwVL3v | Kai Shen,Zeqian Ju,Xu Tan,Eric Liu,Yichong Leng,Lei He,Tao Qin,sheng zhao,Jiang Bian | ICLR 2024,Spotlight | Scaling text-to-speech (TTS) to large-scale, multi-speaker, and in-the-wild datasets is important to capture the diversity in human speech such as speaker identities, prosodies, and styles (e.g., singing). Current large TTS systems usually quantize speech into discrete tokens and use language models to generate these tokens one by one, which suffer from unstable prosody, word skipping/repeating issues, and poor voice quality. In this paper, we develop NaturalSpeech 2, a TTS system that leverages a neural audio codec with residual vector quantizers to get the quantized latent vectors and uses a diffusion model to generate these latent vectors conditioned on text input. To enhance the zero-shot capability that is important to achieve diverse speech synthesis, we design a speech prompting mechanism to facilitate in-context learning in the diffusion model and the duration/pitch predictor. We scale NaturalSpeech 2 to large-scale datasets with 44K hours of speech and singing data and evaluate its voice quality on unseen speakers. NaturalSpeech 2 outperforms previous TTS systems by a large margin in terms of prosody/timbre similarity, robustness, and voice quality in a zero-shot setting, and performs novel zero-shot singing synthesis with only a speech prompt. Audio samples are available at https://naturalspeech2.github.io/. | https://openreview.net/pdf/509cda476b8eb36072d873b3fb7a5b5868bb7ce7.pdf |
Submodular Reinforcement Learning | https://openreview.net/forum?id=loYSzjSaAK | https://openreview.net/forum?id=loYSzjSaAK | Manish Prajapat,Mojmir Mutny,Melanie Zeilinger,Andreas Krause | ICLR 2024,Spotlight | In reinforcement learning (RL), rewards of states are typically considered additive, and following the Markov assumption, they are independent of states visited previously. In many important applications, such as coverage control, experiment design and informative path planning, rewards naturally have diminishing returns, i.e., their value decreases in light of similar states visited previously. To tackle this, we propose Submodular RL (subRL), a paradigm which seeks to optimize more general, non-additive (and history-dependent) rewards modelled via submodular set functions, which capture diminishing returns. Unfortunately, in general, even in tabular settings, we show that the resulting optimization problem is hard to approximate. On the other hand, motivated by the success of greedy algorithms in classical submodular optimization, we propose subPO, a simple policy gradient-based algorithm for subRL that handles non-additive rewards by greedily maximizing marginal gains. Indeed, under some assumptions on the underlying Markov Decision Process (MDP), subPO recovers optimal constant factor approximations of submodular bandits. Moreover, we derive a natural policy gradient approach for locally optimizing subRL instances even in large state- and action- spaces. We showcase the versatility of our approach by applying subPO to several applications, such as biodiversity monitoring, Bayesian experiment design, informative path planning, and coverage maximization. Our results demonstrate sample efficiency, as well as scalability to high-dimensional state-action spaces. | https://openreview.net/pdf/8fc77d8529744661d87719d0416984370812942f.pdf |
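The greedy principle behind subPO above can be sketched in a few lines: the per-step reward for visiting a state is the marginal gain of a submodular set function F over the states visited so far, so revisiting similar states earns diminishing returns. The grid-coverage objective below is a toy stand-in for the paper's applications.

```python
def coverage(states):
    # Toy submodular objective: grid cells within one step of any visited state.
    covered = set()
    for (x, y) in states:
        covered.update((x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1))
    return len(covered)

def marginal_gain_reward(visited, s_t, F=coverage):
    # Reward for visiting s_t given the history: shrinks as overlap grows.
    return F(visited + [s_t]) - F(visited)

print(marginal_gain_reward([(0, 0)], (0, 1)))  # 3: mostly overlapping coverage
print(marginal_gain_reward([(0, 0)], (5, 5)))  # 9: a fully new neighborhood
```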
Making Pre-trained Language Models Great on Tabular Prediction | https://openreview.net/forum?id=anzIzGZuLi | https://openreview.net/forum?id=anzIzGZuLi | Jiahuan Yan,Bo Zheng,Hongxia Xu,Yiheng Zhu,Danny Chen,Jimeng Sun,Jian Wu,Jintai Chen | ICLR 2024,Spotlight | The transferability of deep neural networks (DNNs) has made significant progress in image and language processing. However, due to the heterogeneity among tables, such DNN bonus is still far from being well exploited on tabular data prediction (e.g., regression or classification tasks). Condensing knowledge from diverse domains, language models (LMs) possess the capability to comprehend feature names from various tables, potentially serving as versatile learners in transferring knowledge across distinct tables and diverse prediction tasks, but their discrete text representation space is inherently incompatible with numerical feature values in tables. In this paper, we present TP-BERTa, a specifically pre-trained LM for tabular data prediction. Concretely, a novel relative magnitude tokenization converts scalar numerical feature values to finely discrete, high-dimensional tokens, and an intra-feature attention approach integrates feature values with the corresponding feature names. Comprehensive experiments demonstrate that our pre-trained TP-BERTa achieves leading performance among tabular DNNs and is competitive with Gradient Boosted Decision Tree models in the typical tabular data regime. | https://openreview.net/pdf/c4a9c6bae09d696686e4f491b7316a399127722b.pdf |
Solving Inverse Problems with Latent Diffusion Models via Hard Data Consistency | https://openreview.net/forum?id=j8hdRqOUhN | https://openreview.net/forum?id=j8hdRqOUhN | Bowen Song,Soo Min Kwon,Zecheng Zhang,Xinyu Hu,Qing Qu,Liyue Shen | ICLR 2024,Spotlight | Latent diffusion models have been demonstrated to generate high-quality images, while offering efficiency in model training compared to diffusion models operating in the pixel space. However, incorporating latent diffusion models to solve inverse problems remains a challenging problem due to the nonlinearity of the encoder and decoder. To address these issues, we propose ReSample, an algorithm that can solve general inverse problems with pre-trained latent diffusion models. Our algorithm incorporates data consistency by solving an optimization problem during the reverse sampling process, a concept that we term as hard data consistency. Upon solving this optimization problem, we propose a novel resampling scheme to map the measurement-consistent sample back onto the noisy data manifold and theoretically demonstrate its benefits. Lastly, we apply our algorithm to solve a wide range of linear and nonlinear inverse problems in both natural and medical images, demonstrating that our approach outperforms existing state-of-the-art approaches, including those based on pixel-space diffusion models. | https://openreview.net/pdf/da11a915f62958de563c258cf1a15b945a4040f0.pdf |
The False Promise of Imitating Proprietary Language Models | https://openreview.net/forum?id=Kz3yckpCN5 | https://openreview.net/forum?id=Kz3yckpCN5 | Arnav Gudibande,Eric Wallace,Charlie Victor Snell,Xinyang Geng,Hao Liu,Pieter Abbeel,Sergey Levine,Dawn Song | ICLR 2024,Spotlight | An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). In this work, we critically analyze this approach of imitating language models. We first finetune a series of LMs that imitate ChatGPT using varying base model sizes (1.5B--13B), data sources, and imitation data amounts (0.3M--150M tokens). We then evaluate the models using crowd raters and canonical NLP benchmarks. Initially, we were surprised by the output quality of our imitation models---they appear far better at following instructions, and crowd workers rate their outputs as competitive with ChatGPT. However, when conducting more targeted automatic evaluations, we find that imitation models close little to none of the gap from the base LM to ChatGPT on tasks that are not heavily supported in the imitation data. We show that these performance discrepancies may slip past human raters because imitation models are adept at mimicking ChatGPT’s style but not its factuality. Overall, we conclude that while model imitation can be useful for training models to follow instructions and avoid toxic outputs, it falls short of its full promise in many ways. In particular, there exists a substantial capabilities gap between open and closed LMs that we find cannot be bridged merely by adding more imitation data. Instead, we find that fine-tuning more capable base LMs has a significantly more substantial effect on closing this gap. In turn, we argue that the higher leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems. | https://openreview.net/pdf/b4739f190faa45d202f9d7847e19ebde7844eb25.pdf |
Sample-Efficient Linear Representation Learning from Non-IID Non-Isotropic Data | https://openreview.net/forum?id=Tr3fZocrI6 | https://openreview.net/forum?id=Tr3fZocrI6 | Thomas TCK Zhang,Leonardo Felipe Toso,James Anderson,Nikolai Matni | ICLR 2024,Spotlight | A powerful concept behind much of the recent progress in machine learning is the extraction of common features across data from heterogeneous sources or tasks. Intuitively, using all of one's data to learn a common representation function benefits both computational effort and statistical generalization by leaving a smaller number of parameters to fine-tune on a given task. Toward theoretically grounding these merits, we propose a general setting of recovering linear operators $M$ from noisy vector measurements $y = Mx + w$, where the covariates $x$ may be both non-i.i.d. and non-isotropic. We demonstrate that existing isotropy-agnostic meta-learning approaches incur biases on the representation update, which causes the scaling of the noise terms to lose favorable dependence on the number of source tasks. This in turn can cause the sample complexity of representation learning to be bottlenecked by the single-task data size. We introduce an adaptation, $\texttt{De-bias}$ & $\texttt{Feature-Whiten}$ ($\texttt{DFW}$), of the popular alternating minimization-descent (AMD) scheme proposed in Collins et al. (2021), and establish linear convergence to the optimal representation with noise level scaling down with the $\textit{total}$ source data size. This leads to generalization bounds on the same order as an oracle empirical risk minimizer. We verify the vital importance of $\texttt{DFW}$ in various numerical simulations. In particular, we show that vanilla alternating-minimization descent fails catastrophically even for i.i.d. but mildly non-isotropic data. Our analysis unifies and generalizes prior work, and provides a flexible framework for a wider range of applications, such as in controls and dynamical systems. | https://openreview.net/pdf/e7dccb9a39e6905c3e57dd5f906c31fbd3cab350.pdf |
Information Retention via Learning Supplemental Features | https://openreview.net/forum?id=o83eu4H9Mb | https://openreview.net/forum?id=o83eu4H9Mb | Zhipeng Xie,Yahe Li | ICLR 2024,Spotlight | The information bottleneck principle provides an information-theoretic method for learning a good representation as a trade-off between conciseness and predictive ability, which can reduce information redundancy, eliminate irrelevant and superfluous features, and thus enhance the in-domain generalizability. However, in low-resource or out-of-domain scenarios where the i.i.d. assumption does not necessarily hold, superfluous (or redundant) relevant features may be supplemental to the mainline features of the model, and be beneficial in making predictions for test datasets with distribution shift. Therefore, instead of squeezing the input information by information bottleneck, we propose to keep as much relevant information as possible in use for making predictions. A three-stage supervised learning framework is designed and implemented to jointly learn the mainline and supplemental features, relieving supplemental features from the suppression of mainline features. Extensive experiments have shown that the learned representations of our method have good in-domain and out-of-domain generalization abilities, especially in low-resource cases. | https://openreview.net/pdf/7f425cda0816de3fc282c27ce87697f5b5c44077.pdf |
Mayfly: a Neural Data Structure for Graph Stream Summarization | https://openreview.net/forum?id=n7Sr8SW4bn | https://openreview.net/forum?id=n7Sr8SW4bn | Yuan Feng,Yukun Cao,Wang Hairu,Xike Xie,S Kevin Zhou | ICLR 2024,Spotlight | A graph is a structure made up of vertices and edges used to represent complex relationships between entities, while a graph stream is a continuous flow of graph updates that convey evolving relationships between entities. The massive volume and high dynamism of graph streams promote research on data structures of graph summarization, which provides a concise and approximate view of graph streams with sub-linear space and linear construction time, enabling real-time graph analytics in various domains, such as social networking, financing, and cybersecurity. In this work, we propose the Mayfly, the first neural data structure for summarizing graph streams. The Mayfly replaces handcrafted data structures, offering better accuracy and adaptivity. To cater to practical applications, Mayfly incorporates two offline training phases. During the larval phase, the Mayfly learns basic summarization abilities from automatically and synthetically constructed meta-tasks, and in the metamorphosis phase, it rapidly adapts to real graph streams via meta-tasks. With specific configurations of information pathways, the Mayfly enables flexible support for miscellaneous graph queries, including edge, node, and connectivity queries. Extensive empirical studies show that the Mayfly significantly outperforms its handcrafted competitors. | https://openreview.net/pdf/e7d88ca4c807b194ba332ff83811bbd8c79934bc.pdf |
Exploring the Common Appearance-Boundary Adaptation for Nighttime Optical Flow | https://openreview.net/forum?id=776lhoaulC | https://openreview.net/forum?id=776lhoaulC | Hanyu Zhou,Yi Chang,Haoyue Liu,YAN WENDING,Yuxing Duan,Zhiwei Shi,Luxin Yan | ICLR 2024,Spotlight | We investigate a challenging task of nighttime optical flow, which suffers from weakened texture and amplified noise. These degradations weaken discriminative visual features, thus causing invalid motion feature matching. Typically, existing methods employ domain adaptation to transfer knowledge from an auxiliary domain to the nighttime domain in either the input visual space or the output motion space. However, this direct adaptation is ineffective, since there exists a large domain gap due to the intrinsic heterogeneous nature of the feature representations between the auxiliary and nighttime domains. To overcome this issue, we explore a common latent space as the intermediate bridge to reinforce the feature alignment between the auxiliary and nighttime domains. In this work, we exploit two auxiliary daytime and event domains, and propose a novel common appearance-boundary adaptation framework for nighttime optical flow. In appearance adaptation, we employ intrinsic image decomposition to embed the auxiliary daytime image and the nighttime image into a reflectance-aligned common space. We discover that the motion distributions of the two reflectance maps are very similar, enabling us to consistently transfer motion appearance knowledge from the daytime to the nighttime domain. In boundary adaptation, we theoretically derive the motion correlation formula between the nighttime image and accumulated events within a spatiotemporal gradient-aligned common space. We find that the correlations of the two spatiotemporal gradient maps differ significantly, enabling us to contrastively transfer boundary knowledge from the event to the nighttime domain. Moreover, appearance adaptation and boundary adaptation are complementary to each other, since they jointly transfer global motion and local boundary knowledge to the nighttime domain. Extensive experiments have been performed to verify the superiority of the proposed method. | https://openreview.net/pdf/e3328346222dfff5580d6899ec8ffbac04ef6de9.pdf |
Graphical Multioutput Gaussian Process with Attention | https://openreview.net/forum?id=6N8TW504aa | https://openreview.net/forum?id=6N8TW504aa | Yijue Dai,Wenzhong Yan,Feng Yin | ICLR 2024,Spotlight | Integrating information while recognizing dependence from multiple data sources and enhancing the predictive performance of the multi-output regression are challenging tasks. Multioutput Gaussian Process (MOGP) methods offer outstanding solutions with tractable predictions and uncertainty quantification. However, their practical applications are hindered by high computational complexity and storage demand. Additionally, there exist model mismatches in existing MOGP models when dealing with non-Gaussian data. To improve the model representation ability in terms of flexibility, optimality, and scalability, this paper introduces a novel multi-output regression framework, termed Graphical MOGP (GMOGP), which is empowered by: (i) generating flexible Gaussian process priors consolidated from identified parents, (ii) providing dependent processes with attention-based graphical representations, and (iii) achieving Pareto optimal solutions of kernel hyperparameters via a distributed learning framework. Numerical results confirm that the proposed GMOGP significantly outperforms state-of-the-art MOGP alternatives in predictive performance, as well as in time and memory efficiency, across various synthetic and real datasets. | https://openreview.net/pdf/0b6ac06d4a4184388fc33af01e76741a7603c341.pdf |
Soft Contrastive Learning for Time Series | https://openreview.net/forum?id=pAsQSWlDUf | https://openreview.net/forum?id=pAsQSWlDUf | Seunghan Lee,Taeyoung Park,Kibok Lee | ICLR 2024,Spotlight | Contrastive learning has been shown to be effective for learning representations from time series in a self-supervised way. However, contrasting similar time series instances, or values from adjacent timestamps within a time series, ignores their inherent correlations, which deteriorates the quality of the learned representations. To address this issue, we propose SoftCLT, a simple yet effective soft contrastive learning strategy for time series. This is achieved by introducing instance-wise and temporal contrastive losses with soft assignments ranging from zero to one. Specifically, we define soft assignments for 1) the instance-wise contrastive loss by the warping distance between time series on the data space, and 2) the temporal contrastive loss by the difference of timestamps. SoftCLT is a plug-and-play method for time series contrastive learning that improves the quality of learned representations without bells and whistles. In experiments, we demonstrate that SoftCLT consistently improves the performance in various downstream tasks including classification, semi-supervised learning, transfer learning, and anomaly detection, showing state-of-the-art performance. Code is available at this repository: https://github.com/seunghan96/softclt. | https://openreview.net/pdf/310a449b3f99f247f4a3f30cd2a2f8806296770d.pdf |
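A small sketch of the SoftCLT-style soft assignments above: data-space distances and timestamp differences are mapped to weights in [0, 1] with scaled sigmoids, so that closer pairs receive assignments nearer to one. The sharpness parameters tau_i and tau_t are illustrative.

```python
import numpy as np

def instance_soft_assignment(dist_matrix, tau_i=1.0):
    # Closer series (smaller data-space distance, e.g., a warping distance)
    # get an assignment near 1; distant series get an assignment near 0.
    return 2.0 / (1.0 + np.exp(tau_i * dist_matrix))

def temporal_soft_assignment(T, tau_t=0.5):
    t = np.arange(T)
    diff = np.abs(t[:, None] - t[None, :])
    return 2.0 / (1.0 + np.exp(tau_t * diff))  # nearby timestamps map near 1
```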
Enhancing Group Fairness in Online Settings Using Oblique Decision Forests | https://openreview.net/forum?id=E1NxN5QMOE | https://openreview.net/forum?id=E1NxN5QMOE | Somnath Basu Roy Chowdhury,Nicholas Monath,Ahmad Beirami,Rahul Kidambi,Kumar Avinava Dubey,Amr Ahmed,Snigdha Chaturvedi | ICLR 2024,Spotlight | Fairness, especially group fairness, is an important consideration in the context of machine learning systems. The most commonly adopted group fairness-enhancing techniques are in-processing methods that rely on a mixture of a fairness objective (e.g., demographic parity) and a task-specific objective (e.g., cross-entropy) during the training process. However, when data arrives in an online fashion – one instance at a time – optimizing such fairness objectives poses several challenges. In particular, group fairness objectives are defined using expectations of predictions across different demographic groups. In the online setting, where the algorithm has access to a single instance at a time, estimating the group fairness objective requires additional storage and significantly more computation (e.g., forward/backward passes) than the task-specific objective at every time step. In this paper, we propose Aranyani, an ensemble of oblique decision trees, to make fair decisions in online settings. The hierarchical tree structure of Aranyani enables parameter isolation and allows us to efficiently compute the fairness gradients using aggregate statistics of previous decisions, eliminating the need for additional storage and forward/backward passes. We also present an efficient framework to train Aranyani and theoretically analyze several of its properties. We conduct empirical evaluations on 5 publicly available benchmarks (including vision and language datasets) to show that Aranyani achieves a better accuracy-fairness trade-off compared to baseline approaches. | https://openreview.net/pdf/a8b785960a7be6f38289cdad3923ad1ba27c3a26.pdf |
Generative Learning for Financial Time Series with Irregular and Scale-Invariant Patterns | https://openreview.net/forum?id=CdjnzWsQax | https://openreview.net/forum?id=CdjnzWsQax | Hongbin Huang,Minghua Chen,Xiao Qiao | ICLR 2024,Spotlight | Limited data availability poses a major obstacle in training deep learning models for financial applications. Synthesizing financial time series to augment real-world data is challenging due to the irregular and scale-invariant patterns uniquely associated with financial time series - temporal dynamics that repeat with varying duration and magnitude. Such dynamics cannot be captured by existing approaches, which often assume regularity and uniformity in the underlying data. We develop a novel generative framework called FTS-Diffusion, consisting of three modules, to model irregular and scale-invariant patterns. First, we develop a scale-invariant pattern recognition algorithm to extract recurring patterns that vary in duration and magnitude. Second, we construct a diffusion-based generative network to synthesize segments of patterns. Third, we model the temporal transition of patterns in order to aggregate the generated segments. Extensive experiments show that FTS-Diffusion generates synthetic financial time series highly resembling observed data, outperforming state-of-the-art alternatives. Two downstream experiments demonstrate that augmenting real-world data with synthetic data generated by FTS-Diffusion reduces the error of stock market prediction by up to 17.9%. To the best of our knowledge, this is the first work on generating intricate time series with irregular and scale-invariant patterns, addressing data limitation issues in finance. | https://openreview.net/pdf/afa4bb323e04cbec65604b1a8df0f2eebc2962f3.pdf |
Multiscale Positive-Unlabeled Detection of AI-Generated Texts | https://openreview.net/forum?id=5Lp6qU9hzV | https://openreview.net/forum?id=5Lp6qU9hzV | Yuchuan Tian,Hanting Chen,Xutao Wang,Zheyuan Bai,QINGHUA ZHANG,Ruifeng Li,Chao Xu,Yunhe Wang | ICLR 2024,Spotlight | Recent releases of Large Language Models (LLMs), e.g. ChatGPT, are astonishing at generating human-like texts, but they may impact the authenticity of texts. Previous works proposed methods to detect these AI-generated texts, including simple ML classifiers, pretrained-model-based zero-shot methods, and finetuned language classification models. However, mainstream detectors always fail on short texts, like SMSes, Tweets, and reviews. In this paper, a Multiscale Positive-Unlabeled (MPU) training framework is proposed to address the difficulty of short-text detection without sacrificing long-texts. Firstly, we acknowledge the human-resemblance property of short machine texts, and rephrase AI text detection as a partial Positive-Unlabeled (PU) problem by regarding these short machine texts as partially "unlabeled". Then in this PU context, we propose the length-sensitive Multiscale PU Loss, where a recurrent model in abstraction is used to estimate positive priors of scale-variant corpora. Additionally, we introduce a Text Multiscaling module to enrich training corpora. Experiments show that our MPU method augments detection performance on long AI-generated texts, and significantly improves short-text detection of language model detectors. Language Models trained with MPU could outcompete existing detectors on various short-text and long-text detection benchmarks. The codes are available at https://github.com/mindspore-lab/mindone/tree/master/examples/detect_chatgpt and https://github.com/YuchuanTian/AIGC_text_detector. | https://openreview.net/pdf/bd6826c79f81e0e0ac6f4c84f2b46d80eb3d130b.pdf |
A Lightweight Method for Tackling Unknown Participation Statistics in Federated Averaging | https://openreview.net/forum?id=ZKEuFKfCKA | https://openreview.net/forum?id=ZKEuFKfCKA | Shiqiang Wang,Mingyue Ji | ICLR 2024,Spotlight | In federated learning (FL), clients usually have diverse participation statistics that are unknown a priori, which can significantly harm the performance of FL if not handled properly. Existing works aiming at addressing this problem are usually based on global variance reduction, which requires a substantial amount of additional memory in a multiplicative factor equal to the total number of clients. An important open problem is to find a lightweight method for FL in the presence of clients with unknown participation rates. In this paper, we address this problem by adapting the aggregation weights in federated averaging (FedAvg) based on the participation history of each client. We first show that, with heterogeneous participation statistics, FedAvg with non-optimal aggregation weights can diverge from the optimal solution of the original FL objective, indicating the need of finding optimal aggregation weights. However, it is difficult to compute the optimal weights when the participation statistics are unknown. To address this problem, we present a new algorithm called FedAU, which improves FedAvg by adaptively weighting the client updates based on online estimates of the optimal weights without knowing the statistics of client participation. We provide a theoretical convergence analysis of FedAU using a novel methodology to connect the estimation error and convergence. Our theoretical results reveal important and interesting insights, while showing that FedAU converges to an optimal solution of the original objective and has desirable properties such as linear speedup. Our experimental results also verify the advantage of FedAU over baseline methods with various participation patterns. | https://openreview.net/pdf/deb3da6004c1c25ab01ed64fde43ebe424d7a09c.pdf |
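The idea behind FedAU above can be sketched as inverse-frequency weighting with an online participation estimate; the actual FedAU estimator and its analysis are more careful than this simplified illustration, so treat the names and the averaging rule as assumptions.

```python
import numpy as np

def fedau_round(global_w, client_updates, participated, counts, t):
    # counts[i]: rounds (out of t+1 so far) in which client i participated
    for i in participated:
        counts[i] += 1
    agg = np.zeros_like(global_w)
    for i in participated:
        p_hat = counts[i] / (t + 1)        # online participation-rate estimate
        agg += client_updates[i] / p_hat   # weight update inversely to frequency
    return global_w + agg / len(counts)    # average over all N clients
```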
Identifying the Risks of LM Agents with an LM-Emulated Sandbox | https://openreview.net/forum?id=GEcwtMk1uA | https://openreview.net/forum?id=GEcwtMk1uA | Yangjun Ruan,Honghua Dong,Andrew Wang,Silviu Pitis,Yongchao Zhou,Jimmy Ba,Yann Dubois,Chris J. Maddison,Tatsunori Hashimoto | ICLR 2024,Spotlight | Recent advances in Language Model (LM) agents and tool use, exemplified by applications like ChatGPT Plugins, enable a rich set of capabilities but also amplify potential risks—such as leaking private data or causing financial losses. Identifying these risks is labor-intensive: it requires implementing the tools, manually setting up the environment for each test scenario, and finding risky cases. As tools and agents become more complex, the high cost of testing these agents will make it increasingly difficult to find high-stakes, long-tail risks. To address these challenges, we introduce ToolEmu: a framework that uses an LM to emulate tool execution and enables scalable testing of LM agents against a diverse range of tools and scenarios. Alongside the emulator, we develop an LM-based automatic safety evaluator that examines agent failures and quantifies associated risks. We test both the tool emulator and evaluator through human evaluation and find that 68.8% of failures identified with ToolEmu would be valid real-world agent failures. Using our curated initial benchmark consisting of 36 high-stakes toolkits and 144 test cases, we provide a quantitative risk analysis of current LM agents and identify numerous failures with potentially severe outcomes. Notably, even the safest LM agent exhibits such failures 23.9% of the time according to our evaluator, underscoring the need to develop safer LM agents for real-world deployment. | https://openreview.net/pdf/d1601f78407737fc216de9e6ec0085038f8c885f.pdf |
Coeditor: Leveraging Repo-level Diffs for Code Auto-editing | https://openreview.net/forum?id=ALVwQjZRS8 | https://openreview.net/forum?id=ALVwQjZRS8 | Jiayi Wei,Greg Durrett,Isil Dillig | ICLR 2024,Spotlight | Developers often dedicate significant time to maintaining and refactoring existing code. However, most prior work on generative models for code focuses solely on creating new code, overlooking the distinctive needs of editing existing code. In this work, we explore a multi-round code auto-editing setting, aiming to predict edits to a code region based on recent changes within the same codebase. Our model, Coeditor, is a fine-tuned language model specifically designed for code editing tasks. We represent code changes using a line diff format and employ static analysis to form large customized model contexts, ensuring the availability of appropriate information for prediction. We collect a code editing dataset from the commit histories of 1650 open-source Python projects for training and evaluation. In a simplified single-round, single-edit task, Coeditor significantly outperforms GPT-3.5 and SOTA open-source code completion models (bringing exact-match accuracy from 34.7 up to 60.4), demonstrating the benefits of incorporating editing history for code completion. In a multi-round, multi-edit setting, we observe substantial gains by iteratively conditioning on additional user edits. We have open-sourced our code, data, and model weights to encourage future research and have released a VSCode extension powered by our model for interactive IDE usage. | https://openreview.net/pdf/a68ee5b156d07bd4d39e7718b01a1ecdc5b5c3cb.pdf |
FITS: Modeling Time Series with $10k$ Parameters | https://openreview.net/forum?id=bWcnvZ3qMb | https://openreview.net/forum?id=bWcnvZ3qMb | Zhijian Xu,Ailing Zeng,Qiang Xu | ICLR 2024,Spotlight | In this paper, we introduce FITS, a lightweight yet powerful model for time series analysis. Unlike existing models that directly process raw time-domain data, FITS operates on the principle that time series can be manipulated through interpolation in the complex frequency domain, achieving performance comparable to state-of-the-art models for time series forecasting and anomaly detection tasks. Notably, FITS accomplishes this with a svelte profile of just about $10k$ parameters, making it ideally suited for edge devices and paving the way for a wide range of applications. The code is available for review at: \url{https://anonymous.4open.science/r/FITS}. | https://openreview.net/pdf/b24cfba5a0bb5ddb925050c72614c266f677f9a0.pdf |
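FITS's core operation, forecasting as interpolation in the complex frequency domain, can be sketched with a fixed (non-learned) map: take the rFFT of the lookback window, place it into a longer spectrum, and invert. In FITS itself the map between input and output frequency bins is a learned complex-valued linear layer; the zero-padding below is an illustrative stand-in.

```python
import numpy as np

def frequency_interpolate(x, horizon):
    # x: (L,) real-valued lookback window; returns a length L + horizon series
    # whose spectrum extends the low-frequency spectrum of x.
    L, L_out = len(x), len(x) + horizon
    spec = np.fft.rfft(x)
    spec_out = np.zeros(L_out // 2 + 1, dtype=complex)
    spec_out[: len(spec)] = spec  # in FITS this bin-to-bin map is learned
    return np.fft.irfft(spec_out, n=L_out) * (L_out / L)  # rescale amplitude
```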
MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models | https://openreview.net/forum?id=N8N0hgNDRt | https://openreview.net/forum?id=N8N0hgNDRt | Longhui Yu,Weisen Jiang,Han Shi,Jincheng YU,Zhengying Liu,Yu Zhang,James Kwok,Zhenguo Li,Adrian Weller,Weiyang Liu | ICLR 2024,Spotlight | Large language models (LLMs) have pushed the limits of natural language understanding and exhibited excellent problem-solving ability. Despite the great success, most existing open-source LLMs (e.g., LLaMA-2) are still far away from satisfactory for solving mathematical problems due to the complex reasoning procedures. To bridge this gap, we propose MetaMath, a finetuned language model that specializes in mathematical reasoning. Specifically, we start by bootstrapping mathematical questions by rewriting the question from multiple perspectives, which results in a new dataset called MetaMathQA. Then we finetune the LLaMA-2 models on MetaMathQA. Experimental results on two popular benchmarks (i.e., GSM8K and MATH) for mathematical reasoning demonstrate that MetaMath outperforms a suite of open-source LLMs by a significant margin. Our MetaMath-7B model achieves $66.5\%$ on GSM8K and $19.8\%$ on MATH, exceeding the state-of-the-art models of the same size by $11.5\%$ and $8.7\%$. Particularly, MetaMath-70B achieves an accuracy of $82.3\%$ on GSM8K, slightly better than GPT-3.5-Turbo. We release the MetaMathQA dataset, the MetaMath models with different model sizes and the training code for public use. | https://openreview.net/pdf/f6e244230affa5173ef87947c86c25bd2891100d.pdf |
Query-Policy Misalignment in Preference-Based Reinforcement Learning | https://openreview.net/forum?id=UoBymIwPJR | https://openreview.net/forum?id=UoBymIwPJR | Xiao Hu,Jianxiong Li,Xianyuan Zhan,Qing-Shan Jia,Ya-Qin Zhang | ICLR 2024,Spotlight | Preference-based reinforcement learning (PbRL) provides a natural way to align RL agents’ behavior with human desired outcomes, but is often restrained by costly human feedback. To improve feedback efficiency, most existing PbRL methods focus on selecting queries to maximally improve the overall quality of the reward model, but counter-intuitively, we find that this may not necessarily lead to improved performance. To unravel this mystery, we identify a long-neglected issue in the query selection schemes of existing PbRL studies: Query-Policy Misalignment. We show that the seemingly informative queries selected to improve the overall quality of reward model actually may not align with RL agents’ interests, thus offering little help on policy learning and eventually resulting in poor feedback efficiency. We show that this issue can be effectively addressed via policy-aligned query and a specially designed hybrid experience replay, which together enforce the bidirectional query-policy alignment. Simple yet elegant, our method can be easily incorporated into existing approaches by changing only a few lines of code. We showcase in comprehensive experiments that our method achieves substantial gains in both human feedback and RL sample efficiency, demonstrating the importance of addressing query-policy misalignment in PbRL tasks. | https://openreview.net/pdf/7731e84686a5dad0613dd42e1b05f91259ca9066.pdf |
Feature-aligned N-BEATS with Sinkhorn divergence | https://openreview.net/forum?id=TS8HoIWAPQ | https://openreview.net/forum?id=TS8HoIWAPQ | Joonhun Lee,Myeongho Jeon,Myungjoo Kang,Kyunghyun Park | ICLR 2024,Spotlight | We propose Feature-aligned N-BEATS as a domain-generalized time series forecasting model. It is a nontrivial extension of N-BEATS with the doubly residual stacking principle (Oreshkin et al. [45]) into a representation learning framework. In particular, it revolves around marginal feature probability measures induced by the intricate composition of residual and feature extracting operators of N-BEATS in each stack and aligns them stack-wise via an approximation of an optimal transport distance referred to as the Sinkhorn divergence. The training loss consists of an empirical risk minimization from multiple source domains, i.e., forecasting loss, and an alignment loss calculated with the Sinkhorn divergence, which allows the model to learn invariant features stack-wise across multiple source data sequences while retaining N-BEATS’s interpretable design and forecasting power. Comprehensive experimental evaluations with ablation studies are provided and the corresponding results demonstrate the proposed model’s forecasting and generalization capabilities. | https://openreview.net/pdf/47a2e8bbe6cc10d3bff9d06eb5871437eede86e2.pdf |
Instructive Decoding: Instruction-Tuned Large Language Models are Self-Refiner from Noisy Instructions | https://openreview.net/forum?id=LebzzClHYw | https://openreview.net/forum?id=LebzzClHYw | Taehyeon Kim,Joonkee Kim,Gihun Lee,Se-Young Yun | ICLR 2024,Spotlight | While instruction-tuned language models have demonstrated impressive zero-shot generalization, these models often struggle to generate accurate responses when faced with instructions that fall outside their training set. This paper presents Instructive Decoding (ID), a simple yet effective approach that augments the efficacy of instruction-tuned models. Specifically, ID adjusts the logits for next-token prediction in a contrastive manner, utilizing predictions generated from a manipulated version of the original instruction, referred to as a noisy instruction. This noisy instruction aims to elicit responses that could diverge from the intended instruction yet remain plausible. We conduct experiments across a spectrum of such noisy instructions, ranging from those that insert semantic noise via random words to others like 'opposite' that elicit the deviated responses. Our approach achieves considerable performance gains across various instruction-tuned models and tasks without necessitating any additional parameter updates. Notably, utilizing 'opposite' as the noisy instruction in ID, which shows the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks. | https://openreview.net/pdf/41130f3ca565e158b2e1217fa3f5da2ba15efd6e.pdf |
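One decoding step of Instructive Decoding can be sketched as a contrastive logit adjustment, assuming a Hugging Face-style causal LM whose forward pass returns `logits`; the name and value of the smoothing coefficient `epsilon` are assumptions for illustration.

```python
import torch

@torch.no_grad()
def instructive_decoding_step(model, ids_orig, ids_noisy, epsilon=0.3):
    # ids_orig / ids_noisy: token ids of the prompt with the original vs. the
    # noisy (e.g., "opposite") instruction, sharing the same response prefix.
    logits_orig = model(ids_orig).logits[:, -1, :]
    logits_noisy = model(ids_noisy).logits[:, -1, :]
    adjusted = logits_orig - epsilon * logits_noisy  # contrastive adjustment
    return adjusted.argmax(dim=-1)  # greedy choice of the next token
```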
Consistent Multi-Class Classification from Multiple Unlabeled Datasets | https://openreview.net/forum?id=fW7DOHDQvF | https://openreview.net/forum?id=fW7DOHDQvF | Zixi Wei,Senlin Shu,Yuzhou Cao,Hongxin Wei,Bo An,Lei Feng | ICLR 2024,Spotlight | Weakly supervised learning aims to construct effective predictive models from imperfectly labeled data. The recent trend of weakly supervised learning has focused on how to learn an accurate classifier from completely unlabeled data, given little supervised information such as class priors. In this paper, we consider a newly proposed weakly supervised learning problem called multi-class classification from multiple unlabeled datasets, where only multiple sets of unlabeled data and their class priors (i.e., the proportions of each class) are provided for training the classifier. To solve this problem, we first propose a classifier-consistent method (CCM) based on a probability transition matrix. However, CCM cannot guarantee risk consistency and lacks of purified supervision information during training. Therefore, we further propose a risk-consistent method (RCM) that progressively purifies supervision information during training by importance weighting. We provide comprehensive theoretical analyses for our methods to demonstrate the statistical consistency. Experimental results on multiple benchmark datasets and various prior matrices demonstrate the superiority of our proposed methods. | https://openreview.net/pdf/c4cacd1a99a9f6cd491de8f23cdb492b4906cbce.pdf |
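The classifier-consistent method above can be sketched as a forward correction: push predicted class probabilities through a transition matrix to obtain a posterior over bags (i.e., which unlabeled set an instance came from) and train against the observed bag index. How T is constructed from the given class priors is simplified away here, so this is only a schematic.

```python
import torch
import torch.nn.functional as F

def ccm_loss(logits, bag_index, T):
    # logits:    (B, C) class logits from the classifier
    # bag_index: (B,) index of the unlabeled set each instance was drawn from
    # T:         (C, U) row-stochastic matrix with T[y, u] = P(bag u | class y),
    #            derived from the given class priors (details omitted here)
    p_class = F.softmax(logits, dim=-1)  # (B, C)
    p_bag = p_class @ T                  # (B, U) predicted bag posterior
    return F.nll_loss(torch.log(p_bag + 1e-12), bag_index)
```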
SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition | https://openreview.net/forum?id=7etoNfU9uF | https://openreview.net/forum?id=7etoNfU9uF | Hongwei Ren,Yue Zhou,Xiaopeng LIN,Yulong Huang,Haotian FU,Jie Song,Bojun Cheng | ICLR 2024,Spotlight | Event cameras are bio-inspired sensors that respond to local changes in light intensity and feature low latency, high energy efficiency, and high dynamic range. Meanwhile, Spiking Neural Networks (SNNs) have gained significant attention due to their remarkable efficiency and fault tolerance. By synergistically harnessing the energy efficiency inherent in event cameras and the spike-based processing capabilities of SNNs, their integration could enable ultra-low-power application scenarios, such as action recognition tasks. However, existing approaches often entail converting asynchronous events into conventional frames, leading to additional data mapping efforts and a loss of sparsity, contradicting the design concept of SNNs and event cameras. To address this challenge, we propose SpikePoint, a novel end-to-end point-based SNN architecture. SpikePoint excels at processing sparse event cloud data, effectively extracting both global and local features through a singular-stage structure. Leveraging the surrogate training method, SpikePoint achieves high accuracy with few parameters and maintains low power consumption, specifically employing the identity mapping feature extractor on diverse datasets. SpikePoint achieves state-of-the-art (SOTA) performance on four event-based action recognition datasets using only 16 timesteps, surpassing other SNN methods. Moreover, it also achieves SOTA performance across all methods on three datasets, utilizing approximately 0.3% of the parameters and 0.5% of the power consumption of artificial neural networks (ANNs). These results emphasize the significance of Point Cloud and pave the way for many ultra-low-power event-based data processing applications. | https://openreview.net/pdf/1d9c8889139a212409d5faf9dc557045e96dcc89.pdf |
Inverse Approximation Theory for Nonlinear Recurrent Neural Networks | https://openreview.net/forum?id=yC2waD70Vj | https://openreview.net/forum?id=yC2waD70Vj | Shida Wang,Zhong Li,Qianxiao Li | ICLR 2024,Spotlight | We prove an inverse approximation theorem for the approximation of nonlinear sequence-to-sequence relationships using recurrent neural networks (RNNs). This is a so-called Bernstein-type result in approximation theory, which deduces properties of a target function under the assumption that it can be effectively approximated by a hypothesis space. In particular, we show that nonlinear sequence relationships that can be stably approximated by nonlinear RNNs must have an exponential decaying memory structure - a notion that can be made precise. This extends the previously identified curse of memory in linear RNNs into the general nonlinear setting, and quantifies the essential limitations of the RNN architecture for learning sequential relationships with long-term memory. Based on the analysis, we propose a principled reparameterization method to overcome the limitations. Our theoretical results are confirmed by numerical experiments. | https://openreview.net/pdf/a89df38f3e96bab890df4328af64ca3eb34b8df0.pdf |
Kernel Metric Learning for In-Sample Off-Policy Evaluation of Deterministic RL Policies | https://openreview.net/forum?id=plebgsdiiV | https://openreview.net/forum?id=plebgsdiiV | Haanvid Lee,Tri Wahyu Guntara,Jongmin Lee,Yung-Kyun Noh,Kee-Eung Kim | ICLR 2024,Spotlight | We consider off-policy evaluation (OPE) of deterministic target policies for reinforcement learning (RL) in environments with continuous action spaces. While it is common to use importance sampling for OPE, it suffers from high variance when the behavior policy deviates significantly from the target policy. In order to address this issue, some recent works on OPE proposed in-sample learning with importance resampling. Yet, these approaches are not applicable to deterministic target policies for continuous action spaces. To address this limitation, we propose to relax the deterministic target policy using a kernel and learn the kernel metrics that minimize the overall mean squared error of the estimated temporal difference update vector of an action value function, where the action value function is used for policy evaluation. We derive the bias and variance of the estimation error due to this relaxation and provide analytic solutions for the optimal kernel metric. In empirical studies using various test domains, we show that the OPE with in-sample learning using the kernel with optimized metric achieves significantly improved accuracy than other baselines. | https://openreview.net/pdf/24aac816499705b0d6a509f4908ba2e27ed10775.pdf |
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition | https://openreview.net/forum?id=ceATjGPTUD | https://openreview.net/forum?id=ceATjGPTUD | Yuchen Hu,CHEN CHEN,Chao-Han Huck Yang,Ruizhe Li,Chao Zhang,Pin-Yu Chen,EngSiong Chng | ICLR 2024,Spotlight | Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR), which leverages the rich linguistic knowledge and powerful reasoning ability of LLMs to improve recognition results. The latest work proposes a GER benchmark with the "HyPoradise" dataset to learn the mapping from ASR N-best hypotheses to ground-truth transcription by efficient LLM finetuning, which shows great effectiveness but lacks specificity on noise-robust ASR. In this work, we extend the benchmark to noisy conditions and investigate if we can teach LLMs to perform denoising for GER just like robust ASR does, where one solution is introducing noise information as a conditioner into the LLM. However, directly incorporating noise embeddings from the audio encoder could harm the LLM tuning due to the cross-modality gap. To this end, we propose to extract a language-space noise embedding from the N-best list to represent the noise conditions of source speech, which can promote the denoising process in GER. Furthermore, in order to enhance its representation ability of audio noise, we design a knowledge distillation (KD) approach via mutual information estimation to distill the real noise information in audio embeddings to our language embedding. Experiments on various latest LLMs demonstrate our approach achieves a new breakthrough with up to 53.9% correction improvement in terms of word error rate while with limited training data. Analysis shows that our language-space noise embedding can well represent the noise conditions of source speech, under which off-the-shelf LLMs show strong ability of language-space denoising. | https://openreview.net/pdf/2403a00daa1fa0949fa21d4c5bb972bd398f4dea.pdf |
H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields | https://openreview.net/forum?id=P1ANzoGg3W | https://openreview.net/forum?id=P1ANzoGg3W | Minyoung Park,Mirae Do,Yeon Jae Shin,Jaeseok Yoo,Jongkwang Hong,Joongrock Kim,Chul Lee | ICLR 2024,Spotlight | Advanced techniques using Neural Radiance Fields (NeRF), Signed Distance Fields (SDF), and Occupancy Fields have recently emerged as solutions for 3D indoor scene reconstruction. We introduce a novel two-phase learning approach, H2O-SDF, that discriminates between object and non-object regions within indoor environments. This method achieves a nuanced balance, carefully preserving the geometric integrity of room layouts while also capturing intricate surface details of specific objects. A cornerstone of our two-phase learning framework is the introduction of the Object Surface Field (OSF), a novel concept designed to mitigate the persistent vanishing gradient problem that has previously hindered the capture of high-frequency details in other methods. Our proposed approach is validated through several experiments that include ablation studies. | https://openreview.net/pdf/efccbd7a6e50a44711e740d5009616c9e19fb6e9.pdf |
Sample-Efficient Quality-Diversity by Cooperative Coevolution | https://openreview.net/forum?id=JDud6zbpFv | https://openreview.net/forum?id=JDud6zbpFv | Ke Xue,Ren-Jian Wang,Pengyi Li,Dong Li,Jianye HAO,Chao Qian | ICLR 2024,Spotlight | Quality-Diversity (QD) algorithms, as a subset of evolutionary algorithms, have emerged as a powerful optimization paradigm with the aim of generating a set of high-quality and diverse solutions. Although QD has demonstrated competitive performance in reinforcement learning, its low sample efficiency remains a significant impediment for real-world applications. Recent research has primarily focused on augmenting sample efficiency by refining selection and variation operators of QD. However, one of the less considered yet crucial factors is the inherently large-scale issue of the QD optimization problem. In this paper, we propose a novel Cooperative Coevolution QD (CCQD) framework, which decomposes a policy network naturally into two types of layers, corresponding to representation and decision respectively, and thus simplifies the problem significantly. The resulting two (representation and decision) subpopulations are coevolved cooperatively. CCQD can be implemented with different selection and variation operators. Experiments on several popular tasks within the QDAX suite demonstrate that an instantiation of CCQD achieves approximately a 200% improvement in sample efficiency. | https://openreview.net/pdf/fcc91cb60f0dd347bc02c8beadb05d7d55b9f04f.pdf |
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore | https://openreview.net/forum?id=ruk0nyQPec | https://openreview.net/forum?id=ruk0nyQPec | Sewon Min,Suchin Gururangan,Eric Wallace,Weijia Shi,Hannaneh Hajishirzi,Noah A. Smith,Luke Zettlemoyer | ICLR 2024,Spotlight | The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We present SILO, a new language model that manages this risk-performance tradeoff during inference. SILO is built by (1) training a parametric LM on the Open License Corpus (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference. The datastore allows use of high-risk data without training on it, supports sentence-level data attribution, and enables data producers to opt out from the model by removing content from the store. These capabilities can foster compliance with data-use regulations such as the fair use doctrine in the United States and the GDPR in the European Union. Our experiments show that the parametric LM struggles on its own with domains not covered by OLC. However, access to the datastore greatly improves out of domain performance, closing 90% of the performance gap with an LM trained on the Pile, a more diverse corpus with mostly high-risk text. We also analyze which nonparametric approach works best, where the remaining errors lie, and how performance scales with datastore size. Our results suggest that it is possible to build high quality language models while mitigating legal risk. | https://openreview.net/pdf/34c8fb489d21452c21be4b0037700e7157e99c21.pdf |
Dynamic Discounted Counterfactual Regret Minimization | https://openreview.net/forum?id=6PbvbLyqT6 | https://openreview.net/forum?id=6PbvbLyqT6 | Hang Xu,Kai Li,Haobo Fu,QIANG FU,Junliang Xing,Jian Cheng | ICLR 2024,Spotlight | Counterfactual regret minimization (CFR) is a family of iterative algorithms showing promising results in solving imperfect-information games. Recent novel CFR variants (e.g., CFR+, DCFR) have significantly improved the convergence rate of the vanilla CFR. The key to these CFR variants’ performance is weighting each iteration non-uniformly, i.e., discounting earlier iterations. However, these algorithms use a fixed, manually-specified scheme to weight each iteration, which enormously limits their potential. In this work, we propose Dynamic Discounted CFR (DDCFR), the first equilibrium-finding framework that discounts prior iterations using a dynamic, automatically-learned scheme. We formalize CFR’s iteration process as a carefully designed Markov decision process and transform the discounting scheme learning problem into a policy optimization problem within it. The learned discounting scheme dynamically weights each iteration on the fly using information available at runtime. Experimental results across multiple games demonstrate that DDCFR’s dynamic discounting scheme has a strong generalization ability and leads to faster convergence with improved performance. The code is available at https://github.com/rpSebastian/DDCFR. | https://openreview.net/pdf/336422d3878e37b0144f3b3da58f90bde675aa6a.pdf |
GIO: Gradient Information Optimization for Training Dataset Selection | https://openreview.net/forum?id=3NnfJnbJT2 | https://openreview.net/forum?id=3NnfJnbJT2 | Dante Everaert,Christopher Potts | ICLR 2024,Spotlight | It is often advantageous to train models on a subset of the available train examples, because the examples are of variable quality or because one would like to train with fewer examples, without sacrificing performance. We present Gradient Information Optimization (GIO), a scalable, task-agnostic approach to this data selection problem that requires only a small set of (unlabeled) examples representing a target distribution. GIO begins from a natural, information-theoretic objective that is intractable in practice. Our contribution is in showing that it can be made highly scalable through a simple relaxation of the objective and a highly efficient implementation. In experiments with machine translation, spelling correction, and image recognition, we show that GIO delivers outstanding results with very small train sets. These findings are robust to different representation models and hyperparameters for GIO itself. GIO is task- and domain-agnostic and can be applied out-of-the-box to new datasets and domains. We open source a pip-installable implementation of the algorithm as "pip install grad-info-opt". | https://openreview.net/pdf/5ca46b1a1d6645c98d731e33e243896ae32be3d3.pdf |
SNIP: Bridging Mathematical Symbolic and Numeric Realms with Unified Pre-training | https://openreview.net/forum?id=KZSEgJGPxu | https://openreview.net/forum?id=KZSEgJGPxu | Kazem Meidani,Parshin Shojaee,Chandan K. Reddy,Amir Barati Farimani | ICLR 2024,Spotlight | In an era where symbolic mathematical equations are indispensable for modeling complex natural phenomena, scientific inquiry often involves collecting observations and translating them into mathematical expressions. Recently, deep learning has emerged as a powerful tool for extracting insights from data. However, existing models typically specialize in either numeric or symbolic domains, and are usually trained in a supervised manner tailored to specific tasks. This approach neglects the substantial benefits that could arise from a task-agnostic multi-modal understanding between symbolic equations and their numeric counterparts. To bridge the gap, we introduce SNIP, a Symbolic-Numeric Integrated Pre-training model, which employs contrastive learning between symbolic and numeric domains, enhancing their mutual similarities in the embeddings. By performing latent space analysis, we observe that SNIP provides cross-domain insights into the representations, revealing that symbolic supervision enhances the embeddings of numeric data and vice versa. We evaluate SNIP across diverse tasks, including symbolic-to-numeric mathematical property prediction and numeric-to-symbolic equation discovery, commonly known as symbolic regression. Results show that SNIP effectively transfers to various tasks, consistently outperforming fully supervised baselines and competing strongly with established task-specific methods, especially in low-data regimes where available data is limited. | https://openreview.net/pdf/1ba8f83f76d43ebf7625a6e87d0060d6361310f6.pdf |
Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model | https://openreview.net/forum?id=m50eKHCttz | https://openreview.net/forum?id=m50eKHCttz | Karsten Roth,Lukas Thede,A. Sophia Koepke,Oriol Vinyals,Olivier J Henaff,Zeynep Akata | ICLR 2024,Spotlight | Training deep networks requires various design decisions regarding for instance their architecture, data augmentation, or optimization. In this work, we find these training variations to result in networks learning unique feature sets from the data. Using public model libraries comprising thousands of models trained on canonical datasets like ImageNet, we observe that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other – independent of overall performance. Given any arbitrary pairing of pretrained models and no external rankings (such as separate test sets, e.g. due to data privacy), we investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation – a task made particularly difficult as additional knowledge can be contained in stronger, equiperformant or weaker models. Yet facilitating robust transfer in scenarios agnostic to pretrained model pairings would unlock auxiliary gains and knowledge fusion from any model repository without restrictions on model and problem specifics - including from weaker, lower-performance models. This work therefore provides an initial, in-depth exploration on the viability of such general-purpose knowledge transfer. Across large-scale experiments, we first reveal the shortcomings of standard knowledge distillation techniques, and then propose a much more general extension through data partitioning for successful transfer between nearly all pretrained models, which we show can also be done unsupervised. Finally, we assess both the scalability and impact of fundamental model properties on successful model-agnostic knowledge transfer. | https://openreview.net/pdf/122f5389127b21435f80c82696204c736a116976.pdf |
Robustifying State-space Models for Long Sequences via Approximate Diagonalization | https://openreview.net/forum?id=DjeQ39QoLQ | https://openreview.net/forum?id=DjeQ39QoLQ | Annan Yu,Arnur Nigmetov,Dmitriy Morozov,Michael W. Mahoney,N. Benjamin Erichson | ICLR 2024,Spotlight | State-space models (SSMs) have recently emerged as a framework for learning long-range sequence tasks. An example is the structured state-space sequence (S4) layer, which uses the diagonal-plus-low-rank structure of the HiPPO initialization framework. However, the complicated structure of the S4 layer poses challenges, and in an effort to address them, models such as S4D and S5 have considered a purely diagonal structure. This choice simplifies the implementation, improves computational efficiency, and allows channel communication. However, diagonalizing the HiPPO framework is itself an ill-posed problem. In this paper, we propose a general solution for this and related ill-posed diagonalization problems in machine learning. We introduce a generic, backward-stable "perturb-then-diagonalize" (PTD) methodology, which is based on the pseudospectral theory of non-normal operators, and which may be interpreted as the approximate diagonalization of the non-normal matrices defining SSMs. Based on this, we introduce the S4-PTD and S5-PTD models. Through theoretical analysis of the transfer functions of different initialization schemes, we demonstrate that the S4-PTD/S5-PTD initialization strongly converges to the HiPPO framework, while the S4D/S5 initialization only achieves weak convergence. As a result, our new models show resilience to Fourier-mode noise-perturbed inputs, a crucial property not achieved by the S4D/S5 models. In addition to improved robustness, our S5-PTD model averages 87.6% accuracy on the Long-Range Arena benchmark, demonstrating that the PTD methodology helps to improve the accuracy of deep learning models. | https://openreview.net/pdf/204207dab9f475c4c40ddb4a399f19c5fac72105.pdf |
Provable Offline Preference-Based Reinforcement Learning | https://openreview.net/forum?id=tVMPfEGT2w | https://openreview.net/forum?id=tVMPfEGT2w | Wenhao Zhan,Masatoshi Uehara,Nathan Kallus,Jason D. Lee,Wen Sun | ICLR 2024,Spotlight | In this paper, we investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback where feedback is available in the form of preference between trajectory pairs rather than explicit rewards. Our proposed algorithm consists of two main steps: (1) estimate the implicit reward using Maximum Likelihood Estimation (MLE) with general function approximation from offline data and (2) solve a distributionally robust planning problem over a confidence set around the MLE. We consider the general reward setting where the reward can be defined over the whole trajectory and provide a novel guarantee that allows us to learn any target policy with a polynomial number of samples, as long as the target policy is covered by the offline data. This guarantee is the first of its kind with general function approximation. To measure the coverage of the target policy, we introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability coefficient. We also establish lower bounds that highlight the necessity of such concentrability and the difference from standard RL, where state-action-wise rewards are directly observed. We further extend and analyze our algorithm when the feedback is given over action pairs. | https://openreview.net/pdf/ef2a33a9b6e9fd7ea7de7dbba6688f49e0e58206.pdf |
Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory | https://openreview.net/forum?id=gmg7t8b4s0 | https://openreview.net/forum?id=gmg7t8b4s0 | Niloofar Mireshghallah,Hyunwoo Kim,Xuhui Zhou,Yulia Tsvetkov,Maarten Sap,Reza Shokri,Yejin Choi | ICLR 2024,Spotlight | Existing efforts on quantifying privacy implications for large language models (LLMs) solely focus on measuring leakage of training data. In this work, we shed light on the often-overlooked interactive settings where an LLM receives information from multiple sources and generates an output to be shared with other entities, creating the potential of exposing sensitive input data in inappropriate contexts. In these scenarios, humans naturally uphold privacy by choosing whether or not to disclose information depending on the context. We ask the question “Can LLMs demonstrate an equivalent discernment and reasoning capability when considering privacy in context?” We propose CONFAIDE, a benchmark grounded in the theory of contextual integrity and designed to identify critical weaknesses in the privacy reasoning capabilities of instruction-tuned LLMs. CONFAIDE consists of four tiers, gradually increasing in complexity, with the final tier evaluating contextual privacy reasoning and theory of mind capabilities. Our experiments show that even commercial models such as GPT-4 and ChatGPT reveal private information in contexts that humans would not, 39% and 57% of the time, respectively, highlighting the urgent need for a new direction of privacy-preserving approaches as we demonstrate a larger underlying problem stemming from the models’ lack of reasoning capabilities. | https://openreview.net/pdf/915e98b16264c3e1d6d3db0a8d69afc76b90ae14.pdf |
Provable Reward-Agnostic Preference-Based Reinforcement Learning | https://openreview.net/forum?id=yTBXeXdbMf | https://openreview.net/forum?id=yTBXeXdbMf | Wenhao Zhan,Masatoshi Uehara,Wen Sun,Jason D. Lee | ICLR 2024,Spotlight | Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories, rather than explicit reward signals. While PbRL has demonstrated practical success in fine-tuning language models, existing theoretical work focuses on regret minimization and fails to capture most of the practical frameworks. In this study, we fill in such a gap between theoretical PbRL and practical algorithms by proposing a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired before collecting any human feedback. Theoretical analysis demonstrates that our algorithm requires less human feedback for learning the optimal policy under preference-based models with linear parameterization and unknown transitions, compared to the existing theoretical literature. Specifically, our framework can incorporate linear and low-rank MDPs with efficient sample complexity. Additionally, we investigate reward-agnostic RL with action-based comparison feedback and introduce an efficient querying algorithm tailored to this scenario. | https://openreview.net/pdf/cc1b1fb83857ac10b10a13b0a2c7d061594bfdd8.pdf |
Unleashing the Potential of Fractional Calculus in Graph Neural Networks with FROND | https://openreview.net/forum?id=wcka3bd7P4 | https://openreview.net/forum?id=wcka3bd7P4 | Qiyu Kang,Kai Zhao,Qinxu Ding,Feng Ji,Xuhao Li,Wenfei Liang,Yang Song,Wee Peng Tay | ICLR 2024,Spotlight | We introduce the FRactional-Order graph Neural Dynamical network (FROND), a new continuous graph neural network (GNN) framework. Unlike traditional continuous GNNs that rely on integer-order differential equations, FROND employs the Caputo fractional derivative to leverage the non-local properties of fractional calculus. This approach enables the capture of long-term dependencies in feature updates, moving beyond the Markovian update mechanisms in conventional integer-order models and offering enhanced capabilities in graph representation learning. We offer an interpretation of the node feature updating process in FROND from a non-Markovian random walk perspective when the feature updating is particularly governed by a diffusion process. We demonstrate analytically that oversmoothing can be mitigated in this setting. Experimentally, we validate the FROND framework by comparing the fractional adaptations of various established integer-order continuous GNNs, demonstrating their consistently improved performance and underscoring the framework's potential as an effective extension to enhance traditional continuous GNNs. The code is available at \url{https://github.com/zknus/ICLR2024-FROND}. | https://openreview.net/pdf/62b51824b9a914534dd00158380ffd4aa835c48a.pdf |
MetaPhysiCa: Improving OOD Robustness in Physics-informed Machine Learning | https://openreview.net/forum?id=KrWuDiW4Qm | https://openreview.net/forum?id=KrWuDiW4Qm | S Chandra Mouli,Muhammad Alam,Bruno Ribeiro | ICLR 2024,Spotlight | A fundamental challenge in physics-informed machine learning (PIML) is the design of robust PIML methods for out-of-distribution (OOD) forecasting tasks. These OOD tasks require learning-to-learn from observations of the same (ODE) dynamical system with different unknown ODE parameters, and demand accurate forecasts even under out-of-support initial conditions and out-of-support ODE parameters. In this work we propose to improve the OOD robustness of PIML via a meta-learning procedure for causal structure discovery. Using three different OOD tasks, we empirically observe that the proposed approach significantly outperforms existing state-of-the-art PIML and deep learning methods (with $2\times$ to $28\times$ lower OOD errors). | https://openreview.net/pdf/e8efd660b312112ef0fd22c7e460d8a72eb51253.pdf |
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation | https://openreview.net/forum?id=mutJBk3ILg | https://openreview.net/forum?id=mutJBk3ILg | Kimia Hamidieh,Haoran Zhang,Swami Sankaranarayanan,Marzyeh Ghassemi | ICLR 2024,Spotlight | Supervised learning methods have been found to exhibit inductive biases favoring simpler features. When such features are spuriously correlated with the label, this can result in suboptimal performance on minority subgroups. Despite the growing popularity of methods which learn from unlabeled data, the extent to which these representations rely on spurious features for prediction is unclear. In this work, we explore the impact of spurious features on Self-Supervised Learning (SSL) for visual representation learning. We first empirically show that commonly used augmentations in SSL can cause undesired invariances in the image space, and illustrate this with a simple example. We further show that classical approaches in combating spurious correlations, such as dataset re-sampling during SSL, do not consistently lead to invariant representations. Motivated by these findings, we propose LateTVG to remove spurious information from these representations during pre-training, by regularizing later layers of the encoder via pruning. We find that our method produces representations which outperform the baselines on several benchmarks, without the need for group or label information during SSL. | https://openreview.net/pdf/b3e9f812dd9a2de2308f2211b33b7d419ab89fc1.pdf |
Project and Probe: Sample-Efficient Adaptation by Interpolating Orthogonal Features | https://openreview.net/forum?id=f6CBQYxXvr | https://openreview.net/forum?id=f6CBQYxXvr | Annie S Chen,Yoonho Lee,Amrith Setlur,Sergey Levine,Chelsea Finn | ICLR 2024,Spotlight | Transfer learning with a small amount of target data is an effective and common approach to adapting a pre-trained model to distribution shifts. In some situations, target data labels may be expensive to obtain, so we may only have access to a limited number of target data points. To make the most of a very small target dataset, we propose a lightweight, sample-efficient approach that learns a diverse set of features and adapts to a target distribution by interpolating these features. Our approach, Project and Probe (Pro$^2$), first learns a linear projection that maps a pre-trained embedding onto orthogonal directions while being predictive of labels in the source dataset. The goal of this step is to learn a variety of predictive features, so that at least some of them remain useful after distribution shift. Pro$^2$ then learns a linear classifier on top of these projected features using a small target dataset. Theoretically, we find that Pro$^2$ results in more sample-efficient generalization by inducing a favorable bias-variance tradeoff. Our experiments on four datasets, with multiple distribution shift settings for each, show that Pro$^2$ improves performance by 5-15% when given limited target data compared to prior methods such as standard linear probing. | https://openreview.net/pdf/c7ebd7fa822b912a9fa27ca0702572a707ec85e6.pdf |
Implicit bias of SGD in $L_2$-regularized linear DNNs: One-way jumps from high to low rank | https://openreview.net/forum?id=P1aobHnjjj | https://openreview.net/forum?id=P1aobHnjjj | Zihan Wang,Arthur Jacot | ICLR 2024,Spotlight | The $L_{2}$-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layer has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-underestimating minima can be avoided since they do not fit the data, GD might get stuck at rank-overestimating minima. We show that with SGD, there is always a probability to jump from a higher rank minimum to a lower rank one, but the probability of jumping back is zero. More precisely, we define a sequence of sets $B_{1}\subset B_{2}\subset\cdots\subset B_{R}$ so that $B_{r}$ contains all minima of rank $r$ or less (and not more) that are absorbing for small enough ridge parameters $\lambda$ and learning rates $\eta$: SGD has probability 0 of leaving $B_{r}$, and from any starting point there is a non-zero probability for SGD to enter $B_{r}$. | https://openreview.net/pdf/c809967fdabfa761319f0239253e78d629fa1684.pdf |
Whole-Song Hierarchical Generation of Symbolic Music Using Cascaded Diffusion Models | https://openreview.net/forum?id=sn7CYWyavh | https://openreview.net/forum?id=sn7CYWyavh | Ziyu Wang,Lejun Min,Gus Xia | ICLR 2024,Spotlight | Recent deep music generation studies have put much emphasis on long-term generation with structures. However, we are yet to see high-quality, well-structured **whole-song** generation. In this paper, we make the first attempt to model a full music piece under the realization of *compositional hierarchy*. With a focus on symbolic representations of pop songs, we define a hierarchical language, in which each level of hierarchy focuses on the semantics and context dependency at a certain music scope. The high-level languages reveal whole-song form, phrase, and cadence, whereas the low-level languages focus on notes, chords, and their local patterns. A cascaded diffusion model is trained to model the hierarchical language, where each level is conditioned on its upper levels. Experiments and analysis show that our model is capable of generating full-piece music with recognizable global verse-chorus structure and cadences, and the music quality is higher than the baselines. Additionally, we show that the proposed model is *controllable* in a flexible way. By sampling from the interpretable hierarchical languages or adjusting pre-trained external representations, users can control the music flow via various features such as phrase harmonic structures, rhythmic patterns, and accompaniment texture. | https://openreview.net/pdf/36e2505cb773c92384616d6a2cc198d112c0cfab.pdf |
Evaluating the Zero-shot Robustness of Instruction-tuned Language Models | https://openreview.net/forum?id=g9diuvxN6D | https://openreview.net/forum?id=g9diuvxN6D | Jiuding Sun,Chantal Shaib,Byron C Wallace | ICLR 2024,Spotlight | Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of Large Language Models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs, sometimes inducing performance competitive with much larger model variants. In this paper, we ask two questions: (1) How sensitive are instruction-tuned models to the particular phrasings of instructions, and (2) How can we make them more robust to such natural language variation? To answer the former, we collect a set of 319 instructions manually written by NLP practitioners for over 80 unique tasks included in widely used benchmarks, and we evaluate the variance and average performance of these instructions as compared to instruction phrasings observed during instruction fine-tuning. We find that using novel (unobserved) but appropriate instruction phrasings consistently degrades model performance, sometimes substantially so. Further, such natural instructions yield a wide variance in downstream performance, despite their semantic equivalence. Put another way, instruction-tuned models are not especially robust to instruction re-phrasings. We propose a simple method to mitigate this issue by introducing "soft prompt" embedding parameters and optimizing these to maximize the similarity between representations of semantically equivalent instructions. We show that this method consistently improves the robustness of instruction-tuned models. | https://openreview.net/pdf/96fd9e94bdfa38510258729a25b2ba3b5aa4064d.pdf |
Critical Learning Periods Emerge Even in Deep Linear Networks | https://openreview.net/forum?id=Aq35gl2c1k | https://openreview.net/forum?id=Aq35gl2c1k | Michael Kleinman,Alessandro Achille,Stefano Soatto | ICLR 2024,Spotlight | Critical learning periods are periods early in development where temporary sensory deficits can have a permanent effect on behavior and learned representations. Despite the radical differences between biological and artificial networks, critical learning periods have been empirically observed in both systems. This suggests that critical periods may be fundamental to learning and not an accident of biology. Yet, why exactly critical periods emerge in deep networks is still an open question, and in particular it is unclear whether the critical periods observed in both systems depend on particular architectural or optimization details. To isolate the key underlying factors, we focus on deep linear network models, and show that, surprisingly, such networks also display much of the behavior seen in biology and artificial networks, while being amenable to analytical treatment. We show that critical periods depend on the depth of the model and the structure of the data distribution. We also show analytically and in simulations that the learning of features is tied to competition between sources. Finally, we extend our analysis to multi-task learning to show that pre-training on certain tasks can damage the transfer performance on new tasks, and show how this depends on the relationship between tasks and the duration of the pre-training stage. To the best of our knowledge, our work provides the first analytically tractable model that sheds light on why critical learning periods emerge in biological and artificial networks. | https://openreview.net/pdf/62e86f3312ebf0b894a2af8cffc4f37094ff6695.pdf |
MOTOR: A Time-to-Event Foundation Model For Structured Medical Records | https://openreview.net/forum?id=NialiwI2V6 | https://openreview.net/forum?id=NialiwI2V6 | Ethan Steinberg,Jason Alan Fries,Yizhe Xu,Nigam Shah | ICLR 2024,Spotlight | We present a self-supervised, time-to-event (TTE) foundation model called MOTOR (Many Outcome Time Oriented Representations) which is pretrained on timestamped sequences of events in electronic health records (EHR) and health insurance claims. TTE models are used for estimating the probability distribution of the time until a specific event occurs, which is an important task in medical settings. TTE models provide many advantages over classification using fixed time horizons, including naturally handling censored observations, but are challenging to train with limited labeled data. MOTOR addresses this challenge by pretraining on up to 55M patient records (9B clinical events). We evaluate MOTOR's transfer learning performance on 19 tasks, across 3 patient databases (a private EHR system, MIMIC-IV, and Merative claims data). Task-specific models adapted from MOTOR improve time-dependent C statistics by 4.6\% over state-of-the-art, improve label efficiency by up to 95\%, and are more robust to temporal distributional shifts. We further evaluate cross-site portability by adapting our MOTOR foundation model for six prediction tasks on the MIMIC-IV dataset, where it outperforms all baselines. MOTOR is the first foundation model for medical TTE predictions and we release a 143M parameter pretrained model for research use at https://huggingface.co/StanfordShahLab/motor-t-base. | https://openreview.net/pdf/4183f6aeee58dddcc690f9265adb21cd0dac6757.pdf |
GenSim: Generating Robotic Simulation Tasks via Large Language Models | https://openreview.net/forum?id=OI3RoHoWAN | https://openreview.net/forum?id=OI3RoHoWAN | Lirui Wang,Yiyang Ling,Zhecheng Yuan,Mohit Shridhar,Chen Bao,Yuzhe Qin,Bailin Wang,Huazhe Xu,Xiaolong Wang | ICLR 2024,Spotlight | Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data. However, existing methods for data generation have generally focused on scene-level diversity (e.g., object instances and poses) rather than task-level diversity, due to the human effort required to come up with and verify novel tasks. This has made it challenging for policies trained on simulation data to demonstrate significant task-level generalization. In this paper, we propose to automatically generate rich simulation environments and expert demonstrations by exploiting the grounding and coding abilities of large language models (LLMs). Our approach, dubbed GenSim, has two modes: goal-directed generation, wherein a target task is given to the LLM and the LLM proposes a task curriculum to solve the target task, and exploratory generation, wherein the LLM bootstraps from previous tasks and iteratively proposes novel tasks that would be helpful in solving more complex tasks. We use GPT4 to expand the existing benchmark by ten times to over 100 tasks, on which we conduct supervised finetuning and evaluate several LLMs including finetuned GPTs and Code Llama on code generation for robotic simulation tasks. Furthermore, we observe that LLM-generated simulation programs can enhance task-level generalization significantly when used for multitask policy training. We further find that with minimal sim-to-real adaptation, the multitask policies pretrained on GPT4-generated simulation tasks exhibit stronger transfer to unseen long-horizon tasks in the real world and outperform baselines by 25%. See our project website (https://gen-sim.github.io) and demo (https://huggingface.co/spaces/Gen-Sim/Gen-Sim) for visualizations and open-source models and datasets. | https://openreview.net/pdf/d84b32393144549665a7888268a368b1eb84b7c3.pdf |
Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression | https://openreview.net/forum?id=Ax2yRhCQr1 | https://openreview.net/forum?id=Ax2yRhCQr1 | Runtian Zhai,Bingbin Liu,Andrej Risteski,J Zico Kolter,Pradeep Kumar Ravikumar | ICLR 2024,Spotlight | Data augmentation is critical to the empirical success of modern self-supervised representation learning, such as contrastive learning and masked language modeling. However, a theoretical understanding of the exact role of the augmentation remains limited. Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator, suggesting that learning a linear probe atop such representation can be connected to RKHS regression. Building on this insight, this work delves into a statistical analysis of augmentation-based pretraining. Starting from the isometry property, a geometric characterization of the target function given by the augmentation, we disentangle the effects of the model and the augmentation, and prove two generalization bounds that are free of model complexity. Our first bound works for an arbitrary encoder, and it is the sum of an estimation error bound incurred by fitting a linear probe, and an approximation error bound by RKHS approximation. Our second bound specifically addresses the case where the encoder extracts the top-$d$ eigenspace of a finite-sample-based approximation of the underlying RKHS. A key ingredient in our analysis is the *augmentation complexity*, which we use to quantitatively compare different augmentations and analyze their impact on downstream performance. | https://openreview.net/pdf/2e845de474870e7f97f44a9beeff24e04dd224b6.pdf |
Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs | https://openreview.net/forum?id=MO5PiKHELW | https://openreview.net/forum?id=MO5PiKHELW | Angelica Chen,Ravid Shwartz-Ziv,Kyunghyun Cho,Matthew L Leavitt,Naomi Saphra | ICLR 2024,Spotlight | Most interpretability research in NLP focuses on understanding the behavior and features of a fully trained model. However, certain insights into model behavior may only be accessible by observing the trajectory of the training process. We present a case study of syntax acquisition in masked language models (MLMs) that demonstrates how analyzing the evolution of interpretable artifacts throughout training deepens our understanding of emergent behavior. In particular, we study Syntactic Attention Structure (SAS), a naturally emerging property of MLMs wherein specific Transformer heads tend to focus on specific syntactic relations. We identify a brief window in pretraining when models abruptly acquire SAS, concurrent with a steep drop in loss. This breakthrough precipitates the subsequent acquisition of linguistic capabilities. We then examine the causal role of SAS by manipulating SAS during training, and demonstrate that SAS is necessary for the development of grammatical capabilities. We further find that SAS competes with other beneficial traits during training, and that briefly suppressing SAS improves model quality. These findings offer an interpretation of a real-world example of both simplicity bias and breakthrough training dynamics. | https://openreview.net/pdf/8d2d2b9084da09d4b41f5ad2da660350019c5412.pdf |
SE(3)-Stochastic Flow Matching for Protein Backbone Generation | https://openreview.net/forum?id=kJFIH23hXb | https://openreview.net/forum?id=kJFIH23hXb | Joey Bose,Tara Akhound-Sadegh,Guillaume Huguet,Kilian FATRAS,Jarrid Rector-Brooks,Cheng-Hao Liu,Andrei Cristian Nica,Maksym Korablyov,Michael M. Bronstein,Alexander Tong | ICLR 2024,Spotlight | The computational design of novel protein structures has the potential to impact numerous scientific disciplines greatly. Toward this goal, we introduce FoldFlow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\mathrm{D}$ rigid motions---i.e. the group $\mathrm{SE(3)}$---enabling accurate modeling of protein backbones. We first introduce $\text{FoldFlow-Base}$, a simulation-free approach to learning deterministic continuous-time dynamics and matching invariant target distributions on $\mathrm{SE(3)}$. We next accelerate training by incorporating Riemannian optimal transport to create $\text{FoldFlow-OT}$, leading to the construction of both more simple and stable flows. Finally, we design FoldFlow-SFM, coupling both Riemannian OT and simulation-free training to learn stochastic continuous-time dynamics over $\mathrm{SE(3)}$. Our family of $\text{FoldFlow}$ generative models offers several key advantages over previous approaches to the generative modeling of proteins: they are more stable and faster to train than diffusion-based approaches, and our models enjoy the ability to map any invariant source distribution to any invariant target distribution over $\mathrm{SE(3)}$. Empirically, we validate $\text{FoldFlow}$ on protein backbone generation of up to $300$ amino acids, leading to high-quality designable, diverse, and novel samples. | https://openreview.net/pdf/2ecf8626dae97e88a2770c4d2e119db485d03748.pdf |
DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer | https://openreview.net/forum?id=Ifz3IgsEPX | https://openreview.net/forum?id=Ifz3IgsEPX | Junyuan Hong,Jiachen T. Wang,Chenhui Zhang,Zhangheng LI,Bo Li,Zhangyang Wang | ICLR 2024,Spotlight | Large Language Models (LLMs) have emerged as dominant tools for various tasks, particularly when tailored for a specific target by prompt tuning. Nevertheless, concerns surrounding data privacy present obstacles due to the tuned prompts' dependency on sensitive private information. A practical solution is to host a local LLM and optimize a soft prompt privately using data. Yet, hosting a local model becomes problematic when model ownership is protected. Alternative methods, like sending data to the model's provider for training, intensify these privacy issues facing an untrusted provider. In this paper, we present a novel solution called Differentially-Private Offsite Prompt Tuning (DP-OPT) to address this challenge. Our approach involves tuning a discrete prompt on the client side and then applying it to the desired cloud models. We demonstrate that prompts suggested by LLMs themselves can be transferred without compromising performance significantly. To ensure that the prompts do not leak private information, we introduce the first private prompt generation mechanism, by a differentially-private (DP) ensemble of in-context learning with private demonstrations. With DP-OPT, generating privacy-preserving prompts by Vicuna-7b can yield competitive performance compared to non-private in-context learning on GPT3.5 or local private prompt tuning. Code is available at https://github.com/VITA-Group/DP-OPT. | https://openreview.net/pdf/6dfeb74c7c420594a6132f6bfe094a53dbf73317.pdf |
Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks | https://openreview.net/forum?id=PudduufFLa | https://openreview.net/forum?id=PudduufFLa | Marc Rußwurm,Konstantin Klemmer,Esther Rolf,Robin Zbinden,Devis Tuia | ICLR 2024,Spotlight | Learning representations of geographical space is vital for any machine learning model that integrates geolocated data, spanning application domains such as remote sensing, ecology, or epidemiology. Recent work embeds coordinates using sine and cosine projections based on Double Fourier Sphere (DFS) features. These embeddings assume a rectangular data domain even on global data, which can lead to artifacts, especially at the poles. At the same time, little attention has been paid to the exact design of the neural network architectures with which these functional embeddings are combined. This work proposes a novel location encoder for globally distributed geographic data that combines spherical harmonic basis functions, natively defined on spherical surfaces, with sinusoidal representation networks (SirenNets) that can be interpreted as learned Double Fourier Sphere embedding. We systematically evaluate positional embeddings and neural network architectures across various benchmarks and synthetic evaluation datasets. In contrast to previous approaches that require the combination of both positional encoding and neural networks to learn meaningful representations, we show that both spherical harmonics and sinusoidal representation networks are competitive on their own but set state-of-the-art performances across tasks when combined. The model code and experiments are available at https://github.com/marccoru/locationencoder. | https://openreview.net/pdf/11eead9eb25de1cd772e111da1b931604a8fe49a.pdf |
A General Framework for User-Guided Bayesian Optimization | https://openreview.net/forum?id=NjU0jtXcYn | https://openreview.net/forum?id=NjU0jtXcYn | Carl Hvarfner,Frank Hutter,Luigi Nardi | ICLR 2024,Spotlight | The optimization of expensive-to-evaluate black-box functions is prevalent in various scientific disciplines. Bayesian optimization is an automatic, general and sample-efficient method to solve these problems with minimal knowledge of the underlying function dynamics. However, the ability of Bayesian optimization to incorporate prior knowledge or beliefs about the function at hand in order to accelerate the optimization is limited, which reduces its appeal for knowledgeable practitioners with tight budgets. To allow domain experts to customize the optimization routine, we propose ColaBO, the first Bayesian-principled framework for incorporating prior beliefs beyond the typical kernel structure, such as the likely location of the optimizer or the optimal value. The generality of ColaBO makes it applicable across different Monte Carlo acquisition functions and types of user beliefs. We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading. | https://openreview.net/pdf/9fcb921b69956d08aba85b088ea1c5d6c9b8c037.pdf |
Lemur: Harmonizing Natural Language and Code for Language Agents | https://openreview.net/forum?id=hNhwSmtXRh | https://openreview.net/forum?id=hNhwSmtXRh | Yiheng Xu,Hongjin SU,Chen Xing,Boyu Mi,Qian Liu,Weijia Shi,Binyuan Hui,Fan Zhou,Yitao Liu,Tianbao Xie,Zhoujun Cheng,Siheng Zhao,Lingpeng Kong,Bailin Wang,Caiming Xiong,Tao Yu | ICLR 2024,Spotlight | We introduce Lemur and Lemur-Chat, openly accessible language models optimized for both natural language and coding capabilities to serve as the backbone of versatile language agents. The evolution from language chat models to functional language agents demands that models not only master human interaction, reasoning, and planning but also ensure grounding in the relevant environments. This calls for a harmonious blend of language and coding capabilities in the models. Lemur and Lemur-Chat are proposed to address this necessity, demonstrating balanced proficiencies in both domains, unlike existing open-source models that tend to specialize in either. Through meticulous pretraining using a code-intensive corpus and instruction fine-tuning on text and code data, our models achieve state-of-the-art averaged performance across diverse text and coding benchmarks. Comprehensive experiments demonstrate Lemur’s superiority over existing open-source models and its proficiency across various agent tasks involving human communication, tool usage, and interaction under fully- and partially-observable environments. The harmonization between natural and programming languages enables Lemur-Chat to significantly narrow the gap with proprietary models on agent abilities, providing key insights into developing advanced open-source agents adept at reasoning, planning, and operating seamlessly across environments. Our model and code have been open-sourced at https://github.com/OpenLemur/Lemur. | https://openreview.net/pdf/8ef62990871ebf2cac77dc6ea498085f167f070a.pdf |
A path-norm toolkit for modern networks: consequences, promises and challenges | https://openreview.net/forum?id=hiHZVUIYik | https://openreview.net/forum?id=hiHZVUIYik | Antoine Gonon,Nicolas Brisebarre,Elisa Riccietti,Rémi Gribonval | ICLR 2024,Spotlight | This work introduces the first toolkit around path-norms that fully encompasses general DAG ReLU networks with biases, skip connections and any operation based on the extraction of order statistics: max pooling, GroupSort etc. This toolkit notably allows us to establish generalization bounds for modern neural networks that are not only the most widely applicable path-norm based ones, but also recover or beat the sharpest known bounds of this type. These extended path-norms further enjoy the usual benefits of path-norms: ease of computation, invariance under the symmetries of the network, and improved sharpness on layered fully-connected networks compared to the product of operator norms, another complexity measure most commonly used. The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds, by numerically evaluating the sharpest known bounds for ResNets on ImageNet. | https://openreview.net/pdf/6dba7f474e1381840f1c444d21ab27a1c1a22129.pdf |
Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages | https://openreview.net/forum?id=Kuh5qgCGCp | https://openreview.net/forum?id=Kuh5qgCGCp | Jinyi Hu,Yuan Yao,Chongyi Wang,SHAN WANG,Yinxu Pan,Qianyu Chen,Tianyu Yu,Hanghao Wu,Yue Zhao,Haoye Zhang,Xu Han,Yankai Lin,Jiao Xue,dahai li,Zhiyuan Liu,Maosong Sun | ICLR 2024,Spotlight | Recently there has been a significant surge in multimodal learning in terms of both image-to-text and text-to-image generation. However, the success is typically limited to English, leaving other languages largely behind. Building a competitive counterpart in other languages is highly challenging due to the low-resource nature of non-English multimodal data (i.e., lack of large-scale, high-quality image-text data). In this work, we propose MPM, an effective training paradigm for training large multimodal models in low-resource languages. MPM demonstrates that Multilingual language models can Pivot zero-shot Multimodal learning across languages. Specifically, based on a strong multilingual large language model, multimodal models pretrained on English-only image-text data can well generalize to other languages in a (quasi)-zero-shot manner, even surpassing models trained on image-text data in native languages. Taking Chinese as a case study for MPM, we build large multimodal models VisCPM for image-to-text and text-to-image generation, which achieve state-of-the-art (open-source) performance in Chinese. To facilitate future research, we open-source codes and model weights at https://github.com/OpenBMB/VisCPM. | https://openreview.net/pdf/07df702c6aa71499ac1bb0cc1988bd883407f9de.pdf |
From Sparse to Soft Mixtures of Experts | https://openreview.net/forum?id=jxpsAj7ltE | https://openreview.net/forum?id=jxpsAj7ltE | Joan Puigcerver,Carlos Riquelme Ruiz,Basil Mustafa,Neil Houlsby | ICLR 2024,Spotlight | Sparse mixture of expert architectures (MoEs) scale model capacity without significant increases in training or inference costs. Despite their success, MoEs suffer from a number of issues: training instability, token dropping, inability to scale the number of experts, or ineffective finetuning. In this work, we propose Soft MoE, a fully-differentiable sparse Transformer that addresses these challenges, while maintaining the benefits of MoEs. Soft MoE performs an implicit soft assignment by passing different weighted combinations of all input tokens to each expert. As in other MoEs, experts in Soft MoE only process a subset of the (combined) tokens, enabling larger model capacity (and performance) at lower inference cost. In the context of visual recognition, Soft MoE greatly outperforms dense Transformers (ViTs) and popular MoEs (Tokens Choice and Experts Choice). Soft MoE scales well: Soft MoE Huge/14 with 128 experts in 16 MoE layers has over 40x more parameters than ViT Huge/14, with only 2% increased inference time, and substantially better quality. | https://openreview.net/pdf/fd68ff38ff599fb1021a7e6add08b00e8fec95b9.pdf |
Selective Mixup Fine-Tuning for Optimizing Non-Decomposable Objectives | https://openreview.net/forum?id=rxVBKhyfSo | https://openreview.net/forum?id=rxVBKhyfSo | Shrinivas Ramasubramanian,Harsh Rangwani,Sho Takemori,Kunal Samanta,Yuhei Umeda,Venkatesh Babu Radhakrishnan | ICLR 2024,Spotlight | The rise in internet usage has led to the generation of massive amounts of data, resulting in the adoption of various supervised and semi-supervised machine learning algorithms, which can effectively utilize the colossal amount of data to train models. However, before deploying these models in the real world, they must be strictly evaluated on performance measures like worst-case recall and must satisfy constraints such as fairness. We find that current state-of-the-art empirical techniques offer sub-optimal performance on these practical, non-decomposable performance objectives. On the other hand, the theoretical techniques necessitate training a new model from scratch for each performance objective. To bridge the gap, we propose SelMix, a selective mixup-based inexpensive fine-tuning technique for pre-trained models, to optimize for the desired objective. The core idea of our framework is to determine a sampling distribution to perform a mixup of features between samples from particular classes such that it optimizes the given objective. We comprehensively evaluate our technique against the existing empirical and theoretically principled methods on standard benchmark datasets for imbalanced classification. We find that the proposed SelMix fine-tuning significantly improves the performance for various practical non-decomposable objectives across benchmarks. | https://openreview.net/pdf/42154f6a78eb07727368d3e4f20969606728ec4b.pdf |
NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation | https://openreview.net/forum?id=6O3Q6AFUTu | https://openreview.net/forum?id=6O3Q6AFUTu | PengFei Zheng,Yonggang Zhang,Zhen Fang,Tongliang Liu,Defu Lian,Bo Han | ICLR 2024,Spotlight | Image interpolation based on diffusion models is promising in creating fresh and interesting images. Advanced interpolation methods mainly focus on spherical linear interpolation, where images are encoded into the noise space and then interpolated for denoising to images. However, existing methods face challenges in effectively interpolating natural images (not generated by diffusion models), thereby restricting their practical applicability. Our experimental investigations reveal that these challenges stem from the invalidity of the encoding noise, which may no longer obey the expected noise distribution, e.g., a normal distribution. To address these challenges, we propose a novel approach to correct noise for image interpolation, NoiseDiffusion. Specifically, NoiseDiffusion approaches the invalid noise to the expected distribution by introducing subtle Gaussian noise and introduces a constraint to suppress noise with extreme values. In this context, promoting noise validity contributes to mitigating image artifacts, but the constraint and introduced exogenous noise typically lead to a reduction in signal-to-noise ratio, i.e., loss of original image information. Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss. Consequently, NoiseDiffusion enables us to interpolate natural images without causing artifacts or information loss, thus achieving the best interpolation results. | https://openreview.net/pdf/9bcefae56342d9cecd1e962c0a0c0cab8b325854.pdf |