Record fields: abs (proceedings abstract page URL), Download PDF (PDF URL), OpenReview (forum URL), title, url, authors, detail_url, tags (all "ICML 2024"), abstract.
https://proceedings.mlr.press/v235/daras24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/daras24a/daras24a.pdf
https://openreview.net/forum?id=PlVjIGaFdH
Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data
https://proceedings.mlr.press/v235/daras24a.html
Giannis Daras, Alex Dimakis, Constantinos Costis Daskalakis
https://proceedings.mlr.press/v235/daras24a.html
ICML 2024
Ambient Diffusion is a recently proposed framework for training diffusion models using corrupted data. Both Ambient Diffusion and alternative SURE-based approaches for learning diffusion models from corrupted data resort to approximations which deteriorate performance. We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data, solving an open problem in Ambient Diffusion. Our key technical contribution is a method that uses a double application of Tweedie’s formula and a consistency loss function that allows us to extend sampling to noise levels below the observed data noise. We also provide further evidence that diffusion models memorize images from their training sets by identifying extremely corrupted images that are almost perfectly reconstructed, raising copyright and privacy concerns. Our method for training using corrupted samples can be used to mitigate this problem. We demonstrate this by fine-tuning Stable Diffusion XL to generate samples from a distribution using only noisy samples. Our framework reduces memorization of the fine-tuning dataset while maintaining competitive performance.
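The workhorse identity here is Tweedie's formula, E[x_0 | x_t] = x_t + sigma_t^2 * grad log p_t(x_t) for x_t = x_0 + sigma_t * eps; the paper applies it twice to handle training data that is itself noisy. Below is a minimal numpy sketch of a single application in a 1-D Gaussian toy model where the score of the noisy marginal is available in closed form (an illustrative stand-in for a learned score network, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: clean data x0 ~ N(mu, s^2), so the noisy marginal of
# x_t = x0 + sigma * eps is N(mu, s^2 + sigma^2) and its score is analytic.
mu, s, sigma = 2.0, 1.0, 0.5
x0 = rng.normal(mu, s, size=100_000)
xt = x0 + sigma * rng.normal(size=x0.shape)          # noisy observations

def score(x):
    """Closed-form score of the noisy marginal (stands in for a learned network)."""
    return -(x - mu) / (s**2 + sigma**2)

# Tweedie's formula: E[x0 | x_t] = x_t + sigma^2 * score(x_t)
x0_hat = xt + sigma**2 * score(xt)

# Sanity check: Tweedie matches the analytic posterior mean of the Gaussian model.
posterior_mean = (s**2 * xt + sigma**2 * mu) / (s**2 + sigma**2)
print(np.max(np.abs(x0_hat - posterior_mean)))       # ~0 up to floating-point error
```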
https://proceedings.mlr.press/v235/das24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/das24a/das24a.pdf
https://openreview.net/forum?id=t8mt4YrPsq
Larimar: Large Language Models with Episodic Memory Control
https://proceedings.mlr.press/v235/das24a.html
Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarathkrishna Swaminathan, Sihui Dai, Aurelie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiri Navratil, Soham Dan, Pin-Yu Chen
https://proceedings.mlr.press/v235/das24a.html
ICML 2024
Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar, a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar’s memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar not only attains accuracy comparable to the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed, yielding speed-ups of 8-10x depending on the base LLM, as well as flexibility, since the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting, information leakage prevention, and input context length generalization with Larimar and show their effectiveness. Our code is available at https://github.com/IBM/larimar.
https://proceedings.mlr.press/v235/das24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/das24b/das24b.pdf
https://openreview.net/forum?id=uun4fzaiat
Understanding the Training Speedup from Sampling with Approximate Losses
https://proceedings.mlr.press/v235/das24b.html
Rudrajit Das, Xi Chen, Bertram Ieong, Parikshit Bansal, Sujay Sanghavi
https://proceedings.mlr.press/v235/das24b.html
ICML 2024
It is well known that selecting samples with large losses/gradients can significantly reduce the number of training steps. However, the selection overhead is often too high to yield any meaningful gains in terms of overall training time. In this work, we focus on the greedy approach of selecting samples with large approximate losses instead of exact losses in order to reduce the selection overhead. For smooth convex losses, we show that such a greedy strategy can converge to a constant factor of the minimum value of the average loss in fewer iterations than the standard approach of random selection. We also theoretically quantify the effect of the approximation level. We then develop SIFT, which uses early exiting to obtain approximate losses from an intermediate layer’s representations for sample selection. We evaluate SIFT on the task of training a 110M-parameter, 12-layer BERT base model, and show significant gains (in terms of training hours and number of backpropagation steps) over vanilla training without any optimized implementation. For example, to reach 64% validation accuracy, SIFT with exit at the first layer takes $\sim$ 43 hours compared to $\sim$ 57 hours of vanilla training.
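A toy sketch of the greedy selection loop described above: a cheap proxy loss ranks the batch and only the top-k samples receive a full gradient update. The truncated logistic model standing in for an early-exit head is purely illustrative and not SIFT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "model": the full loss uses all features, the cheap proxy
# (standing in for an early-exit head in the paper) sees only the first few features.
n, d, d_proxy, k = 512, 20, 5, 64        # batch size, dims, proxy dims, samples kept
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def losses(weights, feats):
    logits = feats @ weights
    return np.log1p(np.exp(-(2 * y - 1) * logits))   # per-sample logistic loss

w = np.zeros(d)
for _ in range(200):
    # 1) cheap approximate losses from a truncated view of the model (hypothetical proxy)
    approx = losses(w[:d_proxy], X[:, :d_proxy])
    # 2) greedily keep the k samples with the largest approximate loss
    idx = np.argpartition(-approx, k)[:k]
    # 3) run the expensive full forward/backward only on the selected samples
    Xs, ys = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
    w -= 0.5 * Xs.T @ (p - ys) / k

print("mean full loss:", losses(w, X).mean())
```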
https://proceedings.mlr.press/v235/das24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/das24c/das24c.pdf
https://openreview.net/forum?id=jn2iTJas6h
A decoder-only foundation model for time-series forecasting
https://proceedings.mlr.press/v235/das24c.html
Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou
https://proceedings.mlr.press/v235/das24c.html
ICML 2024
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a decoder-style attention model with input patching, using a large time-series corpus comprising both real-world and synthetic datasets. Experiments on a diverse set of previously unseen forecasting datasets suggest that the model can yield accurate zero-shot forecasts across different domains, forecasting horizons, and temporal granularities.
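A small sketch of the input-patching step: the series is chopped into fixed-length patches that become the tokens of a decoder-only (causal) model, which is then trained to predict the next patch from the preceding ones. The patch length of 32 and the sine series are arbitrary illustrative choices.

```python
import numpy as np

def patch_series(x, patch_len):
    """Split a 1-D series into non-overlapping patches (tokens for a decoder model).

    Each patch becomes one input token; a causal (decoder-only) attention stack
    would then predict the next patch from all previous ones.
    """
    n_patches = len(x) // patch_len
    x = x[: n_patches * patch_len]
    return x.reshape(n_patches, patch_len)

# Toy usage: a noisy sine series turned into 32-step patches.
t = np.arange(512)
series = np.sin(0.07 * t) + 0.1 * np.random.default_rng(0).normal(size=t.shape)
tokens = patch_series(series, patch_len=32)          # shape (16, 32)

# In a decoder-only setup, token i is used to predict token i+1 (next-patch prediction):
inputs, targets = tokens[:-1], tokens[1:]
print(inputs.shape, targets.shape)                   # (15, 32) (15, 32)
```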
https://proceedings.mlr.press/v235/das24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/das24d/das24d.pdf
https://openreview.net/forum?id=6VZOONPn8S
Disparate Impact on Group Accuracy of Linearization for Private Inference
https://proceedings.mlr.press/v235/das24d.html
Saswat Das, Marco Romanelli, Ferdinando Fioretto
https://proceedings.mlr.press/v235/das24d.html
ICML 2024
Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the finetuning step for linearized models can serve as an effective mitigation strategy.
https://proceedings.mlr.press/v235/dasgupta24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dasgupta24a/dasgupta24a.pdf
https://openreview.net/forum?id=gL5djEYLx2
New Bounds on the Cohesion of Complete-link and Other Linkage Methods for Agglomerative Clustering
https://proceedings.mlr.press/v235/dasgupta24a.html
Sanjoy Dasgupta, Eduardo Sany Laber
https://proceedings.mlr.press/v235/dasgupta24a.html
ICML 2024
Linkage methods are among the most popular algorithms for hierarchical clustering. Despite their relevance, the current knowledge regarding the quality of the clustering produced by these methods is limited. Here, we improve the currently available bounds on the maximum diameter of the clustering obtained by complete-link for metric spaces. One of our new bounds, in contrast to the existing ones, allows us to separate complete-link from single-link in terms of approximation for the diameter, which corroborates the common perception that the former is more suitable than the latter when the goal is producing compact clusters. We also show that our techniques can be employed to derive upper bounds on the cohesion of a class of linkage methods that includes the quite popular average-link.
https://proceedings.mlr.press/v235/de-santi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/de-santi24a/de-santi24a.pdf
https://openreview.net/forum?id=2JYOxcGlRe
Geometric Active Exploration in Markov Decision Processes: the Benefit of Abstraction
https://proceedings.mlr.press/v235/de-santi24a.html
Riccardo De Santi, Federico Arangath Joseph, Noah Liniger, Mirco Mutti, Andreas Krause
https://proceedings.mlr.press/v235/de-santi24a.html
ICML 2024
How can a scientist use a Reinforcement Learning (RL) algorithm to design experiments over a dynamical system’s state space? In the case of finite and Markovian systems, an area called Active Exploration (AE) relaxes the optimization problem of experiment design into Convex RL, a generalization of RL admitting a wider notion of reward. Unfortunately, this framework is currently not scalable and the potential of AE is hindered by the vastness of the experiment spaces typical of scientific discovery applications. However, these spaces are often endowed with natural geometries, e.g., permutation invariance in molecular design, that an agent could leverage to improve the statistical and computational efficiency of AE. To achieve this, we bridge AE and MDP homomorphisms, which offer a way to exploit known geometric structures via abstraction. Towards this goal, we make two fundamental contributions: we extend the MDP homomorphism formalism to Convex RL, and we present, to the best of our knowledge, the first analysis that formally captures the benefit of abstraction via homomorphisms on sample efficiency. Ultimately, we propose the Geometric Active Exploration (GAE) algorithm, which we analyse theoretically and experimentally in environments motivated by problems in scientific discovery.
https://proceedings.mlr.press/v235/de-santi24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/de-santi24b/de-santi24b.pdf
https://openreview.net/forum?id=0M2tNui8jX
Global Reinforcement Learning : Beyond Linear and Convex Rewards via Submodular Semi-gradient Methods
https://proceedings.mlr.press/v235/de-santi24b.html
Riccardo De Santi, Manish Prajapat, Andreas Krause
https://proceedings.mlr.press/v235/de-santi24b.html
ICML 2024
In classic Reinforcement Learning (RL), the agent maximizes an additive objective of the visited states, e.g., a value function. Unfortunately, objectives of this type cannot model many real-world applications, such as experiment design, exploration, imitation learning, and risk-averse RL, to name a few. This is because additive objectives disregard interactions between states that are crucial for certain tasks. To tackle this problem, we introduce Global RL (GRL), where rewards are globally defined over trajectories instead of locally over states. Global rewards can capture negative interactions among states (e.g., in exploration) via submodularity, positive interactions (e.g., synergetic effects) via supermodularity, and mixed interactions via combinations of the two. By exploiting ideas from submodular optimization, we propose a novel algorithmic scheme that converts any GRL problem to a sequence of classic RL problems and solves it efficiently with curvature-dependent approximation guarantees. We also provide hardness-of-approximation results and empirically demonstrate the effectiveness of our method on several GRL instances.
https://proceedings.mlr.press/v235/decker24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/decker24a/decker24a.pdf
https://openreview.net/forum?id=3VnSgdget6
Provably Better Explanations with Optimized Aggregation of Feature Attributions
https://proceedings.mlr.press/v235/decker24a.html
Thomas Decker, Ananta R. Bhattarai, Jindong Gu, Volker Tresp, Florian Buettner
https://proceedings.mlr.press/v235/decker24a.html
ICML 2024
Using feature attributions for post-hoc explanations is a common practice to understand and verify the predictions of opaque machine learning models. Despite the numerous techniques available, individual methods often produce inconsistent and unstable results, putting their overall reliability into question. In this work, we aim to systematically improve the quality of feature attributions by combining multiple explanations across distinct methods or their variations. For this purpose, we propose a novel approach to derive optimal convex combinations of feature attributions that yield provable improvements of desired quality criteria such as robustness or faithfulness to the model behavior. Through extensive experiments involving various model architectures and popular feature attribution techniques, we demonstrate that our combination strategy consistently outperforms individual methods and existing baselines.
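A hedged sketch of the core idea: search the probability simplex for convex combination weights over several attribution maps that maximize a quality criterion. The linear model, the noisy "methods", and the squared-error faithfulness proxy below are illustrative assumptions rather than the paper's criteria; exponentiated-gradient ascent keeps the weights on the simplex.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a linear model, so the "true" per-feature contribution is beta_j * x_j.
d, K = 10, 3
beta = rng.normal(size=d)
x = rng.normal(size=d)
true_contrib = beta * x

# Three hypothetical attribution "methods" = increasingly noisy versions of the truth.
attrs = np.stack([true_contrib + s * rng.normal(size=d) for s in (0.2, 0.5, 1.0)])

def quality(w):
    """Faithfulness proxy (illustrative): negative squared error of the combination."""
    return -np.sum((w @ attrs - true_contrib) ** 2)

# Exponentiated-gradient ascent keeps w on the probability simplex (a convex combination).
w = np.full(K, 1.0 / K)
for _ in range(2000):
    grad = -2.0 * attrs @ (w @ attrs - true_contrib)   # d quality / d w
    w = w * np.exp(0.01 * grad)
    w /= w.sum()

print("weights:", np.round(w, 3))                       # leans toward the least noisy method
print("best single-method error:", min(np.sum((a - true_contrib) ** 2) for a in attrs))
print("combined error:          ", np.sum((w @ attrs - true_contrib) ** 2))
```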
https://proceedings.mlr.press/v235/dedieu24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dedieu24a/dedieu24a.pdf
https://openreview.net/forum?id=JUa5XNXuoT
Learning Cognitive Maps from Transformer Representations for Efficient Planning in Partially Observed Environments
https://proceedings.mlr.press/v235/dedieu24a.html
Antoine Dedieu, Wolfgang Lehrach, Guangyao Zhou, Dileep George, Miguel Lazaro-Gredilla
https://proceedings.mlr.press/v235/dedieu24a.html
ICML 2024
Despite their stellar performance on a wide range of tasks, including in-context tasks only revealed during inference, vanilla transformers and variants trained for next-token prediction (a) do not learn an explicit world model of their environment which can be flexibly queried and (b) cannot be used for planning or navigation. In this paper, we consider partially observed environments (POEs), where an agent receives perceptually aliased observations as it navigates, which makes path planning hard. We introduce a transformer with (multiple) discrete bottleneck(s), TDB, whose latent codes learn a compressed representation of the history of observations and actions. After training a TDB to predict the future observation(s) given the history, we extract interpretable cognitive maps of the environment from the indices of its active bottleneck(s). These maps are then paired with an external solver to solve (constrained) path planning problems. First, we show that a TDB trained on POEs (a) retains the near-perfect predictive performance of a vanilla transformer or an LSTM while (b) solving shortest path problems exponentially faster. Second, a TDB extracts interpretable representations from text datasets, while reaching higher in-context accuracy than vanilla sequence models. Finally, in new POEs, a TDB (a) reaches near-perfect in-context accuracy, (b) learns accurate in-context cognitive maps, and (c) solves in-context path planning problems.
https://proceedings.mlr.press/v235/deep24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deep24a/deep24a.pdf
https://openreview.net/forum?id=eqIGoEoI10
Asymptotically Optimal and Computationally Efficient Average Treatment Effect Estimation in A/B testing
https://proceedings.mlr.press/v235/deep24a.html
Vikas Deep, Achal Bassamboo, Sandeep Kumar Juneja
https://proceedings.mlr.press/v235/deep24a.html
ICML 2024
Motivated by practical applications in clinical trials and online platforms, we study A/B testing with the aim of estimating a confidence interval (CI) for the average treatment effect (ATE) using the minimum expected sample size. This CI should have a width at most $\epsilon$ while ensuring that the probability of the CI not containing the true ATE is at most $\delta$. To answer this, we first establish a lower bound on the expected sample size needed for any adaptive policy which constructs a CI of ATE with desired properties. Specifically, we prove that the lower bound is based on the solution to a max-min non-convex optimization problem for small $\delta$. Tailoring the “plug-in” approach for the ATE problem, we construct an adaptive policy that is asymptotically optimal, i.e., matches the lower bound on the expected sample size for small $\delta$. Interestingly, we find that, for small $\epsilon$ and $\delta$, the asymptotically optimal fraction of treatment assignment for A and B is proportional to the standard deviation of the outcome distributions of treatments A and B, respectively. However, as the proposed approach can be computationally intensive, we propose an alternative adaptive policy. This new policy, informed by insights from our lower bound analysis, is computationally efficient while remaining asymptotically optimal for small values of $\epsilon$ and $\delta$. Numerical comparisons demonstrate that both policies perform similarly across practical values of $\epsilon$ and $\delta$, offering efficient solutions for A/B testing.
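The allocation highlighted in the abstract (assign treatments in proportion to the outcome standard deviations) matches classical Neyman allocation. Below is a minimal two-stage sketch, not the paper's policy: a small pilot estimates the standard deviations, the remaining budget is split proportionally, and a normal-approximation CI for the ATE is reported. All distribution parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# True (unknown) outcome distributions of treatments A and B.
mu_a, sd_a, mu_b, sd_b = 1.0, 2.0, 0.5, 0.5

def sample(arm, n):
    mu, sd = (mu_a, sd_a) if arm == "A" else (mu_b, sd_b)
    return rng.normal(mu, sd, size=n)

# Stage 1 (pilot): a small equal split to estimate the outcome standard deviations.
pilot_a, pilot_b = sample("A", 50), sample("B", 50)
s_a, s_b = pilot_a.std(ddof=1), pilot_b.std(ddof=1)

# Stage 2: split the remaining budget proportionally to the estimated stds,
# mirroring the allocation the abstract identifies as asymptotically optimal.
budget = 2000
n_a = int(budget * s_a / (s_a + s_b))
obs_a = np.concatenate([pilot_a, sample("A", n_a)])
obs_b = np.concatenate([pilot_b, sample("B", budget - n_a)])

# Normal-approximation (1 - delta) confidence interval for the ATE mu_a - mu_b.
z = 1.96                                    # approx. standard normal quantile for delta = 0.05
ate_hat = obs_a.mean() - obs_b.mean()
half_width = z * np.sqrt(obs_a.var(ddof=1) / len(obs_a) + obs_b.var(ddof=1) / len(obs_b))
print(f"ATE CI: {ate_hat:.3f} +/- {half_width:.3f}   (true ATE = {mu_a - mu_b})")
```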
https://proceedings.mlr.press/v235/demelas24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/demelas24a/demelas24a.pdf
https://openreview.net/forum?id=aZnZOqUOHq
Predicting Lagrangian Multipliers for Mixed Integer Linear Programs
https://proceedings.mlr.press/v235/demelas24a.html
Francesco Demelas, Joseph Le Roux, Mathieu Lacroix, Axel Parmentier
https://proceedings.mlr.press/v235/demelas24a.html
ICML 2024
Lagrangian Relaxation stands among the most efficient approaches for solving Mixed Integer Linear Programs (MILPs) with difficult constraints. Given any duals for these constraints, called Lagrangian Multipliers (LMs), it returns a bound on the optimal value of the MILP, and Lagrangian methods seek the LMs giving the best such bound. But these methods generally rely on iterative algorithms resembling gradient descent to maximize the concave piecewise linear dual function: the computational burden grows quickly with the number of relaxed constraints. We introduce a deep learning approach that bypasses the descent, effectively amortizing per instance optimization. A probabilistic encoder based on a graph neural network computes, given a MILP instance and its Continuous Relaxation (CR) solution, high-dimensional representations of relaxed constraints, which are turned into LMs by a decoder. We train the encoder and the decoder jointly by directly optimizing the bound obtained from the predicted multipliers. Our method is applicable to any problem with a compact MILP formulation, and to any Lagrangian Relaxation providing a tighter bound than CR. Experiments on two widely known problems, Multi-Commodity Network Design and Generalized Assignment, show that our approach closes up to 85% of the gap between the continuous relaxation and the best Lagrangian bound, and provides a high-quality warm-start for descent-based Lagrangian methods.
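The quantity the decoder's output is trained against is the Lagrangian dual bound. Below is a toy numpy sketch of that bound for a 0-1 program min c^T x s.t. Ax >= b, x in {0,1}^n, where relaxing the coupling constraints makes the inner minimization separable and closed-form; plain subgradient ascent on the multipliers stands in here for the learned GNN predictor, and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MILP:  min c^T x  s.t.  A x >= b,  x in {0,1}^n.
# Relaxing A x >= b makes the inner minimization separable and closed-form.
n, m = 30, 5
c = rng.normal(size=n)
A = rng.uniform(0, 1, size=(m, n))
b = A.sum(axis=1) * 0.3                           # keeps the instance feasible (x = 1 works)

def lagrangian_bound(lam):
    """Dual bound g(lam) = lam^T b + sum_j min(0, c_j - (A^T lam)_j), for lam >= 0."""
    reduced = c - A.T @ lam
    x_star = (reduced < 0).astype(float)          # coordinate-wise inner argmin
    return lam @ b + reduced @ x_star, x_star

# Subgradient ascent on lam stands in here for the paper's learned predictor,
# whose decoder output is trained to maximize exactly this bound.
lam = np.zeros(m)
for t in range(300):
    g, x_star = lagrangian_bound(lam)
    subgrad = b - A @ x_star
    lam = np.maximum(0.0, lam + (1.0 / (t + 1)) * subgrad)

print("Lagrangian lower bound on the MILP optimum:", lagrangian_bound(lam)[0])
```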
https://proceedings.mlr.press/v235/demirel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/demirel24a/demirel24a.pdf
https://openreview.net/forum?id=QKnWXX3aVm
Prediction-powered Generalization of Causal Inferences
https://proceedings.mlr.press/v235/demirel24a.html
Ilker Demirel, Ahmed Alaa, Anthony Philippakis, David Sontag
https://proceedings.mlr.press/v235/demirel24a.html
ICML 2024
Causal inferences from a randomized controlled trial (RCT) may not pertain to a target population where some effect modifiers have a different distribution. Prior work studies generalizing the results of a trial to a target population with no outcome but covariate data available. We show how the limited size of trials makes generalization a statistically infeasible task, as it requires estimating complex nuisance functions. We develop generalization algorithms that supplement the trial data with a prediction model learned from an additional observational study (OS), without making any assumptions on the OS. We theoretically and empirically show that our methods facilitate better generalization when the OS is "high-quality", and remain robust when it is not, e.g., when it has unmeasured confounding.
https://proceedings.mlr.press/v235/demirel24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/demirel24b/demirel24b.pdf
https://openreview.net/forum?id=bwZlD7mYoa
An Unsupervised Approach for Periodic Source Detection in Time Series
https://proceedings.mlr.press/v235/demirel24b.html
Berken Utku Demirel, Christian Holz
https://proceedings.mlr.press/v235/demirel24b.html
ICML 2024
Detection of periodic patterns of interest within noisy time series data plays a critical role in various tasks, spanning from health monitoring to behavior analysis. Existing learning techniques often rely on labels or clean versions of signals for detecting the periodicity, and those employing self-supervised methods are required to apply proper augmentations, which is already challenging for time series and can result in collapse—all representations collapse to a single point due to strong augmentation. In this work, we propose a novel method to detect the periodicity in time series without the need for any labels or requiring tailored positive or negative data generation mechanisms. We mitigate the collapse issue by ensuring the learned representations retain information from the original samples without imposing any variance constraints on the batch. Our experiments in three time-series tasks against state-of-the-art learning methods show that the proposed approach consistently outperforms prior works, achieving performance improvements of more than 45–50%, showing its effectiveness.
https://proceedings.mlr.press/v235/deng24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deng24a/deng24a.pdf
https://openreview.net/forum?id=l4H7Hv7LhJ
Multi-group Learning for Hierarchical Groups
https://proceedings.mlr.press/v235/deng24a.html
Samuel Deng, Daniel Hsu
https://proceedings.mlr.press/v235/deng24a.html
ICML 2024
The multi-group learning model formalizes the learning scenario in which a single predictor must generalize well on multiple, possibly overlapping subgroups of interest. We extend the study of multi-group learning to the natural case where the groups are hierarchically structured. We design an algorithm for this setting that outputs an interpretable and deterministic decision tree predictor with near-optimal sample complexity. We then conduct an empirical evaluation of our algorithm and find that it achieves attractive generalization properties on real datasets with hierarchical group structure.
https://proceedings.mlr.press/v235/deng24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deng24b/deng24b.pdf
https://openreview.net/forum?id=c18noxRh3X
A3S: A General Active Clustering Method with Pairwise Constraints
https://proceedings.mlr.press/v235/deng24b.html
Xun Deng, Junlong Liu, Han Zhong, Fuli Feng, Chen Shen, Xiangnan He, Jieping Ye, Zheng Wang
https://proceedings.mlr.press/v235/deng24b.html
ICML 2024
Active clustering aims to boost the clustering performance by integrating human-annotated pairwise constraints through strategic querying. Conventional approaches with semi-supervised clustering schemes encounter high query costs when applied to large datasets with numerous classes. To address these limitations, we propose a novel Adaptive Active Aggregation and Splitting (A3S) framework, falling within the cluster-adjustment scheme in active clustering. A3S features strategic active clustering adjustment on the initial cluster result, which is obtained by an adaptive clustering algorithm. In particular, our cluster adjustment is inspired by the quantitative analysis of Normalized mutual information gain under the information theory framework and can provably improve the clustering quality. The proposed A3S framework significantly elevates the performance and scalability of active clustering. In extensive experiments across diverse real-world datasets, A3S achieves desired results with significantly fewer human queries compared with existing methods.
https://proceedings.mlr.press/v235/deng24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deng24c/deng24c.pdf
https://openreview.net/forum?id=kRv0WPJd00
Variational Schrödinger Diffusion Models
https://proceedings.mlr.press/v235/deng24c.html
Wei Deng, Weijian Luo, Yixin Tan, Marin Biloš, Yu Chen, Yuriy Nevmyvaka, Ricky T. Q. Chen
https://proceedings.mlr.press/v235/deng24c.html
ICML 2024
Schrödinger bridge (SB) has emerged as the go-to method for optimizing transportation plans in diffusion models. However, SB requires estimating the intractable forward score functions, inevitably resulting in the (costly) implicit training loss based on simulated trajectories. To improve the scalability while preserving efficient transportation plans, we leverage variational inference to linearize the forward score functions (variational scores) of SB and restore simulation-free properties in training backward scores. We propose the variational Schrödinger diffusion model (VSDM), where the forward process is a multivariate diffusion and the variational scores are adaptively optimized for efficient transport. Theoretically, we use stochastic approximation to prove the convergence of the variational scores and show the convergence of the adaptively generated samples based on the optimal variational scores. Empirically, we test the algorithm in simulated examples and observe that VSDM is efficient in generating anisotropic shapes and yields straighter sample trajectories compared to the single-variate diffusion. We also verify the scalability of the algorithm on real-world data and achieve competitive unconditional generation performance on CIFAR-10 and conditional generation in time series modeling. Notably, VSDM no longer depends on warm-up initializations required by SB.
https://proceedings.mlr.press/v235/deng24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deng24d/deng24d.pdf
https://openreview.net/forum?id=UKHfmzLR7P
Collaborative Learning with Different Labeling Functions
https://proceedings.mlr.press/v235/deng24d.html
Yuyang Deng, Mingda Qiao
https://proceedings.mlr.press/v235/deng24d.html
ICML 2024
We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions, while minimizing the number of samples drawn from them in total. Unlike in the usual collaborative learning setup, it is not assumed that there exists a single classifier that is simultaneously accurate for all distributions. We show that, when the data distributions satisfy a weaker realizability assumption, which appeared in (Crammer & Mansour, 2012) in the context of multi-task learning, sample-efficient learning is still feasible. We give a learning algorithm based on Empirical Risk Minimization (ERM) on a natural augmentation of the hypothesis class, and the analysis relies on an upper bound on the VC dimension of this augmented class. In terms of the computational efficiency, we show that ERM on the augmented hypothesis class is $\mathsf{NP}$-hard, which gives evidence against the existence of computationally efficient learners in general. On the positive side, for two special cases, we give learners that are both sample- and computationally-efficient.
https://proceedings.mlr.press/v235/deng24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deng24e/deng24e.pdf
https://openreview.net/forum?id=0f4u3Wg9zT
Exploring the Low-Pass Filtering Behavior in Image Super-Resolution
https://proceedings.mlr.press/v235/deng24e.html
Haoyu Deng, Zijing Xu, Yule Duan, Xiao Wu, Wenjie Shu, Liang-Jian Deng
https://proceedings.mlr.press/v235/deng24e.html
ICML 2024
Deep neural networks for image super-resolution (ISR) have shown significant advantages over traditional approaches like interpolation. However, they are often criticized as ‘black boxes’ compared to traditional approaches with solid mathematical foundations. In this paper, we attempt to interpret the behavior of deep neural networks in ISR using theories from the field of signal processing. First, we report an intriguing phenomenon, referred to as ‘the sinc phenomenon’, which occurs when an impulse input is fed to a neural network. Then, building on this observation, we propose a method named Hybrid Response Analysis (HyRA) to analyze the behavior of neural networks in ISR tasks. Specifically, HyRA decomposes a neural network into a parallel connection of a linear system and a non-linear system and demonstrates that the linear system functions as a low-pass filter while the non-linear system injects high-frequency information. Finally, to quantify the injected high-frequency information, we introduce a metric for image-to-image tasks called Frequency Spectrum Distribution Similarity (FSDS). FSDS reflects the distribution similarity of different frequency components and can capture nuances that traditional metrics may overlook. Code, videos, and raw experimental results for this paper can be found at: https://github.com/RisingEntropy/LPFInISR.
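A minimal numpy sketch of a HyRA-style decomposition on a toy 1-D "network": probe it with a unit impulse to obtain a linear kernel, take convolution with that kernel as the linear system, and define the non-linear system as the residual. The toy network, signal length, and kernel below are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "network": a convolution followed by a pointwise nonlinearity.
kernel_true = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
def net(x):
    return np.tanh(np.convolve(x, kernel_true, mode="same"))

# 1) Impulse response: feed a unit impulse to extract the network's linear kernel
#    (this is where the paper observes the sinc-like behaviour in real ISR models).
impulse = np.zeros(64)
impulse[32] = 1.0
h = net(impulse)

# 2) Decomposition: linear part = convolution with h, non-linear part = residual.
x = rng.normal(size=64)
y = net(x)
y_linear = np.convolve(x, h, mode="same")
y_nonlinear = y - y_linear                 # exact by construction: y = linear + nonlinear

print("energy split (linear vs nonlinear):", np.sum(y_linear**2), np.sum(y_nonlinear**2))
```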
https://proceedings.mlr.press/v235/deng24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deng24f/deng24f.pdf
https://openreview.net/forum?id=XQz7ytgETQ
Network Tight Community Detection
https://proceedings.mlr.press/v235/deng24f.html
Jiayi Deng, Xiaodong Yang, Jun Yu, Jun Liu, Zhaiming Shen, Danyang Huang, Huimin Cheng
https://proceedings.mlr.press/v235/deng24f.html
ICML 2024
Conventional community detection methods often categorize all nodes into clusters. However, the presumed community structure of interest may only be valid for a subset of nodes (termed “tight nodes”), while the rest of the network may consist of noninformative “scattered nodes”. For example, a protein-protein network often contains proteins that do not belong to specific biological functional modules but are involved in more general processes, or act as bridges between different functional modules. Forcing each of these proteins into a single cluster introduces unwanted biases and obscures the underlying biological implications. To address this issue, we propose a tight community detection (TCD) method to identify tight communities excluding scattered nodes. The algorithm enjoys a strong theoretical guarantee of tight node identification accuracy and is scalable for large networks. The superiority of the proposed method is demonstrated by various synthetic and real experiments.
https://proceedings.mlr.press/v235/deschenaux24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deschenaux24a/deschenaux24a.pdf
https://openreview.net/forum?id=1pj0Sk8GfP
Going beyond Compositions, DDPMs Can Produce Zero-Shot Interpolations
https://proceedings.mlr.press/v235/deschenaux24a.html
Justin Deschenaux, Igor Krawczuk, Grigorios Chrysos, Volkan Cevher
https://proceedings.mlr.press/v235/deschenaux24a.html
ICML 2024
Denoising Diffusion Probabilistic Models (DDPMs) exhibit remarkable capabilities in image generation, with studies suggesting that they can generalize by composing latent factors learned from the training data. In this work, we go further and study DDPMs trained on strictly separate subsets of the data distribution with large gaps on the support of the latent factors. We show that such a model can effectively generate images in the unexplored, intermediate regions of the distribution. For instance, when trained on clearly smiling and non-smiling faces, we demonstrate a sampling procedure which can generate slightly smiling faces without reference images (zero-shot interpolation). We replicate these findings for other attributes as well as other datasets. Our code is available on GitHub.
https://proceedings.mlr.press/v235/detommaso24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/detommaso24a/detommaso24a.pdf
https://openreview.net/forum?id=6Wauue8pWd
Multicalibration for Confidence Scoring in LLMs
https://proceedings.mlr.press/v235/detommaso24a.html
Gianluca Detommaso, Martin Andres Bertran, Riccardo Fogliato, Aaron Roth
https://proceedings.mlr.press/v235/detommaso24a.html
ICML 2024
This paper proposes the use of "multicalibration" to yield interpretable and reliable confidence scores for outputs generated by large language models (LLMs). Multicalibration asks for calibration not just marginally, but simultaneously across various intersecting groupings of the data. We show how to form groupings for prompt/completion pairs that are correlated with the probability of correctness via two techniques: clustering within an embedding space, and "self-annotation", querying the LLM by asking it various yes-or-no questions about the prompt. We also develop novel variants of multicalibration algorithms that offer performance improvements by reducing their tendency to overfit. Through systematic benchmarking across various question answering datasets and LLMs, we show how our techniques can yield confidence scores that provide substantial improvements in fine-grained measures of both calibration and accuracy compared to existing methods.
https://proceedings.mlr.press/v235/deuschel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deuschel24a/deuschel24a.pdf
https://openreview.net/forum?id=YEQM0asWCH
Contextualized Policy Recovery: Modeling and Interpreting Medical Decisions with Adaptive Imitation Learning
https://proceedings.mlr.press/v235/deuschel24a.html
Jannik Deuschel, Caleb Ellington, Yingtao Luo, Ben Lengerich, Pascal Friederich, Eric P. Xing
https://proceedings.mlr.press/v235/deuschel24a.html
ICML 2024
Interpretable policy learning seeks to estimate intelligible decision policies from observed actions; however, existing models force a tradeoff between accuracy and interpretability, limiting data-driven interpretations of human decision-making processes. Fundamentally, existing approaches are burdened by this tradeoff because they represent the underlying decision process as a universal policy, when in fact human decisions are dynamic and can change drastically under different contexts. Thus, we develop Contextualized Policy Recovery (CPR), which re-frames the problem of modeling complex decision processes as a multi-task learning problem, where each context poses a unique task and complex decision policies can be constructed piece-wise from many simple context-specific policies. CPR models each context-specific policy as a linear map, and generates new policy models on-demand as contexts are updated with new observations. We provide two flavors of the CPR framework: one focusing on exact local interpretability, and one retaining full global interpretability. We assess CPR through studies on simulated and real data, achieving state-of-the-art performance on predicting antibiotic prescription in intensive care units ($+22$% AUROC vs. previous SOTA) and predicting MRI prescription for Alzheimer’s patients ($+7.7$% AUROC vs. previous SOTA). With this improvement, CPR closes the accuracy gap between interpretable and black-box methods, allowing high-resolution exploration and analysis of context-specific decision models.
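A minimal sketch of the structure CPR describes: a context encoder emits the parameters of a per-context linear (logistic) policy, so every prediction comes with an interpretable local weight vector. The fixed random encoder, dimensions, and names below are illustrative placeholders for the learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

d_ctx, d_obs = 8, 5        # context and observation dimensions (illustrative)

# Context encoder: a fixed random linear map here; in CPR this would be a learned
# model (e.g., a recurrent encoder over the patient's history) producing the
# parameters of the context-specific policy.
W_enc = rng.normal(size=(d_obs + 1, d_ctx)) * 0.3

def policy(context, observation):
    """Context-specific linear policy: the context generates the weights, and the
    observation is scored by that interpretable linear map."""
    theta = W_enc @ context                 # weights + bias of the local linear policy
    w, b = theta[:-1], theta[-1]
    logit = w @ observation + b
    return 1.0 / (1.0 + np.exp(-logit))     # probability of taking the action

ctx = rng.normal(size=d_ctx)
obs = rng.normal(size=d_obs)
print("action probability:", policy(ctx, obs))
print("local explanation (per-feature weights):", np.round((W_enc @ ctx)[:-1], 3))
```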
https://proceedings.mlr.press/v235/devic24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/devic24a/devic24a.pdf
https://openreview.net/forum?id=YiblhkVl2w
Stability and Multigroup Fairness in Ranking with Uncertain Predictions
https://proceedings.mlr.press/v235/devic24a.html
Siddartha Devic, Aleksandra Korolova, David Kempe, Vatsal Sharan
https://proceedings.mlr.press/v235/devic24a.html
ICML 2024
Rankings are ubiquitous across many applications, from search engines to hiring committees. In practice, many rankings are derived from the output of predictors. However, when predictors trained for classification tasks have intrinsic uncertainty, it is not obvious how this uncertainty should be represented in the derived rankings. Our work considers ranking functions: maps from individual predictions for a classification task to distributions over rankings. We focus on two aspects of ranking functions: stability to perturbations in predictions and fairness towards both individuals and subgroups. Not only is stability an important requirement for its own sake, but — as we show — it composes harmoniously with individual fairness in the sense of Dwork et al. (2012). While deterministic ranking functions cannot be stable aside from trivial scenarios, we show that the recently proposed uncertainty aware (UA) ranking functions of Singh et al. (2021) are stable. Our main result is that UA rankings also achieve group fairness through successful composition with multiaccurate or multicalibrated predictors. Our work demonstrates that UA rankings naturally interpolate between group and individual level fairness guarantees, while simultaneously satisfying stability guarantees important whenever machine-learned predictions are used.
https://proceedings.mlr.press/v235/deweese24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/deweese24a/deweese24a.pdf
https://openreview.net/forum?id=iYYA5zDoCm
Locally Interdependent Multi-Agent MDP: Theoretical Framework for Decentralized Agents with Dynamic Dependencies
https://proceedings.mlr.press/v235/deweese24a.html
Alex Deweese, Guannan Qu
https://proceedings.mlr.press/v235/deweese24a.html
ICML 2024
Many multi-agent systems in practice are decentralized and have dynamically varying dependencies. There has been a lack of attempts in the literature to analyze these systems theoretically. In this paper, we propose and theoretically analyze a decentralized model with dynamically varying dependencies called the Locally Interdependent Multi-Agent MDP. This model can represent problems in many disparate domains such as cooperative navigation, obstacle avoidance, and formation control. Despite the intractability that general partially observable multi-agent systems suffer from, we propose three closed-form policies that are theoretically near-optimal in this setting and are scalable to compute and store. Consequently, we reveal a fundamental property of Locally Interdependent Multi-Agent MDPs: the partially observable decentralized solution is exponentially close to the fully observable solution with respect to the visibility radius. We then discuss extensions of our closed-form policies to further improve tractability. We conclude by providing simulations to investigate some long-horizon behaviors of our closed-form policies.
https://proceedings.mlr.press/v235/dhir24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dhir24a/dhir24a.pdf
https://openreview.net/forum?id=twm7qPVX1F
Bivariate Causal Discovery using Bayesian Model Selection
https://proceedings.mlr.press/v235/dhir24a.html
Anish Dhir, Samuel Power, Mark Van Der Wilk
https://proceedings.mlr.press/v235/dhir24a.html
ICML 2024
Much of the causal discovery literature prioritises guaranteeing the identifiability of causal direction in statistical models. For structures within a Markov equivalence class, this requires strong assumptions which may not hold in real-world datasets, ultimately limiting the usability of these methods. Building on previous attempts, we show how to incorporate causal assumptions within the Bayesian framework. Identifying causal direction then becomes a Bayesian model selection problem. This enables us to construct models with realistic assumptions, and consequently allows for the differentiation between Markov equivalent causal structures. We analyse why Bayesian model selection works in situations where methods based on maximum likelihood fail. To demonstrate our approach, we construct a Bayesian non-parametric model that can flexibly model the joint distribution. We then outperform previous methods on a wide range of benchmark datasets with varying data generating assumptions.
https://proceedings.mlr.press/v235/dhurandhar24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dhurandhar24a/dhurandhar24a.pdf
https://openreview.net/forum?id=F3RdeyiR5H
Trust Regions for Explanations via Black-Box Probabilistic Certification
https://proceedings.mlr.press/v235/dhurandhar24a.html
Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy
https://proceedings.mlr.press/v235/dhurandhar24a.html
ICML 2024
Given the black box nature of machine learning models, a plethora of explainability methods have been developed to decipher the factors behind individual decisions. In this paper, we introduce a novel problem of black box (probabilistic) explanation certification. We ask the question: Given a black box model with only query access, an explanation for an example and a quality metric (viz. fidelity, stability), can we find the largest hypercube (i.e., $\ell_{\infty}$ ball) centered at the example such that when the explanation is applied to all examples within the hypercube, (with high probability) a quality criterion is met (viz. fidelity greater than some value)? Being able to efficiently find such a trust region has multiple benefits: i) insight into model behavior in a region, with a guarantee; ii) ascertained stability of the explanation; iii) explanation reuse, which can save time, energy and money by not having to find explanations for every example; and iv) a possible meta-metric to compare explanation methods. Our contributions include formalizing this problem, proposing solutions, providing theoretical guarantees for these solutions that are computable, and experimentally showing their efficacy on synthetic and real data.
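A hedged sketch of the certification question being asked (not the paper's procedure): binary-search the half-width of the l_inf ball around the example, using Monte Carlo queries to the black box to check whether the explanation's fidelity criterion holds with high empirical probability inside the current hypercube. The toy model, tolerance, and the monotonicity assumed by the search are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box model (query access only) and a local linear explanation at x0.
def black_box(x):
    return np.sin(x[..., 0]) + 0.5 * x[..., 1] ** 2

x0 = np.array([0.3, -0.2])
grad_at_x0 = np.array([np.cos(x0[0]), x0[1]])          # explanation = local linear model

def explanation(x):
    return black_box(x0[None])[0] + (x - x0) @ grad_at_x0

def fidelity_ok(width, n_samples=2000, tol=0.05, prob=0.95):
    """Monte Carlo check: does |black_box - explanation| <= tol hold with empirical
    probability at least `prob` on the hypercube of half-width `width` around x0?"""
    pts = x0 + rng.uniform(-width, width, size=(n_samples, 2))
    ok = np.abs(black_box(pts) - explanation(pts)) <= tol
    return ok.mean() >= prob

# Binary search for the largest certified half-width (assumes fidelity degrades with width).
lo, hi = 0.0, 2.0
for _ in range(25):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if fidelity_ok(mid) else (lo, mid)

print("certified l_inf half-width ~", round(lo, 4))
```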
https://proceedings.mlr.press/v235/di24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/di24a/di24a.pdf
https://openreview.net/forum?id=mkbSXxovP5
Double Stochasticity Gazes Faster: Snap-Shot Decentralized Stochastic Gradient Tracking Methods
https://proceedings.mlr.press/v235/di24a.html
Hao Di, Haishan Ye, Xiangyu Chang, Guang Dai, Ivor Tsang
https://proceedings.mlr.press/v235/di24a.html
ICML 2024
In decentralized optimization, $m$ agents form a network and only communicate with their neighbors, which gives advantages in data ownership, privacy, and scalability. At the same time, decentralized stochastic gradient descent ($\texttt{SGD}$) methods, as popular decentralized algorithms for training large-scale machine learning models, have shown their superiority over centralized counterparts. Distributed stochastic gradient tracking ($\texttt{DSGT}$) has been recognized as a popular and state-of-the-art decentralized $\texttt{SGD}$ method due to its strong theoretical guarantees. However, the theoretical analysis of $\texttt{DSGT}$ shows that its iteration complexity is $\tilde{\mathcal{O}} \left(\frac{\bar{\sigma}^2}{m\mu \varepsilon} + \frac{\sqrt{L}\bar{\sigma}}{\mu(1 - \lambda_2(W))^{1/2} C_W \sqrt{\varepsilon} }\right)$, where the doubly stochastic matrix $W$ represents the network topology and $C_W$ is a parameter that depends on $W$. This indicates that the convergence of $\texttt{DSGT}$ is heavily affected by the topology of the communication network. To overcome this weakness of $\texttt{DSGT}$, we resort to the snap-shot gradient tracking technique and propose two novel algorithms, snap-shot $\texttt{DSGT}$ ($\texttt{SS-DSGT}$) and accelerated snap-shot $\texttt{DSGT}$ ($\texttt{ASS-DSGT}$). We further show that $\texttt{SS-DSGT}$ exhibits a lower iteration complexity than $\texttt{DSGT}$ for general communication network topologies. Additionally, $\texttt{ASS-DSGT}$ matches $\texttt{DSGT}$’s iteration complexity $\mathcal{O}\left( \frac{\bar{\sigma}^2}{m\mu \varepsilon} + \frac{\sqrt{L}\bar{\sigma}}{\mu (1 - \lambda_2(W))^{1/2}\sqrt{\varepsilon}} \right)$ under the same conditions as $\texttt{DSGT}$. Numerical experiments validate $\texttt{SS-DSGT}$’s superior performance for general communication network topologies and exhibit better practical performance of $\texttt{ASS-DSGT}$ on the specified $W$ compared to $\texttt{DSGT}$.
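For reference, a minimal numpy sketch of the baseline gradient-tracking recursion the abstract analyzes (using full local gradients rather than stochastic ones, and none of the proposed snap-shot modifications), on a toy decentralized least-squares problem over a ring topology.

```python
import numpy as np

rng = np.random.default_rng(0)

m, d, alpha = 8, 5, 0.02                     # agents, dimension, step size

# Each agent i holds a local least-squares objective f_i(x) = 0.5 ||A_i x - b_i||^2.
A = rng.normal(size=(m, 10, d))
b = rng.normal(size=(m, 10))
def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix W for a ring: average with the two neighbours.
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1.0 / 3.0

# Gradient-tracking recursion: x holds the decision variables, y tracks the average gradient.
x = np.zeros((m, d))
g = np.stack([grad(i, x[i]) for i in range(m)])
y = g.copy()
for _ in range(500):
    x_new = W @ (x - alpha * y)
    g_new = np.stack([grad(i, x_new[i]) for i in range(m)])
    y = W @ y + g_new - g
    x, g = x_new, g_new

# All agents should agree and (approximately) minimize the global objective.
x_star = np.linalg.lstsq(A.reshape(-1, d), b.reshape(-1), rcond=None)[0]
print("consensus spread:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to global optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```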
https://proceedings.mlr.press/v235/di24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/di24b/di24b.pdf
https://openreview.net/forum?id=e1jPdRJeo7
Double Variance Reduction: A Smoothing Trick for Composite Optimization Problems without First-Order Gradient
https://proceedings.mlr.press/v235/di24b.html
Hao Di, Haishan Ye, Yueling Zhang, Xiangyu Chang, Guang Dai, Ivor Tsang
https://proceedings.mlr.press/v235/di24b.html
ICML 2024
Variance reduction techniques are designed to decrease the sampling variance, thereby accelerating convergence rates of first-order (FO) and zeroth-order (ZO) optimization methods. However, in composite optimization problems, ZO methods encounter an additional variance called the coordinate-wise variance, which stems from the random gradient estimation. To reduce this variance, prior works require estimating all partial derivatives, essentially approximating FO information. This approach demands $\mathcal{O}(d)$ function evaluations ($d$ is the dimension size), which incurs substantial computational costs and is prohibitive in high-dimensional scenarios. This paper proposes the Zeroth-order Proximal Double Variance Reduction ($\texttt{ZPDVR}$) method, which utilizes the averaging trick to reduce both sampling and coordinate-wise variances. Compared to prior methods, $\texttt{ZPDVR}$ relies solely on random gradient estimates, calls the stochastic zeroth-order oracle (SZO) in expectation $\mathcal{O}(1)$ times per iteration, and achieves the optimal $\mathcal{O}(d(n + \kappa)\log (\frac{1}{\epsilon}))$ SZO query complexity in the strongly convex and smooth setting, where $\kappa$ represents the condition number and $\epsilon$ is the desired accuracy. Empirical results validate $\texttt{ZPDVR}$’s linear convergence and demonstrate its superior performance over other related methods.
https://proceedings.mlr.press/v235/diakonikolas24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/diakonikolas24a/diakonikolas24a.pdf
https://openreview.net/forum?id=E3V5MMwFgd
Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination
https://proceedings.mlr.press/v235/diakonikolas24a.html
Ilias Diakonikolas, Daniel Kane, Sushrut Karmalkar, Ankit Pensia, Thanasis Pittas
https://proceedings.mlr.press/v235/diakonikolas24a.html
ICML 2024
We study Gaussian sparse estimation tasks in Huber’s contamination model with a focus on mean estimation, PCA, and linear regression. For each of these tasks, we give the first sample and computationally efficient robust estimators with optimal error guarantees, within constant factors. All prior efficient algorithms for these tasks incur quantitatively suboptimal error. Concretely, for Gaussian robust $k$-sparse mean estimation on $\mathbb{R}^d$ with corruption rate $\epsilon>0$, our algorithm has sample complexity $(k^2/\epsilon ^2)\mathrm{polylog}(d/\epsilon)$, runs in sample polynomial time, and approximates the target mean within $\ell_2$-error $O(\epsilon)$. Previous efficient algorithms inherently incur error $\Omega(\epsilon \sqrt{\log(1/\epsilon)})$. At the technical level, we develop a novel multidimensional filtering method in the sparse regime that may find other applications.
https://proceedings.mlr.press/v235/diakonikolas24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/diakonikolas24b/diakonikolas24b.pdf
https://openreview.net/forum?id=GqWy1wZKeE
Fast Co-Training under Weak Dependence via Stream-Based Active Learning
https://proceedings.mlr.press/v235/diakonikolas24b.html
Ilias Diakonikolas, Mingchen Ma, Lisheng Ren, Christos Tzamos
https://proceedings.mlr.press/v235/diakonikolas24b.html
ICML 2024
Co-training is a classical semi-supervised learning method which only requires a small number of labeled examples for learning, under reasonable assumptions. Despite extensive literature on the topic, very few hypothesis classes are known to be provably efficiently learnable via co-training, even under very strong distributional assumptions. In this work, we study the co-training problem in the stream-based active learning model. We show that a range of natural concept classes are efficiently learnable via co-training, in terms of both label efficiency and computational efficiency. We provide an efficient reduction of co-training under the standard assumption of weak dependence, in the stream-based active model, to online classification. As a corollary, we obtain efficient co-training algorithms with error-independent label complexity for every concept class efficiently learnable in the mistake-bound online model. Our framework also gives co-training algorithms with label complexity $\tilde{O}(d\log (1/\epsilon))$ for any concept class with VC dimension $d$, though in general this reduction is not computationally efficient. Finally, using additional ideas from online learning, we design the first efficient co-training algorithms with label complexity $\tilde{O}(d^2\log (1/\epsilon))$ for several concept classes, including unions of intervals and homogeneous halfspaces.
https://proceedings.mlr.press/v235/dickens24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dickens24a/dickens24a.pdf
https://openreview.net/forum?id=6NQ77Vj3DT
Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning
https://proceedings.mlr.press/v235/dickens24a.html
Charles Andrew Dickens, Changyu Gao, Connor Pryor, Stephen Wright, Lise Getoor
https://proceedings.mlr.press/v235/dickens24a.html
ICML 2024
We leverage convex and bilevel optimization techniques to develop a general gradient-based parameter learning framework for neural-symbolic (NeSy) systems. We demonstrate our framework with NeuPSL, a state-of-the-art NeSy architecture. To achieve this, we propose a smooth primal and dual formulation of NeuPSL inference and show learning gradients are functions of the optimal dual variables. Additionally, we develop a dual block coordinate descent algorithm for the new formulation that naturally exploits warm-starts. This leads to over $100 \times$ learning runtime improvements over the current best NeuPSL inference method. Finally, we provide extensive empirical evaluations across $8$ datasets covering a range of tasks and demonstrate our learning framework achieves up to a $16$% point prediction performance improvement over alternative learning methods.
https://proceedings.mlr.press/v235/dimitriou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dimitriou24a/dimitriou24a.pdf
https://openreview.net/forum?id=OenMwDPqWn
Structure Your Data: Towards Semantic Graph Counterfactuals
https://proceedings.mlr.press/v235/dimitriou24a.html
Angeliki Dimitriou, Maria Lymperaiou, Georgios Filandrianos, Konstantinos Thomas, Giorgos Stamou
https://proceedings.mlr.press/v235/dimitriou24a.html
ICML 2024
Counterfactual explanations (CEs) based on concepts are explanations that consider alternative scenarios to understand which high-level semantic features contributed to particular model predictions. In this work, we propose CEs based on the semantic graphs accompanying input data to achieve more descriptive, accurate, and human-aligned explanations. Building upon state-of-the-art (SotA) conceptual attempts, we adopt a model-agnostic edit-based approach and introduce leveraging GNNs for efficient Graph Edit Distance (GED) computation. With a focus on the visual domain, we represent images as scene graphs and obtain their GNN embeddings to bypass solving the NP-hard graph similarity problem for all input pairs, an integral part of CE computation process. We apply our method to benchmark and real-world datasets with varying difficulty and availability of semantic annotations. Testing on diverse classifiers, we find that our CEs outperform previous SotA explanation models based on semantics, including both white and black-box as well as conceptual and pixel-level approaches. Their superiority is proven quantitatively and qualitatively, as validated by human subjects, highlighting the significance of leveraging semantic edges in the presence of intricate relationships. Our model-agnostic graph-based approach is widely applicable and easily extensible, producing actionable explanations across different contexts. The code is available at https://github.com/aggeliki-dimitriou/SGCE.
https://proceedings.mlr.press/v235/ding24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24a/ding24a.pdf
https://openreview.net/forum?id=jsmaWEdx9g
Efficient Algorithms for Sum-Of-Minimum Optimization
https://proceedings.mlr.press/v235/ding24a.html
Lisang Ding, Ziang Chen, Xinshang Wang, Wotao Yin
https://proceedings.mlr.press/v235/ding24a.html
ICML 2024
In this work, we propose a novel optimization model termed “sum-of-minimum” optimization. This model seeks to minimize the sum or average of $N$ objective functions over $k$ parameters, where each objective takes the minimum value of a predefined sub-function with respect to the $k$ parameters. This universal framework encompasses numerous clustering applications in machine learning and related fields. We develop efficient algorithms for solving sum-of-minimum optimization problems, inspired by a randomized initialization algorithm for the classic $k$-means (Arthur & Vassilvitskii, 2007) and Lloyd’s algorithm (Lloyd, 1982). We establish a new tight bound for the generalized initialization algorithm and prove a gradient-descent-like convergence rate for generalized Lloyd’s algorithm. The efficiency of our algorithms is numerically examined on multiple tasks, including generalized principal component analysis, mixed linear regression, and small-scale neural network training. Our approach compares favorably to previous ones based on simpler-but-less-precise optimization reformulations.
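A small sketch of a generalized Lloyd scheme for sum-of-minimum optimization, specialized to quadratic sub-functions f_i(theta) = ||theta - a_i||^2, where it reduces to classic k-means: alternate assigning each objective to its best parameter and re-optimizing each parameter on its assigned objectives. The random-sample initialization below stands in for the paper's generalized k-means++-style initialization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum-of-minimum instance with quadratic sub-functions f_i(theta) = ||theta - a_i||^2,
# for which the generalized Lloyd scheme below reduces to classic k-means.
N, k, d = 300, 3, 2
true_centers = rng.normal(size=(k, d)) * 4
a = true_centers[rng.integers(k, size=N)] + rng.normal(size=(N, d))

def sub_f(theta_j, a_i):
    return np.sum((theta_j - a_i) ** 2)

# Random-sample initialization (standing in for the paper's generalized k-means++ init).
theta = a[rng.choice(N, size=k, replace=False)].copy()

for _ in range(20):
    # Assignment step: each objective picks the parameter with the smallest sub-function value.
    costs = np.array([[sub_f(theta[j], a[i]) for j in range(k)] for i in range(N)])
    assign = costs.argmin(axis=1)
    # Minimization step: re-optimize each parameter on its assigned objectives
    # (closed form for quadratics; a few gradient steps in the general case).
    for j in range(k):
        if np.any(assign == j):
            theta[j] = a[assign == j].mean(axis=0)

final_costs = np.array([[sub_f(theta[j], a[i]) for j in range(k)] for i in range(N)])
print("average sum-of-minimum objective:", round(final_costs.min(axis=1).mean(), 3))
```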
https://proceedings.mlr.press/v235/ding24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24b/ding24b.pdf
https://openreview.net/forum?id=HfxFasUfbN
AMPA: Adaptive Mixed Precision Allocation for Low-Bit Integer Training
https://proceedings.mlr.press/v235/ding24b.html
Li Ding, Wen Fei, Yuyang Huang, Shuangrui Ding, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong
https://proceedings.mlr.press/v235/ding24b.html
ICML 2024
Low-bit integer training emerges as a promising approach to mitigate the heavy burden during network training by quantizing the weights, activations, and gradients. However, existing methods cannot well achieve mixed-precision quantization for low-bit training and are commonly limited to INT8 precision. In this paper, we propose a novel low-bit integer training framework that, for the first time, achieves adaptive mixed-precision allocation (AMPA) for weights, activations, and gradients, and pushes the boundaries to a precision level below INT8. We develop a novel magnitude-based sensitivity measurement with regard to the quantization losses of weight, activation, and gradient quantization and the average gradient magnitudes, which is demonstrated as an upper bound of quantization influence in theory. We further design a layer-wise precision update strategy under observations on the quantization losses and their effects on model performance in low-bit training. Extensive experiments on different backbones and datasets show that, compared to INT8 quantization, the proposed method can achieve more than 38% BitOPs reduction with a tolerable loss below 2% in image classification, image segmentation, and language modeling.
https://proceedings.mlr.press/v235/ding24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24c/ding24c.pdf
https://openreview.net/forum?id=89kZWloYQx
Understanding Forgetting in Continual Learning with Linear Regression
https://proceedings.mlr.press/v235/ding24c.html
Meng Ding, Kaiyi Ji, Di Wang, Jinhui Xu
https://proceedings.mlr.press/v235/ding24c.html
ICML 2024
Continual learning, focused on sequentially learning multiple tasks, has gained significant attention recently. Despite the tremendous progress made in the past, the theoretical understanding, especially factors contributing to $\textit{catastrophic forgetting}$, remains relatively unexplored. In this paper, we provide a general theoretical analysis of forgetting in the linear regression model via Stochastic Gradient Descent (SGD) applicable to both under-parameterized and overparameterized regimes. Our theoretical framework reveals some interesting insights into the intricate relationship between task sequence and algorithmic parameters, an aspect not fully captured in previous studies due to their restrictive assumptions. Specifically, we demonstrate that, given a sufficiently large data size, the arrangement of tasks in a sequence—where tasks with larger eigenvalues in their population data covariance matrices are trained later—tends to result in increased forgetting. Additionally, our findings highlight that an appropriate choice of step size will help mitigate forgetting in both under-parameterized and overparameterized settings. To validate our theoretical analysis, we conducted simulation experiments on both linear regression models and Deep Neural Networks (DNNs). Results from these simulations substantiate our theoretical findings.
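A hedged simulation sketch of how forgetting is measured in this setting: train linear regression by SGD on one task, then on a second task, and record the increase in the first task's risk. The two synthetic tasks below differ only in the scale of their covariance eigenvalues; whether the ordering effect predicted by the theory shows up depends on the step size and data-size regime, so the sketch only illustrates the measurement itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, lr, epochs = 20, 2000, 0.01, 3

def make_task(eig_scale, w):
    """Linear-regression task whose population covariance is eig_scale * I."""
    X = rng.normal(size=(n, d)) * np.sqrt(eig_scale)
    y = X @ w + 0.1 * rng.normal(size=n)
    return X, y

def sgd(w, X, y):
    for _ in range(epochs):
        for i in rng.permutation(n):
            w = w - lr * (X[i] @ w - y[i]) * X[i]
    return w

def risk(w, X, y):
    return np.mean((X @ w - y) ** 2)

def forgetting(first, second):
    """Risk increase on the first task after subsequently training on the second."""
    w = sgd(np.zeros(d), *first)
    r_before = risk(w, *first)
    w = sgd(w, *second)
    return risk(w, *first) - r_before

w1, w2 = rng.normal(size=d), rng.normal(size=d)    # distinct task optima
task_small = make_task(0.5, w1)                    # small covariance eigenvalues
task_large = make_task(2.0, w2)                    # large covariance eigenvalues
print("small-eig first, large-eig last:", round(forgetting(task_small, task_large), 4))
print("large-eig first, small-eig last:", round(forgetting(task_large, task_small), 4))
```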
https://proceedings.mlr.press/v235/ding24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24d/ding24d.pdf
https://openreview.net/forum?id=5kGfm3Pa41
Recurrent Distance Filtering for Graph Representation Learning
https://proceedings.mlr.press/v235/ding24d.html
Yuhui Ding, Antonio Orvieto, Bobby He, Thomas Hofmann
https://proceedings.mlr.press/v235/ding24d.html
ICML 2024
Graph neural networks based on iterative one-hop message passing have been shown to struggle in harnessing the information from distant nodes effectively. Conversely, graph transformers allow each node to attend to all other nodes directly, but lack graph inductive bias and have to rely on ad-hoc positional encoding. In this paper, we propose a new architecture to reconcile these challenges. Our approach stems from the recent breakthroughs in long-range modeling provided by deep state-space models: for a given target node, our model aggregates other nodes by their shortest distances to the target and uses a linear RNN to encode the sequence of hop representations. The linear RNN is parameterized in a particular diagonal form for stable long-range signal propagation and is theoretically expressive enough to encode the neighborhood hierarchy. With no need for positional encoding, we empirically show that the performance of our model is comparable to or better than that of state-of-the-art graph transformers on various benchmarks, with a significantly reduced computational cost. Our code is open-source at https://github.com/skeletondyh/GRED.
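The core aggregation idea can be illustrated without the full model. The sketch below (an assumption-laden simplification, not the released GRED code) computes shortest-path hop distances with BFS, averages node features within each hop, and folds the hop sequence into a target-node representation with a fixed diagonal linear recurrence standing in for the learned linear RNN.

```python
# Distance-based aggregation sketch: group nodes by shortest-path hop distance
# from the target, then run a simple linear recurrence over the hop-wise
# aggregates (farthest hop first). The fixed diagonal recurrence is only a
# stand-in for the paper's parameterized linear RNN.
import numpy as np
from collections import deque

def hop_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Small example graph (adjacency lists) and random node features.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
rng = np.random.default_rng(0)
feats = rng.normal(size=(len(adj), 8))

target = 0
dist = hop_distances(adj, target)
max_hop = max(dist.values())

# Hop-wise mean aggregation, ordered from the farthest hop to the target.
hop_seq = []
for h in range(max_hop, -1, -1):
    nodes = [v for v, d in dist.items() if d == h]
    hop_seq.append(feats[nodes].mean(axis=0))

# Diagonal linear recurrence: state <- decay * state + input at each hop.
decay = np.full(8, 0.9)
state = np.zeros(8)
for x in hop_seq:
    state = decay * state + x
print("target-node representation:", np.round(state, 3))
```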
https://proceedings.mlr.press/v235/ding24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24e/ding24e.pdf
https://openreview.net/forum?id=lIYtJtpJR0
Robust Stable Spiking Neural Networks
https://proceedings.mlr.press/v235/ding24e.html
Jianhao Ding, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang
https://proceedings.mlr.press/v235/ding24e.html
ICML 2024
Spiking neural networks (SNNs) are gaining popularity in deep learning due to their low energy budget on neuromorphic hardware. However, they still lack sufficient robustness to guard safety-critical applications such as autonomous driving. Many studies have been conducted to defend SNNs from the threat of adversarial attacks. This paper aims to uncover the robustness of SNNs through the lens of the stability of nonlinear systems. We are inspired by the fact that searching for parameters altering the leaky integrate-and-fire dynamics can enhance their robustness. Thus, we dive into the dynamics of membrane potential perturbation and simplify their formulation. We show that membrane potential perturbation dynamics can reliably convey the intensity of perturbation. Our theoretical analyses imply that the simplified perturbation dynamics satisfy input-output stability. Thus, we propose a training framework with modified SNN neurons that reduces the mean square of membrane potential perturbation, aiming to enhance the robustness of SNNs. Finally, we experimentally verify the effectiveness of the framework in the settings of Gaussian noise training and adversarial training on the image classification task. Please refer to https://github.com/DingJianhao/stable-snn for our code implementation.
https://proceedings.mlr.press/v235/ding24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24f/ding24f.pdf
https://openreview.net/forum?id=kRxCDDFNpp
Fewer Truncations Improve Language Modeling
https://proceedings.mlr.press/v235/ding24f.html
Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto
https://proceedings.mlr.press/v235/ding24f.html
ICML 2024
In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens. Despite its efficiency, the concatenation approach compromises data integrity—it inevitably breaks many documents into incomplete pieces, leading to excessive truncations that hinder the model from learning to compose logically coherent and factually consistent content that is grounded on the complete context. To address the issue, we propose Best-fit Packing, a scalable and efficient method that packs documents into training sequences through length-aware combinatorial optimization. Our method completely eliminates unnecessary truncations while retaining the same training efficiency as concatenation. Empirical results from both text and code pre-training show that our method achieves superior performance (e.g., +4.7% on reading comprehension; +16.8% in context following; and +9.2% on program synthesis), and reduces closed-domain hallucination effectively by up to 58.3%.
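A generic best-fit-decreasing packer conveys the idea, although the paper's optimized implementation surely differs; the chunking rule and example document lengths below are illustrative assumptions.

```python
# Illustrative best-fit packing of document lengths into fixed-length training
# sequences: each chunk goes into the bin with the least remaining space that
# still fits it, and oversized documents are split only at sequence boundaries.
def best_fit_pack(doc_lengths, seq_len):
    """Pack document chunks into sequences of length seq_len (best-fit decreasing)."""
    # Split documents longer than one sequence at sequence boundaries only.
    chunks = []
    for length in doc_lengths:
        full, rest = divmod(length, seq_len)
        chunks.extend([seq_len] * full)
        if rest:
            chunks.append(rest)

    chunks.sort(reverse=True)
    bins = []   # each bin: [remaining_capacity, [chunk lengths]]
    for c in chunks:
        # Choose the bin with the smallest remaining capacity that still fits c.
        candidates = [b for b in bins if b[0] >= c]
        if candidates:
            best = min(candidates, key=lambda b: b[0])
            best[0] -= c
            best[1].append(c)
        else:
            bins.append([seq_len - c, [c]])
    return [contents for _, contents in bins]

docs = [3000, 1500, 900, 4200, 700, 250, 2048]
packed = best_fit_pack(docs, seq_len=2048)
print(len(packed), "sequences:", packed)
```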
https://proceedings.mlr.press/v235/ding24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24g/ding24g.pdf
https://openreview.net/forum?id=FzyMdAm2fZ
Delving into Differentially Private Transformer
https://proceedings.mlr.press/v235/ding24g.html
Youlong Ding, Xueyang Wu, Yining Meng, Yonggang Luo, Hao Wang, Weike Pan
https://proceedings.mlr.press/v235/ding24g.html
ICML 2024
Deep learning with differential privacy (DP) has garnered significant attention over the past years, leading to the development of numerous methods aimed at enhancing model accuracy and training efficiency. This paper delves into the problem of training Transformer models with differential privacy. Our treatment is modular: the logic is to ‘reduce’ the problem of training DP Transformer to the more basic problem of training DP vanilla neural nets. The latter is better understood and amenable to many model-agnostic methods. Such ‘reduction’ is done by first identifying the hardness unique to DP Transformer training: the attention distraction phenomenon and a lack of compatibility with existing techniques for efficient gradient clipping. To deal with these two issues, we propose the Re-Attention Mechanism and Phantom Clipping, respectively. We believe that our work not only casts new light on training DP Transformers but also promotes a modular treatment to advance research in the field of differentially private deep learning.
https://proceedings.mlr.press/v235/ding24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24h/ding24h.pdf
https://openreview.net/forum?id=9zlZuAAb08
Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization
https://proceedings.mlr.press/v235/ding24h.html
Li Ding, Jenny Zhang, Jeff Clune, Lee Spector, Joel Lehman
https://proceedings.mlr.press/v235/ding24h.html
ICML 2024
Reinforcement Learning from Human Feedback (RLHF) has shown potential in qualitative tasks where easily defined performance measures are lacking. However, RLHF is commonly used to optimize for average human preferences, which has drawbacks, especially in generative tasks that demand diverse model responses. Meanwhile, Quality Diversity (QD) algorithms excel at identifying diverse and high-quality solutions but often rely on manually crafted diversity metrics. This paper introduces Quality Diversity through Human Feedback (QDHF), a novel approach that progressively infers diversity metrics from human judgments of similarity among solutions, thereby enhancing the applicability and effectiveness of QD algorithms in complex and open-ended domains. Empirical studies show that QDHF significantly outperforms state-of-the-art methods in automatic diversity discovery and matches the efficacy of QD with manually crafted diversity metrics on standard benchmarks in robotics and reinforcement learning. Notably, in open-ended generative tasks, QDHF substantially enhances the diversity of text-to-image generation from a diffusion model and is more favorably received in user studies. We conclude by analyzing QDHF’s scalability, robustness, and quality of derived diversity metrics, emphasizing its strength in open-ended optimization tasks. Code and tutorials are available at https://liding.info/qdhf.
https://proceedings.mlr.press/v235/ding24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/ding24i/ding24i.pdf
https://openreview.net/forum?id=ONOtpXLqqw
LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens
https://proceedings.mlr.press/v235/ding24i.html
Yiran Ding, Li Lyna Zhang, Chengruidong Zhang, Yuanyuan Xu, Ning Shang, Jiahang Xu, Fan Yang, Mao Yang
https://proceedings.mlr.press/v235/ding24i.html
ICML 2024
A large context window is a desirable feature in large language models (LLMs). However, due to high fine-tuning costs, scarcity of long texts, and catastrophic values introduced by new token positions, current extended context windows are limited to around 128k tokens. This paper introduces LongRoPE that, for the first time, extends the context window of pre-trained LLMs to an impressive 2048k tokens, with only up to 1k fine-tuning steps at training lengths within 256k tokens, while maintaining performance at the original short context window. This is achieved by three key innovations: (i) we identify and exploit two forms of non-uniformities in positional interpolation through an efficient search, providing a better initialization for fine-tuning and enabling an 8x extension in non-fine-tuning scenarios; (ii) we introduce a progressive extension strategy that first fine-tunes a 256k length LLM and then conducts a second positional interpolation on the fine-tuned extended LLM to achieve a 2048k context window; (iii) we readjust LongRoPE on 8k length to recover the short context window performance. Extensive experiments on LLaMA2 and Mistral across various tasks demonstrate the effectiveness of our method. Models extended via LongRoPE retain the original architecture with minor modifications to the positional embedding, and can reuse most pre-existing optimizations. Code is available at https://github.com/microsoft/LongRoPE
https://proceedings.mlr.press/v235/dodd24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dodd24a/dodd24a.pdf
https://openreview.net/forum?id=eY98MVffrD
Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
https://proceedings.mlr.press/v235/dodd24a.html
Daniel Dodd, Louis Sharrock, Christopher Nemeth
https://proceedings.mlr.press/v235/dodd24a.html
ICML 2024
In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
https://proceedings.mlr.press/v235/dohmatob24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dohmatob24a/dohmatob24a.pdf
https://openreview.net/forum?id=MV2b44zDd3
Consistent Adversarially Robust Linear Classification: Non-Parametric Setting
https://proceedings.mlr.press/v235/dohmatob24a.html
Elvis Dohmatob
https://proceedings.mlr.press/v235/dohmatob24a.html
ICML 2024
For binary classification in $d$ dimensions, it is known that with a sample size of $n$, an excess adversarial risk of $O(d/n)$ is achievable under strong parametric assumptions about the underlying data distribution (e.g., assuming a Gaussian mixture model). In the case of well-separated distributions, this rate can be further refined to $O(1/n)$. Our work studies the non-parametric setting, where very little is known. With only mild regularity conditions on the conditional distribution of the features, we examine adversarial attacks with respect to arbitrary norms and introduce a straightforward yet effective estimator with provable consistency w.r.t adversarial risk. Our estimator is given by minimizing a series of smoothed versions of the robust 0/1 loss, with a smoothing bandwidth that adapts to both $n$ and $d$. Furthermore, we demonstrate that our estimator can achieve the minimax excess adversarial risk of $\widetilde O(\sqrt{d/n})$ for linear classifiers, at the cost of solving possibly rougher optimization problems.
https://proceedings.mlr.press/v235/dohmatob24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dohmatob24b/dohmatob24b.pdf
https://openreview.net/forum?id=KVvku47shW
A Tale of Tails: Model Collapse as a Change of Scaling Laws
https://proceedings.mlr.press/v235/dohmatob24b.html
Elvis Dohmatob, Yunzhen Feng, Pu Yang, Francois Charton, Julia Kempe
https://proceedings.mlr.press/v235/dohmatob24b.html
ICML 2024
As AI model size grows, neural scaling laws have become a crucial tool to predict the improvements of large models when increasing capacity and the size of original (human or natural) training data. Yet, the widespread use of popular models means that the ecosystem of online data and text will co-evolve to progressively contain increased amounts of synthesized data. In this paper we ask: How will the scaling laws change in the inevitable regime where synthetic data makes its way into the training corpus? Will future models still improve, or will they be doomed to degenerate, up to total (model) collapse? We develop a theoretical framework of model collapse through the lens of scaling laws. We discover a wide range of decay phenomena, analyzing loss of scaling, shifted scaling with the number of generations, the “un-learning” of skills, and grokking when mixing human and synthesized data. Our theory is validated by large-scale experiments with a transformer on an arithmetic task and text generation using the large language model Llama2.
https://proceedings.mlr.press/v235/dohmatob24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dohmatob24c/dohmatob24c.pdf
https://openreview.net/forum?id=btYeH65fI3
Precise Accuracy / Robustness Tradeoffs in Regression: Case of General Norms
https://proceedings.mlr.press/v235/dohmatob24c.html
Elvis Dohmatob, Meyer Scetbon
https://proceedings.mlr.press/v235/dohmatob24c.html
ICML 2024
In this paper, we investigate the impact of test-time adversarial attacks on linear regression models and determine the optimal level of robustness that any model can reach while maintaining a given level of standard predictive performance (accuracy). Through quantitative estimates, we uncover fundamental tradeoffs between adversarial robustness and accuracy in different regimes. We obtain a precise characterization which distinguishes between regimes where robustness is achievable without hurting standard accuracy and regimes where a tradeoff might be unavoidable. Our findings are empirically confirmed with simple experiments that represent a variety of settings. This work covers feature covariance matrices and attack norms of any nature, extending previous works in this area.
https://proceedings.mlr.press/v235/doikov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/doikov24a/doikov24a.pdf
https://openreview.net/forum?id=NvBJOcmti6
Spectral Preconditioning for Gradient Methods on Graded Non-convex Functions
https://proceedings.mlr.press/v235/doikov24a.html
Nikita Doikov, Sebastian U Stich, Martin Jaggi
https://proceedings.mlr.press/v235/doikov24a.html
ICML 2024
The performance of optimization methods is often tied to the spectrum of the objective Hessian. Yet, conventional assumptions, such as smoothness, often do not enable us to make finely-grained convergence statements, particularly not for non-convex problems. Striving for a more intricate characterization of complexity, we introduce a unique concept termed graded non-convexity. This allows us to partition the class of non-convex problems into a nested chain of subclasses. Interestingly, many traditional non-convex objectives, including partially convex problems, matrix factorizations, and neural networks, fall within these subclasses. As a second contribution, we propose gradient methods with spectral preconditioning, which employ inexact top eigenvectors of the Hessian to address the ill-conditioning of the problem, contingent on the grade. Our analysis reveals that these new methods provide provably superior convergence rates compared to basic gradient descent on applicable problem classes, particularly when large gaps exist between the top eigenvalues of the Hessian. Our theory is validated by numerical experiments executed on multiple practical machine learning problems.
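To make the preconditioning idea concrete, here is a small numerical sketch on a quadratic objective: the top Hessian eigendirections receive a Newton-like rescaled step while the remaining directions take a plain gradient step. The specific update rule and step-size choice are my own illustration of the general idea, not the method analyzed in the paper.

```python
# Sketch of a spectrally preconditioned gradient step on a quadratic: the top
# eigenvectors of the Hessian rescale the ill-conditioned directions, while the
# rest take a plain gradient step. Illustrative update rule only.
import numpy as np

rng = np.random.default_rng(0)
d, r = 50, 5                        # dimension and number of top eigenvectors
U = np.linalg.qr(rng.normal(size=(d, d)))[0]
eigs = np.concatenate([np.linspace(100, 50, r), np.ones(d - r)])
H = U @ np.diag(eigs) @ U.T         # Hessian with a few large eigenvalues
b = rng.normal(size=d)

def grad(x):
    return H @ x - b

# "Inexact" top-r eigenpairs (computed exactly here for simplicity).
vals, vecs = np.linalg.eigh(H)
top_vals, top_vecs = vals[-r:], vecs[:, -r:]

x = np.zeros(d)
lr = 1.0 / vals[:-r].max()          # step size tuned to the remaining spectrum
for _ in range(100):
    g = grad(x)
    g_top = top_vecs @ (top_vecs.T @ g)              # component in the top subspace
    precond = top_vecs @ ((top_vecs.T @ g) / top_vals)
    x = x - precond - lr * (g - g_top)               # Newton-like in top subspace

print("residual norm:", np.linalg.norm(grad(x)))
```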
https://proceedings.mlr.press/v235/donahue24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/donahue24a/donahue24a.pdf
https://openreview.net/forum?id=zMsMQJraEj
Impact of Decentralized Learning on Player Utilities in Stackelberg Games
https://proceedings.mlr.press/v235/donahue24a.html
Kate Donahue, Nicole Immorlica, Meena Jagadeesan, Brendan Lucier, Aleksandrs Slivkins
https://proceedings.mlr.press/v235/donahue24a.html
ICML 2024
When deployed in the world, a learning agent such as a recommender system or a chatbot often repeatedly interacts with another learning agent (such as a user) over time. In many such two-agent systems, each agent learns separately and the rewards of the two agents are not perfectly aligned. To better understand such cases, we examine the learning dynamics of the two-agent system and the implications for each agent’s objective. We model these systems as Stackelberg games with decentralized learning and show that standard regret benchmarks (such as Stackelberg equilibrium payoffs) result in worst-case linear regret for at least one player. To better capture these systems, we construct a relaxed regret benchmark that is tolerant to small learning errors by agents. We show that standard learning algorithms fail to provide sublinear regret, and we develop algorithms to achieve near-optimal $\mathcal{O}(T^{2/3})$ regret for both players with respect to these benchmarks. We further design relaxed environments under which faster learning ($\mathcal{O}(\sqrt{T})$) is possible. Altogether, our results take a step towards assessing how two-agent interactions in sequential and decentralized learning environments affect the utility of both agents.
https://proceedings.mlr.press/v235/dong24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dong24a/dong24a.pdf
https://openreview.net/forum?id=yXlQL9goY8
Towards Generalization beyond Pointwise Learning: A Unified Information-theoretic Perspective
https://proceedings.mlr.press/v235/dong24a.html
Yuxin Dong, Tieliang Gong, Hong Chen, Zhongjiang He, Mengxiang Li, Shuangyong Song, Chen Li
https://proceedings.mlr.press/v235/dong24a.html
ICML 2024
The recent surge in contrastive learning has intensified the interest in understanding the generalization of non-pointwise learning paradigms. While information-theoretic analysis achieves remarkable success in characterizing the generalization behavior of learning algorithms, its applicability is largely confined to pointwise learning, with extensions to the simplest pairwise settings remaining unexplored due to the challenges of non-i.i.d losses and dimensionality explosion. In this paper, we develop the first series of information-theoretic bounds extending beyond pointwise scenarios, encompassing pointwise, pairwise, triplet, quadruplet, and higher-order scenarios, all within a unified framework. Specifically, our hypothesis-based bounds elucidate the generalization behavior of iterative and noisy learning algorithms via gradient covariance analysis, and our prediction-based bounds accurately estimate the generalization gap with computationally tractable low-dimensional information metrics. Comprehensive numerical studies then demonstrate the effectiveness of our bounds in capturing the generalization dynamics across diverse learning scenarios.
https://proceedings.mlr.press/v235/dong24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dong24b/dong24b.pdf
https://openreview.net/forum?id=1tRLxQzdep
Pruner-Zero: Evolving Symbolic Pruning Metric From Scratch for Large Language Models
https://proceedings.mlr.press/v235/dong24b.html
Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang Wang, Xiaowen Chu
https://proceedings.mlr.press/v235/dong24b.html
ICML 2024
Despite their remarkable capabilities, Large Language Models (LLMs) face deployment challenges due to their extensive size. Pruning methods drop a subset of weights to accelerate inference, but many of them require retraining, which is prohibitively expensive and computationally demanding. Recently, post-training pruning approaches introduced novel metrics, enabling the pruning of LLMs without retraining. However, these metrics require the involvement of human experts and tedious trial and error. To efficiently identify superior pruning metrics, we develop an automatic framework for searching symbolic pruning metrics using genetic programming. In particular, we devise an elaborate search space encompassing the existing pruning metrics to discover potential symbolic pruning metrics. We propose an opposing operation simplification strategy to increase the diversity of the population. In this way, Pruner-Zero allows auto-generation of symbolic pruning metrics. Based on the search results, we explore the correlation between pruning metrics and performance after pruning and summarize some principles. Extensive experiments on LLaMA and LLaMA-2 on language modeling and zero-shot tasks demonstrate that our Pruner-Zero outperforms SOTA post-training pruning methods. Code at: https://github.com/pprp/Pruner-Zero.
https://proceedings.mlr.press/v235/dong24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dong24c/dong24c.pdf
https://openreview.net/forum?id=JvMLkGF2Ms
Position: Building Guardrails for Large Language Models Requires Systematic Design
https://proceedings.mlr.press/v235/dong24c.html
Yi Dong, Ronghui Mu, Gaojie Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang
https://proceedings.mlr.press/v235/dong24c.html
ICML 2024
As Large Language Models (LLMs) become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when the risks can have profound impacts on human users and societies. Guardrails, which filter the inputs or outputs of LLMs, have emerged as a core safeguarding technology. This position paper takes a deep look at current open-source solutions (Llama Guard, Nvidia NeMo, Guardrails AI), and discusses the challenges and the road towards building more complete solutions. Drawing on robust evidence from previous research, we advocate for a systematic approach to construct guardrails for LLMs, based on comprehensive consideration of diverse contexts across various LLMs applications. We propose employing socio-technical methods through collaboration with a multi-disciplinary team to pinpoint precise technical requirements, exploring advanced neural-symbolic implementations to embrace the complexity of the requirements, and developing verification and testing to ensure the utmost quality of the final product.
https://proceedings.mlr.press/v235/dong24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dong24d/dong24d.pdf
https://openreview.net/forum?id=Fv9GLw0LkO
Accelerating PDE Data Generation via Differential Operator Action in Solution Space
https://proceedings.mlr.press/v235/dong24d.html
Huanshuo Dong, Hong Wang, Haoyang Liu, Jian Luo, Jie Wang
https://proceedings.mlr.press/v235/dong24d.html
ICML 2024
Recent advancements in data-driven approaches, such as Neural Operator (NO), have demonstrated their effectiveness in reducing the solving time of Partial Differential Equations (PDEs). However, one major challenge faced by these approaches is the requirement for a large amount of high-precision training data, which incurs significant computational costs during the generation process. To address this challenge, we propose a novel PDE dataset generation algorithm, namely Differential Operator Action in Solution space (DiffOAS), which speeds up the data generation process and enhances the precision of the generated data simultaneously. Specifically, DiffOAS obtains a few basic PDE solutions and then combines them to obtain new solutions. It applies differential operators on these solutions, a process we call ‘operator action’, to efficiently generate precise PDE data points. Theoretical analysis shows that the time complexity of the DiffOAS method is one order lower than that of the existing generation method. Experimental results show that DiffOAS accelerates the generation of large-scale datasets with 10,000 instances by 300 times. Even with just 5% of the generation time, NO trained on the data generated by DiffOAS exhibits comparable performance to that using the existing generation method, which highlights the efficiency of DiffOAS.
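One reading of the 'operator action' step, for a 1D Poisson problem, is sketched below: a handful of smooth basis solutions are combined with random coefficients, and the differential operator is applied analytically to produce matching forcing terms, yielding exact solution/forcing pairs cheaply. The choice of basis and operator here is an assumption for illustration, not the authors' setup.

```python
# Illustrative "operator action" data generation for a 1D Poisson problem
# -u''(x) = f(x): combine a few basis solutions with random coefficients, then
# apply the operator analytically to obtain the matching forcing terms.
import numpy as np

x = np.linspace(0, 1, 257)
modes = np.arange(1, 6)
basis_u = np.sin(np.outer(modes, np.pi * x))             # basic solutions
basis_f = (modes[:, None] * np.pi) ** 2 * basis_u        # -u'' for each basis

def sample_pair(rng):
    coeffs = rng.normal(size=len(modes))
    u = coeffs @ basis_u     # new solution: combination of basis solutions
    f = coeffs @ basis_f     # operator action gives the exact forcing term
    return u, f

rng = np.random.default_rng(0)
dataset = [sample_pair(rng) for _ in range(1000)]
print(len(dataset), "solution/forcing pairs on a grid of", x.size, "points")
```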
https://proceedings.mlr.press/v235/dong24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dong24e/dong24e.pdf
https://openreview.net/forum?id=wrTzLoqbCg
TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling
https://proceedings.mlr.press/v235/dong24e.html
Jiaxiang Dong, Haixu Wu, Yuxuan Wang, Yun-Zhong Qiu, Li Zhang, Jianmin Wang, Mingsheng Long
https://proceedings.mlr.press/v235/dong24e.html
ICML 2024
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks. Prior methods are mainly based on pre-training techniques well-acknowledged in vision or language, such as masked modeling and contrastive learning. However, randomly masking time series or calculating series-wise similarity will distort or neglect inherent temporal correlations crucial in time series data. To emphasize temporal correlation modeling, this paper proposes TimeSiam as a simple but effective self-supervised pre-training framework for Time series based on Siamese networks. Concretely, TimeSiam pre-trains Siamese encoders to capture intrinsic temporal correlations between randomly sampled past and current subseries. With a simple data augmentation method (e.g. masking), TimeSiam can benefit from diverse augmented subseries and learn internal time-dependent representations through a past-to-current reconstruction. Moreover, learnable lineage embeddings are also introduced to distinguish temporal distance between sampled series and further foster the learning of diverse temporal correlations. TimeSiam consistently outperforms extensive advanced pre-training baselines, demonstrating superior forecasting and classification capabilities across 13 standard benchmarks in both intra- and cross-domain scenarios.
https://proceedings.mlr.press/v235/dong24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dong24f/dong24f.pdf
https://openreview.net/forum?id=uhHDhVKFMW
Get More with LESS: Synthesizing Recurrence with KV Cache Compression for Efficient LLM Inference
https://proceedings.mlr.press/v235/dong24f.html
Harry Dong, Xinyu Yang, Zhenyu Zhang, Zhangyang Wang, Yuejie Chi, Beidi Chen
https://proceedings.mlr.press/v235/dong24f.html
ICML 2024
Many computational factors limit broader deployment of large language models. In this paper, we focus on a memory bottleneck imposed by the key-value (KV) cache, a computational shortcut that requires storing previous KV pairs during decoding. While existing KV cache methods approach this problem by pruning or evicting large swaths of relatively less important KV pairs to dramatically reduce the memory footprint of the cache, they can have limited success in tasks that require recollecting a majority of previous tokens. To alleviate this issue, we propose LESS, a simple integration of a (nearly free) constant sized cache with eviction-based cache methods, such that all tokens can be queried at later decoding steps. Its ability to retain information throughout time shows merit on a variety of tasks where we demonstrate LESS can help reduce the performance gap from caching everything, sometimes even matching it, all while being efficient. Relevant code can be found at https://github.com/hdong920/LESS.
https://proceedings.mlr.press/v235/donhauser24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/donhauser24a/donhauser24a.pdf
https://openreview.net/forum?id=4zN9tvZfns
Privacy-Preserving Data Release Leveraging Optimal Transport and Particle Gradient Descent
https://proceedings.mlr.press/v235/donhauser24a.html
Konstantin Donhauser, Javier Abad, Neha Hulkund, Fanny Yang
https://proceedings.mlr.press/v235/donhauser24a.html
ICML 2024
We present a novel approach for differentially private data synthesis of protected tabular datasets, a relevant task in highly sensitive domains such as healthcare and government. Current state-of-the-art methods predominantly use marginal-based approaches, where a dataset is generated from private estimates of the marginals. In this paper, we introduce PrivPGD, a new generation method for marginal-based private data synthesis, leveraging tools from optimal transport and particle gradient descent. Our algorithm outperforms existing methods on a large range of datasets while being highly scalable and offering the flexibility to incorporate additional domain-specific constraints.
https://proceedings.mlr.press/v235/doran24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/doran24a/doran24a.pdf
https://openreview.net/forum?id=limyQ1Kk0k
Spike Distance Function as a Learning Objective for Spike Prediction
https://proceedings.mlr.press/v235/doran24a.html
Kevin Doran, Marvin Seifert, Carola A. M. Yovanovich, Tom Baden
https://proceedings.mlr.press/v235/doran24a.html
ICML 2024
Approaches to predicting neuronal spike responses commonly use a Poisson learning objective. This objective quantizes responses into spike counts within a fixed summation interval, typically on the order of 10 to 100 milliseconds in duration; however, neuronal responses are often time accurate down to a few milliseconds, and Poisson models struggle to precisely model them at these timescales. We propose the concept of a spike distance function that maps points in time to the temporal distance to the nearest spike. We show that neural networks can be trained to approximate spike distance functions, and we present an efficient algorithm for inferring spike trains from the outputs of these models. Using recordings of chicken and frog retinal ganglion cells responding to visual stimuli, we compare the performance of our approach to that of Poisson models trained with various summation intervals. We show that our approach outperforms the use of Poisson models at spike train inference.
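The spike distance function itself is easy to state in code; the helper below (illustrative names, toy spike train) maps query times to the distance to the nearest spike, which is the regression target the abstract describes.

```python
# Minimal spike distance function: map each time point to the absolute
# distance (in seconds) to the nearest spike.
import numpy as np

def spike_distance(times, spike_times):
    """Distance from each query time to its nearest spike."""
    spikes = np.sort(np.asarray(spike_times))
    idx = np.searchsorted(spikes, times)            # insertion positions
    left = spikes[np.clip(idx - 1, 0, len(spikes) - 1)]
    right = spikes[np.clip(idx, 0, len(spikes) - 1)]
    return np.minimum(np.abs(times - left), np.abs(times - right))

spike_times = [0.012, 0.047, 0.051, 0.180]
t = np.linspace(0, 0.2, 5)
print(np.round(spike_distance(t, spike_times), 3))
```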
https://proceedings.mlr.press/v235/dorfman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dorfman24a/dorfman24a.pdf
https://openreview.net/forum?id=NwYsuFuelg
Dynamic Byzantine-Robust Learning: Adapting to Switching Byzantine Workers
https://proceedings.mlr.press/v235/dorfman24a.html
Ron Dorfman, Naseem Amin Yehya, Kfir Yehuda Levy
https://proceedings.mlr.press/v235/dorfman24a.html
ICML 2024
Byzantine-robust learning has emerged as a prominent fault-tolerant distributed machine learning framework. However, most techniques focus on the static setting, wherein the identity of Byzantine workers remains unchanged throughout the learning process. This assumption fails to capture real-world dynamic Byzantine behaviors, which may include intermittent malfunctions or targeted, time-limited attacks. Addressing this limitation, we propose DynaBRO – a new method capable of withstanding any sub-linear number of identity changes across rounds. Specifically, when the number of such changes is $\mathcal{O}(\sqrt{T})$ (where $T$ is the total number of training rounds), DynaBRO nearly matches the state-of-the-art asymptotic convergence rate of the static setting. Our method utilizes a multi-level Monte Carlo (MLMC) gradient estimation technique applied at the server to the robustly aggregated worker updates. By additionally leveraging an adaptive learning rate, we circumvent the need for prior knowledge of the fraction of Byzantine workers.
https://proceedings.mlr.press/v235/dorner24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dorner24a/dorner24a.pdf
https://openreview.net/forum?id=zkcya47Sq5
Don’t Label Twice: Quantity Beats Quality when Comparing Binary Classifiers on a Budget
https://proceedings.mlr.press/v235/dorner24a.html
Florian E. Dorner, Moritz Hardt
https://proceedings.mlr.press/v235/dorner24a.html
ICML 2024
We study how to best spend a budget of noisy labels to compare the accuracy of two binary classifiers. It’s common practice to collect and aggregate multiple noisy labels for a given data point into a less noisy label via a majority vote. We prove a theorem that runs counter to conventional wisdom. If the goal is to identify the better of two classifiers, we show it’s best to spend the budget on collecting a single label for more samples. Our result follows from a non-trivial application of Cramér’s theorem, a staple in the theory of large deviations. We discuss the implications of our work for the design of machine learning benchmarks, where they overturn some time-honored recommendations. In addition, our results provide sample size bounds superior to what follows from Hoeffding’s bound.
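A quick Monte Carlo experiment illustrates the claim. The sketch below (accuracies, noise rate, and budget are made-up values) compares spending a label budget on single labels for many points versus majority-of-three labels for a third as many points, and estimates how often each strategy ranks the truly better binary classifier first.

```python
# Monte Carlo check of the budget question: with a fixed label budget, is it
# better to label 3n points once or n points three times (majority vote) when
# comparing two binary classifiers? All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
acc_a, acc_b = 0.81, 0.80      # true accuracies of the two classifiers
noise = 0.15                   # probability a single label is flipped
budget = 9000

def correct_decision(n_points, labels_per_point):
    # The aggregated label is wrong when a majority of its copies are flipped;
    # for binary labels, "agreement with the noisy label" is then correctness
    # XOR label-wrongness.
    flips = rng.random((n_points, labels_per_point)) < noise
    noisy_majority_wrong = flips.sum(axis=1) * 2 > labels_per_point
    truth_a = rng.random(n_points) < acc_a
    truth_b = rng.random(n_points) < acc_b
    obs_a = truth_a ^ noisy_majority_wrong
    obs_b = truth_b ^ noisy_majority_wrong
    return obs_a.mean() > obs_b.mean()   # did we pick A, the better classifier?

trials = 2000
single = np.mean([correct_decision(budget, 1) for _ in range(trials)])
triple = np.mean([correct_decision(budget // 3, 3) for _ in range(trials)])
print(f"P(correct ranking)  single-label: {single:.3f}  majority-of-3: {triple:.3f}")
```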
https://proceedings.mlr.press/v235/dotzel24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dotzel24a/dotzel24a.pdf
https://openreview.net/forum?id=iJlPJsTw2B
Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs
https://proceedings.mlr.press/v235/dotzel24a.html
Jordan Dotzel, Yuzong Chen, Bahaa Kotb, Sushma Prasad, Gang Wu, Sheng Li, Mohamed S Abdelfattah, Zhiru Zhang
https://proceedings.mlr.press/v235/dotzel24a.html
ICML 2024
The increasing size of large language models (LLMs) traditionally requires low-precision integer formats to meet strict latency and power demands. Yet recently, alternative formats such as Normal Float (NF4) have increased model accuracy at the cost of increased chip area. In this work, we first conduct a large-scale analysis of LLM weights and activations across 30 networks and conclude that most distributions follow a Student’s t-distribution. We then derive a new theoretically optimal format, Student Float (SF4), that improves over NF4 across modern LLMs, for example increasing the average accuracy on LLaMA2-7B by 0.76% across tasks. Using this format as a high-accuracy reference, we then propose augmenting E2M1 with two variants of supernormal support for higher model accuracy. Finally, we explore the quality and efficiency frontier across 11 datatypes by evaluating their model accuracy and hardware complexity. We discover a Pareto curve composed of INT4, E2M1, and E2M1 with supernormal support, which offers a continuous tradeoff between model accuracy and chip area. For example, E2M1 with supernormal support increases the accuracy of Phi-2 by up to 2.19% with 1.22% area overhead, enabling more LLM-based applications to be run at four bits. The supporting code is hosted at https://github.com/cornell-zhang/llm-datatypes.
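A rough picture of a quantile-based 4-bit codebook can be given in a few lines. The sketch below builds 16 levels from evenly spaced quantiles of a Student's t-distribution and rounds normalized weights to the nearest level; the exact SF4 construction in the paper may differ, so treat this purely as an illustration of the idea.

```python
# Hedged sketch: build a 16-level codebook from Student's t quantiles
# (normalized to [-1, 1]) and quantize weights by nearest-neighbor rounding.
import numpy as np
from scipy.stats import t as student_t

def t_codebook(levels=16, df=5.0):
    probs = np.linspace(0.5 / levels, 1 - 0.5 / levels, levels)
    codes = student_t.ppf(probs, df)
    return codes / np.abs(codes).max()            # normalize to [-1, 1]

def quantize(weights, codes):
    scale = np.abs(weights).max()
    normalized = weights / scale
    idx = np.abs(normalized[:, None] - codes[None, :]).argmin(axis=1)
    return codes[idx] * scale, idx

rng = np.random.default_rng(0)
w = rng.standard_t(df=5.0, size=4096) * 0.02      # heavy-tailed synthetic weights
codes = t_codebook()
w_hat, _ = quantize(w, codes)
print("codebook:", np.round(codes, 3))
print("mean squared quantization error:", np.mean((w - w_hat) ** 2))
```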
https://proceedings.mlr.press/v235/dou24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dou24a/dou24a.pdf
https://openreview.net/forum?id=pAPykbqUHf
Theory of Consistency Diffusion Models: Distribution Estimation Meets Fast Sampling
https://proceedings.mlr.press/v235/dou24a.html
Zehao Dou, Minshuo Chen, Mengdi Wang, Zhuoran Yang
https://proceedings.mlr.press/v235/dou24a.html
ICML 2024
Diffusion models have revolutionized various application domains, including computer vision and audio generation. Despite the state-of-the-art performance, diffusion models are known for their slow sample generation due to the extensive number of steps involved. In response, consistency models have been developed to merge multiple steps in the sampling process, thereby significantly boosting the speed of sample generation without compromising quality. This paper contributes towards the first statistical theory for consistency models, formulating their training as a distribution discrepancy minimization problem. Our analysis yields statistical estimation rates based on the Wasserstein distance for consistency models, matching those of vanilla diffusion models. Additionally, our results encompass the training of consistency models through both distillation and isolation methods, demystifying their underlying advantage.
https://proceedings.mlr.press/v235/draxler24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/draxler24a/draxler24a.pdf
https://openreview.net/forum?id=uA3FRvO2DJ
On the Universality of Volume-Preserving and Coupling-Based Normalizing Flows
https://proceedings.mlr.press/v235/draxler24a.html
Felix Draxler, Stefan Wahl, Christoph Schnoerr, Ullrich Koethe
https://proceedings.mlr.press/v235/draxler24a.html
ICML 2024
We present a novel theoretical framework for understanding the expressive power of normalizing flows. Despite their prevalence in scientific applications, a comprehensive understanding of flows remains elusive due to their restricted architectures. Existing theorems fall short as they require the use of arbitrarily ill-conditioned neural networks, limiting practical applicability. We propose a distributional universality theorem for well-conditioned coupling-based normalizing flows such as RealNVP. In addition, we show that volume-preserving normalizing flows are not universal, characterize what distribution they learn instead, and show how to fix their expressivity. Our results support the general wisdom that affine and related couplings are expressive and in general outperform volume-preserving flows, bridging a gap between empirical results and theoretical understanding.
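For readers less familiar with the objects under study, an affine coupling block of the RealNVP type is shown below: half of the coordinates are transformed with a scale and shift computed from the other half, giving a closed-form inverse and log-determinant, and zeroing the scale yields a volume-preserving coupling. The tiny conditioner network is an arbitrary stand-in.

```python
# Standard affine coupling block (RealNVP-style) with forward, inverse, and
# log-determinant. The random two-layer conditioner is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 16
W1, b1 = rng.normal(size=(d // 2, h)) * 0.1, np.zeros(h)
W2, b2 = rng.normal(size=(h, d)) * 0.1, np.zeros(d)   # outputs scale and shift

def conditioner(x1):
    hidden = np.tanh(x1 @ W1 + b1)
    out = hidden @ W2 + b2
    log_scale, shift = out[:, : d // 2], out[:, d // 2 :]
    return log_scale, shift

def coupling_forward(x):
    x1, x2 = x[:, : d // 2], x[:, d // 2 :]
    log_scale, shift = conditioner(x1)
    y2 = x2 * np.exp(log_scale) + shift
    log_det = log_scale.sum(axis=1)     # zero log-scale => volume-preserving
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y):
    y1, y2 = y[:, : d // 2], y[:, d // 2 :]
    log_scale, shift = conditioner(y1)
    x2 = (y2 - shift) * np.exp(-log_scale)
    return np.concatenate([y1, x2], axis=1)

x = rng.normal(size=(3, d))
y, log_det = coupling_forward(x)
print("max reconstruction error:", np.abs(coupling_inverse(y) - x).max())
```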
https://proceedings.mlr.press/v235/drouin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/drouin24a/drouin24a.pdf
https://openreview.net/forum?id=BRfqYrikdo
WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks?
https://proceedings.mlr.press/v235/drouin24a.html
Alexandre Drouin, Maxime Gasse, Massimo Caccia, Issam H. Laradji, Manuel Del Verme, Tom Marty, David Vazquez, Nicolas Chapados, Alexandre Lacoste
https://proceedings.mlr.press/v235/drouin24a.html
ICML 2024
We study the use of large language model-based agents for interacting with software via web browsers. Unlike prior work, we focus on measuring the agents’ ability to perform tasks that span the typical daily work of knowledge workers utilizing enterprise software systems. To this end, we propose WorkArena, a remote-hosted benchmark of 33 tasks based on the widely-used ServiceNow platform. We also introduce BrowserGym, an environment for the design and evaluation of such agents, offering a rich set of actions as well as multimodal observations. Our empirical evaluation reveals that while current agents show promise on WorkArena, there remains a considerable gap towards achieving full task automation. Notably, our analysis uncovers a significant performance disparity between open and closed-source LLMs, highlighting a critical area for future exploration and development in the field.
https://proceedings.mlr.press/v235/du24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24a/du24a.pdf
https://openreview.net/forum?id=AwLLSlJAeJ
Principled Gradient-Based MCMC for Conditional Sampling of Text
https://proceedings.mlr.press/v235/du24a.html
Li Du, Afra Amini, Lucas Torroba Hennigen, Xinyan Velocity Yu, Holden Lee, Jason Eisner, Ryan Cotterell
https://proceedings.mlr.press/v235/du24a.html
ICML 2024
We consider the problem of sampling text from an energy-based model. This arises, for example, when sampling text from a neural language model subject to soft constraints. Although the target distribution is discrete, the internal computations of the energy function (given by the language model) are differentiable, so one would like to exploit gradient information within a method such as MCMC. Alas, all previous attempts to generalize gradient-based MCMC to text sampling fail to sample correctly from the target distribution. We propose a solution, along with variants, and study its theoretical properties. Through experiments on various forms of text generation, we demonstrate that our unbiased samplers are able to generate more fluent text while better adhering to the control objectives. The same methods could be used to sample from discrete energy-based models unrelated to text.
https://proceedings.mlr.press/v235/du24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24b/du24b.pdf
https://openreview.net/forum?id=NbOlmrB59Z
SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning
https://proceedings.mlr.press/v235/du24b.html
Chaoqun Du, Yizeng Han, Gao Huang
https://proceedings.mlr.press/v235/du24b.html
ICML 2024
Recent advancements in semi-supervised learning have focused on a more realistic yet challenging task: addressing imbalances in labeled data while the class distribution of unlabeled data remains both unknown and potentially mismatched. Current approaches in this sphere often presuppose rigid assumptions regarding the class distribution of unlabeled data, thereby limiting the adaptability of models to only certain distribution ranges. In this study, we propose a novel approach, introducing a highly adaptable framework, designated as SimPro, which does not rely on any predefined assumptions about the distribution of unlabeled data. Our framework, grounded in a probabilistic model, innovatively refines the expectation-maximization (EM) method by separating the modeling of conditional and marginal class distributions. This separation facilitates a closed-form solution for class distribution estimation during the maximization phase, leading to the formulation of a Bayes classifier. The Bayes classifier, in turn, enhances the quality of pseudo-labels in the expectation phase. Remarkably, the SimPro framework is not only straightforward to implement but also comes with theoretical guarantees. Moreover, we introduce two novel class distributions, broadening the scope of the evaluation. Our method showcases consistent state-of-the-art performance across diverse benchmarks and data distribution scenarios. Our code is available at https://github.com/LeapLabTHU/SimPro.
https://proceedings.mlr.press/v235/du24c.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24c/du24c.pdf
https://openreview.net/forum?id=mk8oRhox2l
GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding
https://proceedings.mlr.press/v235/du24c.html
Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You
https://proceedings.mlr.press/v235/du24c.html
ICML 2024
Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs. In this study, we introduce GliDe and CaPE, two low-hassle modifications to vanilla speculative decoding to further improve the decoding speed of a frozen LLM. Specifically, GliDe is a modified draft model architecture that reuses the cached keys and values from the target LLM, while CaPE is a proposal expansion method that uses the draft model’s confidence scores to help select additional candidate tokens for verification. Extensive experiments on different benchmarks demonstrate that our proposed GliDe draft model significantly reduces the expected decoding latency. Additional evaluation using walltime reveals that GliDe can accelerate Vicuna models up to 2.17x and further extend the improvement to 2.61x with CaPE. We will release our code, data, and the trained draft models.
https://proceedings.mlr.press/v235/du24d.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24d/du24d.pdf
https://openreview.net/forum?id=SoNexFx8qz
Position: Compositional Generative Modeling: A Single Model is Not All You Need
https://proceedings.mlr.press/v235/du24d.html
Yilun Du, Leslie Pack Kaelbling
https://proceedings.mlr.press/v235/du24d.html
ICML 2024
Large monolithic generative models trained on massive amounts of data have become an increasingly dominant approach in AI research. In this paper, we argue that we should instead construct large generative systems by composing smaller generative models together. We show how such a compositional generative approach enables us to learn distributions in a more data-efficient manner, enabling generalization to parts of the data distribution unseen at training time. We further show how this enables us to program and construct new generative models for tasks completely unseen at training. Finally, we show that in many cases, we can discover separate compositional components from data.
https://proceedings.mlr.press/v235/du24e.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24e/du24e.pdf
https://openreview.net/forum?id=zj7YuTE4t8
Improving Factuality and Reasoning in Language Models through Multiagent Debate
https://proceedings.mlr.press/v235/du24e.html
Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch
https://proceedings.mlr.press/v235/du24e.html
ICML 2024
Large language models (LLMs) have demonstrated remarkable capabilities in language generation, understanding, and few-shot learning in recent years. An extensive body of work has explored how their performance may be further improved through the tools of prompting, ranging from verification and self-consistency to intermediate scratchpads. In this paper, we present a complementary approach to improve language responses where multiple language model instances propose and debate their individual responses and reasoning processes over multiple rounds to arrive at a common final answer. Our findings indicate that this approach significantly enhances mathematical and strategic reasoning across a number of tasks. We also demonstrate that our approach improves the factual validity of generated content, reducing fallacious answers and hallucinations that contemporary models are prone to. Our approach may be directly applied to existing black-box models and uses an identical procedure and prompts for all tasks we investigate. Overall, our findings suggest that such a "society of minds" approach has the potential to significantly advance the capabilities of LLMs and pave the way for further breakthroughs in language generation and understanding.
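The debate loop itself is straightforward to skeleton. The snippet below is a hypothetical outline, not the authors' prompts or code: `ask_llm` is a placeholder for whatever chat-completion client is available, and the aggregation of final answers is left to the caller.

```python
# Skeleton of a multi-round debate loop: each agent answers, then revises its
# answer after seeing the other agents' latest responses.
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion client.
    raise NotImplementedError("plug in your LLM client here")

def debate(question: str, num_agents: int = 3, num_rounds: int = 2) -> list[str]:
    answers = [ask_llm(f"Answer the question.\nQ: {question}") for _ in range(num_agents)]
    for _ in range(num_rounds):
        new_answers = []
        for i in range(num_agents):
            others = "\n".join(
                f"Agent {j}: {a}" for j, a in enumerate(answers) if j != i
            )
            prompt = (
                f"Q: {question}\n"
                f"Other agents answered:\n{others}\n"
                f"Your previous answer: {answers[i]}\n"
                "Considering the other answers, give an updated final answer."
            )
            new_answers.append(ask_llm(prompt))
        answers = new_answers
    return answers   # aggregate, e.g., by majority vote over final answers
```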
https://proceedings.mlr.press/v235/du24f.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24f/du24f.pdf
https://openreview.net/forum?id=CduFAALvGe
Learning Iterative Reasoning through Energy Diffusion
https://proceedings.mlr.press/v235/du24f.html
Yilun Du, Jiayuan Mao, Joshua B. Tenenbaum
https://proceedings.mlr.press/v235/du24f.html
ICML 2024
We introduce iterative reasoning through energy diffusion (IRED), a novel framework for learning to reason for a variety of tasks by formulating reasoning and decision-making problems with energy-based optimization. IRED learns energy functions to represent the constraints between input conditions and desired outputs. After training, IRED adapts the number of optimization steps during inference based on problem difficulty, enabling it to solve problems outside its training distribution, such as more complex Sudoku puzzles, matrix completion with large value magnitudes, and path finding in larger graphs. Key to our method’s success are two novel techniques: learning a sequence of annealed energy landscapes for easier inference and a combination of score function and energy landscape supervision for faster and more stable training. Our experiments show that IRED outperforms existing methods in continuous-space reasoning, discrete-space reasoning, and planning tasks, particularly in more challenging scenarios.
https://proceedings.mlr.press/v235/du24g.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24g/du24g.pdf
https://openreview.net/forum?id=knhbhDLdry
When and How Does In-Distribution Label Help Out-of-Distribution Detection?
https://proceedings.mlr.press/v235/du24g.html
Xuefeng Du, Yiyou Sun, Yixuan Li
https://proceedings.mlr.press/v235/du24g.html
ICML 2024
Detecting data points deviating from the training distribution is pivotal for ensuring reliable machine learning. Extensive research has been dedicated to the challenge, spanning classical anomaly detection techniques to contemporary out-of-distribution (OOD) detection approaches. While OOD detection commonly relies on supervised learning from a labeled in-distribution (ID) dataset, anomaly detection may treat the entire ID data as a single class and disregard ID labels. This fundamental distinction raises a significant question that has yet to be rigorously explored: when and how does ID label help OOD detection? This paper bridges this gap by offering a formal understanding to theoretically delineate the impact of ID labels on OOD detection. We employ a graph-theoretic approach, rigorously analyzing the separability of ID data from OOD data in a closed-form manner. Key to our approach is the characterization of data representations through spectral decomposition on the graph. Leveraging these representations, we establish a provable error bound that compares the OOD detection performance with and without ID labels, unveiling conditions for achieving enhanced OOD detection. Lastly, we present empirical results on both simulated and real datasets, validating theoretical guarantees and reinforcing our insights.
https://proceedings.mlr.press/v235/du24h.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24h/du24h.pdf
https://openreview.net/forum?id=qFILbkTQWw
AnyTool: Self-Reflective, Hierarchical Agents for Large-Scale API Calls
https://proceedings.mlr.press/v235/du24h.html
Yu Du, Fangyun Wei, Hongyang Zhang
https://proceedings.mlr.press/v235/du24h.html
ICML 2024
We introduce AnyTool, a large language model agent designed to revolutionize the utilization of a vast array of tools in addressing user queries. We utilize over 16,000 APIs from Rapid API, operating under the assumption that a subset of these APIs could potentially resolve the queries. AnyTool primarily incorporates three elements: an API retriever with a hierarchical structure, a solver aimed at resolving user queries using a selected set of API candidates, and a self-reflection mechanism, which re-activates AnyTool if the initial solution proves impracticable. AnyTool is powered by the function calling feature of GPT-4, eliminating the need for training external modules. We also revisit the evaluation protocol introduced by previous works and identify a limitation in this protocol that leads to an artificially high pass rate. By revising the evaluation protocol to better reflect practical application scenarios, we introduce an additional benchmark, termed AnyToolBench. Experiments across various datasets demonstrate the superiority of our AnyTool over strong baselines such as ToolLLM and a GPT-4 variant tailored for tool utilization. For instance, AnyTool outperforms ToolLLM by +35.5% in terms of average pass rate on ToolBench.
https://proceedings.mlr.press/v235/du24i.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24i/du24i.pdf
https://openreview.net/forum?id=hLGxDYo0eF
Exploration-Driven Policy Optimization in RLHF: Theoretical Insights on Efficient Data Utilization
https://proceedings.mlr.press/v235/du24i.html
Yihan Du, Anna Winnicki, Gal Dalal, Shie Mannor, R. Srikant
https://proceedings.mlr.press/v235/du24i.html
ICML 2024
Reinforcement Learning from Human Feedback (RLHF) has achieved impressive empirical successes while relying on a small amount of human feedback. However, there is limited theoretical justification for this phenomenon. Additionally, most recent studies focus on value-based algorithms despite the recent empirical successes of policy-based algorithms. In this work, we consider an RLHF algorithm based on policy optimization (PO-RLHF). The algorithm is based on the popular Policy Cover-Policy Gradient (PC-PG) algorithm, which assumes knowledge of the reward function. In PO-RLHF, knowledge of the reward function is not assumed and the algorithm relies on trajectory-based comparison feedback to infer the reward function. We provide performance bounds for PO-RLHF with low query complexity, which provides insight into why a small amount of human feedback may be sufficient to get good performance with RLHF. A key novelty is our trajectory-level elliptical potential analysis technique used to infer reward function parameters when comparison queries rather than reward observations are used. We provide and analyze algorithms in two settings: linear and neural function approximation, PG-RLHF and NN-PG-RLHF, respectively.
https://proceedings.mlr.press/v235/du24j.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/du24j/du24j.pdf
https://openreview.net/forum?id=MFPYCvWsNR
Bottleneck-Minimal Indexing for Generative Document Retrieval
https://proceedings.mlr.press/v235/du24j.html
Xin Du, Lixin Xiu, Kumiko Tanaka-Ishii
https://proceedings.mlr.press/v235/du24j.html
ICML 2024
We apply an information-theoretic perspective to reconsider generative document retrieval (GDR), in which a document $x \in \mathcal{X}$ is indexed by $t \in \mathcal{T}$, and a neural autoregressive model is trained to map queries $\mathcal{Q}$ to $\mathcal{T}$. GDR can be considered to involve information transmission from documents $\mathcal{X}$ to queries $\mathcal{Q}$, with the requirement to transmit more bits via the indexes $\mathcal{T}$. By applying Shannon’s rate-distortion theory, the optimality of indexing can be analyzed in terms of the mutual information, and the design of the indexes $\mathcal{T}$ can then be regarded as a bottleneck in GDR. After reformulating GDR from this perspective, we empirically quantify the bottleneck underlying GDR. Finally, using the NQ320K and MARCO datasets, we evaluate our proposed bottleneck-minimal indexing method in comparison with various previous indexing methods, and we show that it outperforms those methods.
https://proceedings.mlr.press/v235/duan24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/duan24a/duan24a.pdf
https://openreview.net/forum?id=R0SoZvqXyQ
MuxServe: Flexible Spatial-Temporal Multiplexing for Multiple LLM Serving
https://proceedings.mlr.press/v235/duan24a.html
Jiangfei Duan, Runyu Lu, Haojie Duanmu, Xiuhong Li, Xingcheng Zhang, Dahua Lin, Ion Stoica, Hao Zhang
https://proceedings.mlr.press/v235/duan24a.html
ICML 2024
Large language models (LLMs) have demonstrated remarkable performance, and organizations are racing to serve LLMs of varying sizes as endpoints for use-cases like chat, programming and search. However, efficiently serving multiple LLMs poses significant challenges for existing approaches due to the varying popularity of LLMs. In this paper, we present MuxServe, a flexible spatial-temporal multiplexing system for efficient multiple LLM serving. The key insight behind it is to colocate LLMs considering their popularity to multiplex memory resources, and to leverage the characteristics of prefill and decoding phases to separate and flexibly colocate them to multiplex computation resources. MuxServe formally formulates the multiplexing problem, and proposes a novel placement algorithm and adaptive batch scheduling strategy to identify optimal colocations and maximize utilization. MuxServe designs a unified resource manager to enable flexible and efficient multiplexing. Evaluation results show that MuxServe can achieve up to $1.8\times$ higher throughput or process $2.9\times$ more requests within $99\%$ SLO attainment. The code is available at: https://github.com/hao-ai-lab/MuxServe.
https://proceedings.mlr.press/v235/duan24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/duan24b/duan24b.pdf
https://openreview.net/forum?id=ecO7WOIlMD
MF-CLR: Multi-Frequency Contrastive Learning Representation for Time Series
https://proceedings.mlr.press/v235/duan24b.html
Jufang Duan, Wei Zheng, Yangzhou Du, Wenfa Wu, Haipeng Jiang, Hongsheng Qi
https://proceedings.mlr.press/v235/duan24b.html
ICML 2024
Learning a decent representation from unlabeled time series is a challenging task, especially when the time series data is derived from diverse channels at different sampling rates. Our motivation stems from the financial domain, where sparsely labeled covariates are commonly collected at different frequencies, e.g., daily stock market index, monthly unemployment rate and quarterly net revenue of a certain listed corporation. This paper presents Multi-Frequency Contrastive Learning Representation (MF-CLR), aimed at learning a good representation of multi-frequency time series in a self-supervised paradigm by leveraging the ability of contrastive learning. MF-CLR introduces a hierarchical mechanism that spans across different frequencies along the feature dimension. Within each contrastive block, two groups of subseries with adjacent frequencies are embedded based on our proposed cross-frequency consistency. To validate the effectiveness of MF-CLR, we conduct extensive experiments on five downstream tasks, including long-term and short-term forecasting, classification, anomaly detection and imputation. Experimental evidence shows that MF-CLR delivers a leading performance in all the downstream tasks and keeps consistent performance across different target dataset scales in the transfer learning scenario.
https://proceedings.mlr.press/v235/duarte24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/duarte24a/duarte24a.pdf
https://openreview.net/forum?id=LO4xhXmFal
DE-COP: Detecting Copyrighted Content in Language Models Training Data
https://proceedings.mlr.press/v235/duarte24a.html
André Vicente Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li
https://proceedings.mlr.press/v235/duarte24a.html
ICML 2024
How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content is included in training. DE-COP’s core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model’s training cutoff, along with their paraphrases. Our experiments show that DE-COP outperforms the prior best method by 8.6% in detection accuracy (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give approximately 0% accuracy. The code and datasets are available at https://github.com/LeiLiLab/DE-COP.
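A minimal sketch of the multiple-choice probing idea, assuming a hypothetical black-box `ask_model` callable and a `paraphrase_fn`; this is not the released DE-COP implementation.

```python
# Minimal sketch of multiple-choice probing for membership detection (illustrative).
# `ask_model` is a hypothetical callable returning the option letter chosen by the LLM.
import random

def build_question(verbatim, paraphrases, rng):
    options = [verbatim] + list(paraphrases)
    rng.shuffle(options)
    letters = "ABCDEFGH"[: len(options)]
    prompt = "Which option is an exact excerpt from the book?\n" + "\n".join(
        f"{l}. {o}" for l, o in zip(letters, options)
    )
    answer = letters[options.index(verbatim)]
    return prompt, answer

def detection_score(excerpts, paraphrase_fn, ask_model, seed=0):
    """Fraction of questions where the model picks the verbatim text.
    A score well above chance is treated as evidence of memorization."""
    rng = random.Random(seed)
    correct = 0
    for text in excerpts:
        prompt, answer = build_question(text, paraphrase_fn(text), rng)
        if ask_model(prompt) == answer:
            correct += 1
    return correct / len(excerpts)
```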
https://proceedings.mlr.press/v235/dubrovsky24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dubrovsky24a/dubrovsky24a.pdf
https://openreview.net/forum?id=5nuW5iBAJS
Unveiling the Potential of AI for Nanomaterial Morphology Prediction
https://proceedings.mlr.press/v235/dubrovsky24a.html
Ivan Dubrovsky, Andrei Dmitrenko, Aleksei Dmitrenko, Nikita Serov, Vladimir Vinogradov
https://proceedings.mlr.press/v235/dubrovsky24a.html
ICML 2024
Creation of nanomaterials with specific morphology remains a complex experimental process, even though there is a growing demand for these materials in various industry sectors. This study explores the potential of AI to predict the morphology of nanoparticles within the data availability constraints. For that, we first generated a new multi-modal dataset that is double the size of analogous studies. Then, we systematically evaluated the performance of classical machine learning models and large language models in predicting nanomaterial shapes and sizes. Finally, we prototyped a text-to-image system, discussed the obtained empirical results, as well as the limitations and promises of existing approaches.
https://proceedings.mlr.press/v235/duetting24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/duetting24a/duetting24a.pdf
https://openreview.net/forum?id=AlJkqMnyjL
Consistent Submodular Maximization
https://proceedings.mlr.press/v235/duetting24a.html
Paul Duetting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, Morteza Zadimoghaddam
https://proceedings.mlr.press/v235/duetting24a.html
ICML 2024
Maximizing monotone submodular functions under cardinality constraints is a classic optimization task with several applications in data mining and machine learning. In this paper, we study this problem in a dynamic environment with consistency constraints: elements arrive in a streaming fashion, and the goal is maintaining a constant approximation to the optimal solution while having a stable solution (i.e., the number of changes between two consecutive solutions is bounded). In this setting, we provide algorithms with different trade-offs between consistency and approximation quality. We also complement our theoretical results with an experimental analysis showing the effectiveness of our algorithms in real-world instances.
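For context, the classic offline greedy algorithm for cardinality-constrained monotone submodular maximization (which attains a 1 - 1/e approximation) is sketched below; the paper's consistency-aware streaming algorithms are not reproduced here.

```python
# Classic greedy for monotone submodular maximization under a cardinality constraint
# (the offline baseline; not the paper's consistency-constrained streaming algorithms).

def greedy_max(ground_set, f, k):
    """f: set -> float, monotone submodular; returns a set of size <= k."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in ground_set:
            if e in S:
                continue
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no element with positive marginal gain remains
            break
        S.add(best)
    return S

# Example: a coverage function over sets of items.
coverage_sets = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda S: len(set().union(*(coverage_sets[e] for e in S))) if S else 0
print(greedy_max(coverage_sets.keys(), f, k=2))  # picks {'a', 'b'}, covering {1, 2, 3}
```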
https://proceedings.mlr.press/v235/dunefsky24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dunefsky24a/dunefsky24a.pdf
https://openreview.net/forum?id=ETNx4SekbY
Observable Propagation: Uncovering Feature Vectors in Transformers
https://proceedings.mlr.press/v235/dunefsky24a.html
Jacob Dunefsky, Arman Cohan
https://proceedings.mlr.press/v235/dunefsky24a.html
ICML 2024
A key goal of current mechanistic interpretability research in NLP is to find linear features (also called "feature vectors") for transformers: directions in activation space corresponding to concepts that are used by a given model in its computation. Present state-of-the-art methods for finding linear features require large amounts of labelled data – both laborious to acquire and computationally expensive to utilize. In this work, we introduce a novel method, called "observable propagation" (in short: ObProp), for finding linear features used by transformer language models in computing a given task – using almost no data. Our paradigm centers on the concept of "observables", linear functionals corresponding to given tasks. We then introduce a mathematical theory for the analysis of feature vectors, including a similarity metric between feature vectors called the coupling coefficient which estimates the degree to which one feature’s output correlates with another’s. We use ObProp to perform extensive qualitative investigations into several tasks, including gendered occupational bias, political party prediction, and programming language detection. Our results suggest that ObProp surpasses traditional approaches for finding feature vectors in the low-data regime, and that ObProp can be used to better understand the mechanisms responsible for bias in large language models.
https://proceedings.mlr.press/v235/dung24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dung24a/dung24a.pdf
https://openreview.net/forum?id=8mKXMnhnFW
Sharpness-Aware Data Generation for Zero-shot Quantization
https://proceedings.mlr.press/v235/dung24a.html
Hoang Anh Dung, Cuong Pham, Trung Le, Jianfei Cai, Thanh-Toan Do
https://proceedings.mlr.press/v235/dung24a.html
ICML 2024
Zero-shot quantization aims to learn a quantized model from a pre-trained full-precision model with no access to the original real training data. The common idea in zero-shot quantization approaches is to generate synthetic data for quantizing the full-precision model. While it is well-known that deep neural networks with low sharpness have better generalization ability, none of the previous zero-shot quantization works considers the sharpness of the quantized model as a criterion for generating training data. This paper introduces a novel methodology that takes into account quantized model sharpness in synthetic data generation to enhance generalization. Specifically, we first demonstrate that sharpness minimization can be attained by maximizing gradient matching between the reconstruction loss gradients computed on synthetic and real validation data, under certain assumptions. We then circumvent the need for a real validation set by approximating this gradient matching with the gradient matching between each generated sample and its neighbors. Experimental evaluations on CIFAR-100 and ImageNet datasets demonstrate the superiority of the proposed method over the state-of-the-art techniques in low-bit quantization settings.
https://proceedings.mlr.press/v235/dupre-la-tour24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dupre-la-tour24a/dupre-la-tour24a.pdf
https://openreview.net/forum?id=3ajK5xplDL
Making Old Things New: A Unified Algorithm for Differentially Private Clustering
https://proceedings.mlr.press/v235/dupre-la-tour24a.html
Max Dupre La Tour, Monika Henzinger, David Saulpic
https://proceedings.mlr.press/v235/dupre-la-tour24a.html
ICML 2024
As a staple of data analysis and unsupervised learning, the problem of private clustering has been widely studied under various privacy models. Centralized differential privacy is the first of them, and the problem has also been studied for the local and the shuffle variations. In each case, the goal is to design an algorithm that privately computes a clustering with the smallest possible error. The study of each variation gave rise to new algorithms: the landscape of private clustering algorithms is therefore quite intricate. In this paper, we show that a 20-year-old algorithm can be slightly modified to work for any of these models. This provides a unified picture: while matching almost all previously known results, it allows us to improve some of them and extend to a new privacy model, the continual observation setting, where the input is changing over time and the algorithm must output a new solution at each time step.
https://proceedings.mlr.press/v235/dupuis24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dupuis24a/dupuis24a.pdf
https://openreview.net/forum?id=eFSppFiVYG
Generalization Bounds for Heavy-Tailed SDEs through the Fractional Fokker-Planck Equation
https://proceedings.mlr.press/v235/dupuis24a.html
Benjamin Dupuis, Umut Simsekli
https://proceedings.mlr.press/v235/dupuis24a.html
ICML 2024
Understanding the generalization properties of heavy-tailed stochastic optimization algorithms has attracted increasing attention over the past years. While illuminating interesting aspects of stochastic optimizers by using heavy-tailed stochastic differential equations as proxies, prior works either provided expected generalization bounds, or introduced non-computable information theoretic terms. Addressing these drawbacks, in this work, we prove high-probability generalization bounds for heavy-tailed SDEs which do not contain any nontrivial information theoretic terms. To achieve this goal, we develop new proof techniques based on estimating the entropy flows associated with the so-called fractional Fokker-Planck equation (a partial differential equation that governs the evolution of the distribution of the corresponding heavy-tailed SDE). In addition to obtaining high-probability bounds, we show that our bounds have a better dependence on the dimension of parameters as compared to prior art. Our results further identify a phase transition phenomenon, which suggests that heavy tails can be either beneficial or harmful depending on the problem structure. We support our theory with experiments conducted in a variety of settings.
https://proceedings.mlr.press/v235/duran-martin24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/duran-martin24a/duran-martin24a.pdf
https://openreview.net/forum?id=D2MNVeVh5J
Outlier-robust Kalman Filtering through Generalised Bayes
https://proceedings.mlr.press/v235/duran-martin24a.html
Gerardo Duran-Martin, Matias Altamirano, Alex Shestopaloff, Leandro Sánchez-Betancourt, Jeremias Knoblauch, Matt Jones, Francois-Xavier Briol, Kevin Patrick Murphy
https://proceedings.mlr.press/v235/duran-martin24a.html
ICML 2024
We derive a novel, provably robust, efficient, and closed-form Bayesian update rule for online filtering in state-space models in the presence of outliers and misspecified measurement models. Our method combines generalised Bayesian inference with filtering methods such as the extended and ensemble Kalman filter. We use the former to show robustness and the latter to ensure computational efficiency in the case of nonlinear models. Our method matches or outperforms other robust filtering methods (such as those based on variational Bayes) at a much lower computational cost. We show this empirically on a range of filtering problems with outlier measurements, such as object tracking, state estimation in high-dimensional chaotic systems, and online learning of neural networks.
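An illustrative robustified Kalman measurement update, in which large residuals shrink the gain; the Huber-style weight below is an assumption and differs from the paper's generalised-Bayes rule.

```python
# Illustrative outlier-downweighting Kalman measurement update (numpy).
# Not the paper's exact generalised-Bayes rule: the residual-based weight w is an assumption.
import numpy as np

def robust_kalman_update(m, P, y, H, R, c=3.0):
    """m, P: predicted mean/covariance; y: observation; H, R: observation model.
    Residuals beyond roughly c standard deviations get a weight < 1, shrinking the update."""
    S = H @ P @ H.T + R                        # innovation covariance
    r = y - H @ m                              # innovation (residual)
    maha = float(r.T @ np.linalg.solve(S, r))  # squared Mahalanobis distance
    w = min(1.0, c**2 / max(maha, 1e-12))      # Huber-style downweighting (assumed form)
    K = P @ H.T @ np.linalg.inv(S / w)         # weighted gain: small w gives a small update
    m_new = m + K @ r
    P_new = (np.eye(len(m)) - K @ H) @ P
    return m_new, P_new
```

With w = 1 this reduces to the standard Kalman update, so the robustification only activates on outlying measurements.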
https://proceedings.mlr.press/v235/durasov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/durasov24a/durasov24a.pdf
https://openreview.net/forum?id=N6A6t6xlKm
Enabling Uncertainty Estimation in Iterative Neural Networks
https://proceedings.mlr.press/v235/durasov24a.html
Nikita Durasov, Doruk Oner, Jonathan Donier, Hieu Le, Pascal Fua
https://proceedings.mlr.press/v235/durasov24a.html
ICML 2024
Turning pass-through network architectures into iterative ones, which use their own output as input, is a well-known approach for boosting performance. In this paper, we argue that such architectures offer an additional benefit: The convergence rate of their successive outputs is highly correlated with the accuracy of the value to which they converge. Thus, we can use the convergence rate as a useful proxy for uncertainty. This results in an approach to uncertainty estimation that provides state-of-the-art estimates at a much lower computational cost than techniques like Ensembles, and without requiring any modifications to the original iterative model. We demonstrate its practical value by embedding it in two application domains: road detection in aerial images and the estimation of aerodynamic properties of 2D and 3D shapes.
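A minimal sketch of the proxy described here: run the iterative model for a fixed budget and read uncertainty off how quickly successive outputs stop changing. The `model(x, y_prev)` interface and the specific decay ratio are assumptions.

```python
# Sketch: uncertainty from the convergence of an iterative (self-refining) model.
# `model(x, y_prev)` is any callable that refines its previous prediction (assumed interface).
import numpy as np

def predict_with_uncertainty(model, x, y0, n_iters=10):
    ys = [y0]
    for _ in range(n_iters):
        ys.append(model(x, ys[-1]))
    # Proxy: how quickly successive outputs stop changing.
    deltas = [np.linalg.norm(ys[i + 1] - ys[i]) for i in range(len(ys) - 1)]
    uncertainty = deltas[-1] / (deltas[0] + 1e-12)  # slow decay => high uncertainty (assumed ratio)
    return ys[-1], uncertainty
```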
https://proceedings.mlr.press/v235/dvurechensky24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dvurechensky24a/dvurechensky24a.pdf
https://openreview.net/forum?id=tRESfzWFtf
Barrier Algorithms for Constrained Non-Convex Optimization
https://proceedings.mlr.press/v235/dvurechensky24a.html
Pavel Dvurechensky, Mathias Staudigl
https://proceedings.mlr.press/v235/dvurechensky24a.html
ICML 2024
In this paper we theoretically show that interior-point methods based on self-concordant barriers possess favorable global complexity beyond their standard application area of convex optimization. To do that we propose first- and second-order methods for non-convex optimization problems with general convex set constraints and linear constraints. Our methods attain a suitably defined class of approximate first- or second-order KKT points with the worst-case iteration complexity similar to unconstrained problems, namely $O(\varepsilon^{-2})$ (first-order) and $O(\varepsilon^{-3/2})$ (second-order), respectively.
https://proceedings.mlr.press/v235/dwaracherla24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dwaracherla24a/dwaracherla24a.pdf
https://openreview.net/forum?id=PpPZ6W7rxy
Efficient Exploration for LLMs
https://proceedings.mlr.press/v235/dwaracherla24a.html
Vikranth Dwaracherla, Seyed Mohammad Asghari, Botao Hao, Benjamin Van Roy
https://proceedings.mlr.press/v235/dwaracherla24a.html
ICML 2024
We present evidence of substantial benefit from efficient exploration in gathering human feedback to improve large language models. In our experiments, an agent sequentially generates queries while fitting a reward model to the feedback received. Our best-performing agent generates queries using double Thompson sampling, with uncertainty represented by an epistemic neural network. Our results demonstrate that efficient exploration enables high levels of performance with far fewer queries. Further, both uncertainty estimation and the choice of exploration scheme play critical roles.
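A toy sketch of double Thompson sampling for query selection, with a plain ensemble of reward models standing in for the epistemic neural network used in the paper.

```python
# Toy double Thompson sampling for picking a feedback query (illustrative; the paper
# represents uncertainty with an epistemic neural network, approximated here by an ensemble).
import random

def double_thompson_query(prompt, candidates, reward_ensemble, rng=random):
    """candidates: list of at least two distinct responses for `prompt`.
    reward_ensemble: list of functions (prompt, response) -> score.
    Returns a pair of responses to show to the human labeller."""
    m1, m2 = rng.sample(reward_ensemble, 2)  # two independent draws from the reward posterior
    first = max(candidates, key=lambda c: m1(prompt, c))
    second = max((c for c in candidates if c != first),
                 key=lambda c: m2(prompt, c))
    return first, second
```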
https://proceedings.mlr.press/v235/dym24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/dym24a/dym24a.pdf
https://openreview.net/forum?id=4iy0q0carb
Equivariant Frames and the Impossibility of Continuous Canonicalization
https://proceedings.mlr.press/v235/dym24a.html
Nadav Dym, Hannah Lawrence, Jonathan W. Siegel
https://proceedings.mlr.press/v235/dym24a.html
ICML 2024
Canonicalization provides an architecture-agnostic method for enforcing equivariance, with generalizations such as frame-averaging recently gaining prominence as a lightweight and flexible alternative to equivariant architectures. Recent works have found an empirical benefit to using probabilistic frames instead, which learn weighted distributions over group elements. In this work, we provide strong theoretical justification for this phenomenon: for commonly-used groups, there is no efficiently computable choice of frame that preserves continuity of the function being averaged. In other words, unweighted frame-averaging can turn a smooth, non-symmetric function into a discontinuous, symmetric function. To address this fundamental robustness problem, we formally define and construct weighted frames, which provably preserve continuity, and demonstrate their utility by constructing efficient and continuous weighted frames for the actions of $SO(d)$, $O(d)$, and $S_n$ on point clouds.
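A sketch of the weighted frame-averaging operator in standard frame-averaging notation; the normalisation of the weights shown here is an assumed convention.

```latex
% Weighted frame averaging (sketch). F(x) \subseteq G is a frame at input x,
% \rho_1, \rho_2 are the input and output representations, and w(g, x) \ge 0 are
% continuous weights with \sum_{g \in F(x)} w(g, x) = 1 (assumed normalisation).
\langle f \rangle_{F,w}(x) \;=\; \sum_{g \in F(x)} w(g, x)\, \rho_2(g)\, f\!\left(\rho_1(g)^{-1} x\right)
```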
https://proceedings.mlr.press/v235/eckman24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/eckman24a/eckman24a.pdf
https://openreview.net/forum?id=c3ls5AVOw7
Position: Insights from Survey Methodology can Improve Training Data
https://proceedings.mlr.press/v235/eckman24a.html
Stephanie Eckman, Barbara Plank, Frauke Kreuter
https://proceedings.mlr.press/v235/eckman24a.html
ICML 2024
Whether future AI models are fair, trustworthy, and aligned with the public’s interests rests in part on our ability to collect accurate data about what we want the models to do. However, collecting high-quality data is difficult, and few AI/ML researchers are trained in data collection methods. Recent research in data-centric AI has shown that higher quality training data leads to better performing models, making this the right moment to introduce AI/ML researchers to the field of survey methodology, the science of data collection. We summarize insights from the survey methodology literature and discuss how they can improve the quality of training and feedback data. We also suggest collaborative research ideas into how biases in data collection can be mitigated, making models more accurate and human-centric.
https://proceedings.mlr.press/v235/egiazarian24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/egiazarian24a/egiazarian24a.pdf
https://openreview.net/forum?id=5mCaITRTmO
Extreme Compression of Large Language Models via Additive Quantization
https://proceedings.mlr.press/v235/egiazarian24a.html
Vage Egiazarian, Andrei Panferov, Denis Kuznedelev, Elias Frantar, Artem Babenko, Dan Alistarh
https://proceedings.mlr.press/v235/egiazarian24a.html
ICML 2024
The emergence of accurate open large language models (LLMs) has led to a race towards performant quantization techniques that can enable their execution on end-user devices. In this paper, we revisit the problem of “extreme” LLM compression, defined as targeting extremely low bit counts such as 2 to 3 bits per parameter, from the point of view of classic methods in Multi-Codebook Quantization (MCQ). Our algorithm, called AQLM, generalizes the classic Additive Quantization (AQ) approach for information retrieval to advance the state-of-the-art in LLM compression, via two innovations: 1) learned additive quantization of weight matrices in an input-adaptive fashion, and 2) joint optimization of codebook parameters across transformer blocks. Broadly, AQLM is the first scheme that is Pareto optimal in terms of accuracy vs. model size when compressing to less than 3 bits per parameter, and it significantly improves upon all known schemes in the extreme compression (2-bit) regime. In addition, AQLM is practical: we provide fast GPU and CPU implementations of AQLM for token generation, which enable us to match or outperform optimized FP16 implementations for speed, while executing in a much smaller memory footprint.
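A minimal sketch of the additive (multi-codebook) decode step that such a scheme implies; shapes and bit widths below are illustrative and do not correspond to the released AQLM kernels.

```python
# Sketch of additive multi-codebook decoding: each weight group is reconstructed as the
# sum of one vector from each of M codebooks (illustrative shapes; not the AQLM kernels).
import numpy as np

def decode_additive(codes, codebooks):
    """codes:     (num_groups, M) integer indices, one per codebook
    codebooks: (M, K, group_dim) learned codebook vectors
    returns:   (num_groups, group_dim) reconstructed weight groups"""
    M = codebooks.shape[0]
    return sum(codebooks[m, codes[:, m]] for m in range(M))

rng = np.random.default_rng(0)
codebooks = rng.standard_normal((2, 256, 8))  # M=2 codebooks of 256 entries over 8-dim groups
codes = rng.integers(0, 256, size=(1024, 2))  # two 8-bit codes per group of 8 weights
W_hat = decode_additive(codes, codebooks)
print(W_hat.shape)                            # (1024, 8)
```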
https://proceedings.mlr.press/v235/egorov24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/egorov24a/egorov24a.pdf
https://openreview.net/forum?id=xFCA2yWVs4
Ai-sampler: Adversarial Learning of Markov kernels with involutive maps
https://proceedings.mlr.press/v235/egorov24a.html
Evgenii Egorov, Riccardo Valperga, Stratis Gavves
https://proceedings.mlr.press/v235/egorov24a.html
ICML 2024
Markov chain Monte Carlo methods have become popular in statistics as versatile techniques to sample from complicated probability distributions. In this work, we propose a method to parameterize and train transition kernels of Markov chains to achieve efficient sampling and good mixing. This training procedure minimizes the total variation distance between the stationary distribution of the chain and the empirical distribution of the data. Our approach leverages involutive Metropolis-Hastings kernels constructed from reversible neural networks that ensure detailed balance by construction. We find that reversibility also implies $C_2$-equivariance of the discriminator function which can be used to restrict its function space.
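An illustrative involutive Metropolis-Hastings step, assuming the map is a volume-preserving involution so that no Jacobian correction appears; the learned neural parameterisation of the kernel is not shown.

```python
# Illustrative involutive Metropolis-Hastings step (not the learned Ai-sampler kernel).
# Assumes f is a volume-preserving involution (f(f(z)) = z), so no Jacobian term is needed;
# in practice f acts on (state, auxiliary noise) so that the overall chain is ergodic.
import numpy as np

def involutive_mh_step(z, log_prob, f, rng):
    z_prop = f(z)
    log_alpha = log_prob(z_prop) - log_prob(z)  # acceptance ratio for a self-inverse proposal
    accept = np.log(rng.uniform()) < log_alpha
    return (z_prop, True) if accept else (z, False)
```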
https://proceedings.mlr.press/v235/eiras24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/eiras24a/eiras24a.pdf
https://openreview.net/forum?id=5t4V7Q6lmz
Efficient Error Certification for Physics-Informed Neural Networks
https://proceedings.mlr.press/v235/eiras24a.html
Francisco Eiras, Adel Bibi, Rudy R Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar
https://proceedings.mlr.press/v235/eiras24a.html
ICML 2024
Recent work provides promising evidence that Physics-Informed Neural Networks (PINNs) can efficiently solve partial differential equations (PDEs). However, previous works have failed to provide guarantees on the worst-case residual error of a PINN across the spatio-temporal domain, a measure akin to the tolerance of numerical solvers, focusing instead on point-wise comparisons between their solution and the ones obtained by a solver on a set of inputs. In real-world applications, one cannot consider tests on a finite set of points to be sufficient grounds for deployment, as the performance could be substantially worse on a different set. To alleviate this issue, we establish guaranteed error-based conditions for PINNs over their continuous applicability domain. To verify the extent to which they hold, we introduce $\partial$-CROWN: a general, efficient and scalable post-training framework to bound PINN residual errors. We demonstrate its effectiveness in obtaining tight certificates by applying it to two classically studied PINNs (Burgers’ and Schrödinger’s equations) and two more challenging ones with real-world applications (the Allen-Cahn and Diffusion-Sorption equations).
https://proceedings.mlr.press/v235/eiras24b.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/eiras24b/eiras24b.pdf
https://openreview.net/forum?id=8q4EPdjTLE
Position: Near to Mid-term Risks and Opportunities of Open-Source Generative AI
https://proceedings.mlr.press/v235/eiras24b.html
Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder De Witt, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Botos Csaba, Fabro Steibel, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Marvin Imperial, Juan A. Nolazco-Flores, Lori Landay, Matthew Thomas Jackson, Paul Rottger, Philip Torr, Trevor Darrell, Yong Suk Lee, Jakob Nicolaus Foerster
https://proceedings.mlr.press/v235/eiras24b.html
ICML 2024
In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. While regulation is important, it is key that it does not put at risk the budding field of open-source Generative AI. We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current large language models. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.
https://proceedings.mlr.press/v235/el-nouby24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/el-nouby24a/el-nouby24a.pdf
https://openreview.net/forum?id=c92KDfEZTg
Scalable Pre-training of Large Autoregressive Image Models
https://proceedings.mlr.press/v235/el-nouby24a.html
Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Ángel Bautista, Vaishaal Shankar, Alexander T Toshev, Joshua M. Susskind, Armand Joulin
https://proceedings.mlr.press/v235/el-nouby24a.html
ICML 2024
This paper introduces AIM, a collection of vision models pre-trained with an autoregressive objective. These models are inspired by their textual counterparts, i.e., Large Language Models (LLMs), and exhibit similar scaling properties. Specifically, we highlight two key findings: (1) the performance of the visual features scales with both the model capacity and the quantity of data, and (2) the value of the objective function correlates with the performance of the model on downstream tasks. We illustrate the practical implication of these findings by pre-training a 7 billion parameter AIM on 2 billion images, which achieves 84.0% on ImageNet-1k with a frozen trunk. Interestingly, even at this scale, we observe no sign of saturation in performance, suggesting that AIM potentially represents a new frontier for training large-scale vision models. The pre-training of AIM is similar to the pre-training of LLMs, and does not require any image-specific strategy to stabilize the training at scale.
https://proceedings.mlr.press/v235/elahi24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/elahi24a/elahi24a.pdf
https://openreview.net/forum?id=nJzf3TVnOn
Adaptive Online Experimental Design for Causal Discovery
https://proceedings.mlr.press/v235/elahi24a.html
Muhammad Qasim Elahi, Lai Wei, Murat Kocaoglu, Mahsa Ghasemi
https://proceedings.mlr.press/v235/elahi24a.html
ICML 2024
Causal discovery aims to uncover cause-and-effect relationships encoded in causal graphs by leveraging observational data, interventional data, or their combination. The majority of existing causal discovery methods are developed assuming infinite interventional data. We focus on interventional data efficiency and formalize causal discovery from the perspective of online learning, inspired by pure exploration in bandit problems. A graph separating system, consisting of interventions that cut every edge of the graph at least once, is sufficient for learning causal graphs when infinite interventional data is available, even in the worst case. We propose a track-and-stop causal discovery algorithm that adaptively selects interventions from the graph separating system via allocation matching and learns the causal graph based on sampling history. Given any desired confidence value, the algorithm determines a termination condition and runs until it is met. We analyze the algorithm to establish a problem-dependent upper bound on the expected number of required interventional samples. Our proposed algorithm outperforms existing methods in simulations across various randomly generated causal graphs. It achieves higher accuracy, measured by the structural Hamming distance (SHD) between the learned causal graph and the ground truth, with significantly fewer samples.
https://proceedings.mlr.press/v235/eldele24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/eldele24a/eldele24a.pdf
https://openreview.net/forum?id=CGR3vpX63X
TSLANet: Rethinking Transformers for Time Series Representation Learning
https://proceedings.mlr.press/v235/eldele24a.html
Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Xiaoli Li
https://proceedings.mlr.press/v235/eldele24a.html
ICML 2024
Time series data, characterized by its intrinsic long and short-range dependencies, poses a unique challenge across analytical applications. While Transformer-based models excel at capturing long-range dependencies, they face limitations in noise sensitivity, computational efficiency, and overfitting with smaller datasets. In response, we introduce a novel Time Series Lightweight Adaptive Network (TSLANet), as a universal convolutional model for diverse time series tasks. Specifically, we propose an Adaptive Spectral Block, harnessing Fourier analysis to enhance feature representation and to capture both long-term and short-term interactions while mitigating noise via adaptive thresholding. Additionally, we introduce an Interactive Convolution Block and leverage self-supervised learning to refine the capacity of TSLANet for decoding complex temporal patterns and improve its robustness on different datasets. Our comprehensive experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection, showcasing its resilience and adaptability across a spectrum of noise levels and data sizes. The code is available at https://github.com/emadeldeen24/TSLANet.
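An illustrative numpy version of Fourier-domain adaptive thresholding; TSLANet learns its spectral filters and thresholds end-to-end, whereas the quantile rule below is an assumption.

```python
# Illustrative Fourier-domain denoising in the spirit of an adaptive spectral block
# (numpy sketch; TSLANet learns its filters and thresholds rather than using a fixed rule).
import numpy as np

def spectral_filter(x, keep_ratio=0.25):
    """x: (length,) time series. Keep only the strongest frequency components."""
    spec = np.fft.rfft(x)
    mag = np.abs(spec)
    thresh = np.quantile(mag, 1.0 - keep_ratio)  # data-dependent threshold (assumed rule)
    spec[mag < thresh] = 0.0
    return np.fft.irfft(spec, n=len(x))
```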
https://proceedings.mlr.press/v235/elhamod24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/elhamod24a/elhamod24a.pdf
https://openreview.net/forum?id=XiemSZpvh0
Neuro-Visualizer: A Novel Auto-Encoder-Based Loss Landscape Visualization Method With an Application in Knowledge-Guided Machine Learning
https://proceedings.mlr.press/v235/elhamod24a.html
Mohannad Elhamod, Anuj Karpatne
https://proceedings.mlr.press/v235/elhamod24a.html
ICML 2024
In recent years, there has been a growing interest in visualizing the loss landscape of neural networks. Linear landscape visualization methods, such as principal component analysis, have become widely used as they intuitively help researchers study neural networks and their training process. However, these linear methods suffer from limitations and drawbacks due to their lack of flexibility and low fidelity in representing the high-dimensional landscape. In this paper, we present a novel auto-encoder-based non-linear landscape visualization method called Neuro-Visualizer that addresses these shortcomings and provides useful insights about neural network loss landscapes. To demonstrate its potential, we run experiments on a variety of problems in two separate applications of knowledge-guided machine learning (KGML). Our findings show that Neuro-Visualizer outperforms other linear and non-linear baselines and helps corroborate, and sometimes challenge, claims proposed by the machine learning community. All code and data used in the experiments of this paper can be found at the link below.
https://proceedings.mlr.press/v235/elsayed24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/elsayed24a/elsayed24a.pdf
https://openreview.net/forum?id=yrFUJzcTsk
Revisiting Scalable Hessian Diagonal Approximations for Applications in Reinforcement Learning
https://proceedings.mlr.press/v235/elsayed24a.html
Mohamed Elsayed, Homayoon Farrahi, Felix Dangel, A. Rupam Mahmood
https://proceedings.mlr.press/v235/elsayed24a.html
ICML 2024
Second-order information is valuable for many applications but challenging to compute. Several works focus on computing or approximating Hessian diagonals, but even this simplification introduces significant additional costs compared to computing a gradient. In the absence of efficient exact computation schemes for Hessian diagonals, we revisit an early approximation scheme proposed by Becker and LeCun (1989, BL89), which has a cost similar to gradients and appears to have been overlooked by the community. We introduce HesScale, an improvement over BL89, which adds negligible extra computation. On small networks, we find that this improvement is of higher quality than all alternatives, even those with theoretical guarantees, such as unbiasedness, while being much cheaper to compute. We use this insight in reinforcement learning problems where small networks are used and demonstrate HesScale in second-order optimization and scaling the step-size parameter. In our experiments, HesScale optimizes faster than existing methods and improves stability through step-size scaling. These findings are promising for scaling second-order methods in larger models in the future.
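A sketch of how a Hessian-diagonal estimate can scale per-parameter step sizes; `hessian_diag_estimate` is a placeholder for a HesScale/BL89-style backward pass, which is not reproduced here.

```python
# Sketch: scaling per-parameter step sizes with a Hessian-diagonal estimate.
# `hessian_diag_estimate` stands in for a HesScale/BL89-style backward pass (not shown here).
import numpy as np

def diagonal_newton_step(params, grads, hessian_diag_estimate, lr=1.0, damping=1e-4):
    new_params = []
    for p, g, h in zip(params, grads, hessian_diag_estimate(params)):
        denom = np.abs(h) + damping  # |diag(H)| plus damping keeps the step a descent direction
        new_params.append(p - lr * g / denom)
    return new_params
```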
https://proceedings.mlr.press/v235/engels24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/engels24a/engels24a.pdf
https://openreview.net/forum?id=8t8zBaGFar
Approximate Nearest Neighbor Search with Window Filters
https://proceedings.mlr.press/v235/engels24a.html
Joshua Engels, Ben Landrum, Shangdi Yu, Laxman Dhulipala, Julian Shun
https://proceedings.mlr.press/v235/engels24a.html
ICML 2024
We define and investigate the problem of c-approximate window search: approximate nearest neighbor search where each point in the dataset has a numeric label, and the goal is to find nearest neighbors to queries within arbitrary label ranges. Many semantic search problems, such as image and document search with timestamp filters, or product search with cost filters, are natural examples of this problem. We propose and theoretically analyze a modular tree-based framework for transforming an index that solves the traditional c-approximate nearest neighbor problem into a data structure that solves window search. On standard nearest neighbor benchmark datasets equipped with random label values, adversarially constructed embeddings, and image search embeddings with real timestamps, we obtain up to a $75\times$ speedup over existing solutions at the same level of recall.
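A compact sketch of the range decomposition behind such a tree-based framework: a label window is covered by O(log n) aligned dyadic blocks, each of which would hold its own prebuilt ANN index, and a window query unions the results from those blocks. The helper below only computes the cover and assumes n is a power of two.

```python
# Decompose a half-open label range [lo, hi) over [0, n) into O(log n) aligned dyadic blocks.
# In the tree-based framework, each block corresponds to a node with a prebuilt ANN index.
def dyadic_cover(lo, hi, n):
    blocks = []
    while lo < hi:
        size = (lo & -lo) if lo else n  # largest power of two aligned at lo
        while size > hi - lo:           # shrink until the block fits inside the range
            size //= 2
        blocks.append((lo, lo + size))
        lo += size
    return blocks

print(dyadic_cover(3, 11, 16))  # [(3, 4), (4, 8), (8, 10), (10, 11)]
```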
https://proceedings.mlr.press/v235/engstrom24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/engstrom24a/engstrom24a.pdf
https://openreview.net/forum?id=GC8HkKeH8s
DsDm: Model-Aware Dataset Selection with Datamodels
https://proceedings.mlr.press/v235/engstrom24a.html
Logan Engstrom, Axel Feldmann, Aleksander Madry
https://proceedings.mlr.press/v235/engstrom24a.html
ICML 2024
When selecting data for training large-scale models, standard practice is to filter for examples that match human notions of data quality. Such filtering yields qualitatively clean datapoints that intuitively should improve model behavior. However, in practice the opposite can often happen: we find that selecting according to similarity with "high quality" data sources may not increase (and can even hurt) performance compared to randomly selecting data. To develop better methods for selecting data, we start by framing dataset selection as an optimization problem that we can directly solve for: given target tasks, a learning algorithm, and candidate data, select the subset that maximizes model performance. This framework thus avoids handpicked notions of data quality, and instead models explicitly how the learning process uses train datapoints to predict on the target tasks. Our resulting method greatly improves language model (LM) performance on both pre-specified tasks and previously unseen tasks. Specifically, choosing target tasks representative of standard LM problems and evaluating on diverse held-out benchmarks, our selected datasets provide a 2x compute multiplier over baseline methods.
https://proceedings.mlr.press/v235/entesari24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/entesari24a/entesari24a.pdf
https://openreview.net/forum?id=RtnGLJNtEG
Compositional Curvature Bounds for Deep Neural Networks
https://proceedings.mlr.press/v235/entesari24a.html
Taha Entesari, Sina Sharifi, Mahyar Fazlyab
https://proceedings.mlr.press/v235/entesari24a.html
ICML 2024
A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks. In this paper, we study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations. First, we provide a theoretical analysis of robustness and attack certificates for deep classifiers by leveraging local gradients and upper bounds on the second derivative (curvature constant). Next, we introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks. This algorithm leverages the compositional structure of the model to propagate the curvature bound layer-by-layer, giving rise to a scalable and modular approach. The proposed bound can serve as a differentiable regularizer to control the curvature of neural networks during training, thereby enhancing robustness. Finally, we demonstrate the efficacy of our method on classification tasks using the MNIST and CIFAR-10 datasets.
https://proceedings.mlr.press/v235/epstein24a.html
https://raw.githubusercontent.com/mlresearch/v235/main/assets/epstein24a/epstein24a.pdf
https://openreview.net/forum?id=Lgh8bhWpVC
Disentangled 3D Scene Generation with Layout Learning
https://proceedings.mlr.press/v235/epstein24a.html
Dave Epstein, Ben Poole, Ben Mildenhall, Alexei A Efros, Aleksander Holynski
https://proceedings.mlr.press/v235/epstein24a.html
ICML 2024
We introduce a method to generate 3D scenes that are disentangled into their component objects. This disentanglement is unsupervised, relying only on the knowledge of a large pretrained text-to-image model. Our key insight is that objects can be discovered by finding parts of a 3D scene that, when rearranged spatially, still produce valid configurations of the same scene. Concretely, our method jointly optimizes multiple NeRFs—each representing its own object—along with a set of layouts that composite these objects into scenes. We then encourage these composited scenes to be in-distribution according to the image generator. We show that despite its simplicity, our approach successfully generates 3D scenes decomposed into individual objects, enabling new capabilities in text-to-3D content creation.