abs | Download PDF | OpenReview | title | url | authors | detail_url | tags | abstract |
---|---|---|---|---|---|---|---|---|
https://proceedings.mlr.press/v235/balazevic24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/balazevic24a/balazevic24a.pdf | https://openreview.net/forum?id=qeFgvVVAJ2 | Memory Consolidation Enables Long-Context Video Understanding | https://proceedings.mlr.press/v235/balazevic24a.html | Ivana Balazevic, Yuge Shi, Pinelopi Papalampidi, Rahma Chaabouni, Skanda Koppula, Olivier J Henaff | https://proceedings.mlr.press/v235/balazevic24a.html | ICML 2024 | Most transformer-based video encoders are limited to short temporal contexts due to their quadratic complexity. While various attempts have been made to extend this context, this has often come at the cost of both conceptual and computational complexity. We propose to instead re-purpose existing pre-trained video transformers by simply fine-tuning them to attend to memories derived non-parametrically from past activations. By leveraging redundancy reduction, our memory-consolidated vision transformer (MC-ViT) effortlessly extends its context far into the past and exhibits excellent scaling behavior when learning from longer videos. In doing so, MC-ViT sets a new state-of-the-art in long-context video understanding on EgoSchema, Perception Test, and Diving48, outperforming methods that benefit from orders of magnitude more parameters. |
https://proceedings.mlr.press/v235/balestriero24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/balestriero24a/balestriero24a.pdf | https://openreview.net/forum?id=glfcwSsks8 | Characterizing Large Language Model Geometry Helps Solve Toxicity Detection and Generation | https://proceedings.mlr.press/v235/balestriero24a.html | Randall Balestriero, Romain Cosentino, Sarath Shekkizhar | https://proceedings.mlr.press/v235/balestriero24a.html | ICML 2024 | Large Language Models (LLMs) drive current AI breakthroughs despite very little being known about their internal representations. In this work, we propose to shed light on LLMs’ inner mechanisms through the lens of geometry. In particular, we develop in closed form $(i)$ the intrinsic dimension in which the Multi-Head Attention embeddings are constrained to exist and $(ii)$ the partition and per-region affine mappings of the feedforward (MLP) network of LLMs’ layers. Our theoretical findings further enable the design of novel principled solutions applicable to state-of-the-art LLMs. First, we show that, through our geometric understanding, we can bypass LLMs’ RLHF protection by controlling the embedding’s intrinsic dimension through informed prompt manipulation. Second, we derive interpretable geometrical features that can be extracted from any (pre-trained) LLM, providing a rich abstract representation of their inputs. We observe that these features are sufficient to help solve toxicity detection, and even allow the identification of various types of toxicity. Our results demonstrate how, even in large-scale regimes, exact theoretical results can answer practical questions in LLMs. Code: https://github.com/RandallBalestriero/SplineLLM |
https://proceedings.mlr.press/v235/balestriero24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/balestriero24b/balestriero24b.pdf | https://openreview.net/forum?id=XsDWw1Mn2p | How Learning by Reconstruction Produces Uninformative Features For Perception | https://proceedings.mlr.press/v235/balestriero24b.html | Randall Balestriero, Yann Lecun | https://proceedings.mlr.press/v235/balestriero24b.html | ICML 2024 | Input space reconstruction is an attractive representation learning paradigm. Despite the interpretability benefits of reconstruction and generation, we identify a misalignment between learning to reconstruct and learning for perception. We show that the former allocates a model’s capacity towards a subspace of the data explaining the observed variance, a subspace with uninformative features for the latter. For example, the supervised TinyImagenet task with images projected onto the top subspace explaining 90% of the pixel variance can be solved with 45% test accuracy. Using the bottom subspace instead, accounting for only 20% of the pixel variance, reaches 55% test accuracy. Learning by reconstruction is also wasteful as the features for perception are learned last, pushing the need for long training schedules. We finally prove that learning by denoising can alleviate that misalignment for some noise strategies, e.g., masking. While tuning the noise strategy without knowledge of the perception task seems challenging, we provide a solution to detect if a noise strategy is never beneficial regardless of the perception task, e.g., additive Gaussian noise. |
https://proceedings.mlr.press/v235/balmaseda24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/balmaseda24a/balmaseda24a.pdf | https://openreview.net/forum?id=FpbKoIPHxb | Combinatorial Approximations for Cluster Deletion: Simpler, Faster, and Better | https://proceedings.mlr.press/v235/balmaseda24a.html | Vicente Balmaseda, Ying Xu, Yixin Cao, Nate Veldt | https://proceedings.mlr.press/v235/balmaseda24a.html | ICML 2024 | Cluster deletion is an NP-hard graph clustering objective with applications in computational biology and social network analysis, where the goal is to delete a minimum number of edges to partition a graph into cliques. We first provide a tighter analysis of two previous approximation algorithms, improving their approximation guarantees from 4 to 3. Moreover, we show that both algorithms can be derandomized in a surprisingly simple way, by greedily taking a vertex of maximum degree in an auxiliary graph and forming a cluster around it. One of these algorithms relies on solving a linear program. Our final contribution is to design a new and purely combinatorial approach for doing so that is far more scalable in theory and practice. |
https://proceedings.mlr.press/v235/balseiro24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/balseiro24a/balseiro24a.pdf | https://openreview.net/forum?id=HTMFUKAm8B | A Field Guide for Pacing Budget and ROS Constraints | https://proceedings.mlr.press/v235/balseiro24a.html | Santiago R. Balseiro, Kshipra Bhawalkar, Zhe Feng, Haihao Lu, Vahab Mirrokni, Balasubramanian Sivan, Di Wang | https://proceedings.mlr.press/v235/balseiro24a.html | ICML 2024 | Budget pacing is a popular service that has been offered by major internet advertising platforms since their inception. In the past few years, autobidding products that provide real-time bidding as a service to advertisers have seen a prominent rise in adoption. A popular autobidding strategy is value maximization subject to return-on-spend (ROS) constraints. For historical or business reasons, the systems that govern these two services, namely budget pacing and ROS pacing, are not necessarily always a single unified and coordinated entity that optimizes a global objective subject to both constraints. The purpose of this work is to theoretically and empirically compare algorithms with different degrees of coordination between these two pacing systems. In particular, we compare (a) a fully-decoupled sequential algorithm; (b) a minimally-coupled min-pacing algorithm; (c) a fully-coupled dual-based algorithm. Our main contribution is to theoretically analyze the min-pacing algorithm and show that it attains similar guarantees to the fully-coupled canonical dual-based algorithm. On the other hand, we show that the sequential algorithm, even though appealing by virtue of being fully decoupled, could badly violate the constraints. We validate our theoretical findings empirically by showing that the min-pacing algorithm performs almost as well as the canonical dual-based algorithm on a semi-synthetic dataset that was generated from a large online advertising platform’s auction data. |
https://proceedings.mlr.press/v235/balsells-rodas24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/balsells-rodas24a/balsells-rodas24a.pdf | https://openreview.net/forum?id=Eew3yUQQtE | On the Identifiability of Switching Dynamical Systems | https://proceedings.mlr.press/v235/balsells-rodas24a.html | Carles Balsells-Rodas, Yixin Wang, Yingzhen Li | https://proceedings.mlr.press/v235/balsells-rodas24a.html | ICML 2024 | The identifiability of latent variable models has received increasing attention due to its relevance in interpretability and out-of-distribution generalisation. In this work, we study the identifiability of Switching Dynamical Systems, taking an initial step toward extending identifiability analysis to sequential latent variable models. We first prove the identifiability of Markov Switching Models, which commonly serve as the prior distribution for the continuous latent variables in Switching Dynamical Systems. We present identification conditions for first-order Markov dependency structures, whose transition distribution is parametrised via non-linear Gaussians. We then establish the identifiability of the latent variables and non-linear mappings in Switching Dynamical Systems up to affine transformations, by leveraging identifiability analysis techniques from identifiable deep latent variable models. We finally develop estimation algorithms for identifiable Switching Dynamical Systems. Throughout empirical studies, we demonstrate the practicality of identifiable Switching Dynamical Systems for segmenting high-dimensional time series such as videos, and showcase the use of identifiable Markov Switching Models for regime-dependent causal discovery in climate data. |
https://proceedings.mlr.press/v235/bamas24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bamas24a/bamas24a.pdf | https://openreview.net/forum?id=b9uHveqszc | Analyzing $D^\alpha$ seeding for $k$-means | https://proceedings.mlr.press/v235/bamas24a.html | Etienne Bamas, Sai Ganesh Nagarajan, Ola Svensson | https://proceedings.mlr.press/v235/bamas24a.html | ICML 2024 | One of the most popular clustering algorithms is the celebrated $D^\alpha$ seeding algorithm (also known as $k$-means++ when $\alpha=2$) by Arthur and Vassilvitskii (2007), who showed that it guarantees in expectation an $O(2^{2\alpha}\cdot \log k)$-approximate solution to the ($k$,$\alpha$)-clustering cost (where distances are raised to the power $\alpha$) for any $\alpha\ge 1$. More recently, Balcan, Dick, and White (2018) observed experimentally that using $D^\alpha$ seeding with $\alpha>2$ can lead to a better solution with respect to the standard $k$-means objective (i.e. the $(k,2)$-clustering cost). In this paper, we provide a rigorous understanding of this phenomenon. For any $\alpha>2$, we show that $D^\alpha$ seeding guarantees in expectation an approximation factor of \begin{equation*} O_\alpha \left(\left(\frac{\sigma_{\textrm{max}}}{\sigma_{\textrm{min}}}\right)^{2-4/\alpha}\cdot (g_\alpha \cdot \min \lbrace\ell,\log k\rbrace)^{2/\alpha}\right) \end{equation*} with respect to the standard $k$-means cost of any underlying clustering; where $g_\alpha$ is a parameter capturing the concentration of the points in each cluster, $\sigma_{\textrm{max}}$ and $\sigma_{\textrm{min}}$ are the maximum and minimum standard deviation of the clusters around their center, and $\ell$ is the number of distinct mixing weights in the underlying clustering (after rounding them to the nearest power of $2$). For instance, if the underlying clustering is defined by a mixture of $k$ Gaussian distributions with equal cluster variance (up to a constant factor), then our result implies that: (1) if there are a constant number of mixing weights, any constant $\alpha>2$ yields a constant-factor approximation; (2) if the mixing weights are arbitrary, any constant $\alpha>2$ yields an $O\left(\log^{2/\alpha}k\right)$-approximation, and $\alpha=\Theta(\log\log k)$ yields an $O(\log\log k)^3$-approximation. We complement these results by some lower bounds showing that the dependency on $g_\alpha$ and $\sigma_{\textrm{max}}/\sigma_{\textrm{min}}$ is tight. Finally, we provide an experimental validation of the effects of the aforementioned parameters when using $D^\alpha$ seeding. |
https://proceedings.mlr.press/v235/bampis24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bampis24a/bampis24a.pdf | https://openreview.net/forum?id=AD5QC1BTJL | Parsimonious Learning-Augmented Approximations for Dense Instances of $\mathcal{NP}$-hard Problems | https://proceedings.mlr.press/v235/bampis24a.html | Evripidis Bampis, Bruno Escoffier, Michalis Xefteris | https://proceedings.mlr.press/v235/bampis24a.html | ICML 2024 | The classical work of (Arora et al., 1999) provides a scheme that gives, for any $\epsilon>0$, a polynomial time $1-\epsilon$ approximation algorithm for dense instances of a family of $\mathcal{NP}$-hard problems, such as Max-CUT and Max-$k$-SAT. In this paper we extend and speed up this scheme using a logarithmic number of one-bit predictions. We propose a learning-augmented framework which aims at finding fast algorithms that guarantee approximation consistency, smoothness and robustness with respect to the prediction error. We provide such algorithms, which moreover use predictions parsimoniously, for dense instances of various optimization problems. |
https://proceedings.mlr.press/v235/ban24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/ban24a/ban24a.pdf | https://openreview.net/forum?id=KLmWRMg6nL | Fair Resource Allocation in Multi-Task Learning | https://proceedings.mlr.press/v235/ban24a.html | Hao Ban, Kaiyi Ji | https://proceedings.mlr.press/v235/ban24a.html | ICML 2024 | By jointly learning multiple tasks, multi-task learning (MTL) can leverage the shared knowledge across tasks, resulting in improved data efficiency and generalization performance. However, a major challenge in MTL lies in the presence of conflicting gradients, which can hinder the fair optimization of some tasks and subsequently impede MTL’s ability to achieve better overall performance. Inspired by fair resource allocation in communication networks, we formulate the optimization of MTL as a utility maximization problem, where the loss decreases across tasks are maximized under different fairness measurements. To address the problem, we propose FairGrad, a novel optimization objective. FairGrad not only enables flexible emphasis on certain tasks but also achieves a theoretical convergence guarantee. Extensive experiments demonstrate that our method can achieve state-of-the-art performance among gradient manipulation methods on a suite of multi-task benchmarks in supervised learning and reinforcement learning. Furthermore, we incorporate the idea of $\alpha$-fairness into the loss functions of various MTL methods. Extensive empirical studies demonstrate that their performance can be significantly enhanced. Code is available at https://github.com/OptMN-Lab/fairgrad. |
https://proceedings.mlr.press/v235/band24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/band24a/band24a.pdf | https://openreview.net/forum?id=rJVjQSQ8ye | Linguistic Calibration of Long-Form Generations | https://proceedings.mlr.press/v235/band24a.html | Neil Band, Xuechen Li, Tengyu Ma, Tatsunori Hashimoto | https://proceedings.mlr.press/v235/band24a.html | ICML 2024 | Language models (LMs) may lead their users to make suboptimal downstream decisions when they confidently hallucinate. This issue can be mitigated by having the LM verbally convey the probability that its claims are correct, but existing models cannot produce long-form text with calibrated confidence statements. Through the lens of decision-making, we define linguistic calibration for long-form generations: an LM is linguistically calibrated if its generations enable its users to make calibrated probabilistic predictions. This definition enables a training framework where a supervised finetuning step bootstraps an LM to emit long-form generations with confidence statements such as "I estimate a 30% chance of..." or "I am certain that...", followed by a reinforcement learning step which rewards generations that enable a user to provide calibrated answers to related questions. We linguistically calibrate Llama 2 7B and find in automated and human evaluations of long-form generations that it is significantly more calibrated than strong finetuned factuality baselines with comparable accuracy. These findings generalize under significant domain shifts to scientific and biomedical questions and to an entirely held-out person biography generation task. Our results demonstrate that long-form generations may be calibrated end-to-end by constructing an objective in the space of the predictions that users make in downstream decision-making. |
https://proceedings.mlr.press/v235/banerjee24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/banerjee24a/banerjee24a.pdf | https://openreview.net/forum?id=HOG80Yk4Gw | Relational DNN Verification With Cross Executional Bound Refinement | https://proceedings.mlr.press/v235/banerjee24a.html | Debangshu Banerjee, Gagandeep Singh | https://proceedings.mlr.press/v235/banerjee24a.html | ICML 2024 | We focus on verifying relational properties defined over deep neural networks (DNNs) such as robustness against universal adversarial perturbations (UAP), certified worst-case Hamming distance for binary string classifications, etc. Precise verification of these properties requires reasoning about multiple executions of the same DNN. However, most of the existing works in DNN verification only handle properties defined over single executions and as a result, are imprecise for relational properties. Though a few recent works on relational DNN verification capture linear dependencies between the inputs of multiple executions, they do not leverage dependencies between the outputs of hidden layers, producing imprecise results. We develop a scalable relational verifier RACoon that utilizes cross-execution dependencies at all layers of the DNN, gaining substantial precision over SOTA baselines on a wide range of datasets, networks, and relational properties. |
https://proceedings.mlr.press/v235/banihashem24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/banihashem24a/banihashem24a.pdf | https://openreview.net/forum?id=uUeXaKLE1I | A Dynamic Algorithm for Weighted Submodular Cover Problem | https://proceedings.mlr.press/v235/banihashem24a.html | Kiarash Banihashem, Samira Goudarzi, Mohammadtaghi Hajiaghayi, Peyman Jabbarzade, Morteza Monemizadeh | https://proceedings.mlr.press/v235/banihashem24a.html | ICML 2024 | We initiate the study of the submodular cover problem in a dynamic setting where the elements of the ground set are inserted and deleted. In the classical submodular cover problem, we are given a monotone submodular function $f : 2^{V} \to \mathbb{R}^{\ge 0}$ and the goal is to obtain a set $S \subseteq V$ that minimizes the cost subject to the constraint $f(S) = f(V)$. This is a classical problem in computer science and generalizes the Set Cover problem, 2-Set Cover, and dominating set problem among others. We consider this problem in a dynamic setting where there are updates to our set $V$, in the form of insertions and deletions of elements from a ground set $\mathcal{V}$, and the goal is to maintain an approximately optimal solution with low query complexity per update. For this problem, we propose a randomized algorithm that, in expectation, obtains a $(1-O(\epsilon), O(\epsilon^{-1}))$-bicriteria approximation using polylogarithmic query complexity per update. |
https://proceedings.mlr.press/v235/banihashem24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/banihashem24b/banihashem24b.pdf | https://openreview.net/forum?id=z3PUNzdmGs | Dynamic Metric Embedding into lp Space | https://proceedings.mlr.press/v235/banihashem24b.html | Kiarash Banihashem, Mohammadtaghi Hajiaghayi, Dariusz Rafal Kowalski, Jan Olkowski, Max Springer | https://proceedings.mlr.press/v235/banihashem24b.html | ICML 2024 | We give the first non-trivial decremental dynamic embedding of a weighted, undirected graph $G$ into $\ell_p$ space. Given a weighted graph $G$ undergoing a sequence of edge weight increases, the goal of this problem is to maintain a (randomized) mapping $\phi: (G,d) \to (X,\ell_p)$ from the set of vertices of the graph to the $\ell_p$ space such that for every pair of vertices $u$ and $v$, the expected distance between $\phi(u)$ and $\phi(v)$ in the $\ell_p$ metric is within a small multiplicative factor, referred to as the distortion, of their distance in $G$. Our main result is a dynamic algorithm with expected distortion $O(\log^2 n)$ and total update time $O\left((m^{1+o(1)} \log^2 W + Q)\log(nW) \right)$, where $W$ is the maximum weight of the edges, $Q$ is the total number of updates and $n, m$ denote the number of vertices and edges in $G$ respectively. This is the first result of its kind, extending the seminal result of Bourgain ’85 to the expanding field of dynamic algorithms. Moreover, we demonstrate that in the fully dynamic regime, where we tolerate edge insertions as well as deletions, no algorithm can explicitly maintain an embedding into $\ell_p$ space that has a low distortion with high probability. |
https://proceedings.mlr.press/v235/baninajjar24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/baninajjar24a/baninajjar24a.pdf | https://openreview.net/forum?id=gUFufRkzjV | VNN: Verification-Friendly Neural Networks with Hard Robustness Guarantees | https://proceedings.mlr.press/v235/baninajjar24a.html | Anahita Baninajjar, Ahmed Rezine, Amir Aminifar | https://proceedings.mlr.press/v235/baninajjar24a.html | ICML 2024 | Machine learning techniques often lack formal correctness guarantees, evidenced by the widespread adversarial examples that plague most deep-learning applications. This lack of formal guarantees resulted in several research efforts that aim at verifying Deep Neural Networks (DNNs), with a particular focus on safety-critical applications. However, formal verification techniques still face major scalability and precision challenges. The over-approximation introduced during the formal verification process to tackle the scalability challenge often results in inconclusive analysis. To address this challenge, we propose a novel framework to generate Verification-Friendly Neural Networks (VNNs). We present a post-training optimization framework to achieve a balance between preserving prediction performance and verification-friendliness. Our proposed framework results in VNNs that are comparable to the original DNNs in terms of prediction performance, while amenable to formal verification techniques. This essentially enables us to establish robustness for more VNNs than their DNN counterparts, in a time-efficient manner. |
https://proceedings.mlr.press/v235/bao24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bao24a/bao24a.pdf | https://openreview.net/forum?id=yHRxnhKyEJ | Provable Benefits of Local Steps in Heterogeneous Federated Learning for Neural Networks: A Feature Learning Perspective | https://proceedings.mlr.press/v235/bao24a.html | Yajie Bao, Michael Crawshaw, Mingrui Liu | https://proceedings.mlr.press/v235/bao24a.html | ICML 2024 | Local steps are crucial for Federated Learning (FL) algorithms and have witnessed great empirical success in reducing communication costs and improving the generalization performance of deep neural networks. However, there are limited studies on the effect of local steps on heterogeneous FL. A few works investigate this problem from the optimization perspective. Woodworth et al. (2020a) showed that the iteration complexity of Local SGD, the most popular FL algorithm, is dominated by the baseline mini-batch SGD, which does not show the benefits of local steps. In addition, Levy (2023) proposed a new local update method that provably benefits over mini-batch SGD. However, in the same setting, there is still no work analyzing the effects of local steps on generalization in a heterogeneous FL setting. Motivated by our experimental findings where Local SGD learns more distinguishing features than parallel SGD, this paper studies the generalization benefits of local steps from a feature learning perspective. We propose a novel federated data model that exhibits a new form of data heterogeneity, under which we show that a convolutional neural network (CNN) trained by GD with global updates will miss some pattern-related features, while the network trained by GD with local updates can learn all features in polynomial time. Consequently, local steps help CNN generalize better in our data model. In a different parameter setting, we also prove that Local GD with one-shot model averaging can learn all features and generalize well in all clients. Our experimental results also confirm the benefits of local steps in improving test accuracy on real-world data. |
https://proceedings.mlr.press/v235/bao24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bao24b/bao24b.pdf | https://openreview.net/forum?id=aRZjRj41WQ | Self-attention Networks Localize When QK-eigenspectrum Concentrates | https://proceedings.mlr.press/v235/bao24b.html | Han Bao, Ryuichiro Hataya, Ryo Karakida | https://proceedings.mlr.press/v235/bao24b.html | ICML 2024 | The self-attention mechanism prevails in modern machine learning. It has an interesting functionality of adaptively selecting tokens from an input sequence by modulating the degree of attention localization, which many researchers speculate is the basis of the powerful model performance but complicates the underlying mechanism of the learning dynamics. In recent years, mainly two arguments have connected attention localization to the model performances. One is the rank collapse, where the embedded tokens by a self-attention block become very similar across different tokens, leading to a less expressive network. The other is the entropy collapse, where the attention probability approaches a non-uniform distribution and entails low entropy, making the learning dynamics more likely to be trapped in plateaus. These two failure modes may appear to contradict each other because the rank and entropy collapses are relevant to uniform and non-uniform attention, respectively. To this end, we characterize the notion of attention localization by the eigenspectrum of query-key parameter matrices and reveal that a small eigenspectrum variance leads attention to be localized. Interestingly, the small eigenspectrum variance prevents both rank and entropy collapse, leading to better model expressivity and trainability. |
https://proceedings.mlr.press/v235/bao24c.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bao24c/bao24c.pdf | https://openreview.net/forum?id=pmcusTywXO | Graph Out-of-Distribution Detection Goes Neighborhood Shaping | https://proceedings.mlr.press/v235/bao24c.html | Tianyi Bao, Qitian Wu, Zetian Jiang, Yiting Chen, Jiawei Sun, Junchi Yan | https://proceedings.mlr.press/v235/bao24c.html | ICML 2024 | Despite the rich line of research works on out-of-distribution (OOD) detection on images, the literature on OOD detection for interdependent data, e.g., graphs, is still relatively limited. To fill this gap, we introduce TopoOOD as a principled approach that accommodates graph topology and neighborhood context for detecting OOD node instances on graphs. Meanwhile, we enrich the experiment settings by splitting in-distribution (ID) and OOD data based on distinct topological distributions, which presents new benchmarks for a more comprehensive analysis of graph-based OOD detection. The latter is designed to thoroughly assess the performance of these discriminators under distribution shifts involving structural information, providing a rigorous evaluation of methods in the emerging area of OOD detection on graphs. Our experimental results show the competitiveness of the proposed model across multiple datasets, as evidenced by up to a 15% increase in the AUROC and a 50% decrease in the FPR compared to existing state-of-the-art methods. |
https://proceedings.mlr.press/v235/bar24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bar24a/bar24a.pdf | https://openreview.net/forum?id=hr8OXXMb7a | Stochastic positional embeddings improve masked image modeling | https://proceedings.mlr.press/v235/bar24a.html | Amir Bar, Florian Bordes, Assaf Shocher, Mido Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann Lecun | https://proceedings.mlr.press/v235/bar24a.html | ICML 2024 | Masked Image Modeling (MIM) is a promising self-supervised learning approach that enables learning from unlabeled images. Despite its recent success, learning good representations through MIM remains challenging because it requires predicting the right semantic content in accurate locations. For example, given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location. In this work, we propose to incorporate location uncertainty into MIM by using stochastic positional embeddings (StoP). Specifically, we condition the model on stochastic masked token positions drawn from a Gaussian distribution. We show that using StoP reduces overfitting to location features and guides the model toward learning features that are more robust to location uncertainties. Quantitatively, using StoP improves MIM performance on a variety of downstream tasks. For example, linear probing on ImageNet using ViT-B is improved by +1.7%, and by 2.5% for ViT-H using 1% of the data. |
https://proceedings.mlr.press/v235/bar-shalom24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bar-shalom24a/bar-shalom24a.pdf | https://openreview.net/forum?id=6djDWVTUEq | Subgraphormer: Unifying Subgraph GNNs and Graph Transformers via Graph Products | https://proceedings.mlr.press/v235/bar-shalom24a.html | Guy Bar-Shalom, Beatrice Bevilacqua, Haggai Maron | https://proceedings.mlr.press/v235/bar-shalom24a.html | ICML 2024 | In the realm of Graph Neural Networks (GNNs), two exciting research directions have recently emerged: Subgraph GNNs and Graph Transformers. In this paper, we propose an architecture that integrates both approaches, dubbed Subgraphormer, which combines the enhanced expressive power, message-passing mechanisms, and aggregation schemes from Subgraph GNNs with attention and positional encodings, arguably the most important components in Graph Transformers. Our method is based on an intriguing new connection we reveal between Subgraph GNNs and product graphs, suggesting that Subgraph GNNs can be formulated as Message Passing Neural Networks (MPNNs) operating on a product of the graph with itself. We use this formulation to design our architecture: first, we devise an attention mechanism based on the connectivity of the product graph. Following this, we propose a novel and efficient positional encoding scheme for Subgraph GNNs, which we derive as a positional encoding for the product graph. Our experimental results demonstrate significant performance improvements over both Subgraph GNNs and Graph Transformers on a wide range of datasets. |
https://proceedings.mlr.press/v235/barbarani24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/barbarani24a/barbarani24a.pdf | https://openreview.net/forum?id=fNJbcxhxRj | Scale-Free Image Keypoints Using Differentiable Persistent Homology | https://proceedings.mlr.press/v235/barbarani24a.html | Giovanni Barbarani, Francesco Vaccarino, Gabriele Trivigno, Marco Guerra, Gabriele Berton, Carlo Masone | https://proceedings.mlr.press/v235/barbarani24a.html | ICML 2024 | In computer vision, keypoint detection is a fundamental task, with applications spanning from robotics to image retrieval; however, existing learning-based methods suffer from scale dependency, and lack flexibility. This paper introduces a novel approach that leverages Morse theory and persistent homology, powerful tools rooted in algebraic topology. We propose a novel loss function based on the recent introduction of a notion of subgradient in persistent homology, paving the way towards topological learning. Our detector, MorseDet, is the first topology-based learning model for feature detection, which achieves competitive performance in keypoint repeatability and introduces a principled and theoretically robust approach to the problem. |
https://proceedings.mlr.press/v235/barbulescu24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/barbulescu24a/barbulescu24a.pdf | https://openreview.net/forum?id=FWlNA3et6X | To Each (Textual Sequence) Its Own: Improving Memorized-Data Unlearning in Large Language Models | https://proceedings.mlr.press/v235/barbulescu24a.html | George-Octavian Bărbulescu, Peter Triantafillou | https://proceedings.mlr.press/v235/barbulescu24a.html | ICML 2024 | LLMs have been found to memorize training textual sequences and regurgitate verbatim said sequences during text generation time. This fact is known to be the cause of privacy and related (e.g., copyright) problems. Unlearning in LLMs then takes the form of devising new algorithms that will properly deal with these side-effects of memorized data, while not hurting the model’s utility. We offer a fresh perspective towards this goal, namely, that each textual sequence to be forgotten should be treated differently when being unlearned based on its degree of memorization within the LLM. We contribute a new metric for measuring unlearning quality, an adversarial attack showing that SOTA algorithms lacking this perspective fail for privacy, and two new unlearning methods based on Gradient Ascent and Task Arithmetic, respectively. A comprehensive performance evaluation across an extensive suite of NLP tasks then mapped the solution space, identifying the best solutions under different scales in model capacities and forget set sizes and quantified the gains of the new approaches. |
https://proceedings.mlr.press/v235/bardone24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bardone24a/bardone24a.pdf | https://openreview.net/forum?id=9iGdh0wAgB | Sliding Down the Stairs: How Correlated Latent Variables Accelerate Learning with Neural Networks | https://proceedings.mlr.press/v235/bardone24a.html | Lorenzo Bardone, Sebastian Goldt | https://proceedings.mlr.press/v235/bardone24a.html | ICML 2024 | Neural networks extract features from data using stochastic gradient descent (SGD). In particular, higher-order input cumulants (HOCs) are crucial for their performance. However, extracting information from the $p$th cumulant of $d$-dimensional inputs is computationally hard: the number of samples required to recover a single direction from an order-$p$ tensor (tensor PCA) using SGD grows as $d^{p-1}$, which is prohibitive for high-dimensional inputs. This result raises the question of how neural networks extract relevant directions from the HOCs of their inputs efficiently. Here, we show that correlations between latent variables along the directions encoded in different input cumulants speed up learning from higher-order correlations. We show this effect analytically by deriving nearly sharp thresholds for the number of samples required by a single neuron to recover these directions using online SGD from a random start in high dimensions. Our analytical results are confirmed in simulations of two-layer neural networks and unveil a new mechanism for hierarchical learning in neural networks. |
https://proceedings.mlr.press/v235/bartoldson24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bartoldson24a/bartoldson24a.pdf | https://openreview.net/forum?id=HQtTg1try7 | Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies | https://proceedings.mlr.press/v235/bartoldson24a.html | Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura | https://proceedings.mlr.press/v235/bartoldson24a.html | ICML 2024 | This paper revisits the simple, long-studied, yet still unsolved problem of making image classifiers robust to imperceptible perturbations. Taking CIFAR10 as an example, SOTA clean accuracy is about $100$%, but SOTA robustness to $\ell_{\infty}$-norm bounded perturbations barely exceeds $70$%. To understand this gap, we analyze how model size, dataset size, and synthetic data quality affect robustness by developing the first scaling laws for adversarial training. Our scaling laws reveal inefficiencies in prior art and provide actionable feedback to advance the field. For instance, we discovered that SOTA methods diverge notably from compute-optimal setups, using excess compute for their level of robustness. Leveraging a compute-efficient setup, we surpass the prior SOTA with $20$% ($70$%) fewer training (inference) FLOPs. We trained various compute-efficient models, with our best achieving $74$% AutoAttack accuracy ($+3$% gain). However, our scaling laws also predict robustness slowly grows then plateaus at $90$%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible. To better understand this predicted limit, we carry out a small-scale human evaluation on the AutoAttack data that fools our top-performing model. Concerningly, we estimate that human performance also plateaus near $90$%, which we show to be attributable to $\ell_{\infty}$-constrained attacks’ generation of invalid images not consistent with their original labels. Having characterized limiting roadblocks, we outline promising paths for future research. |
https://proceedings.mlr.press/v235/bartosh24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bartosh24a/bartosh24a.pdf | https://openreview.net/forum?id=xzX7kf486K | Neural Diffusion Models | https://proceedings.mlr.press/v235/bartosh24a.html | Grigory Bartosh, Dmitry Vetrov, Christian A. Naesseth | https://proceedings.mlr.press/v235/bartosh24a.html | ICML 2024 | Diffusion models have shown remarkable performance on many generative tasks. Despite recent success, most diffusion models are restricted in that they only allow linear transformation of the data distribution. In contrast, a broader family of transformations can help train generative distributions more efficiently, simplifying the reverse process and closing the gap between the true negative log-likelihood and the variational approximation. In this paper, we present Neural Diffusion Models (NDMs), a generalization of conventional diffusion models that enables defining and learning time-dependent non-linear transformations of data. We show how to optimise NDMs using a variational bound in a simulation-free setting. Moreover, we derive a time-continuous formulation of NDMs, which allows fast and reliable inference using off-the-shelf numerical ODE and SDE solvers. Finally, we demonstrate the utility of NDMs through experiments on many image generation benchmarks, including MNIST, CIFAR-10, downsampled versions of ImageNet and CelebA-HQ. NDMs outperform conventional diffusion models in terms of likelihood, achieving state-of-the-art results on ImageNet and CelebA-HQ, and produce high-quality samples. |
https://proceedings.mlr.press/v235/barzilai24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/barzilai24a/barzilai24a.pdf | https://openreview.net/forum?id=PY3bKuorBI | Generalization in Kernel Regression Under Realistic Assumptions | https://proceedings.mlr.press/v235/barzilai24a.html | Daniel Barzilai, Ohad Shamir | https://proceedings.mlr.press/v235/barzilai24a.html | ICML 2024 | It is by now well-established that modern over-parameterized models seem to elude the bias-variance tradeoff and generalize well despite overfitting noise. Many recent works attempt to analyze this phenomenon in the relatively tractable setting of kernel regression. However, as we argue in detail, most past works on this topic either make unrealistic assumptions, or focus on a narrow problem setup. This work aims to provide a unified theory to upper bound the excess risk of kernel regression for nearly all common and realistic settings. When applied to common kernels, our results imply benign overfitting in high input dimensions, nearly tempered overfitting in fixed dimensions, and explicit convergence rates for regularized regression. As a by-product, we obtain time-dependent bounds for neural networks trained in the kernel regime. Our results rely on new relative perturbation bounds for the eigenvalues of kernel matrices, which may be of independent interest. These reveal a self-regularization phenomenon, whereby a heavy tail in the eigendecomposition of the kernel implicitly leads to good generalization. |
https://proceedings.mlr.press/v235/bassan24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bassan24a/bassan24a.pdf | https://openreview.net/forum?id=veEjiN2w9F | Local vs. Global Interpretability: A Computational Complexity Perspective | https://proceedings.mlr.press/v235/bassan24a.html | Shahaf Bassan, Guy Amir, Guy Katz | https://proceedings.mlr.press/v235/bassan24a.html | ICML 2024 | The local and global interpretability of various ML models has been studied extensively in recent years. However, despite significant progress in the field, many known results remain informal or lack sufficient mathematical rigor. We propose a framework for bridging this gap, by using computational complexity theory to assess local and global perspectives of interpreting ML models. We begin by proposing proofs for two novel insights that are essential for our analysis: (1) a duality between local and global forms of explanations; and (2) the inherent uniqueness of certain global explanation forms. We then use these insights to evaluate the complexity of computing explanations, across three model types representing the extremes of the interpretability spectrum: (1) linear models; (2) decision trees; and (3) neural networks. Our findings offer insights into both the local and global interpretability of these models. For instance, under standard complexity assumptions such as P != NP, we prove that selecting global sufficient subsets in linear models is computationally harder than selecting local subsets. Interestingly, with neural networks and decision trees, the opposite is true: it is harder to carry out this task locally than globally. We believe that our findings demonstrate how examining explainability through a computational complexity lens can help us develop a more rigorous grasp of the inherent interpretability of ML models. |
https://proceedings.mlr.press/v235/bassily24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bassily24a/bassily24a.pdf | https://openreview.net/forum?id=kkqIEp2bRa | Differentially Private Domain Adaptation with Theoretical Guarantees | https://proceedings.mlr.press/v235/bassily24a.html | Raef Bassily, Corinna Cortes, Anqi Mao, Mehryar Mohri | https://proceedings.mlr.press/v235/bassily24a.html | ICML 2024 | In many applications, the labeled data at the learner’s disposal is subject to privacy constraints and is relatively limited. To derive a more accurate predictor for the target domain, it is often beneficial to leverage publicly available labeled data from an alternative domain, somewhat close to the target domain. This is the modern problem of supervised domain adaptation from a public source to a private target domain. We present two $(\epsilon, \delta)$-differentially private adaptation algorithms for supervised adaptation, for which we make use of a general optimization problem, recently shown to benefit from favorable theoretical learning guarantees. Our first algorithm is designed for regression with linear predictors and shown to solve a convex optimization problem. Our second algorithm is a more general solution for loss functions that may be non-convex but Lipschitz and smooth. While our main objective is a theoretical analysis, we also report the results of several experiments. We first show that the non-private versions of our algorithms match state-of-the-art performance in supervised adaptation and that for larger values of the target sample size or $\epsilon$, the performance of our private algorithms remains close to that of their non-private counterparts. |
https://proceedings.mlr.press/v235/basu24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/basu24a/basu24a.pdf | https://openreview.net/forum?id=A9MiJdetnZ | A Statistical Framework for Data-dependent Retrieval-Augmented Models | https://proceedings.mlr.press/v235/basu24a.html | Soumya Basu, Ankit Singh Rawat, Manzil Zaheer | https://proceedings.mlr.press/v235/basu24a.html | ICML 2024 | Modern ML systems increasingly augment input instances with additional relevant information to enhance final prediction. Despite growing interest in such retrieval-augmented models, their fundamental properties and training are not well understood. We propose a statistical framework to study such models with two components: 1) a retriever to identify the relevant information out of a large corpus via a data-dependent metric; and 2) a predictor that consumes the input instances along with the retrieved information to make the final predictions. We present a principled method for end-to-end training of both components and draw connections with various training approaches in the literature. Furthermore, we establish excess risk bounds for retrieval-augmented models while delineating the contributions of both retriever and predictor towards the model performance. We validate the utility of our proposed training methods along with the key takeaways from our statistical analysis on an open-domain question answering task where retrieval augmentation is important. |
https://proceedings.mlr.press/v235/basu24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/basu24b/basu24b.pdf | https://openreview.net/forum?id=fsVBsxjRER | On Mechanistic Knowledge Localization in Text-to-Image Generative Models | https://proceedings.mlr.press/v235/basu24b.html | Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Vlad I Morariu, Nanxuan Zhao, Ryan A. Rossi, Varun Manjunatha, Soheil Feizi | https://proceedings.mlr.press/v235/basu24b.html | ICML 2024 | Identifying layers within text-to-image models which control visual attributes can facilitate efficient model editing through closed-form updates. Recent work leveraging causal tracing shows that early Stable-Diffusion variants confine knowledge primarily to the first layer of the CLIP text-encoder, while it diffuses throughout the UNet. Extending this framework, we observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing fails in pinpointing localized knowledge, highlighting challenges in model editing. To address this issue, we introduce the concept of mechanistic localization in text-to-image models, where knowledge about various visual attributes (e.g., "style", "objects", "facts") can be mechanistically localized to a small fraction of layers in the UNet, thus facilitating efficient model editing. We localize knowledge using our method LocoGen which measures the direct effect of intermediate layers on output generation by performing interventions in the cross-attention layers of the UNet. We then employ LocoEdit, a fast closed-form editing method across popular open-source text-to-image models (including the latest SD-XL) and explore the possibilities of neuron-level model editing. Using mechanistic localization, our work offers a better view of successes and failures in localization-based text-to-image model editing. |
https://proceedings.mlr.press/v235/bechavod24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bechavod24a/bechavod24a.pdf | https://openreview.net/forum?id=6EF0bxcZvT | Monotone Individual Fairness | https://proceedings.mlr.press/v235/bechavod24a.html | Yahav Bechavod | https://proceedings.mlr.press/v235/bechavod24a.html | ICML 2024 | We revisit the problem of online learning with individual fairness, where an online learner strives to maximize predictive accuracy while ensuring that similar individuals are treated similarly. We first extend the frameworks of Gillen et al. (2018); Bechavod et al. (2020), which rely on feedback from human auditors regarding fairness violations, to allow for auditing schemes that can aggregate feedback from any number of auditors, using a rich class we term monotone aggregation functions, for which we also prove a useful characterization. Using our generalized framework, we present an oracle-efficient algorithm guaranteeing a bound of $\mathcal{O}(T^\frac{3}{4})$ simultaneously for regret and number of fairness violations. We then study an online classification setting where label feedback is available for positively-predicted individuals only, and present an algorithm guaranteeing a bound of $\mathcal{O}(T^\frac{5}{6})$ simultaneously for regret and number of fairness violations. In both settings, our algorithms improve on the best known bounds for oracle-efficient algorithms. Furthermore, our algorithms offer significant improvements in computational efficiency, greatly reducing the number of required calls to an (offline) optimization oracle, as opposed to previous algorithms which required $T$ such calls every round. |
https://proceedings.mlr.press/v235/bechler-speicher24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bechler-speicher24a/bechler-speicher24a.pdf | https://openreview.net/forum?id=fSNHK7mu3j | Graph Neural Networks Use Graphs When They Shouldn’t | https://proceedings.mlr.press/v235/bechler-speicher24a.html | Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson | https://proceedings.mlr.press/v235/bechler-speicher24a.html | ICML 2024 | Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph-structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it. While GNNs have the ability to ignore the graph-structure in such cases, it is not clear that they will. In this work, we show that GNNs actually tend to overfit the given graph-structure in the sense that they use it even when a better solution can be obtained by ignoring it. We analyze the implicit bias of gradient-descent learning of GNNs and prove that when the ground truth function does not use the graphs, GNNs are not guaranteed to learn a solution that ignores the graph, even with infinite data. We examine this phenomenon with respect to different graph distributions and find that regular graphs are more robust to this overfitting. We also prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent. Finally, based on our empirical and theoretical findings, we demonstrate on real-data how regular graphs can be leveraged to reduce graph overfitting and enhance performance. |
https://proceedings.mlr.press/v235/beck24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/beck24a/beck24a.pdf | https://openreview.net/forum?id=43HZG9zwaj | Diffusion Tempering Improves Parameter Estimation with Probabilistic Integrators for Ordinary Differential Equations | https://proceedings.mlr.press/v235/beck24a.html | Jonas Beck, Nathanael Bosch, Michael Deistler, Kyra L. Kadhim, Jakob H. Macke, Philipp Hennig, Philipp Berens | https://proceedings.mlr.press/v235/beck24a.html | ICML 2024 | Ordinary differential equations (ODEs) are widely used to describe dynamical systems in science, but identifying parameters that explain experimental measurements is challenging. In particular, although ODEs are differentiable and would allow for gradient-based parameter optimization, the nonlinear dynamics of ODEs often lead to many local minima and extreme sensitivity to initial conditions. We therefore propose diffusion tempering, a novel regularization technique for probabilistic numerical methods which improves convergence of gradient-based parameter optimization in ODEs. By iteratively reducing a noise parameter of the probabilistic integrator, the proposed method converges more reliably to the true parameters. We demonstrate that our method is effective for dynamical systems of different complexity and show that it obtains reliable parameter estimates for a Hodgkin–Huxley model with a practically relevant number of parameters. |
https://proceedings.mlr.press/v235/becker24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/becker24a/becker24a.pdf | https://openreview.net/forum?id=CvRu2inbGV | Standardized Interpretable Fairness Measures for Continuous Risk Scores | https://proceedings.mlr.press/v235/becker24a.html | Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann | https://proceedings.mlr.press/v235/becker24a.html | ICML 2024 | We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance. Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points. We derive a link between the different families of existing fairness measures for scores and show that the proposed standardized fairness measures outperform ROC-based fairness measures because they are more explicit and can quantify significant biases that ROC-based fairness measures miss. |
https://proceedings.mlr.press/v235/behrouz24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/behrouz24a/behrouz24a.pdf | https://openreview.net/forum?id=nOjZfpLyh1 | Unsupervised Representation Learning of Brain Activity via Bridging Voxel Activity and Functional Connectivity | https://proceedings.mlr.press/v235/behrouz24a.html | Ali Behrouz, Parsa Delavari, Farnoosh Hashemi | https://proceedings.mlr.press/v235/behrouz24a.html | ICML 2024 | Effective brain representation learning is a key step toward the understanding of cognitive processes and diagnosis of neurological diseases/disorders. Existing studies have focused on either (1) voxel-level activity, where only a single weight relating the voxel activity to the task (i.e., aggregation of voxel activity over a time window) is considered, missing their temporal dynamics, or (2) functional connectivity of the brain in the level of region of interests, missing voxel-level activities. We bridge this gap and design BrainMixer, an unsupervised learning framework that effectively utilizes both functional connectivity and associated time series of voxels to learn voxel-level representation in an unsupervised manner. BrainMixer employs two simple yet effective MLP-based encoders to simultaneously learn the dynamics of voxel-level signals and their functional correlations. To encode voxel activity, BrainMixer fuses information across both time and voxel dimensions via a dynamic attention mechanism. To learn the structure of the functional connectivity, BrainMixer presents a temporal graph patching and encodes each patch by combining its nodes’ features via a new adaptive temporal pooling. Our experiments show that BrainMixer attains outstanding performance and outperforms 14 baselines in different downstream tasks and setups. |
https://proceedings.mlr.press/v235/belrose24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/belrose24a/belrose24a.pdf | https://openreview.net/forum?id=IGdpKP0N6w | Neural Networks Learn Statistics of Increasing Complexity | https://proceedings.mlr.press/v235/belrose24a.html | Nora Belrose, Quintin Pope, Lucia Quirke, Alex Troy Mallen, Xiaoli Fern | https://proceedings.mlr.press/v235/belrose24a.html | ICML 2024 | The distributional simplicity bias (DSB) posits that neural networks learn low-order moments of the data distribution first, before moving on to higher-order correlations. In this work, we present compelling new evidence for the DSB by showing that networks automatically learn to perform well on maximum-entropy distributions whose low-order statistics match those of the training set early in training, then lose this ability later. We also extend the DSB to discrete domains by proving an equivalence between token $n$-gram frequencies and the moments of embedding vectors, and by finding empirical evidence for the bias in LLMs. Finally we use optimal transport methods to surgically edit the low-order statistics of one class to match those of another, and show that early-training networks treat the edited samples as if they were drawn from the target class. Code is available at https://github.com/EleutherAI/features-across-time. |
https://proceedings.mlr.press/v235/ben-basat24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/ben-basat24a/ben-basat24a.pdf | https://openreview.net/forum?id=gWEwIlZrbQ | Accelerating Federated Learning with Quick Distributed Mean Estimation | https://proceedings.mlr.press/v235/ben-basat24a.html | Ran Ben-Basat, Shay Vargaftik, Amit Portnoy, Gil Einziger, Yaniv Ben-Itzhak, Michael Mitzenmacher | https://proceedings.mlr.press/v235/ben-basat24a.html | ICML 2024 | Distributed Mean Estimation (DME), in which $n$ clients communicate vectors to a parameter server that estimates their average, is a fundamental building block in communication-efficient federated learning. In this paper, we improve on previous DME techniques that achieve the optimal $O(1/n)$ Normalized Mean Squared Error (NMSE) guarantee by asymptotically improving the complexity for either encoding or decoding (or both). To achieve this, we formalize the problem in a novel way that allows us to use off-the-shelf mathematical solvers to design the quantization. Using various datasets and training tasks, we demonstrate how the resulting algorithm, QUIC-FL, achieves state-of-the-art accuracy with faster encoding and decoding times compared to other DME methods. |
https://proceedings.mlr.press/v235/ben-dov24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/ben-dov24a/ben-dov24a.pdf | https://openreview.net/forum?id=Ez3Lckpe4l | The Role of Learning Algorithms in Collective Action | https://proceedings.mlr.press/v235/ben-dov24a.html | Omri Ben-Dov, Jake Fawkes, Samira Samadi, Amartya Sanyal | https://proceedings.mlr.press/v235/ben-dov24a.html | ICML 2024 | Collective action in machine learning is the study of the control that a coordinated group can have over machine learning algorithms. While previous research has concentrated on assessing the impact of collectives against Bayes (sub-)optimal classifiers, this perspective is limited in that it does not account for the choice of learning algorithm. Since classifiers seldom behave like Bayes classifiers and are influenced by the choice of learning algorithms along with their inherent biases, in this work we initiate the study of how the choice of the learning algorithm plays a role in the success of a collective in practical settings. Specifically, we focus on distributionally robust optimization (DRO), popular for improving a worst group error, and on the ubiquitous stochastic gradient descent (SGD), due to its inductive bias for "simpler" functions. Our empirical results, supported by a theoretical foundation, show that the effective size and success of the collective are highly dependent on properties of the learning algorithm. This highlights the necessity of taking the learning algorithm into account when studying the impact of collective action in machine learning. |
https://proceedings.mlr.press/v235/ben-hamu24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/ben-hamu24a/ben-hamu24a.pdf | https://openreview.net/forum?id=SE20BFqj6J | D-Flow: Differentiating through Flows for Controlled Generation | https://proceedings.mlr.press/v235/ben-hamu24a.html | Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman | https://proceedings.mlr.press/v235/ben-hamu24a.html | ICML 2024 | Taming the generation outcome of state of the art Diffusion and Flow-Matching (FM) models without having to re-train a task-specific model unlocks a powerful tool for solving inverse problems, conditional generation, and controlled generation in general. In this work we introduce D-Flow, a simple framework for controlling the generation process by differentiating through the flow, optimizing for the source (noise) point. We motivate this framework by our key observation stating that for Diffusion/FM models trained with Gaussian probability paths, differentiating through the generation process projects gradient on the data manifold, implicitly injecting the prior into the optimization process. We validate our framework on linear and non-linear controlled generation problems including: image and audio inverse problems and conditional molecule generation reaching state of the art performance across all. |
https://proceedings.mlr.press/v235/benkert24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/benkert24a/benkert24a.pdf | https://openreview.net/forum?id=zII3Olw7cr | Transitional Uncertainty with Layered Intermediate Predictions | https://proceedings.mlr.press/v235/benkert24a.html | Ryan Benkert, Mohit Prabhushankar, Ghassan Alregib | https://proceedings.mlr.press/v235/benkert24a.html | ICML 2024 | In this paper, we discuss feature engineering for single-pass uncertainty estimation. For accurate uncertainty estimates, neural networks must extract differences in the feature space that quantify uncertainty. This could be achieved by current single-pass approaches that maintain feature distances between data points as they traverse the network. While initial results are promising, maintaining feature distances within the network representations frequently inhibits information compression and opposes the learning objective. We study this effect theoretically and empirically to arrive at a simple conclusion: preserving feature distances in the output is beneficial when the preserved features contribute to learning the label distribution and act in opposition otherwise. We then propose Transitional Uncertainty with Layered Intermediate Predictions (TULIP) as a simple approach to address the shortcomings of current single-pass estimators. Specifically, we implement feature preservation by extracting features from intermediate representations before information is collapsed by subsequent layers. We refer to the underlying preservation mechanism as transitional feature preservation. We show that TULIP matches or outperforms current single-pass methods on standard benchmarks and in practical settings where these methods are less reliable (imbalances, complex architectures, medical modalities). |
https://proceedings.mlr.press/v235/benomar24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/benomar24a/benomar24a.pdf | https://openreview.net/forum?id=jJLcXGB2uA | Non-clairvoyant Scheduling with Partial Predictions | https://proceedings.mlr.press/v235/benomar24a.html | Ziyad Benomar, Vianney Perchet | https://proceedings.mlr.press/v235/benomar24a.html | ICML 2024 | The non-clairvoyant scheduling problem has gained new interest within learning-augmented algorithms, where the decision-maker is equipped with predictions without any quality guarantees. In practical settings, access to predictions may be reduced to specific instances, due to cost or data limitations. Our investigation focuses on scenarios where predictions for only $B$ job sizes out of $n$ are available to the algorithm. We first establish near-optimal lower bounds and algorithms in the case of perfect predictions. Subsequently, we present a learning-augmented algorithm satisfying the robustness, consistency, and smoothness criteria, and revealing a novel tradeoff between consistency and smoothness inherent in the scenario with a restricted number of predictions. |
https://proceedings.mlr.press/v235/berman24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/berman24a/berman24a.pdf | https://openreview.net/forum?id=AocOA4h3bu | Sequential Disentanglement by Extracting Static Information From A Single Sequence Element | https://proceedings.mlr.press/v235/berman24a.html | Nimrod Berman, Ilan Naiman, Idan Arbiv, Gal Fadlon, Omri Azencot | https://proceedings.mlr.press/v235/berman24a.html | ICML 2024 | One of the fundamental representation learning tasks is unsupervised sequential disentanglement, where latent codes of inputs are decomposed to a single static factor and a sequence of dynamic factors. To extract this latent information, existing methods condition the static and dynamic codes on the entire input sequence. Unfortunately, these models often suffer from information leakage, i.e., the dynamic vectors encode both static and dynamic information, or vice versa, leading to a non-disentangled representation. Attempts to alleviate this problem via reducing the dynamic dimension and auxiliary loss terms gain only partial success. Instead, we propose a novel and simple architecture that mitigates information leakage by offering a simple and effective subtraction inductive bias while conditioning on a single sample. Remarkably, the resulting variational framework is simpler in terms of required loss terms, hyper-parameters, and data augmentation. We evaluate our method on multiple data-modality benchmarks including general time series, video, and audio, and we show beyond state-of-the-art results on generation and prediction tasks in comparison to several strong baselines. |
https://proceedings.mlr.press/v235/berman24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/berman24b/berman24b.pdf | https://openreview.net/forum?id=iHSgfGob9j | CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations | https://proceedings.mlr.press/v235/berman24b.html | Jules Berman, Benjamin Peherstorfer | https://proceedings.mlr.press/v235/berman24b.html | ICML 2024 | This work introduces reduced models based on Continuous Low Rank Adaptation (CoLoRA) that pre-train neural networks for a given partial differential equation and then continuously adapt low-rank weights in time to rapidly predict the evolution of solution fields at new physics parameters and new initial conditions. The adaptation can be either purely data-driven or via an equation-driven variational approach that provides Galerkin-optimal approximations. Because CoLoRA approximates solution fields locally in time, the rank of the weights can be kept small, which means that only few training trajectories are required offline so that CoLoRA is well suited for data-scarce regimes. Predictions with CoLoRA are orders of magnitude faster than with classical methods and their accuracy and parameter efficiency is higher compared to other neural network approaches. |
https://proceedings.mlr.press/v235/bertolotti24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bertolotti24a/bertolotti24a.pdf | https://openreview.net/forum?id=yyYMAprcAR | By Tying Embeddings You Are Assuming the Distributional Hypothesis | https://proceedings.mlr.press/v235/bertolotti24a.html | Francesco Bertolotti, Walter Cazzola | https://proceedings.mlr.press/v235/bertolotti24a.html | ICML 2024 | In this work, we analyze both theoretically and empirically the effect of tied input-output embeddings—a popular technique that reduces the model size while often improving training. Interestingly, we found that this technique is connected to Harris (1954)’s distributional hypothesis—often portrayed by the famous Firth (1957)’s quote “a word is characterized by the company it keeps”. Specifically, our findings indicate that words (or, more broadly, symbols) with similar semantics tend to be encoded in similar input embeddings, while words that appear in similar contexts are encoded in similar output embeddings (thus explaining the semantic space arising in input and output embedding of foundational language models). As a consequence of these findings, the tying of the input and output embeddings is encouraged only when the distributional hypothesis holds for the underlying data. These results also provide insight into the embeddings of foundation language models (which are known to be semantically organized). Further, we complement the theoretical findings with several experiments supporting the claims. |
https://proceedings.mlr.press/v235/bettini24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bettini24a/bettini24a.pdf | https://openreview.net/forum?id=qQjUgItPq4 | Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning | https://proceedings.mlr.press/v235/bettini24a.html | Matteo Bettini, Ryan Kortvelesy, Amanda Prorok | https://proceedings.mlr.press/v235/bettini24a.html | ICML 2024 | The study of behavioral diversity in Multi-Agent Reinforcement Learning (MARL) is a nascent yet promising field. In this context, the present work deals with the question of how to control the diversity of a multi-agent system. With no existing approaches to control diversity to a set value, current solutions focus on blindly promoting it via intrinsic rewards or additional loss functions, effectively changing the learning objective and lacking a principled measure for it. To address this, we introduce Diversity Control (DiCo), a method able to control diversity to an exact value of a given metric by representing policies as the sum of a parameter-shared component and dynamically scaled per-agent components. By applying constraints directly to the policy architecture, DiCo leaves the learning objective unchanged, enabling its applicability to any actor-critic MARL algorithm. We theoretically prove that DiCo achieves the desired diversity, and we provide several experiments, both in cooperative and competitive tasks, that show how DiCo can be employed as a novel paradigm to increase performance and sample efficiency in MARL. Multimedia results are available on the paper’s website: https://sites.google.com/view/dico-marl |
https://proceedings.mlr.press/v235/beukman24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/beukman24a/beukman24a.pdf | https://openreview.net/forum?id=LRnXPxDksA | Refining Minimax Regret for Unsupervised Environment Design | https://proceedings.mlr.press/v235/beukman24a.html | Michael Beukman, Samuel Coward, Michael Matthews, Mattie Fellows, Minqi Jiang, Michael D Dennis, Jakob Nicolaus Foerster | https://proceedings.mlr.press/v235/beukman24a.html | ICML 2024 | In unsupervised environment design, reinforcement learning agents are trained on environment configurations (levels) generated by an adversary that maximises some objective. Regret is a commonly used objective that theoretically results in a minimax regret (MMR) policy with desirable robustness guarantees; in particular, the agent’s maximum regret is bounded. However, once the agent reaches this regret bound on all levels, the adversary will only sample levels where regret cannot be further reduced. Although there may be possible performance improvements to be made outside of these regret-maximising levels, learning stagnates. In this work, we introduce Bayesian level-perfect MMR (BLP), a refinement of the minimax regret objective that overcomes this limitation. We formally show that solving for this objective results in a subset of MMR policies, and that BLP policies act consistently with a Perfect Bayesian policy over all levels. We further introduce an algorithm, ReMiDi, that results in a BLP policy at convergence. We empirically demonstrate that training on levels from a minimax regret adversary causes learning to prematurely stagnate, but that ReMiDi continues learning. |
https://proceedings.mlr.press/v235/beurer-kellner24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/beurer-kellner24a/beurer-kellner24a.pdf | https://openreview.net/forum?id=pXaEYzrFae | Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation | https://proceedings.mlr.press/v235/beurer-kellner24a.html | Luca Beurer-Kellner, Marc Fischer, Martin Vechev | https://proceedings.mlr.press/v235/beurer-kellner24a.html | ICML 2024 | To ensure that text generated by large language models (LLMs) is in an expected format, constrained decoding methods propose to enforce strict formal language constraints during generation. However, as we show in this work, not only do such methods often incur performance overhead during generation, but many of them also significantly impair task accuracy, if they do not correctly align the underlying LLM sub-word vocabularies with external constraints. To address this, we present a novel decoding algorithm, DOMINO, that can enforce constraints in a fully subword-aligned fashion, while leveraging pre-computation and speculative decoding to achieve virtually no overhead and in some cases even almost 2$\times$ speedup over unconstrained decoding – thereby outperforming existing approaches by a wide margin. We release DOMINO as open source at https://github.com/eth-sri/domino. |
https://proceedings.mlr.press/v235/beurer-kellner24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/beurer-kellner24b/beurer-kellner24b.pdf | https://openreview.net/forum?id=2Yu5FWdzde | Prompt Sketching for Large Language Models | https://proceedings.mlr.press/v235/beurer-kellner24b.html | Luca Beurer-Kellner, Mark Niklas Mueller, Marc Fischer, Martin Vechev | https://proceedings.mlr.press/v235/beurer-kellner24b.html | ICML 2024 | Many recent prompting strategies for large language models (LLMs) query the model multiple times sequentially – first to produce intermediate results and then the final answer. However, using these methods, both decoder and model are unaware of potential follow-up prompts, leading to disconnected and undesirably wordy intermediate responses. In this work, we address this issue by proposing prompt sketching, a new prompting paradigm in which an LLM does not only respond by completing a prompt, but by predicting values for multiple variables in a template. This way, sketching grants users more control over the generation process, e.g., by providing a reasoning framework via intermediate instructions, leading to better overall results. The key idea enabling sketching with existing, autoregressive models is to adapt the decoding procedure to also score follow-up instructions during text generation, thus optimizing overall template likelihood in inference. Our experiments show that in a zero-shot setting, prompt sketching outperforms existing, sequential prompting schemes such as direct asking or chain-of-thought on 7 out of 8 LLM benchmarking tasks, including state tracking, arithmetic reasoning, and general question answering. To facilitate future use, we release a number of generic, yet effective sketches applicable to many tasks, and an open source library called dclib, powering our sketch-aware decoders as part of https://github.com/eth-sri/lmql. |
https://proceedings.mlr.press/v235/bewley24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bewley24a/bewley24a.pdf | https://openreview.net/forum?id=Ad9msn1SKC | Counterfactual Metarules for Local and Global Recourse | https://proceedings.mlr.press/v235/bewley24a.html | Tom Bewley, Salim I. Amoukou, Saumitra Mishra, Daniele Magazzeni, Manuela Veloso | https://proceedings.mlr.press/v235/bewley24a.html | ICML 2024 | We introduce T-CREx, a novel model-agnostic method for local and global counterfactual explanation (CE), which summarises recourse options for both individuals and groups in the form of generalised rules. It leverages tree-based surrogate models to learn the counterfactual rules, alongside metarules denoting their regimes of optimality, providing both a global analysis of model behaviour and diverse recourse options for users. Experiments indicate that T-CREx achieves superior aggregate performance over existing rule-based baselines on a range of CE desiderata, while being orders of magnitude faster to run. |
https://proceedings.mlr.press/v235/beznosikov24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/beznosikov24a/beznosikov24a.pdf | https://openreview.net/forum?id=Zw52bJCZXc | Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features | https://proceedings.mlr.press/v235/beznosikov24a.html | Aleksandr Beznosikov, David Dobre, Gauthier Gidel | https://proceedings.mlr.press/v235/beznosikov24a.html | ICML 2024 | The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints that arise in machine learning applications. In recent years, stochastic versions of FW have gained popularity, motivated by large datasets for which the computation of the full gradient is prohibitively expensive. In this paper, we present two new variants of the FW algorithms for stochastic finite-sum minimization. Our algorithms have the best convergence guarantees of existing stochastic FW approaches for both convex and non-convex objective functions. Our methods do not have the issue of permanently collecting large batches, which is common to many stochastic projection-free approaches. Moreover, our second approach does not require either large batches or full deterministic gradients, which is a typical weakness of many techniques for finite-sum problems. The faster theoretical rates of our approaches are confirmed experimentally. |
https://proceedings.mlr.press/v235/bharadhwaj24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bharadhwaj24a/bharadhwaj24a.pdf | https://openreview.net/forum?id=Jtjurj7oIJ | Position: Scaling Simulation is Neither Necessary Nor Sufficient for In-the-Wild Robot Manipulation | https://proceedings.mlr.press/v235/bharadhwaj24a.html | Homanga Bharadhwaj | https://proceedings.mlr.press/v235/bharadhwaj24a.html | ICML 2024 | In this paper, we develop a structured critique of robotic simulations for real-world manipulation, by arguing that scaling simulators is neither necessary nor sufficient for making progress in general-purpose real-world robotic manipulation agents that are compliant with human preferences. With the ubiquity of robotic simulators, and recent efforts to scale them for diverse tasks, and at the same time the interest in generally capable real-world manipulation systems, we believe it is important to address the limitations of using simulation for real-world manipulation, so that as a community, we can focus our collective resources, energy, and time on approaches that have more principled odds of success. We further demonstrate the unique challenges that real-world manipulation presents, and show through examples and arguments why scaling simulation doesn’t get us closer to solving these challenges required for diverse real-world deployment. |
https://proceedings.mlr.press/v235/bhattacharya24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhattacharya24a/bhattacharya24a.pdf | https://openreview.net/forum?id=rucbIsWoEV | Dynamic Facility Location in High Dimensional Euclidean Spaces | https://proceedings.mlr.press/v235/bhattacharya24a.html | Sayan Bhattacharya, Gramoz Goranci, Shaofeng H.-C. Jiang, Yi Qian, Yubo Zhang | https://proceedings.mlr.press/v235/bhattacharya24a.html | ICML 2024 | We study the facility location problem in the dynamic setting, where the goal is to efficiently process an intermixed sequence of point insertions and deletions while maintaining a high quality and stable solution. Although the problem has been studied in the context of general metrics and low-dimensional spaces, much remains unknown concerning dynamic facility location in high dimensional spaces. In this work, we present the first fully dynamic algorithm for facility location in high-dimensional spaces $\mathbb{R}^{d}$. For any $c \geq 1$, our algorithm achieves $O(c)$-approximation, supports point updates in $\tilde{O}(\mathrm{poly}(d)n^{1/c + o(1)})$ amortized time and incurs $O(1)$ amortized recourse. More generally, our result shows that despite the linear-time lower bound on the update time for general metrics, it is possible to achieve sub-linear update times for metric spaces that admit dynamic nearest neighbour oracles. Experiments on real datasets confirm that our algorithm achieves high-quality solutions with low running time, and incurs minimal recourse. |
https://proceedings.mlr.press/v235/bhattacharyya24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhattacharyya24a/bhattacharyya24a.pdf | https://openreview.net/forum?id=6OSLjErBhh | Total Variation Distance Meets Probabilistic Inference | https://proceedings.mlr.press/v235/bhattacharyya24a.html | Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran | https://proceedings.mlr.press/v235/bhattacharyya24a.html | ICML 2024 | In this paper, we establish a novel connection between total variation (TV) distance estimation and probabilistic inference. In particular, we present an efficient, structure-preserving reduction from relative approximation of TV distance to probabilistic inference over directed graphical models. This reduction leads to a fully polynomial randomized approximation scheme (FPRAS) for estimating TV distances between same-structure distributions over any class of Bayes nets for which there is an efficient probabilistic inference algorithm. In particular, it leads to an FPRAS for estimating TV distances between distributions that are defined over a common Bayes net of small treewidth. Prior to this work, such approximation schemes only existed for estimating TV distances between product distributions. Our approach employs a new notion of partial couplings of high-dimensional distributions, which might be of independent interest. |
https://proceedings.mlr.press/v235/bhirangi24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhirangi24a/bhirangi24a.pdf | https://openreview.net/forum?id=TK7xkOsXDu | Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling | https://proceedings.mlr.press/v235/bhirangi24a.html | Raunaq Bhirangi, Chenyu Wang, Venkatesh Pattabiraman, Carmel Majidi, Abhinav Gupta, Tess Hellebrekers, Lerrel Pinto | https://proceedings.mlr.press/v235/bhirangi24a.html | ICML 2024 | Reasoning from sequences of raw sensory data is a ubiquitous problem across fields ranging from medical devices to robotics. These problems often involve using long sequences of raw sensor data (e.g. magnetometers, piezoresistors) to predict sequences of desirable physical quantities (e.g. force, inertial measurements). While classical approaches are powerful for locally-linear prediction problems, they often fall short when using real-world sensors. These sensors are typically non-linear, are affected by extraneous variables (e.g. vibration), and exhibit data-dependent drift. For many problems, the prediction task is exacerbated by small labeled datasets since obtaining ground-truth labels requires expensive equipment. In this work, we present Hierarchical State-Space models (HiSS), a conceptually simple, new technique for continuous sequential prediction. HiSS stacks structured state-space models on top of each other to create a temporal hierarchy. Across six real-world sensor datasets, from tactile-based state prediction to accelerometer-based inertial measurement, HiSS outperforms state-of-the-art sequence models such as causal Transformers, LSTMs, S4, and Mamba by at least 23% on MSE. Our experiments further indicate that HiSS demonstrates efficient scaling to smaller datasets and is compatible with existing data-filtering techniques. Code, datasets and videos can be found on https://hiss-csp.github.io. |
https://proceedings.mlr.press/v235/bhowal24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhowal24a/bhowal24a.pdf | https://openreview.net/forum?id=Ao9UUaScAU | Why do Variational Autoencoders Really Promote Disentanglement? | https://proceedings.mlr.press/v235/bhowal24a.html | Pratik Bhowal, Achint Soni, Sirisha Rambhatla | https://proceedings.mlr.press/v235/bhowal24a.html | ICML 2024 | Despite not being designed for this purpose, the use of variational autoencoders (VAEs) has proven remarkably effective for disentangled representation learning (DRL). Recent research attributes this success to certain characteristics of the loss function that prevent latent space rotation, or hypothesize about the orthogonality properties of the decoder by drawing parallels with principal component analysis (PCA). This hypothesis, however, has only been tested experimentally for linear VAEs, and the theoretical justification still remains an open problem. Moreover, since real-world VAEs are often inherently non-linear due to the use of neural architectures, understanding DRL capabilities of real-world VAEs remains a critical task. Our work takes a step towards understanding disentanglement in real-world VAEs to theoretically establish how the orthogonality properties of the decoder promotes disentanglement in practical applications. Complementary to our theoretical contributions, our experimental results corroborate our analysis. Code is available at https://github.com/criticalml-uw/Disentanglement-in-VAE. |
https://proceedings.mlr.press/v235/bhuyan24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bhuyan24a/bhuyan24a.pdf | https://openreview.net/forum?id=icijMMWwdG | Best of Both Worlds Guarantees for Smoothed Online Quadratic Optimization | https://proceedings.mlr.press/v235/bhuyan24a.html | Neelkamal Bhuyan, Debankur Mukherjee, Adam Wierman | https://proceedings.mlr.press/v235/bhuyan24a.html | ICML 2024 | We study the smoothed online quadratic optimization (SOQO) problem where, at each round $t$, a player plays an action $x_t$ in response to a quadratic hitting cost and an additional squared $\ell_2$-norm cost for switching actions. This problem class has strong connections to a wide range of application domains including smart grid management, adaptive control, and data center management, where switching-efficient algorithms are highly sought after. We study the SOQO problem in both adversarial and stochastic settings, and in this process, perform the first stochastic analysis of this class of problems. We provide the online optimal algorithm when the minimizers of the hitting cost function evolve as a general stochastic process, which, for the case of martingale process, takes the form of a distribution-agnostic dynamic interpolation algorithm that we call Lazy Adaptive Interpolation (LAI). Next, we present the stochastic-adversarial trade-off by proving an $\Omega(T)$ expected regret for the adversarial optimal algorithm in the literature (ROBD) with respect to LAI and, a sub-optimal competitive ratio for LAI in the adversarial setting. Finally, we present a best-of-both-worlds algorithm that obtains a robust adversarial performance while simultaneously achieving a near-optimal stochastic performance. |
https://proceedings.mlr.press/v235/bian24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bian24a/bian24a.pdf | https://openreview.net/forum?id=Rx9GMufByc | Multi-Patch Prediction: Adapting Language Models for Time Series Representation Learning | https://proceedings.mlr.press/v235/bian24a.html | Yuxuan Bian, Xuan Ju, Jiangtong Li, Zhijian Xu, Dawei Cheng, Qiang Xu | https://proceedings.mlr.press/v235/bian24a.html | ICML 2024 | In this study, we present aLLM4TS, an innovative framework that adapts Large Language Models (LLMs) for time-series representation learning. Central to our approach is that we reconceive time-series forecasting as a self-supervised, multi-patch prediction task, which, compared to traditional mask-and-reconstruction methods, captures temporal dynamics in patch representations more effectively. Our strategy encompasses two-stage training: (i) a causal continual pre-training phase on various time-series datasets, anchored on next patch prediction, effectively syncing LLM capabilities with the intricacies of time-series data; (ii) fine-tuning for multi-patch prediction in the targeted time-series context. A distinctive element of our framework is the patch-wise decoding layer, which departs from previous methods reliant on sequence-level decoding. Such a design directly transposes individual patches into temporal sequences, thereby significantly bolstering the model’s proficiency in mastering temporal patch-based representations. aLLM4TS demonstrates superior performance in several downstream tasks, proving its effectiveness in deriving temporal representations with enhanced transferability and marking a pivotal advancement in the adaptation of LLMs for time-series analysis. |
https://proceedings.mlr.press/v235/bian24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bian24b/bian24b.pdf | https://openreview.net/forum?id=QhKsE7YAJk | Naive Bayes Classifiers over Missing Data: Decision and Poisoning | https://proceedings.mlr.press/v235/bian24b.html | Song Bian, Xiating Ouyang, Zhiwei Fan, Paraschos Koutris | https://proceedings.mlr.press/v235/bian24b.html | ICML 2024 | We study the certifiable robustness of ML classifiers on dirty datasets that could contain missing values. A test point is certifiably robust for an ML classifier if the classifier returns the same prediction for that test point, regardless of which cleaned version (among exponentially many) of the dirty dataset the classifier is trained on. In this paper, we show theoretically that for Naive Bayes Classifiers (NBC) over dirty datasets with missing values: (i) there exists an efficient polynomial time algorithm to decide whether multiple input test points are all certifiably robust over a dirty dataset; and (ii) the data poisoning attack, which aims to make all input test points certifiably non-robust by inserting missing cells to the clean dataset, is in polynomial time for single test points but NP-complete for multiple test points. Extensive experiments demonstrate that our algorithms are efficient and outperform existing baselines. |
https://proceedings.mlr.press/v235/bianchi24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bianchi24a/bianchi24a.pdf | https://openreview.net/forum?id=CmOmaxkt8p | How Well Can LLMs Negotiate? NegotiationArena Platform and Analysis | https://proceedings.mlr.press/v235/bianchi24a.html | Federico Bianchi, Patrick John Chia, Mert Yuksekgonul, Jacopo Tagliabue, Dan Jurafsky, James Zou | https://proceedings.mlr.press/v235/bianchi24a.html | ICML 2024 | Negotiation is the basis of social interactions; humans negotiate everything from the price of cars to how to share common resources. With rapidly growing interest in using large language models (LLMs) to act as agents on behalf of human users, such LLM agents would also need to be able to negotiate. In this paper, we study how well LLMs can negotiate with each other. We develop NegotiationArena: a flexible framework for evaluating and probing the negotiation abilities of LLM agents. We implemented three types of scenarios in NegotiationArena to assess LLM agents’ behaviors in allocating shared resources (ultimatum games), aggregating resources (trading games), and buying/selling goods (price negotiations). Each scenario permits multiple turns of flexible dialogue between LLM agents, allowing for more complex negotiations. Interestingly, LLM agents can significantly boost their negotiation outcomes by employing certain behavioral tactics. For example, by pretending to be desolate and desperate, LLMs can improve their payoffs by 20% when negotiating against the standard GPT-4. We also quantify irrational negotiation behaviors exhibited by the LLM agents, many of which also appear in humans. Together, NegotiationArena offers a new environment to investigate LLM interactions, enabling new insights into LLMs’ theory of mind, irrationality, and reasoning abilities. |
https://proceedings.mlr.press/v235/bianchi24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bianchi24b/bianchi24b.pdf | https://openreview.net/forum?id=Qc5umSsUi8 | Scalable Safe Policy Improvement for Factored Multi-Agent MDPs | https://proceedings.mlr.press/v235/bianchi24b.html | Federico Bianchi, Edoardo Zorzi, Alberto Castellini, Thiago D. Simão, Matthijs T. J. Spaan, Alessandro Farinelli | https://proceedings.mlr.press/v235/bianchi24b.html | ICML 2024 | In this work, we focus on safe policy improvement in multi-agent domains where current state-of-the-art methods cannot be effectively applied because of large state and action spaces. We consider recent results using Monte Carlo Tree Search for Safe Policy Improvement with Baseline Bootstrapping and propose a novel algorithm that scales this approach to multi-agent domains, exploiting the factorization of the transition model and value function. Given a centralized behavior policy and a dataset of trajectories, our algorithm generates an improved policy by selecting joint actions using a novel extension of Max-Plus (or Variable Elimination) that constrains local actions to guarantee safety criteria. An empirical evaluation on multi-agent SysAdmin and multi-UAV Delivery shows that the approach scales to very large domains where state-of-the-art methods cannot work. |
https://proceedings.mlr.press/v235/bica24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bica24a/bica24a.pdf | https://openreview.net/forum?id=5nxIRQ8GNa | Improving fine-grained understanding in image-text pre-training | https://proceedings.mlr.press/v235/bica24a.html | Ioana Bica, Anastasija Ilic, Matthias Bauer, Goker Erdogan, Matko Bošnjak, Christos Kaplanis, Alexey A. Gritsenko, Matthias Minderer, Charles Blundell, Razvan Pascanu, Jovana Mitrovic | https://proceedings.mlr.press/v235/bica24a.html | ICML 2024 | We introduce SPARse fine-grained Contrastive alignment (SPARC), a simple method for pretraining more fine-grained multimodal representations from image-text pairs. Given that multiple image patches often correspond to single words, we propose to learn a grouping of image patches for every token in the caption. To achieve this, we use a sparse similarity metric between image patches and language tokens and compute for each token a language-grouped vision embedding as the weighted average of patches. The token and language-grouped vision embeddings are then contrasted through a fine-grained sequence-wise loss that only depends on individual samples and does not require other batch samples as negatives, i.e., more detailed information is encoded in a computationally inexpensive way. SPARC combines this fine-grained loss with a contrastive loss between global image and text embeddings to learn representations that simultaneously encode global and local information. We thoroughly evaluate SPARC and show improved performance over competing approaches both on image-level tasks relying on coarse-grained information, e.g. classification, as well as region-level tasks relying on fine-grained information, e.g., retrieval, object detection, segmentation while also improving model faithfulness and captioning in foundational vision-language models. |
https://proceedings.mlr.press/v235/biecek24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/biecek24a/biecek24a.pdf | https://openreview.net/forum?id=ooikIHLHCs | Position: Explain to Question not to Justify | https://proceedings.mlr.press/v235/biecek24a.html | Przemyslaw Biecek, Wojciech Samek | https://proceedings.mlr.press/v235/biecek24a.html | ICML 2024 | Explainable Artificial Intelligence (XAI) is a young but very promising field of research. Unfortunately, the progress in this field is currently slowed down by divergent and incompatible goals. We separate various threads tangled within the area of XAI into two complementary cultures of human/value-oriented explanations (BLUE XAI) and model/validation-oriented explanations (RED XAI). This position paper argues that the area of RED XAI is currently under-explored, i.e., more methods for explainability are desperately needed to question models (e.g., extract knowledge from well-performing models as well as spotting and fixing bugs in faulty models), and the area of RED XAI hides great opportunities and potential for important research necessary to ensure the safety of AI systems. We conclude this paper by presenting promising challenges in this area. |
https://proceedings.mlr.press/v235/bini24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bini24a/bini24a.pdf | https://openreview.net/forum?id=yPDTXQwUPy | ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections | https://proceedings.mlr.press/v235/bini24a.html | Massimo Bini, Karsten Roth, Zeynep Akata, Anna Khoreva | https://proceedings.mlr.press/v235/bini24a.html | ICML 2024 | Parameter-efficient finetuning (PEFT) has become ubiquitous to adapt foundation models to downstream task requirements while retaining their generalization ability. However, the amount of additionally introduced parameters and compute for successful adaptation and hyperparameter searches can explode quickly, especially when deployed at scale to serve numerous individual requests. To ensure effective, parameter-efficient, and hyperparameter-robust adaptation, we propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections. By design, ETHER transformations require a minimal number of parameters, are less likely to deteriorate model performance, and exhibit robustness to hyperparameter and learning rate choices. In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters ($\sim$$10$-$100$ times lower than LoRA or OFT) across multiple image synthesis and natural language tasks without exhaustive hyperparameter tuning. Finally, we investigate the recent emphasis on Hyperspherical Energy retention for adaptation and raise questions on its practical utility. The code is available at https://github.com/mwbini/ether. |
https://proceedings.mlr.press/v235/biparva24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/biparva24a/biparva24a.pdf | https://openreview.net/forum?id=DwniHlwcOB | Incorporating Information into Shapley Values: Reweighting via a Maximum Entropy Approach | https://proceedings.mlr.press/v235/biparva24a.html | Darya Biparva, Donatello Materassi | https://proceedings.mlr.press/v235/biparva24a.html | ICML 2024 | Both the marginal contributions needed for the computation of Shapley values and the graph produced by the Pearl-Verma theorem rely on the choice of an ordering of the variables. For Shapley values, the marginal contributions are averaged over all orderings, while in causal inference methods, the typical approach is to select orderings producing a graph with a minimal number of edges. We reconcile both approaches by reinterpreting them from a maximum entropy perspective. Namely, Shapley values assume no prior knowledge about the orderings and treat them as equally likely, while causal inference approaches apply Occam’s razor and consider only orderings producing the simplest explanatory graphs. We find that the blind application of Occam’s razor to Shapley values does not produce fully satisfactory explanations. Hence, we propose two variations of Shapley values based on entropy maximization to appropriately incorporate prior information about the model. |
https://proceedings.mlr.press/v235/blaauwbroek24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/blaauwbroek24a/blaauwbroek24a.pdf | https://openreview.net/forum?id=A7CtiozznN | Graph2Tac: Online Representation Learning of Formal Math Concepts | https://proceedings.mlr.press/v235/blaauwbroek24a.html | Lasse Blaauwbroek, Mirek Olšák, Jason Rute, Fidel Ivan Schaposnik Massolo, Jelle Piepenbrock, Vasily Pestun | https://proceedings.mlr.press/v235/blaauwbroek24a.html | ICML 2024 | In proof assistants, the physical proximity between two formal mathematical concepts is a strong predictor of their mutual relevance. Furthermore, lemmas with close proximity regularly exhibit similar proof structures. We show that this locality property can be exploited through online learning techniques to obtain solving agents that far surpass offline learners when asked to prove theorems in an unseen mathematical setting. We extensively benchmark two such online solvers implemented in the Tactician platform for the Coq proof assistant: First, Tactician’s online $k$-nearest neighbor solver, which can learn from recent proofs, shows a $1.72\times$ improvement in theorems proved over an offline equivalent. Second, we introduce a graph neural network, Graph2Tac, with a novel approach to build hierarchical representations for new definitions. Graph2Tac’s online definition task realizes a $1.5\times$ improvement in theorems solved over an offline baseline. The $k$-NN and Graph2Tac solvers rely on orthogonal online data, making them highly complementary. Their combination improves $1.27\times$ over their individual performances. Both solvers outperform all other general purpose provers for Coq, including CoqHammer, Proverbot9001, and a transformer baseline by at least $1.48\times$ and are available for practical use by end-users. |
https://proceedings.mlr.press/v235/black24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/black24a/black24a.pdf | https://openreview.net/forum?id=3pxMIjB9QK | Biharmonic Distance of Graphs and its Higher-Order Variants: Theoretical Properties with Applications to Centrality and Clustering | https://proceedings.mlr.press/v235/black24a.html | Mitchell Black, Lucy Lin, Weng-Keen Wong, Amir Nayyeri | https://proceedings.mlr.press/v235/black24a.html | ICML 2024 | Effective resistance is a distance between vertices of a graph that is both theoretically interesting and useful in applications. We study a variant of effective resistance called the biharmonic distance. While the effective resistance measures how well-connected two vertices are, we prove several theoretical results supporting the idea that the biharmonic distance measures how important an edge is to the global topology of the graph. Our theoretical results connect the biharmonic distance to well-known measures of connectivity of a graph like its total resistance and sparsity. Based on these results, we introduce two clustering algorithms using the biharmonic distance. Finally, we introduce a further generalization of the biharmonic distance that we call the $k$-harmonic distance. We empirically study the utility of biharmonic and $k$-harmonic distance for edge centrality and graph clustering. |
https://proceedings.mlr.press/v235/black24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/black24b/black24b.pdf | https://openreview.net/forum?id=va3r3hSA6n | Comparing Graph Transformers via Positional Encodings | https://proceedings.mlr.press/v235/black24b.html | Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang | https://proceedings.mlr.press/v235/black24b.html | ICML 2024 | The distinguishing power of graph transformers is tied to the choice of positional encoding: features used to augment the base transformer with information about the graph. There are two primary types of positional encoding: absolute positional encodings (APEs) and relative positional encodings (RPEs). APEs assign features to each node and are given as input to the transformer. RPEs instead assign a feature to each pair of nodes, e.g., shortest-path distance, and are used to augment the attention block. A priori, it is unclear which method is better for maximizing the power of the resulting graph transformer. In this paper, we aim to understand the relationship between these different types of positional encodings. Interestingly, we show that graph transformers using APEs and RPEs are equivalent in their ability to distinguish non-isomorphic graphs. In particular, we demonstrate how to interchange APEs and RPEs while maintaining their distinguishing power in terms of graph transformers. However, in the case of graphs with node features, we show that RPEs may have an advantage over APEs. Based on our theoretical results, we provide a study of different APEs and RPEs—including the shortest-path and resistance distance and the recently introduced stable and expressive positional encoding (SPE)—and compare their distinguishing power in terms of transformers. We believe our work will help navigate the vast number of positional encoding choices and provide guidance on the future design of positional encodings for graph transformers. |
https://proceedings.mlr.press/v235/blanchet24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/blanchet24a/blanchet24a.pdf | https://openreview.net/forum?id=XPP6K57bop | Stability Evaluation through Distributional Perturbation Analysis | https://proceedings.mlr.press/v235/blanchet24a.html | Jose Blanchet, Peng Cui, Jiajin Li, Jiashuo Liu | https://proceedings.mlr.press/v235/blanchet24a.html | ICML 2024 | The performance of learning models often deteriorates when deployed in out-of-sample environments. To ensure reliable deployment, we propose a stability evaluation criterion based on distributional perturbations. Conceptually, our stability evaluation criterion is defined as the minimal perturbation required on our observed dataset to induce a prescribed deterioration in risk evaluation. In this paper, we utilize the optimal transport (OT) discrepancy with moment constraints on the (sample, density) space to quantify this perturbation. Therefore, our stability evaluation criterion can address both data corruptions and sub-population shifts—the two most common types of distribution shifts in real-world scenarios. To further realize practical benefits, we present a series of tractable convex formulations and computational methods tailored to different classes of loss functions. The key technical tool to achieve this is the strong duality theorem provided in this paper. Empirically, we validate the practical utility of our stability evaluation criterion across a host of real-world applications. These empirical studies showcase the criterion’s ability not only to compare the stability of different learning models and features but also to provide valuable guidelines and strategies to further improve models. |
https://proceedings.mlr.press/v235/bleistein24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bleistein24a/bleistein24a.pdf | https://openreview.net/forum?id=xGlVkBSDdt | Dynamic Survival Analysis with Controlled Latent States | https://proceedings.mlr.press/v235/bleistein24a.html | Linus Bleistein, Van Tuan Nguyen, Adeline Fermanian, Agathe Guilloux | https://proceedings.mlr.press/v235/bleistein24a.html | ICML 2024 | We consider the task of learning individual-specific intensities of counting processes from a set of static variables and irregularly sampled time series. We introduce a novel modeling approach in which the intensity is the solution to a controlled differential equation. We first design a neural estimator by building on neural controlled differential equations. We then show that our model can be linearized in the signature space under sufficient regularity conditions, yielding a signature-based estimator which we call CoxSig. We provide theoretical learning guarantees for both estimators, before showcasing the performance of our models on a vast array of simulated and real-world datasets from finance, predictive maintenance and food supply chain management. |
https://proceedings.mlr.press/v235/blessing24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/blessing24a/blessing24a.pdf | https://openreview.net/forum?id=fVg9YrSllr | Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling | https://proceedings.mlr.press/v235/blessing24a.html | Denis Blessing, Xiaogang Jia, Johannes Esslinger, Francisco Vargas, Gerhard Neumann | https://proceedings.mlr.press/v235/blessing24a.html | ICML 2024 | Monte Carlo methods, Variational Inference, and their combinations play a pivotal role in sampling from intractable probability distributions. However, current studies lack a unified evaluation framework, relying on disparate performance measures and limited method comparisons across diverse tasks, complicating the assessment of progress and hindering the decision-making of practitioners. In response to these challenges, our work introduces a benchmark that evaluates sampling methods using a standardized task suite and a broad range of performance criteria. Moreover, we study existing metrics for quantifying mode collapse and introduce novel metrics for this purpose. Our findings provide insights into strengths and weaknesses of existing sampling methods, serving as a valuable reference for future developments. |
https://proceedings.mlr.press/v235/bok24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bok24a/bok24a.pdf | https://openreview.net/forum?id=KCVCFsPkrm | Shifted Interpolation for Differential Privacy | https://proceedings.mlr.press/v235/bok24a.html | Jinho Bok, Weijie J Su, Jason Altschuler | https://proceedings.mlr.press/v235/bok24a.html | ICML 2024 | Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning. It is a fundamental question to quantify their privacy leakage, yet tight characterizations remain open even in the foundational setting of convex losses. This paper improves over previous analyses by establishing (and refining) the “privacy amplification by iteration” phenomenon in the unifying framework of $f$-differential privacy—which tightly captures all aspects of the privacy loss and immediately implies tighter privacy accounting in other notions of differential privacy, e.g., $(\varepsilon,\delta)$-DP and Rényi DP. Our key technical insight is the construction of shifted interpolated processes that unravel the popular shifted-divergences argument, enabling generalizations beyond divergence-based relaxations of DP. Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization. Our techniques extend to many settings: convex/strongly convex, constrained/unconstrained, full/cyclic/stochastic batches, and all combinations thereof. As an immediate corollary, we recover the $f$-DP characterization of the exponential mechanism for strongly convex optimization in Gopi et al. (2022), and moreover extend this result to more general settings. |
https://proceedings.mlr.press/v235/bombari24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bombari24a/bombari24a.pdf | https://openreview.net/forum?id=o6N1Bqay0k | How Spurious Features are Memorized: Precise Analysis for Random and NTK Features | https://proceedings.mlr.press/v235/bombari24a.html | Simone Bombari, Marco Mondelli | https://proceedings.mlr.press/v235/bombari24a.html | ICML 2024 | Deep learning models are known to overfit and memorize spurious features in the training dataset. While numerous empirical studies have aimed at understanding this phenomenon, a rigorous theoretical framework to quantify it is still missing. In this paper, we consider spurious features that are uncorrelated with the learning task, and we provide a precise characterization of how they are memorized via two separate terms: (i) the stability of the model with respect to individual training samples, and (ii) the feature alignment between the spurious pattern and the full sample. While the first term is well established in learning theory and it is connected to the generalization error in classical work, the second one is, to the best of our knowledge, novel. Our key technical result gives a precise characterization of the feature alignment for the two prototypical settings of random features (RF) and neural tangent kernel (NTK) regression. We prove that the memorization of spurious features weakens as the generalization capability increases and, through the analysis of the feature alignment, we unveil the role of the model and of its activation function. Numerical experiments show the predictive power of our theory on standard datasets (MNIST, CIFAR-10). |
https://proceedings.mlr.press/v235/bombari24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bombari24b/bombari24b.pdf | https://openreview.net/forum?id=JBaPBPrn93 | Towards Understanding the Word Sensitivity of Attention Layers: A Study via Random Features | https://proceedings.mlr.press/v235/bombari24b.html | Simone Bombari, Marco Mondelli | https://proceedings.mlr.press/v235/bombari24b.html | ICML 2024 | Understanding the reasons behind the exceptional success of transformers requires a better analysis of why attention layers are suitable for NLP tasks. In particular, such tasks require predictive models to capture contextual meaning which often depends on one or few words, even if the sentence is long. Our work studies this key property, dubbed word sensitivity (WS), in the prototypical setting of random features. We show that attention layers enjoy high WS, namely, there exists a vector in the space of embeddings that largely perturbs the random attention features map. The argument critically exploits the role of the softmax in the attention layer, highlighting its benefit compared to other activations (e.g., ReLU). In contrast, the WS of standard random features is of order $1/\sqrt{n}$, $n$ being the number of words in the textual sample, and thus it decays with the length of the context. We then translate these results on the word sensitivity into generalization bounds: due to their low WS, random features provably cannot learn to distinguish between two sentences that differ only in a single word; in contrast, due to their high WS, random attention features have higher generalization capabilities. We validate our theoretical results with experimental evidence over the BERT-Base word embeddings of the imdb review dataset. |
https://proceedings.mlr.press/v235/bonel24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bonel24a/bonel24a.pdf | https://openreview.net/forum?id=hdpv6mall8 | Position: Machine Learning-powered Assessments of the EU Digital Services Act Aid Quantify Policy Impacts on Online Harms | https://proceedings.mlr.press/v235/bonel24a.html | Eleonora Bonel, Luca Nannini, Davide Bassi, Michele Joshua Maggini | https://proceedings.mlr.press/v235/bonel24a.html | ICML 2024 | While machine learning shows promise in automated knowledge generation, current techniques such as large language models and micro-targeted influence operations can be exploited for harmful purposes like the proliferation of disinformation. The European Union’s Digital Services Act (DSA) is an exemplary policy response addressing these harms generated by online platforms. In this regard, it necessitates a comprehensive evaluation of its impact on curbing the harmful downstream effects of these opaque practices. Despite their harmful applications, we argue that machine learning techniques offer immense, yet under-exploited, potential for unraveling the impacts of regulations like the DSA. Following an analysis that reveals possible limitations in the DSA’s provisions, we call for resolute efforts to address methodological barriers around appropriate data access, isolating marginal regulatory effects, and facilitating generalization across different contexts. Given the identified advantages of data-driven approaches to regulatory delivery, we advocate for machine learning research to help quantify the policy impacts on online harms. |
https://proceedings.mlr.press/v235/bordelon24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bordelon24a/bordelon24a.pdf | https://openreview.net/forum?id=nbOY1OmtRc | A Dynamical Model of Neural Scaling Laws | https://proceedings.mlr.press/v235/bordelon24a.html | Blake Bordelon, Alexander Atanasov, Cengiz Pehlevan | https://proceedings.mlr.press/v235/bordelon24a.html | ICML 2024 | On a variety of tasks, the performance of neural networks predictably improves with training time, dataset size and model size across many orders of magnitude. This phenomenon is known as a neural scaling law. Of fundamental importance is the compute-optimal scaling law, which reports the performance as a function of units of compute when choosing model sizes optimally. We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization. This reproduces many observations about neural scaling laws. First, our model makes a prediction about why the scalings of performance with training time and with model size have different power law exponents. Consequently, the theory predicts an asymmetric compute-optimal scaling rule where the number of training steps is increased faster than the number of model parameters, consistent with recent empirical observations. Second, it has been observed that early in training, networks converge to their infinite-width dynamics at a rate $1/\text{width}$ but at late time exhibit a rate $\text{width}^{-c}$, where $c$ depends on the structure of the architecture and task. We show that our model exhibits this behavior. Lastly, our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data. |
https://proceedings.mlr.press/v235/boschi24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/boschi24a/boschi24a.pdf | https://openreview.net/forum?id=a7MW5kFFOf | A New Computationally Efficient Algorithm to solve Feature Selection for Functional Data Classification in High-dimensional Spaces | https://proceedings.mlr.press/v235/boschi24a.html | Tobia Boschi, Francesca Bonin, Rodrigo Ordonez-Hurtado, Alessandra Pascale, Jonathan P Epperlein | https://proceedings.mlr.press/v235/boschi24a.html | ICML 2024 | This paper introduces a novel methodology for Feature Selection for Functional Classification, FSFC, that addresses the challenge of jointly performing feature selection and classification of functional data in scenarios with categorical responses and multivariate longitudinal features. FSFC tackles a newly defined optimization problem that integrates logistic loss and functional features to identify the most crucial variables for classification. To address the minimization procedure, we employ functional principal components and develop a new adaptive version of the Dual Augmented Lagrangian algorithm. The computational efficiency of FSFC enables handling high-dimensional scenarios where the number of features may considerably exceed the number of statistical units. Simulation experiments demonstrate that FSFC outperforms other machine learning and deep learning methods in computational time and classification accuracy. Furthermore, the FSFC feature selection capability can be leveraged to significantly reduce the problem’s dimensionality and enhance the performances of other classification algorithms. The efficacy of FSFC is also demonstrated through a real data application, analyzing relationships between four chronic diseases and other health and demographic factors. FSFC source code is publicly available at https://github.com/IBM/funGCN. |
https://proceedings.mlr.press/v235/bouchard24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bouchard24a/bouchard24a.pdf | https://openreview.net/forum?id=uQiFsBil3p | Random matrix theory improved Fréchet mean of symmetric positive definite matrices | https://proceedings.mlr.press/v235/bouchard24a.html | Florent Bouchard, Ammar Mian, Malik Tiomoko, Guillaume Ginolhac, Frederic Pascal | https://proceedings.mlr.press/v235/bouchard24a.html | ICML 2024 | In this study, we consider the realm of covariance matrices in machine learning, particularly focusing on computing Fréchet means on the manifold of symmetric positive definite matrices, commonly referred to as Karcher or geometric means. Such means are leveraged in numerous machine learning tasks. Relying on advanced statistical tools, we introduce a random matrix theory based method that estimates Fréchet means, which is particularly beneficial when dealing with low sample support and a high number of matrices to average. Our experimental evaluation, involving both synthetic and real-world EEG and hyperspectral datasets, shows that we largely outperform state-of-the-art methods. |
https://proceedings.mlr.press/v235/bouchiat24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bouchiat24a/bouchiat24a.pdf | https://openreview.net/forum?id=0pSTzCnEmi | Improving Neural Additive Models with Bayesian Principles | https://proceedings.mlr.press/v235/bouchiat24a.html | Kouroche Bouchiat, Alexander Immer, Hugo Yèche, Gunnar Ratsch, Vincent Fortuin | https://proceedings.mlr.press/v235/bouchiat24a.html | ICML 2024 | Neural additive models (NAMs) enhance the transparency of deep neural networks by handling input features in separate additive sub-networks. However, they lack inherent mechanisms that provide calibrated uncertainties and enable selection of relevant features and interactions. Approaching NAMs from a Bayesian perspective, we augment them in three primary ways, namely by a) providing credible intervals for the individual additive sub-networks; b) estimating the marginal likelihood to perform an implicit selection of features via an empirical Bayes procedure; and c) facilitating the ranking of feature pairs as candidates for second-order interaction in fine-tuned models. In particular, we develop Laplace-approximated NAMs (LA-NAMs), which show improved empirical performance on tabular datasets and challenging real-world medical tasks. |
https://proceedings.mlr.press/v235/bounoua24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bounoua24a/bounoua24a.pdf | https://openreview.net/forum?id=LuhWZ2oJ5L | S$Ω$I: Score-based O-INFORMATION Estimation | https://proceedings.mlr.press/v235/bounoua24a.html | Mustapha Bounoua, Giulio Franzese, Pietro Michiardi | https://proceedings.mlr.press/v235/bounoua24a.html | ICML 2024 | The analysis of scientific data and complex multivariate systems requires information quantities that capture relationships among multiple random variables. Recently, new information-theoretic measures have been developed to overcome the shortcomings of classical ones, such as mutual information, that are restricted to considering pairwise interactions. Among them, the concept of information synergy and redundancy is crucial for understanding the high-order dependencies between variables. One of the most prominent and versatile measures based on this concept is O-information, which provides a clear and scalable way to quantify the synergy-redundancy balance in multivariate systems. However, its practical application is limited to simplified cases. In this work, we introduce S$\Omega$I, which allows to compute O-information without restrictive assumptions about the system while leveraging a unique model. Our experiments validate our approach on synthetic data, and demonstrate the effectiveness of S$\Omega$I in the context of a real-world use case. |
https://proceedings.mlr.press/v235/bravo24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bravo24a/bravo24a.pdf | https://openreview.net/forum?id=UjDp4Wkq2V | On dimensionality of feature vectors in MPNNs | https://proceedings.mlr.press/v235/bravo24a.html | César Bravo, Alexander Kozachinskiy, Cristobal Rojas | https://proceedings.mlr.press/v235/bravo24a.html | ICML 2024 | We revisit the result of Morris et al. (AAAI’19) that message-passing graph neural networks (MPNNs) are equal in their distinguishing power to the Weisfeiler–Leman (WL) isomorphism test. Morris et al. show their result with ReLU activation function and $O(n)$-dimensional feature vectors, where $n$ is the size of the graph. Recently, by introducing randomness into the architecture, Aamand et al. (NeurIPS’22) improved this bound to $O(\log n)$-dimensional feature vectors, although at the expense of guaranteeing perfect simulation only with high probability. In all these constructions, to guarantee equivalence to the WL test, the dimension of feature vectors in the MPNN has to increase with the size of the graphs. However, architectures used in practice have feature vectors of constant dimension. Thus, there is a gap between the guarantees provided by these results and the actual characteristics of architectures used in practice. In this paper we close this gap by showing that, for any non-polynomial analytic (like the sigmoid) activation function, to guarantee that MPNNs are equivalent to the WL test, feature vectors of dimension $d=1$ is all we need, independently of the size of the graphs. Our main technical insight is that for simulating multi-sets in the WL-test, it is enough to use linear independence of feature vectors over rationals instead of reals. Countability of the set of rationals together with nice properties of analytic functions allow us to carry out the simulation invariant over the iterations of the WL test without increasing the dimension of the feature vectors. |
https://proceedings.mlr.press/v235/brenner24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/brenner24a/brenner24a.pdf | https://openreview.net/forum?id=b1iurBHDck | Integrating Multimodal Data for Joint Generative Modeling of Complex Dynamics | https://proceedings.mlr.press/v235/brenner24a.html | Manuel Brenner, Florian Hess, Georgia Koppe, Daniel Durstewitz | https://proceedings.mlr.press/v235/brenner24a.html | ICML 2024 | Many, if not most, systems of interest in science are naturally described as nonlinear dynamical systems. Empirically, we commonly access these systems through time series measurements. Often such time series may consist of discrete random variables rather than continuous measurements, or may be composed of measurements from multiple data modalities observed simultaneously. For instance, in neuroscience we may have behavioral labels in addition to spike counts and continuous physiological recordings. While by now there is a burgeoning literature on deep learning for dynamical systems reconstruction (DSR), multimodal data integration has hardly been considered in this context. Here we provide such an efficient and flexible algorithmic framework that rests on a multimodal variational autoencoder for generating a sparse teacher signal that guides training of a reconstruction model, exploiting recent advances in DSR training techniques. It enables combining various sources of information for optimal reconstruction, even allows for reconstruction from symbolic data (class labels) alone, and connects different types of observations within a common latent dynamics space. In contrast to previous multimodal data integration techniques for scientific applications, our framework is fully generative, producing, after training, trajectories with the same geometrical and temporal structure as those of the ground truth system. |
https://proceedings.mlr.press/v235/bressan24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bressan24a/bressan24a.pdf | https://openreview.net/forum?id=d5tJWH5yCi | Fully-Dynamic Approximate Decision Trees With Worst-Case Update Time Guarantees | https://proceedings.mlr.press/v235/bressan24a.html | Marco Bressan, Mauro Sozio | https://proceedings.mlr.press/v235/bressan24a.html | ICML 2024 | We study the problem of maintaining a decision tree in the fully-dynamic setting, where the dataset is updated by an adversarial sequence of insertions and deletions. We present the first algorithm with strong guarantees on both the quality of the tree and the worst-case update time (the maximum time spent between two consecutive dataset updates). For instance, we can maintain a tree where each node has Gini gain within $\beta$ of the optimum, while guaranteeing an update time $O(d \beta^{-3} \log^4 n )$, where $d$ is the number of features and $n$ the maximum size of the dataset. This is optimal up to polylogarithmic factors, as any dynamic algorithm must have update time in $\Omega(d)$. Similar guarantees hold for the variance and information gain, for classification and regression, and even for boosted trees. This shows that many popular decision trees such as ID3 or C4.5 can efficiently be made dynamic, answering an open question of Bressan, Damay and Sozio (AAAI 2023). We also show that, under the 3SUM conjecture or the Orthogonal Vectors Hypothesis, the update time must be polynomial in $1/\beta$. |
https://proceedings.mlr.press/v235/brilliantov24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/brilliantov24a/brilliantov24a.pdf | https://openreview.net/forum?id=gQz30hTkRE | Applying language models to algebraic topology: generating simplicial cycles using multi-labeling in Wu’s formula | https://proceedings.mlr.press/v235/brilliantov24a.html | Kirill Brilliantov, Fedor Pavutnitskiy, Dmitry Pasechnyuk, German Magai | https://proceedings.mlr.press/v235/brilliantov24a.html | ICML 2024 | Computing homotopy groups of spheres has long been a fundamental objective in algebraic topology. Various theoretical and algorithmic approaches have been developed to tackle this problem. In this paper we take a step towards the goal of comprehending the group-theoretic structure of the generators of these homotopy groups by leveraging the power of machine learning. Specifically, in the simplicial group setting of Wu’s formula, we reformulate the problem of generating simplicial cycles as a problem of sampling from the intersection of algorithmic datasets related to Dyck languages. We present and evaluate language modelling approaches that employ multi-label information for input sequences, along with the necessary group-theoretic toolkit and non-neural baselines. |
https://proceedings.mlr.press/v235/brown24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/brown24a/brown24a.pdf | https://openreview.net/forum?id=igRAPavrrS | Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation | https://proceedings.mlr.press/v235/brown24a.html | Gavin R Brown, Krishnamurthy Dj Dvijotham, Georgina Evans, Daogao Liu, Adam Smith, Abhradeep Guha Thakurta | https://proceedings.mlr.press/v235/brown24a.html | ICML 2024 | We provide an improved analysis of standard differentially private gradient descent for linear regression under the squared error loss. Under modest assumptions on the input, we characterize the distribution of the iterate at each time step. Our analysis leads to new results on the algorithm’s accuracy: for a proper fixed choice of hyperparameters, the sample complexity depends only linearly on the dimension of the data. This matches the dimension-dependence of the (non-private) ordinary least squares estimator as well as that of recent private algorithms that rely on sophisticated adaptive gradient-clipping schemes (Varshney et al., 2022; Liu et al., 2023). Our analysis of the iterates’ distribution also allows us to construct confidence intervals for the empirical optimizer which adapt automatically to the variance of the algorithm on a particular data set. We validate our theorems through experiments on synthetic data. |
https://proceedings.mlr.press/v235/brown-cohen24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/brown-cohen24a/brown-cohen24a.pdf | https://openreview.net/forum?id=6jmdOTRMIO | Scalable AI Safety via Doubly-Efficient Debate | https://proceedings.mlr.press/v235/brown-cohen24a.html | Jonah Brown-Cohen, Geoffrey Irving, Georgios Piliouras | https://proceedings.mlr.press/v235/brown-cohen24a.html | ICML 2024 | The emergence of pre-trained AI systems with powerful capabilities across a diverse and ever-increasing set of complex domains has raised a critical challenge for AI safety as tasks can become too complicated for humans to judge directly. Irving et al. (2018) proposed a debate method in this direction with the goal of pitting the power of such AI models against each other until the problem of identifying (mis)-alignment is broken down into a manageable subtask. While the promise of this approach is clear, the original framework was based on the assumption that the honest strategy is able to simulate deterministic AI systems for an exponential number of steps, limiting its applicability. In this paper, we show how to address these challenges by designing a new set of debate protocols where the honest strategy can always succeed using a simulation of a polynomial number of steps, whilst being able to verify the alignment of stochastic AI systems, even when the dishonest strategy is allowed to use exponentially many simulation steps. |
https://proceedings.mlr.press/v235/bruce24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bruce24a/bruce24a.pdf | https://openreview.net/forum?id=bJbSbJskOS | Genie: Generative Interactive Environments | https://proceedings.mlr.press/v235/bruce24a.html | Jake Bruce, Michael D Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Maria Elisabeth Bechtle, Feryal Behbahani, Stephanie C.Y. Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando De Freitas, Satinder Singh, Tim Rocktäschel | https://proceedings.mlr.press/v235/bruce24a.html | ICML 2024 | We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It is comprised of a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain specific requirements typically found in the world model literature. Further the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future. |
https://proceedings.mlr.press/v235/bryutkin24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bryutkin24a/bryutkin24a.pdf | https://openreview.net/forum?id=nYX7I6PsL7 | HAMLET: Graph Transformer Neural Operator for Partial Differential Equations | https://proceedings.mlr.press/v235/bryutkin24a.html | Andrey Bryutkin, Jiahao Huang, Zhongying Deng, Guang Yang, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero | https://proceedings.mlr.press/v235/bryutkin24a.html | ICML 2024 | We present a novel graph transformer framework, HAMLET, designed to address the challenges in solving partial differential equations (PDEs) using neural networks. The framework uses graph transformers with modular input encoders to directly incorporate differential equation information into the solution process. This modularity enhances parameter correspondence control, making HAMLET adaptable to PDEs of arbitrary geometries and varied input formats. Notably, HAMLET scales effectively with increasing data complexity and noise, showcasing its robustness. HAMLET is not just tailored to a single type of physical simulation, but can be applied across various domains. Moreover, it boosts model resilience and performance, especially in scenarios with limited data. We demonstrate, through extensive experiments, that our framework is capable of outperforming current techniques for PDEs. |
https://proceedings.mlr.press/v235/bu24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bu24a/bu24a.pdf | https://openreview.net/forum?id=kzz0kn546b | Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples | https://proceedings.mlr.press/v235/bu24a.html | Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong | https://proceedings.mlr.press/v235/bu24a.html | ICML 2024 | Neural Network-based active learning (NAL) is a cost-effective data selection technique that utilizes neural networks to select and train on a small subset of samples. While existing work successfully develops various effective or theory-justified NAL algorithms, the understanding of the two commonly used query criteria of NAL: uncertainty-based and diversity-based, remains in its infancy. In this work, we try to move one step forward by offering a unified explanation for the success of both query criteria-based NAL from a feature learning view. Specifically, we consider a feature-noise data model comprising easy-to-learn or hard-to-learn features disrupted by noise, and conduct analysis over 2-layer NN-based NALs in the pool-based scenario. We provably show that both uncertainty-based and diversity-based NAL are inherently amenable to one and the same principle, i.e., striving to prioritize samples that contain yet-to-be-learned features. We further prove that this shared principle is the key to their success: achieving a small test error within a small labeled set. In contrast, strategy-free passive learning exhibits a large test error due to the inadequate learning of yet-to-be-learned features, necessitating a significantly larger label complexity for a sufficient test error reduction. Experimental results validate our findings. |
https://proceedings.mlr.press/v235/bu24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bu24b/bu24b.pdf | https://openreview.net/forum?id=6n99bIxb3r | Tackling Prevalent Conditions in Unsupervised Combinatorial Optimization: Cardinality, Minimum, Covering, and More | https://proceedings.mlr.press/v235/bu24b.html | Fanchen Bu, Hyeonsoo Jo, Soo Yong Lee, Sungsoo Ahn, Kijung Shin | https://proceedings.mlr.press/v235/bu24b.html | ICML 2024 | Combinatorial optimization (CO) is naturally discrete, making machine-learning techniques based on differentiable optimization inapplicable. Karalias & Loukas (2020) adapted the probabilistic method by Erdős & Spencer (1974), to incorporate CO into differentiable optimization. Their work ignited the research on unsupervised learning for CO, composed of two main components: probabilistic objectives and derandomization. However, each component confronts unique challenges. First, deriving objectives under complex conditions and constraints is nontrivial. Second, the derandomization process is underexplored, and the existing derandomization methods are either random sampling or naive rounding. In this work, we aim to tackle complex conditions in unsupervised CO. First, we concretize the targets for probabilistic objective construction and derandomization with theoretical justification. Then, for various complex conditions commonly involved in different CO problems, we derive nontrivial objectives and derandomization to meet the targets. Finally, we apply the derivations to various CO problems. Via extensive experiments on synthetic and real-world graphs, we validate the correctness of our derivations and show our empirical superiority w.r.t. both optimization quality and speed. |
https://proceedings.mlr.press/v235/bu24c.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bu24c/bu24c.pdf | https://openreview.net/forum?id=fqeANcjBMT | Differentially Private Bias-Term Fine-tuning of Foundation Models | https://proceedings.mlr.press/v235/bu24c.html | Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis | https://proceedings.mlr.press/v235/bu24c.html | ICML 2024 | We study the problem of differentially private (DP) fine-tuning of large pre-trained models — a recent privacy-preserving approach suitable for solving downstream tasks with sensitive data. Existing work has demonstrated that high accuracy is possible under strong privacy constraint, yet requires significant computational overhead or modifications to the network architecture. We propose differentially private bias-term fine-tuning (DP-BiTFiT), which matches the state-of-the-art accuracy for DP algorithms and the efficiency of the standard BiTFiT. DP-BiTFiT is model agnostic (not modifying the network architecture), parameter efficient (only training about 0.1% of the parameters), and computation efficient (almost removing the overhead caused by DP, in both the time and space complexity). On a wide range of tasks, DP-BiTFiT is 2 - 30X faster and uses 2 - 8X less memory than DP full fine-tuning, even faster than the standard full fine-tuning. This amazing efficiency enables us to conduct DP fine-tuning on language and vision tasks with long-sequence texts and high-resolution images, which were computationally difficult using existing methods. |
https://proceedings.mlr.press/v235/buathong24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/buathong24a/buathong24a.pdf | https://openreview.net/forum?id=scMAQ3mFAA | Bayesian Optimization of Function Networks with Partial Evaluations | https://proceedings.mlr.press/v235/buathong24a.html | Poompol Buathong, Jiayue Wan, Raul Astudillo, Sam Daulton, Maximilian Balandat, Peter I. Frazier | https://proceedings.mlr.press/v235/buathong24a.html | ICML 2024 | Bayesian optimization is a powerful framework for optimizing functions that are expensive or time-consuming to evaluate. Recent work has considered Bayesian optimization of function networks (BOFN), where the objective function is given by a network of functions, each taking as input the output of previous nodes in the network as well as additional parameters. Leveraging this network structure has been shown to yield significant performance improvements. Existing BOFN algorithms for general-purpose networks evaluate the full network at each iteration. However, many real-world applications allow for evaluating nodes individually. To exploit this, we propose a novel knowledge gradient acquisition function that chooses which node and corresponding inputs to evaluate in a cost-aware manner, thereby reducing query costs by evaluating only on a part of the network at each step. We provide an efficient approach to optimizing our acquisition function and show that it outperforms existing BOFN methods and other benchmarks across several synthetic and real-world problems. Our acquisition function is the first to enable cost-aware optimization of a broad class of function networks. |
https://proceedings.mlr.press/v235/buchholz24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/buchholz24a/buchholz24a.pdf | https://openreview.net/forum?id=GyV33H5Uuk | Robustness of Nonlinear Representation Learning | https://proceedings.mlr.press/v235/buchholz24a.html | Simon Buchholz, Bernhard Schölkopf | https://proceedings.mlr.press/v235/buchholz24a.html | ICML 2024 | We study the problem of unsupervised representation learning in slightly misspecified settings, and thus formalize the study of robustness of nonlinear representation learning. We focus on the case where the mixing is close to a local isometry in a suitable distance and show based on existing rigidity results that the mixing can be identified up to linear transformations and small errors. In a second step, we investigate Independent Component Analysis (ICA) with observations generated according to $x=f(s)=As+h(s)$ where $A$ is an invertible mixing matrix and $h$ a small perturbation. We show that we can approximately recover the matrix $A$ and the independent components. Together, these two results show approximate identifiability of nonlinear ICA with almost isometric mixing functions. Those results are a step towards identifiability results for unsupervised representation learning for real-world data that do not follow restrictive model classes. |
https://proceedings.mlr.press/v235/bui24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bui24a/bui24a.pdf | https://openreview.net/forum?id=lon750Kf7n | Density-Softmax: Efficient Test-time Model for Uncertainty Estimation and Robustness under Distribution Shifts | https://proceedings.mlr.press/v235/bui24a.html | Ha Manh Bui, Anqi Liu | https://proceedings.mlr.press/v235/bui24a.html | ICML 2024 | Sampling-based methods, e.g., Deep Ensembles and Bayesian Neural Nets, have become promising approaches to improve the quality of uncertainty estimation and robust generalization. However, they suffer from a large model size and high latency at test time, which limits the scalability needed for low-resource devices and real-time applications. To resolve these computational issues, we propose Density-Softmax, a sampling-free deterministic framework via combining a density function built on a Lipschitz-constrained feature extractor with the softmax layer. Theoretically, we show that our model is the solution of minimax uncertainty risk and is distance-aware on feature space, thus reducing the over-confidence of the standard softmax under distribution shifts. Empirically, our method enjoys competitive results with state-of-the-art techniques in terms of uncertainty and robustness, while having a lower number of model parameters and a lower latency at test time. |
https://proceedings.mlr.press/v235/bui24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bui24b/bui24b.pdf | https://openreview.net/forum?id=2T00oYk54P | Explaining Graph Neural Networks via Structure-aware Interaction Index | https://proceedings.mlr.press/v235/bui24b.html | Ngoc Bui, Hieu Trung Nguyen, Viet Anh Nguyen, Rex Ying | https://proceedings.mlr.press/v235/bui24b.html | ICML 2024 | The Shapley value is a prominent tool for interpreting black-box machine learning models thanks to its strong theoretical foundation. However, for models with structured inputs, such as graph neural networks, existing Shapley-based explainability approaches either focus solely on node-wise importance or neglect the graph structure when perturbing the input instance. This paper introduces the Myerson-Taylor interaction index that internalizes the graph structure into attributing the node values and the interaction values among nodes. Unlike the Shapley-based methods, the Myerson-Taylor index decomposes coalitions into components satisfying a pre-chosen connectivity criterion. We prove that the Myerson-Taylor index is the unique one that satisfies a system of five natural axioms accounting for graph structure and high-order interaction among nodes. Leveraging these properties, we propose Myerson-Taylor Structure-Aware Graph Explainer (MAGE), a novel explainer that uses the second-order Myerson-Taylor index to identify the most important motifs influencing the model prediction, both positively and negatively. Extensive experiments on various graph datasets and models demonstrate that our method consistently provides superior subgraph explanations compared to state-of-the-art methods. |
https://proceedings.mlr.press/v235/bulian24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/bulian24a/bulian24a.pdf | https://openreview.net/forum?id=ScIHQoTUjT | Assessing Large Language Models on Climate Information | https://proceedings.mlr.press/v235/bulian24a.html | Jannis Bulian, Mike S. Schäfer, Afra Amini, Heidi Lam, Massimiliano Ciaramita, Ben Gaiarin, Michelle Chen Huebscher, Christian Buck, Niels G. Mede, Markus Leippold, Nadine Strauss | https://proceedings.mlr.press/v235/bulian24a.html | ICML 2024 | As Large Language Models (LLMs) rise in popularity, it is necessary to assess their capability in critically relevant domains. We present a comprehensive evaluation framework, grounded in science communication research, to assess LLM responses to questions about climate change. Our framework emphasizes both presentational and epistemological adequacy, offering a fine-grained analysis of LLM generations spanning 8 dimensions and 30 issues. Our evaluation task is a real-world example of a growing number of challenging problems where AI can complement and lift human performance. We introduce a novel protocol for scalable oversight that relies on AI Assistance and raters with relevant education. We evaluate several recent LLMs on a set of diverse climate questions. Our results point to a significant gap between surface and epistemological qualities of LLMs in the realm of climate communication. |
https://proceedings.mlr.press/v235/burns24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/burns24a/burns24a.pdf | https://openreview.net/forum?id=l0OGoZPZuC | Semantically-correlated memories in a dense associative model | https://proceedings.mlr.press/v235/burns24a.html | Thomas F Burns | https://proceedings.mlr.press/v235/burns24a.html | ICML 2024 | I introduce a novel associative memory model named Correlated Dense Associative Memory (CDAM), which integrates both auto- and hetero-association in a unified framework for continuous-valued memory patterns. Employing an arbitrary graph structure to semantically link memory patterns, CDAM is theoretically and numerically analysed, revealing four distinct dynamical modes: auto-association, narrow hetero-association, wide hetero-association, and neutral quiescence. Drawing inspiration from inhibitory modulation studies, I employ anti-Hebbian learning rules to control the range of hetero-association, extract multi-scale representations of community structures in graphs, and stabilise the recall of temporal sequences. Experimental demonstrations showcase CDAM’s efficacy in handling real-world data, replicating a classical neuroscience experiment, performing image retrieval, and simulating arbitrary finite automata. |
https://proceedings.mlr.press/v235/burns24b.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/burns24b/burns24b.pdf | https://openreview.net/forum?id=ghNRg2mEgN | Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision | https://proceedings.mlr.press/v235/burns24b.html | Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, Jeffrey Wu | https://proceedings.mlr.press/v235/burns24b.html | ICML 2024 | Widely used alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on the ability of humans to supervise model behavior—for example, to evaluate whether a model faithfully followed instructions or generated safe outputs. However, future superhuman models will behave in complex ways too difficult for humans to reliably evaluate; humans will only be able to weakly supervise superhuman models. We study an analogy to this problem: can weak model supervision elicit the full capabilities of a much stronger model? We test this using a range of pretrained language models in the GPT-4 family on natural language processing (NLP), chess, and reward modeling tasks. We find that when we naively finetune strong pretrained models on labels generated by a weak model, they consistently perform better than their weak supervisors, a phenomenon we call weak-to-strong generalization. However, we are still far from recovering the full capabilities of strong models with naive finetuning alone, suggesting that techniques like RLHF may scale poorly to superhuman models without further work. We find that simple methods can often significantly improve weak-to-strong generalization: for example, when finetuning GPT-4 with a GPT-2-level supervisor and an auxiliary confidence loss, we can recover close to GPT-3.5-level performance on NLP tasks. Our results suggest that it is feasible to make empirical progress today on a fundamental challenge of aligning superhuman models. |
https://proceedings.mlr.press/v235/butt24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/butt24a/butt24a.pdf | https://openreview.net/forum?id=SXVn5IFsrs | CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay | https://proceedings.mlr.press/v235/butt24a.html | Natasha Butt, Blazej Manczak, Auke Wiggers, Corrado Rainone, David W. Zhang, Michaël Defferrard, Taco Cohen | https://proceedings.mlr.press/v235/butt24a.html | ICML 2024 | Large language models are increasingly solving tasks that are commonly believed to require human-level reasoning ability. However, these models still perform very poorly on benchmarks of general intelligence such as the Abstraction and Reasoning Corpus (ARC). In this paper, we approach the ARC as a programming-by-examples problem, and introduce a novel and scalable method for language model self-improvement called Code Iteration (CodeIt). Our method iterates between 1) program sampling and hindsight relabeling, and 2) learning from prioritized experience replay. By relabeling the goal of an episode (i.e., the program output given input) to the output actually produced by the sampled program, our method effectively deals with the extreme sparsity of rewards in program synthesis. Applying CodeIt to the ARC dataset, we demonstrate that prioritized hindsight replay, along with pre-training and data-augmentation, leads to successful inter-task generalization. CodeIt is the first neuro-symbolic approach that scales to the full ARC evaluation dataset. Our method solves 15% of ARC evaluation tasks, achieving state-of-the-art performance and outperforming existing neural and symbolic baselines. Our code is available at https://github.com/Qualcomm-AI-research/codeit. |
https://proceedings.mlr.press/v235/buzaglo24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/buzaglo24a/buzaglo24a.pdf | https://openreview.net/forum?id=3eHNvPHL9Z | How Uniform Random Weights Induce Non-uniform Bias: Typical Interpolating Neural Networks Generalize with Narrow Teachers | https://proceedings.mlr.press/v235/buzaglo24a.html | Gon Buzaglo, Itamar Harel, Mor Shpigel Nacson, Alon Brutzkus, Nathan Srebro, Daniel Soudry | https://proceedings.mlr.press/v235/buzaglo24a.html | ICML 2024 | A main theoretical puzzle is why over-parameterized Neural Networks (NNs) generalize well when trained to zero loss (i.e., so they interpolate the data). Usually, the NN is trained with Stochastic Gradient Descent (SGD) or one of its variants. However, recent empirical work examined the generalization of a random NN that interpolates the data: the NN was sampled from a seemingly uniform prior over the parameters, conditioned on the NN perfectly classifying the training set. Interestingly, such a NN sample typically generalized as well as SGD-trained NNs. We prove that such a random NN interpolator typically generalizes well if there exists an underlying narrow “teacher NN" that agrees with the labels. Specifically, we show that such a ‘flat’ prior over the NN parametrization induces a rich prior over the NN functions, due to the redundancy in the NN structure. In particular, this creates a bias towards simpler functions, which require fewer relevant parameters to represent — enabling learning with a sample complexity approximately proportional to the complexity of the teacher (roughly, the number of non-redundant parameters), rather than the student’s. |
https://proceedings.mlr.press/v235/byambadalai24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/byambadalai24a/byambadalai24a.pdf | https://openreview.net/forum?id=RDofzHLuX4 | Estimating Distributional Treatment Effects in Randomized Experiments: Machine Learning for Variance Reduction | https://proceedings.mlr.press/v235/byambadalai24a.html | Undral Byambadalai, Tatsushi Oka, Shota Yasui | https://proceedings.mlr.press/v235/byambadalai24a.html | ICML 2024 | We propose a novel regression adjustment method designed for estimating distributional treatment effect parameters in randomized experiments. Randomized experiments have been extensively used to estimate treatment effects in various scientific fields. However, to gain deeper insights, it is essential to estimate distributional treatment effects rather than relying solely on average effects. Our approach incorporates pre-treatment covariates into a distributional regression framework, utilizing machine learning techniques to improve the precision of distributional treatment effect estimators. The proposed approach can be readily implemented with off-the-shelf machine learning methods and remains valid as long as the nuisance components are reasonably well estimated. Also, we establish the asymptotic properties of the proposed estimator and present a uniformly valid inference method. Through simulation results and real data analysis, we demonstrate the effectiveness of integrating machine learning techniques in reducing the variance of distributional treatment effect estimators in finite samples. |
https://proceedings.mlr.press/v235/cabannes24a.html | https://raw.githubusercontent.com/mlresearch/v235/main/assets/cabannes24a/cabannes24a.pdf | https://openreview.net/forum?id=A9fLbXLRTK | Learning Associative Memories with Gradient Descent | https://proceedings.mlr.press/v235/cabannes24a.html | Vivien Cabannes, Berfin Simsek, Alberto Bietti | https://proceedings.mlr.press/v235/cabannes24a.html | ICML 2024 | This work focuses on the training dynamics of one associative memory module storing outer products of token embeddings. We reduce this problem to the study of a system of particles, which interact according to properties of the data distribution and correlations between embeddings. Through theory and experiments, we provide several insights. In overparameterized regimes, we obtain logarithmic growth of the “classification margins.” Yet, we show that imbalance in token frequencies and memory interferences due to correlated embeddings lead to oscillatory transitory regimes. The oscillations are more pronounced with large step sizes, which can create benign loss spikes, although these learning rates speed up the dynamics and accelerate the asymptotic convergence. We also find that underparameterized regimes lead to suboptimal memorization schemes. Finally, we assess the validity of our findings on small Transformer models. |