Dataset columns: title (string, length 17–147); url (string, length 42); detail_url (string, length 42); authors (string, length 8–486); tags (string, 2 classes); abstract (string, length 468–2.51k); pdf (string, length 71)
Selective Explanations
https://openreview.net/forum?id=gHCFduRo7o
https://openreview.net/forum?id=gHCFduRo7o
Lucas Monteiro Paes,Dennis Wei,Flavio Calmon
NIPS 2024,Poster
Feature attribution methods explain black-box machine learning (ML) models by assigning importance scores to input features. These methods can be computationally expensive for large ML models. To address this challenge, there have been increasing efforts to develop amortized explainers, where an ML model is trained to efficiently approximate computationally expensive feature attribution scores. Despite their efficiency, amortized explainers can produce misleading explanations. In this paper, we propose selective explanations to (i) detect when amortized explainers generate inaccurate explanations and (ii) improve the approximation of the explanation using a technique we call explanations with initial guess. Selective explanations allow practitioners to specify the fraction of samples that receive explanations with initial guess, offering a principled way to bridge the gap between amortized explainers (one inference) and more computationally costly approximations (multiple inferences). Our experiments on various models and datasets demonstrate that feature attributions via selective explanations strike a favorable balance between explanation quality and computational efficiency.
https://openreview.net/pdf/5b7e3b99fcef803366002e9632ee40a47cbfa4c9.pdf
Enhancing Diversity in Bayesian Deep Learning via Hyperspherical Energy Minimization of CKA
https://openreview.net/forum?id=s2hA6Bz3LE
https://openreview.net/forum?id=s2hA6Bz3LE
David Smerkous,Qinxun Bai,Li Fuxin
NIPS 2024,Poster
Particle-based Bayesian deep learning often requires a similarity metric to compare two networks. However, naive similarity metrics lack permutation invariance and are inappropriate for comparing networks. Centered Kernel Alignment (CKA) on feature kernels has been proposed to compare deep networks but has not been used as an optimization objective in Bayesian deep learning. In this paper, we explore the use of CKA in Bayesian deep learning to generate diverse ensembles and hypernetworks that output a network posterior. Noting that CKA projects kernels onto a unit hypersphere and that directly optimizing the CKA objective leads to diminishing gradients when two networks are very similar, we propose adopting the approach of hyperspherical energy (HE) on top of CKA kernels to address this drawback and improve training stability. Additionally, by leveraging CKA-based feature kernels, we derive feature repulsive terms applied to synthetically generated outlier examples. Experiments on both diverse ensembles and hypernetworks show that our approach significantly outperforms baselines in terms of uncertainty quantification in both synthetic and realistic outlier detection tasks.
https://openreview.net/pdf/e8fd6b257ea14297e3fcc15e027f5b978526a38b.pdf
Learning to Edit Visual Programs with Self-Supervision
https://openreview.net/forum?id=uzIWqRzjEP
https://openreview.net/forum?id=uzIWqRzjEP
R. Kenny Jones,Renhao Zhang,Aditya Ganeshan,Daniel Ritchie
NIPS 2024,Poster
We design a system that learns how to edit visual programs. Our edit network consumes a complete input program and a visual target. From this input, we task our network with predicting a local edit operation that could be applied to the input program to improve its similarity to the target. In order to apply this scheme for domains that lack program annotations, we develop a self-supervised learning approach that integrates this edit network into a bootstrapped finetuning loop along with a network that predicts entire programs in one-shot. Our joint finetuning scheme, when coupled with an inference procedure that initializes a population from the one-shot model and evolves members of this population with the edit network, helps to infer more accurate visual programs. Over multiple domains, we experimentally compare our method against the alternative of using only the one-shot model, and find that even under equal search-time budgets, our editing-based paradigm provides significant advantages.
https://openreview.net/pdf/5574437cb41abf73076c2977076bffc90f011092.pdf
ETO: Efficient Transformer-based Local Feature Matching by Organizing Multiple Homography Hypotheses
https://openreview.net/forum?id=3xHCaDdYcc
https://openreview.net/forum?id=3xHCaDdYcc
Junjie Ni,Guofeng Zhang,Guanglin Li,Yijin Li,Xinyang Liu,Zhaoyang Huang,Hujun Bao
NIPS 2024,Poster
We tackle the efficiency problem of learning local feature matching. Recent advancements have given rise to purely CNN-based and transformer-based approaches, each augmented with deep learning techniques. While CNN-based methods often excel in matching speed, transformer-based methods tend to provide more accurate matches. We propose an efficient transformer-based network architecture for local feature matching. This technique is built on constructing multiple homography hypotheses to approximate the continuous correspondence in the real world and uni-directional cross-attention to accelerate the refinement. On the YFCC100M dataset, our matching accuracy is competitive with LoFTR, a state-of-the-art transformer-based architecture, while the inference speed is boosted by a factor of 4, even outperforming the CNN-based methods. Comprehensive evaluations on other open datasets such as Megadepth, ScanNet, and HPatches demonstrate our method's efficacy, highlighting its potential to significantly enhance a wide array of downstream applications.
https://openreview.net/pdf/3762b865d47c647261ea21651b925a68d24663a7.pdf
Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects
https://openreview.net/forum?id=BgZcuEsYU8
https://openreview.net/forum?id=BgZcuEsYU8
Alexander W. Levis,Gabriel Loewinger,Francisco Pereira
NIPS 2024,Poster
Optogenetics is widely used to study the effects of neural circuit manipulation on behavior. However, the paucity of causal inference methodological work on this topic has resulted in analysis conventions that discard information and constrain the scientific questions that can be posed. To fill this gap, we introduce a nonparametric causal inference framework for analyzing "closed-loop" designs, which use dynamic policies that assign treatment based on covariates. In this setting, standard methods can introduce bias and occlude causal effects. Building on the sequentially randomized experiments literature in causal inference, our approach extends history-restricted marginal structural models for dynamic regimes. In practice, our framework can identify a wide range of causal effects of optogenetics on trial-by-trial behavior, such as fast/slow-acting, dose-response, additive/antagonistic, and floor/ceiling effects. Importantly, it does so without requiring negative controls, and can estimate how causal effect magnitudes evolve across time points. From another view, our work extends "excursion effect" methods---popular in the mobile health literature---to enable estimation of causal contrasts for treatment sequences greater than length one, in the presence of positivity violations. We derive rigorous statistical guarantees, enabling hypothesis testing of these causal effects. We demonstrate our approach on data from a recent study of dopaminergic activity on learning, and show how our method reveals relevant effects obscured in standard analyses.
https://openreview.net/pdf/d29c280f804bad36b2451f2e49f236e6099ba176.pdf
Understanding Model Selection for Learning in Strategic Environments
https://openreview.net/forum?id=R6FOuWv5MD
https://openreview.net/forum?id=R6FOuWv5MD
Tinashe Handina,Eric Mazumdar
NIPS 2024,Poster
The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over—and the more data one has access to—the more one can improve performance. As models get deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view—meaning that performance does not necessarily monotonically improve as model classes get larger or more expressive (even with infinite data). We show the implications of this result in several contexts including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose amongst different model classes to use as their action set in a game.
https://openreview.net/pdf/a992fb68ba2734fe4f7a089cae07379cb8f7ef58.pdf
MSA Generation with Seqs2Seqs Pretraining: Advancing Protein Structure Predictions
https://openreview.net/forum?id=D0DLlMOufv
https://openreview.net/forum?id=D0DLlMOufv
Le Zhang,Jiayang Chen,Tao Shen,Yu Li,Siqi Sun
NIPS 2024,Poster
Deep learning models like AlphaFold2 have revolutionized protein structure prediction, achieving unprecedented accuracy. However, the dependence on robust multiple sequence alignments (MSAs) continues to pose a challenge, especially for proteins that lack a wealth of homologous sequences. To overcome this limitation, we introduce MSA-Generator, a self-supervised generative protein language model. Trained on a sequence-to-sequence task using an automatically constructed dataset, MSA-Generator employs protein-specific attention mechanisms to harness large-scale protein databases, generating virtual MSAs that enrich existing ones and boost prediction accuracy. Our experiments on CASP14 and CASP15 benchmarks reveal significant improvements in LDDT scores, particularly for complex and challenging sequences, enhancing the performance of both AlphaFold2 and RoseTTAFold. The code is released at \url{https://github.com/lezhang7/MSAGen}.
https://openreview.net/pdf/fd516f23b421f9d03d5b978b03eded9900f0a462.pdf
Identifiable Shared Component Analysis of Unpaired Multimodal Mixtures
https://openreview.net/forum?id=ivCX2cjwcT
https://openreview.net/forum?id=ivCX2cjwcT
Subash Timilsina,Sagar Shrestha,Xiao Fu
NIPS 2024,Poster
A core task in multi-modal learning is to integrate information from multiple feature spaces (e.g., text and audio), offering modality-invariant essential representations of data. Recent research showed that classical tools such as canonical correlation analysis (CCA) provably identify the shared components up to minor ambiguities, when samples in each modality are generated from a linear mixture of shared and private components. Such identifiability results were obtained under the condition that the cross-modality samples are aligned/paired according to their shared information. This work takes a step further, investigating shared component identifiability from multi-modal linear mixtures where cross-modality samples are unaligned. A distribution divergence minimization-based loss is proposed, under which a suite of sufficient conditions ensuring identifiability of the shared components are derived. Our conditions are based on cross-modality distribution discrepancy characterization and density-preserving transform removal, which are much milder than existing studies relying on independent component analysis. More relaxed conditions are also provided via adding reasonable structural constraints, motivated by available side information in various applications. The identifiability claims are thoroughly validated using synthetic and real-world data.
https://openreview.net/pdf/782dd5983e36710970c218a7fd9b39791abee723.pdf
Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces
https://openreview.net/forum?id=wAqdvcK1Fv
https://openreview.net/forum?id=wAqdvcK1Fv
Tobias Schröder,Zijing Ou,Yingzhen Li,Andrew B. Duncan
NIPS 2024,Poster
Energy-based models (EBMs) offer a flexible framework for probabilistic modelling across various data domains. However, training EBMs on data in discrete or mixed state spaces poses significant challenges due to the lack of robust and fast sampling methods. In this work, we propose to train discrete EBMs with Energy Discrepancy, a loss function which only requires the evaluation of the energy function at data points and their perturbed counterparts, thus eliminating the need for Markov chain Monte Carlo. We introduce perturbations of the data distribution by simulating a diffusion process on the discrete state space endowed with a graph structure. This allows us to inform the choice of perturbation from the structure of the modelled discrete variable, while the continuous time parameter enables fine-grained control of the perturbation. Empirically, we demonstrate the efficacy of the proposed approaches in a wide range of applications, including the estimation of discrete densities with non-binary vocabulary and binary image modelling. We also introduce the first application of EBMs to tabular data sets with applications in synthetic data generation and calibrated classification.
https://openreview.net/pdf/eac92ea0ece71224dde4b9f69a62521adc463b5c.pdf
On the Optimality of Dilated Entropy and Lower Bounds for Online Learning in Extensive-Form Games
https://openreview.net/forum?id=6PMfJT2O7G
https://openreview.net/forum?id=6PMfJT2O7G
Zhiyuan Fan,Christian Kroer,Gabriele Farina
NIPS 2024,Poster
First-order methods (FOMs) are arguably the most scalable algorithms for equilibrium computation in large extensive-form games. To operationalize these methods, a distance-generating function, acting as a regularizer for the strategy space, must be chosen. The ratio between the strong convexity modulus and the diameter of the regularizer is a key parameter in the analysis of FOMs. A natural question is then: what is the optimal distance-generating function for extensive-form decision spaces? In this paper, we make a number of contributions, ultimately establishing that the weight-one dilated entropy (DilEnt) distance-generating function is optimal up to logarithmic factors. The DilEnt regularizer is notable due to its iterate-equivalence with Kernelized OMWU (KOMWU)---the algorithm with state-of-the-art dependence on the game tree size in extensive-form games---when used in conjunction with the online mirror descent (OMD) algorithm. However, the standard analysis for OMD is unable to establish such a result; the only current analysis is by appealing to the iterate equivalence to KOMWU. We close this gap by introducing a pair of primal-dual treeplex norms, which we contend form the natural analytic viewpoint for studying the strong convexity of DilEnt. Using these norm pairs, we recover the diameter-to-strong-convexity ratio that predicts the same performance as KOMWU. Along with a new regret lower bound for online learning in sequence-form strategy spaces, we show that this ratio is nearly optimal. Finally, we showcase our analytic techniques by refining the analysis of Clairvoyant OMD when paired with DilEnt, establishing an $\mathcal{O}(n \log |\mathcal{V}| \log T/T)$ approximation rate to coarse correlated equilibrium in $n$-player games, where $|\mathcal{V}|$ is the number of reduced normal-form strategies of the players, establishing the new state of the art.
https://openreview.net/pdf/03aa07c5a35abc096ab9e5fb05fb90c95dead009.pdf
Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear $q^\pi$-Realizability and Concentrability
https://openreview.net/forum?id=TusuJSbRxm
https://openreview.net/forum?id=TusuJSbRxm
Volodymyr Tkachuk,Gellért Weisz,Csaba Szepesvari
NIPS 2024,Poster
We consider offline reinforcement learning (RL) in $H$-horizon Markov decision processes (MDPs) under the linear $q^\pi$-realizability assumption, where the action-value function of every policy is linear with respect to a given $d$-dimensional feature function. The hope in this setting is that learning a good policy will be possible without requiring a sample size that scales with the number of states in the MDP. Foster et al. [2021] have shown this to be impossible even under $\text{\textit{concentrability}}$, a data coverage assumption where a coefficient $C_\text{conc}$ bounds the extent to which the state-action distribution of any policy can veer off the data distribution. However, the data in this previous work was in the form of a sequence of individual transitions. This leaves open the question of whether the negative result mentioned could be overcome if the data was composed of sequences of full trajectories. In this work we answer this question positively by proving that with trajectory data, a dataset of size $\text{poly}(d,H,C_\text{conc})/\epsilon^2$ is sufficient for deriving an $\epsilon$-optimal policy, regardless of the size of the state space. The main tool that makes this result possible is due to Weisz et al. [2023], who demonstrate that linear MDPs can be used to approximate linearly $q^\pi$-realizable MDPs. The connection to trajectory data is that the linear MDP approximation relies on "skipping" over certain states. The associated estimation problems are thus easy when working with trajectory data, while they remain nontrivial when working with individual transitions. The question of computational efficiency under our assumptions remains open.
https://openreview.net/pdf/d673788da6ff7e2ffc302ff01028aeef0f99497a.pdf
Predicting the Performance of Foundation Models via Agreement-on-the-Line
https://openreview.net/forum?id=aJx9onwsR4
https://openreview.net/forum?id=aJx9onwsR4
Rahul Saxena,Taeyoun Kim,Aman Mehra,Christina Baek,J Zico Kolter,Aditi Raghunathan
NIPS 2024,Poster
Estimating the out-of-distribution performance in regimes where labels are scarce is critical to safely deploy foundation models. Recently, it was shown that ensembles of neural networks exhibit the phenomenon of "agreement-on-the-line", which can be leveraged to reliably predict OOD performance without labels. However, in contrast to classical neural networks that are trained on in-distribution data from scratch for numerous epochs, foundation models undergo minimal finetuning from heavily pretrained weights, which may reduce the ensemble diversity needed to observe agreement-on-the-line. In our work, we demonstrate that when lightly finetuning multiple runs from a $\textit{single}$ foundation model, the choice of randomness during training (linear head initialization, data ordering, and data subsetting) can lead to drastically different levels of agreement-on-the-line in the resulting ensemble. Surprisingly, only random head initialization is able to reliably induce agreement-on-the-line in finetuned foundation models across vision and language benchmarks. Second, we demonstrate that ensembles of $\textit{multiple}$ foundation models pretrained on different datasets but finetuned on the same task can also show agreement-on-the-line. In total, by careful construction of a diverse ensemble, we can utilize agreement-on-the-line-based methods to predict the OOD performance of foundation models with high precision.
https://openreview.net/pdf/2d38f44c6bd0dca35802701cdeb31cf37e4da882.pdf
Towards Principled Graph Transformers
https://openreview.net/forum?id=LJCQH6U0pl
https://openreview.net/forum?id=LJCQH6U0pl
Luis Müller,Daniel Kusuma,Blai Bonet,Christopher Morris
NIPS 2024,Poster
The expressive power of graph learning architectures based on the $k$-dimensional Weisfeiler-Leman ($k$-WL) hierarchy is well understood. However, such architectures often fail to deliver solid predictive performance on real-world tasks, limiting their practical impact. In contrast, global attention-based models such as graph transformers demonstrate strong performance in practice, but comparing their expressive power with the $k$-WL hierarchy remains challenging, particularly since these architectures rely on positional or structural encodings for their expressivity and predictive performance. To address this, we show that the recently proposed Edge Transformer, a global attention model operating on node pairs instead of nodes, has 3-WL expressive power when provided with the right tokenization. Empirically, we demonstrate that the Edge Transformer surpasses other theoretically aligned architectures regarding predictive performance while not relying on positional or structural encodings.
https://openreview.net/pdf/ae53c3024c25e68c6b5b7ee1d9cb9975b3297adc.pdf
Stepping on the Edge: Curvature Aware Learning Rate Tuners
https://openreview.net/forum?id=SEflLHIhhJ
https://openreview.net/forum?id=SEflLHIhhJ
Vincent Roulet,Atish Agarwala,Jean-Bastien Grill,Grzegorz Michal Swirszcz,Mathieu Blondel,Fabian Pedregosa
NIPS 2024,Poster
Curvature information -- particularly, the largest eigenvalue of the loss Hessian, known as the sharpness -- often forms the basis for learning rate tuners. However, recent work has shown that the curvature information undergoes complex dynamics during training, going from a phase of increasing sharpness to eventual stabilization. We analyze the closed-loop feedback effect between learning rate tuning and curvature. We find that classical learning rate tuners may yield greater one-step loss reduction, yet they ultimately underperform in the long term when compared to constant learning rates in the full batch regime. These models break the stabilization of the sharpness, which we explain using a simplified model of the joint dynamics of the learning rate and the curvature. To further investigate these effects, we introduce a new learning rate tuning method, Curvature Dynamics Aware Tuning (CDAT), which prioritizes long term curvature stabilization over instantaneous progress on the objective. In the full batch regime, CDAT shows behavior akin to prefixed warm-up schedules on deep learning objectives, outperforming tuned constant learning rates. In the mini batch regime, we observe that stochasticity introduces confounding effects that explain the previous success of some learning rate tuners at appropriate batch sizes. Our findings highlight the critical role of understanding the joint dynamics of the learning rate and curvature, beyond greedy minimization, to diagnose failures and design effective adaptive learning rate tuners.
https://openreview.net/pdf/33fc0272b7d7a67249291d330ce075067a1e789c.pdf
SceneCraft: Layout-Guided 3D Scene Generation
https://openreview.net/forum?id=CTvxvAcSJN
https://openreview.net/forum?id=CTvxvAcSJN
Xiuyu Yang,Yunze Man,Jun-Kun Chen,Yu-Xiong Wang
NIPS 2024,Poster
The creation of complex 3D scenes tailored to user specifications has been a tedious and challenging task with traditional 3D modeling tools. Although some pioneering methods have achieved automatic text-to-3D generation, they are generally limited to small-scale scenes with restricted control over the shape and texture. We introduce SceneCraft, a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences provided by users. Central to our method is a rendering-based technique, which converts 3D semantic layouts into multi-view 2D proxy maps. Furthermore, we design a semantic and depth conditioned diffusion model to generate multi-view images, which are used to learn a neural radiance field (NeRF) as the final scene representation. Without the constraints of panorama image generation, we surpass previous methods in supporting complicated indoor space generation beyond a single room, even as complicated as a whole multi-bedroom apartment with irregular shapes and layouts. Through experimental analysis, we demonstrate that our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
https://openreview.net/pdf/4fbf5f697f7e35affc341d10063221f725630935.pdf
Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension
https://openreview.net/forum?id=mHVmsy9len
https://openreview.net/forum?id=mHVmsy9len
Kedar Karhadkar,Michael Murray,Guido Montufar
NIPS 2024,Poster
Bounds on the smallest eigenvalue of the neural tangent kernel (NTK) are a key ingredient in the analysis of neural network optimization and memorization. However, existing results require distributional assumptions on the data and are limited to a high-dimensional setting, where the input dimension $d_0$ scales at least logarithmically in the number of samples $n$. In this work we remove both of these requirements and instead provide bounds in terms of a measure of distance between data points: notably these bounds hold with high probability even when $d_0$ is held constant versus $n$. We prove our results through a novel application of the hemisphere transform.
https://openreview.net/pdf/fc64dfe0d79cb125c2577c3c2488762284e984b7.pdf
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
https://openreview.net/forum?id=qK4iS49KDm
https://openreview.net/forum?id=qK4iS49KDm
Jason D. Lee,Kazusato Oko,Taiji Suzuki,Denny Wu
NIPS 2024,Poster
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \textstyle\sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$ under isotropic Gaussian data in $\mathbb{R}^d$, where the unknown link function $\sigma_*:\mathbb{R}\to\mathbb{R}$ has information exponent $p$ (defined as the lowest degree in the Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with $n\gtrsim d^{\Theta(p)}$ samples, and such complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm (on the squared loss) learns $f_*$ with a complexity that is not governed by the information exponent. Specifically, for arbitrary polynomial single-index models, we establish a sample and runtime complexity of $n \simeq T = \Theta(d\cdot\mathrm{polylog} d)$, where $\Theta(\cdot)$ hides a constant only depending on the degree of $\sigma_*$; this dimension dependence matches the information theoretic limit up to polylogarithmic factors. More generally, we show that $n\gtrsim d^{(p_*-1)\vee 1}$ samples are sufficient to achieve low generalization error, where $p_* \le p$ is the \textit{generative exponent} of the link function. Core to our analysis is the reuse of minibatch in the gradient computation, which gives rise to higher-order information beyond correlational queries.
https://openreview.net/pdf/5c351e805429bc780ae5fab35b4eaecf013991eb.pdf
Rethinking Score Distillation as a Bridge Between Image Distributions
https://openreview.net/forum?id=I8PkICj9kM
https://openreview.net/forum?id=I8PkICj9kM
David McAllister,Songwei Ge,Jia-Bin Huang,David W. Jacobs,Alexei A Efros,Aleksander Holynski,Angjoo Kanazawa
NIPS 2024,Poster
Score distillation sampling (SDS) has proven to be an important tool, enabling the use of large-scale diffusion priors for tasks operating in data-poor domains. Unfortunately, SDS has a number of characteristic artifacts that limit its utility in general-purpose applications. In this paper, we make progress toward understanding the behavior of SDS and its variants by viewing them as solving an optimal-cost transport path from some current source distribution to a target distribution. Under this new interpretation, we argue that these methods' characteristic artifacts are caused by (1) linear approximation of the optimal path and (2) poor estimates of the source distribution. We show that by calibrating the text conditioning of the source distribution, we can produce high-quality generation and translation results with little extra overhead. Our method can be easily applied across many domains, matching or beating the performance of specialized methods. We demonstrate its utility in text-to-2D, text-to-3D, translating paintings to real images, optical illusion generation, and 3D sketch-to-real. We compare our method to existing approaches for score distillation sampling and show that it can produce high-frequency details with realistic colors.
https://openreview.net/pdf/6e24468d3ec6ea657f13f09dda826cacbce832af.pdf
Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models
https://openreview.net/forum?id=FJlrSZBMCD
https://openreview.net/forum?id=FJlrSZBMCD
Aviv Bick,Kevin Li,Eric P. Xing,J Zico Kolter,Albert Gu
NIPS 2024,Poster
Transformer architectures have become a dominant paradigm for domains like language modeling but suffer in many inference settings due to their quadratic-time self-attention. Recently proposed subquadratic architectures, such as Mamba, have shown promise, but have been pretrained with substantially less computational resources than the strongest Transformer models. In this work, we present a method that is able to distill a pretrained Transformer architecture into alternative architectures such as state space models (SSMs). The key idea to our approach is that we can view both Transformers and SSMs as applying different forms of mixing matrices over the token sequences. We can thus progressively distill the Transformer architecture by matching different degrees of granularity in the SSM: first matching the mixing matrices themselves, then the hidden units at each block, and finally the end-to-end predictions. Our method, called MOHAWK, is able to distill a Mamba-2 variant based on the Phi-1.5 architecture (Phi-Mamba) using only 3B tokens. Despite using less than 1% of the training data typically used to train models from scratch, Phi-Mamba boasts substantially stronger performance compared to all past open-source non-Transformer models. MOHAWK allows models like SSMs to leverage computational resources invested in training Transformer-based architectures, highlighting a new avenue for building such models.
https://openreview.net/pdf/ddedaa9f0d6404305d1b4b3223cca34caab6ab83.pdf
The Star Geometry of Critic-Based Regularizer Learning
https://openreview.net/forum?id=2GQeCbhxVy
https://openreview.net/forum?id=2GQeCbhxVy
Oscar Leong,Eliza O'Reilly,Yong Sheng Soh
NIPS 2024,Poster
Variational regularization is a classical technique to solve statistical inference tasks and inverse problems, with modern data-driven approaches parameterizing regularizers via deep neural networks showcasing impressive empirical performance. Recent works along these lines learn task-dependent regularizers. This is done by integrating information about the measurements and ground-truth data in an unsupervised, critic-based loss function, where the regularizer attributes low values to likely data and high values to unlikely data. However, there is little theory about the structure of regularizers learned via this process and how it relates to the two data distributions. To make progress on this challenge, we initiate a study of optimizing critic-based loss functions to learn regularizers over a particular family of regularizers: gauges (or Minkowski functionals) of star-shaped bodies. This family contains regularizers that are commonly employed in practice and shares properties with regularizers parameterized by deep neural networks. We specifically investigate critic-based losses derived from variational representations of statistical distances between probability measures. By leveraging tools from star geometry and dual Brunn-Minkowski theory, we illustrate how these losses can be interpreted as dual mixed volumes that depend on the data distribution. This allows us to derive exact expressions for the optimal regularizer in certain cases. Finally, we identify which neural network architectures give rise to such star body gauges and when such regularizers have favorable properties for optimization. More broadly, this work highlights how the tools of star geometry can aid in understanding the geometry of unsupervised regularizer learning.
https://openreview.net/pdf/f281b0787e1d0047d6c91046e6bd5f68553224e9.pdf
Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers
https://openreview.net/forum?id=5sm8YDnWvC
https://openreview.net/forum?id=5sm8YDnWvC
Markus Hiller,Krista A. Ehinger,Tom Drummond
NIPS 2024,Poster
We present a novel bi-directional Transformer architecture (BiXT) which scales linearly with input size in terms of computational cost and memory consumption, but does not suffer the drop in performance or limitation to only one input modality seen with other efficient Transformer-based approaches. BiXT is inspired by the Perceiver architectures but replaces iterative attention with an efficient bi-directional cross-attention module in which input tokens and latent variables attend to each other simultaneously, leveraging a naturally emerging attention-symmetry between the two. This approach unlocks a key bottleneck experienced by Perceiver-like architectures and enables the processing and interpretation of both semantics ('what') and location ('where') to develop alongside each other over multiple layers -- allowing its direct application to dense and instance-based tasks alike. By combining efficiency with the generality and performance of a full Transformer architecture, BiXT can process longer sequences like point clouds, text or images at higher feature resolutions and achieves competitive performance across a range of tasks like point cloud part segmentation, semantic image segmentation, image classification, hierarchical sequence modeling and document retrieval. Our experiments demonstrate that BiXT models outperform larger competitors by leveraging longer sequences more efficiently on vision tasks like classification and segmentation, and perform on par with full Transformer variants on sequence modeling and document retrieval -- but require 28\% fewer FLOPs and are up to $8.4\times$ faster.
https://openreview.net/pdf/cffc8b63690897adbc9270e148ab2155fbc70a24.pdf
SceneDiffuser: Efficient and Controllable Driving Simulation Initialization and Rollout
https://openreview.net/forum?id=a4qT29Levh
https://openreview.net/forum?id=a4qT29Levh
Chiyu Max Jiang,Yijing Bai,Andre Cornman,Christopher Davis,Xiukun Huang,Hong Jeon,Sakshum Kulshrestha,John Wheatley Lambert,Shuangyu Li,Xuanyu Zhou,Carlos Fuertes,Chang Yuan,Mingxing Tan,Yin Zhou,Dragomir Anguelov
NIPS 2024,Poster
Simulation with realistic and interactive agents represents a key task for autonomous vehicle (AV) software development in order to test AV performance in prescribed, often long-tail scenarios. In this work, we propose SceneDiffuser, a scene-level diffusion prior for traffic simulation. We present a singular framework that unifies two key stages of simulation: scene initialization and scene rollout. Scene initialization refers to generating the initial layout for the traffic in a scene, and scene rollout refers to closed-loop simulation for the behaviors of the agents. While diffusion has been demonstrated to be effective in learning realistic, multimodal agent distributions, two open challenges remain: controllability and closed-loop inference efficiency and realism. To address the controllability challenges, we propose generalized hard constraints, a generalized inference-time constraint mechanism that is simple yet effective. To improve closed-loop inference quality and efficiency, we propose amortized diffusion, a novel diffusion denoising paradigm that amortizes the physical cost of denoising over future simulation rollout steps, reducing the cost per physical rollout step to a single denoising function evaluation, while dramatically reducing closed-loop errors. We demonstrate the effectiveness of our approach on the Waymo Open Dataset, where we are able to generate distributionally realistic scenes, while obtaining competitive performance in the Sim Agents Challenge, surpassing the state-of-the-art in many realism attributes.
https://openreview.net/pdf/ac6b24ffb0e47181c8916963928d13383ddf22cf.pdf
No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices
https://openreview.net/forum?id=rIOl7KbSkv
https://openreview.net/forum?id=rIOl7KbSkv
Qi Pang,Shengyuan Hu,Wenting Zheng,Virginia Smith
NIPS 2024,Poster
Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating the misuse of such AI-generated content. However, we show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack---leading to fundamental trade-offs in robustness, utility, and usability. To navigate these trade-offs, we rigorously study a set of simple yet effective attacks on common watermarking systems, and propose guidelines and defenses for LLM watermarking in practice.
https://openreview.net/pdf/bb004f77c167bc7493180ec14d476519fd86acc7.pdf
Scaling Sign Language Translation
https://openreview.net/forum?id=M80WgiO2Lb
https://openreview.net/forum?id=M80WgiO2Lb
Biao Zhang,Garrett Tanzer,Orhan Firat
NIPS 2024,Poster
Sign language translation (SLT) addresses the problem of translating information from a sign language in video to a spoken language in text. Existing studies, while showing progress, are often limited to narrow domains and/or few sign languages and struggle with open-domain tasks. In this paper, we push forward the frontier of SLT by scaling pretraining data, model size, and number of translation directions. We perform large-scale SLT pretraining on different data including 1) noisy multilingual YouTube SLT data, 2) parallel text corpora, and 3) SLT data augmented by translating video captions to other languages with off-the-shelf machine translation models. We unify different pretraining tasks with task-specific prompts under the encoder-decoder architecture, and initialize the SLT model with pretrained (m/By)T5 models across model sizes. SLT pretraining results on How2Sign and FLEURS-ASL\#0 (ASL to 42 spoken languages) demonstrate the significance of data/model scaling and cross-lingual cross-modal transfer, as well as the feasibility of zero-shot SLT. We finetune the pretrained SLT models on 5 downstream open-domain SLT benchmarks covering 5 sign languages. Experiments show substantial quality improvements over the vanilla baselines, surpassing the previous state-of-the-art (SOTA) by wide margins.
https://openreview.net/pdf/20674098c57fba69ddfb43ec06d0123229a6df0a.pdf
Provable Editing of Deep Neural Networks using Parametric Linear Relaxation
https://openreview.net/forum?id=IGhpUd496D
https://openreview.net/forum?id=IGhpUd496D
Zhe Tao,Aditya Thakur
NIPS 2024,Poster
Ensuring that a DNN satisfies a desired property is critical when deploying DNNs in safety-critical applications. There are efficient methods that can verify whether a DNN satisfies a property, as seen in the annual DNN verification competition (VNN-COMP). However, the problem of provably editing a DNN to satisfy a property remains challenging. We present PREPARED, the first efficient technique for provable editing of DNNs. Given a DNN $\mathcal{N}$ with parameters $\theta$, input polytope $P$, and output polytope $Q$, PREPARED finds new parameters $\theta'$ such that $\forall \mathrm{x} \in P . \mathcal{N}(\mathrm{x}; \theta') \in Q$ while minimizing the changes $\lVert{\theta' - \theta}\rVert$. Given a DNN and a property it violates from the VNN-COMP benchmarks, PREPARED is able to provably edit the DNN to satisfy this property within 45 seconds. PREPARED is efficient because it relaxes the NP-hard provable editing problem to solving a linear program. The key contribution is the novel notion of Parametric Linear Relaxation, which enables PREPARED to construct tight output bounds of the DNN that are parameterized by the new parameters $\theta'$. We demonstrate that PREPARED is more efficient and effective compared to prior DNN editing approaches i) using the VNN-COMP benchmarks, ii) by editing CIFAR10 and TinyImageNet image-recognition DNNs, and BERT sentiment-classification DNNs for local robustness, and iii) by training a DNN to model a geodynamics process and satisfy physics constraints.
https://openreview.net/pdf/945784991c16d056f7424e504c586e0fe66b29cd.pdf
Differentially Private Set Representations
https://openreview.net/forum?id=GQNvvQquO0
https://openreview.net/forum?id=GQNvvQquO0
Sarvar Patel,Giuseppe Persiano,Joon Young Seo,Kevin Yeo
NIPS 2024,Poster
We study the problem of differentially private (DP) mechanisms for representing sets of size $k$ from a large universe. Our first construction creates $(\epsilon,\delta)$-DP representations with error probability of $1/(e^\epsilon + 1)$ using space at most $1.05 k \epsilon \cdot \log(e)$ bits where the time to construct a representation is $O(k \log(1/\delta))$ while decoding time is $O(\log(1/\delta))$. We also present a second algorithm for pure $\epsilon$-DP representations with the same error using space at most $k \epsilon \cdot \log(e)$ bits, but requiring large decoding times. Our algorithms match the lower bounds on privacy-utility trade-offs (including constants but ignoring $\delta$ factors) and we also present a new space lower bound matching our constructions up to small constant factors. To obtain our results, we design a new approach embedding sets into random linear systems deviating from most prior approaches that inject noise into non-private solutions.
https://openreview.net/pdf/8586242b37ea885978251c3f7e0ca1537d1b7e6c.pdf
Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation
https://openreview.net/forum?id=oPFjhl6DpR
https://openreview.net/forum?id=oPFjhl6DpR
Shangding Gu,Laixi Shi,Yuhao Ding,Alois Knoll,Costas Spanos,Adam Wierman,Ming Jin
NIPS 2024,Poster
Safe reinforcement learning (RL) is crucial for deploying RL agents in real-world applications, as it aims to maximize long-term rewards while satisfying safety constraints. However, safe RL often suffers from sample inefficiency, requiring extensive interactions with the environment to learn a safe policy. We propose Efficient Safe Policy Optimization (ESPO), a novel approach that enhances the efficiency of safe RL through sample manipulation. ESPO employs an optimization framework with three modes: maximizing rewards, minimizing costs, and balancing the trade-off between the two. By dynamically adjusting the sampling process based on the observed conflict between reward and safety gradients, ESPO theoretically guarantees convergence, optimization stability, and improved sample complexity bounds. Experiments on the Safety-MuJoCo and Omnisafe benchmarks demonstrate that ESPO significantly outperforms existing primal-based and primal-dual-based baselines in terms of reward maximization and constraint satisfaction. Moreover, ESPO achieves substantial gains in sample efficiency, requiring 25--29\% fewer samples than baselines, and reduces training time by 21--38\%.
https://openreview.net/pdf/ba79b8360e1e32df1bc174e2a4c138266533424a.pdf
A Non-parametric Direct Learning Approach to Heterogeneous Treatment Effect Estimation under Unmeasured Confounding
https://openreview.net/forum?id=bwlUQsQumh
https://openreview.net/forum?id=bwlUQsQumh
Xinhai Zhang,Xingye Qiao
NIPS 2024,Poster
In many social, behavioral, and biomedical sciences, treatment effect estimation is a crucial step in understanding the impact of an intervention, policy, or treatment. In recent years, an increasing emphasis has been placed on heterogeneity in treatment effects, leading to the development of various methods for estimating Conditional Average Treatment Effects (CATE). These approaches hinge on a crucial identifying condition of no unmeasured confounding, an assumption that is not always guaranteed in observational studies or randomized controlled trials with non-compliance. In this paper, we propose a general framework for estimating CATE with a possible unmeasured confounder using Instrumental Variables. We also construct estimators that exhibit greater efficiency and robustness against various scenarios of model misspecification. The efficacy of the proposed framework is demonstrated through simulation studies and a real data example.
https://openreview.net/pdf/c105411027bf75f81c9b025a7ef7a956478b3eba.pdf
Infinite Limits of Multi-head Transformer Dynamics
https://openreview.net/forum?id=p0BBKhD5aI
https://openreview.net/forum?id=p0BBKhD5aI
Blake Bordelon,Hamza Tahir Chaudhry,Cengiz Pehlevan
NIPS 2024,Poster
In this work we analyze various scaling limits of the training dynamics of transformer models in the feature learning regime. We identify the set of parameterizations which admit well defined infinite width and depth limits that allow the attention layers to update throughout training, a relevant notion of feature learning in these models. We then use tools from dynamical mean field theory (DMFT) to analyze various infinite limits (infinite heads, infinite key/query dimension, and infinite depth) which have different statistical descriptions depending on which infinite limit is taken and how attention layers are scaled. We provide numerical evidence of convergence to the limits and show they maintain the correct scale of updates for both SGD and Adam.
https://openreview.net/pdf/a585f90934bd1a7f438b6cf6acb2ada2329f9c29.pdf
AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies
https://openreview.net/forum?id=ugXKInqDCC
https://openreview.net/forum?id=ugXKInqDCC
Xixi Hu,qiang liu,Xingchao Liu,Bo Liu
NIPS 2024,Poster
Diffusion-based imitation learning improves Behavioral Cloning (BC) on multi-modal decision-making, but comes at the cost of significantly slower inference due to the recursion in the diffusion process. It urges us to design efficient policy generators while keeping the ability to generate diverse actions. To address this challenge, we propose AdaFlow, an imitation learning framework based on flow-based generative modeling. AdaFlow represents the policy with state-conditioned ordinary differential equations (ODEs), which are known as probability flows. We reveal an intriguing connection between the conditional variance of their training loss and the discretization error of the ODEs. With this insight, we propose a variance-adaptive ODE solver that can adjust its step size in the inference stage, making AdaFlow an adaptive decision-maker, offering rapid inference without sacrificing diversity. Interestingly, it automatically reduces to a one-step generator when the action distribution is uni-modal. Our comprehensive empirical evaluation shows that AdaFlow achieves high performance with fast inference speed.
https://openreview.net/pdf/fa1f545e371f428274cf16d6695ca80a78e5311d.pdf
Generative Fractional Diffusion Models
https://openreview.net/forum?id=B9qg3wo75g
https://openreview.net/forum?id=B9qg3wo75g
Gabriel Nobis,Maximilian Springenberg,Marco Aversa,Michael Detzel,Rembert Daems,Roderick Murray-Smith,Shinichi Nakajima,Sebastian Lapuschkin,Stefano Ermon,Tolga Birdal,Manfred Opper,Christoph Knochenhauer,Luis Oala,Wojciech Samek
NIPS 2024,Poster
We introduce the first continuous-time score-based generative model that leverages fractional diffusion processes for its underlying dynamics. Although diffusion models have excelled at capturing data distributions, they still suffer from various limitations such as slow convergence, mode-collapse on imbalanced data, and lack of diversity. These issues are partially linked to the use of light-tailed Brownian motion (BM) with independent increments. In this paper, we replace BM with an approximation of its non-Markovian counterpart, fractional Brownian motion (fBM), characterized by correlated increments and Hurst index $H \in (0,1)$, where $H=0.5$ recovers the classical BM. To ensure tractable inference and learning, we employ a recently popularized Markov approximation of fBM (MA-fBM) and derive its reverse-time model, resulting in *generative fractional diffusion models* (GFDM). We characterize the forward dynamics using a continuous reparameterization trick and propose *augmented score matching* to efficiently learn the score function, which is partly known in closed form, at minimal added cost. The ability to drive our diffusion model via MA-fBM offers flexibility and control. $H \leq 0.5$ enters the regime of *rough paths* whereas $H>0.5$ regularizes diffusion paths and invokes long-term memory. The Markov approximation allows added control by varying the number of Markov processes linearly combined to approximate fBM. Our evaluations on real image datasets demonstrate that GFDM achieves greater pixel-wise diversity and enhanced image quality, as indicated by a lower FID, offering a promising alternative to traditional diffusion models.
https://openreview.net/pdf/01334c7c55c6a7e46ca396d90dd37632c4a411a4.pdf
Diffusion Spectral Representation for Reinforcement Learning
https://openreview.net/forum?id=C3tEX45hJX
https://openreview.net/forum?id=C3tEX45hJX
Dmitry Shribak,Chen-Xiao Gao,Yitong Li,Chenjun Xiao,Bo Dai
NIPS 2024,Poster
Diffusion-based models have achieved notable empirical successes in reinforcement learning (RL) due to their expressiveness in modeling complex distributions. Despite their promise, the key challenge in extending existing methods to broader real-world applications lies in the computational cost at inference time, i.e., sampling from a diffusion model is considerably slow as it often requires tens to hundreds of iterations to generate even one sample. To circumvent this issue, we propose to leverage the flexibility of diffusion models for RL from a representation learning perspective. In particular, by exploiting the connection between diffusion models and energy-based models, we develop Diffusion Spectral Representation (Diff-SR), a coherent algorithm framework that enables extracting sufficient representations for value functions in Markov decision processes (MDP) and partially observable Markov decision processes (POMDP). We further demonstrate how Diff-SR facilitates efficient policy optimization and practical algorithms while explicitly bypassing the difficulty and inference cost of sampling from the diffusion model. Finally, we provide comprehensive empirical studies to verify the benefits of Diff-SR in delivering robust and advantageous performance across various benchmarks with both fully and partially observable settings.
https://openreview.net/pdf/09a16f6e24fbc0417cf0ba278d69fa287ed242e2.pdf
Multi-LLM Debate: Framework, Principals, and Interventions
https://openreview.net/forum?id=sy7eSEXdPC
https://openreview.net/forum?id=sy7eSEXdPC
Andrew Estornell,Yang Liu
NIPS 2024,Poster
The flexible and generalized nature of large language models has allowed for their application in a wide array of language-based domains. Much like their human contemporaries, these models are capable of engaging in discussions and debates as a means of improving answer quality. We first take a theoretical approach to analyzing debate and provide a framework through which debate can be mathematically examined. Building on this framework, we provide several theoretical results for multi-agent debate. In particular, we demonstrate that similar model capabilities, or similar model responses, can result in static debate dynamics where the debate procedure simply converges to the majority opinion. When this majority opinion is the result of a common misconception (ingrained in the models through shared training data) debate is likely to converge to answers associated with that common misconception. Using insights from our theoretical results we then propose three interventions which improve the efficacy of debate. For each intervention, we provide theoretical results demonstrating how debate is improved. We also demonstrate that these interventions result in better performance on four common benchmark tasks.
https://openreview.net/pdf/ae3a0032b5023848f8c865ef47d515acf58cb84f.pdf
ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing
https://openreview.net/forum?id=iC869BBmc5
https://openreview.net/forum?id=iC869BBmc5
Jun-Kun Chen,Yu-Xiong Wang
NIPS 2024,Poster
This paper proposes ProEdit - a simple yet effective framework for high-quality 3D scene editing guided by diffusion distillation in a novel progressive manner. Inspired by the crucial observation that multi-view inconsistency in scene editing is rooted in the diffusion model’s large feasible output space (FOS), our framework controls the size of FOS and reduces inconsistency by decomposing the overall editing task into several subtasks, which are then executed progressively on the scene. Within this framework, we design a difficulty-aware subtask decomposition scheduler and an adaptive 3D Gaussian splatting (3DGS) training strategy, ensuring high efficiency in performing each subtask. Extensive evaluation shows that our ProEdit achieves state-of-the-art results in various scenes and challenging editing tasks, all through a simple framework without any expensive or sophisticated add-ons like distillation losses, components, or training procedures. Notably, ProEdit also provides a new way to preview, control, and select the aggressivity of editing operation during the editing process.
https://openreview.net/pdf/ad3e5dfa3ac274eb629365c49592c881edf6c5f1.pdf
Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models
https://openreview.net/forum?id=zIr2QjU4hl
https://openreview.net/forum?id=zIr2QjU4hl
Masatoshi Uehara,Yulai Zhao,Ehsan Hajiramezanali,Gabriele Scalia,Gökcen Eraslan,Avantika Lal,Sergey Levine,Tommaso Biancalani
NIPS 2024,Poster
AI-driven design problems, such as DNA/protein sequence design, are commonly tackled from two angles: generative modeling, which efficiently captures the feasible design space (e.g., natural images or biological sequences), and model-based optimization, which utilizes reward models for extrapolation. To combine the strengths of both approaches, we adopt a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL. Although prior work has explored similar avenues, they primarily focus on scenarios where accurate reward models are accessible. In contrast, we concentrate on an offline setting where a reward model is unknown, and we must learn from static offline datasets, a common scenario in scientific domains. In offline scenarios, existing approaches tend to suffer from overoptimization, as they may be misled by the reward model in out-of-distribution regions. To address this, we introduce a conservative fine-tuning approach, BRAID, by optimizing a conservative reward model, which includes additional penalization outside of offline data distributions. Through empirical and theoretical analysis, we demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models while avoiding the generation of invalid designs through pre-trained diffusion models.
https://openreview.net/pdf/208379a521961503552a6647a7533a7037e81262.pdf
Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis
https://openreview.net/forum?id=wgpmDyJgsg
https://openreview.net/forum?id=wgpmDyJgsg
Qitao Zhao,Shubham Tulsiani
NIPS 2024,Poster
Inferring the 3D structure underlying a set of multi-view images typically requires solving two co-dependent tasks -- accurate 3D reconstruction requires precise camera poses, and predicting camera poses relies on (implicitly or explicitly) modeling the underlying 3D. The classical framework of analysis by synthesis casts this inference as a joint optimization seeking to explain the observed pixels, and recent instantiations learn expressive 3D representations (e.g., Neural Fields) with gradient-descent-based pose refinement of initial pose estimates. However, given a sparse set of observed views, the observations may not provide sufficient direct evidence to obtain complete and accurate 3D. Moreover, large errors in pose estimation may not be easily corrected and can further degrade the inferred 3D. To allow robust 3D reconstruction and pose estimation in this challenging setup, we propose SparseAGS, a method that adapts this analysis-by-synthesis approach by: a) including novel-view-synthesis-based generative priors in conjunction with photometric objectives to improve the quality of the inferred 3D, and b) explicitly reasoning about outliers and using a discrete search with a continuous optimization-based strategy to correct them. We validate our framework across real-world and synthetic datasets in combination with several off-the-shelf pose estimation systems as initialization. We find that it significantly improves the base systems' pose accuracy while yielding high-quality 3D reconstructions that outperform the results from current multi-view reconstruction baselines.
https://openreview.net/pdf/9deb0cefa84b633dd45b98a2c28dfa4cb9a5847d.pdf
Bayesian Strategic Classification
https://openreview.net/forum?id=SadbRPoG2k
https://openreview.net/forum?id=SadbRPoG2k
Lee Cohen,Saeed Sharifi-Malvajerdi,Kevin Stangl,Ali Vakilian,Juba Ziani
NIPS 2024,Poster
In strategic classification, agents modify their features, at a cost, to obtain a positive classification outcome from the learner’s classifier, typically assuming agents have full knowledge of the deployed classifier. In contrast, we consider a Bayesian setting where agents have a common distributional prior on the classifier being used and agents manipulate their features to maximize their expected utility according to this prior. The learner can reveal truthful, yet not necessarily complete, information about the classifier to the agents, aiming to release just enough information to shape the agents' behavior and thus maximize accuracy. We show that partial information release can counter-intuitively benefit the learner’s accuracy, allowing qualified agents to pass the classifier while preventing unqualified agents from doing so. Despite the intractability of computing the best response of an agent in the general case, we provide oracle-efficient algorithms for scenarios where the learner’s hypothesis class consists of low-dimensional linear classifiers or when the agents’ cost function satisfies a sub-modularity condition. Additionally, we address the learner’s optimization problem, offering both positive and negative results on determining the optimal information release to maximize expected accuracy, particularly in settings where an agent’s qualification can be represented by a real-valued number.
https://openreview.net/pdf/1c9f2f87da91ab770db190333ed39b5e5c423b9f.pdf
InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
https://openreview.net/forum?id=zWnW4zqkuM
https://openreview.net/forum?id=zWnW4zqkuM
Bowen Jin,Ziqi Pang,Bingjun Guo,Yu-Xiong Wang,Jiaxuan You,Jiawei Han
NIPS 2024,Poster
In this paper, we approach an overlooked yet critical task Graph2Image: generating images from multimodal attributed graphs (MMAGs). This task poses significant challenges due to the explosion in graph size, dependencies among graph entities, and the need for controllability in graph conditions. To address these challenges, we propose a graph context-conditioned diffusion model called InstructG2I. InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling by combining personalized page rank and re-ranking based on vision-language features. Then, a graph QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process of diffusion. Finally, we propose graph classifier-free guidance, enabling controllable generation by varying the strength of graph guidance and multiple connected edges to a node. Extensive experiments conducted on three datasets from different domains demonstrate the effectiveness and controllability of our approach. The code is available at https://github.com/PeterGriffinJin/InstructG2I.
https://openreview.net/pdf/14232e04d8524d77648d3c5ea135527ad4aef01a.pdf
E2ENet: Dynamic Sparse Feature Fusion for Accurate and Efficient 3D Medical Image Segmentation
https://openreview.net/forum?id=Xp8qhdmeb4
https://openreview.net/forum?id=Xp8qhdmeb4
Boqian Wu,Qiao Xiao,Shiwei Liu,Lu Yin,Mykola Pechenizkiy,Decebal Constantin Mocanu,Maurice van Keulen,Elena Mocanu
NIPS 2024,Poster
Deep neural networks have evolved as the leading approach in 3D medical image segmentation due to their outstanding performance. However, the ever-increasing model size and computational cost of deep neural networks have become the primary barriers to deploying them on real-world, resource-limited hardware. To achieve both segmentation accuracy and efficiency, we propose a 3D medical image segmentation model called Efficient to Efficient Network (E2ENet), which incorporates two parametrically and computationally efficient designs. i. Dynamic sparse feature fusion (DSFF) mechanism: it adaptively learns to fuse informative multi-scale features while reducing redundancy. ii. Restricted depth-shift in 3D convolution: it leverages 3D spatial information while keeping the model and computational complexity on par with 2D-based methods. We conduct extensive experiments on the AMOS, Brain Tumor Segmentation, and BTCV Challenge benchmarks, demonstrating that E2ENet consistently achieves a superior trade-off between accuracy and efficiency compared to prior art across various resource constraints. In particular, with a single model and single scale, E2ENet achieves comparable accuracy on the large-scale challenge AMOS-CT, while saving over 69% parameter count and 27% FLOPs in the inference phase, compared with the previous best-performing method. Our code has been made available at: https://github.com/boqian333/E2ENet-Medical.
https://openreview.net/pdf/8a0a4586b53364bfb4c24a094f9633385ac9ae31.pdf
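For the E2ENet entry above, the following is a minimal PyTorch sketch of a restricted depth-shift followed by a 2D-cost convolution, assuming a TSM-style shift of a fraction of channels along the depth axis; the paper's exact shift pattern, fraction, and block design may differ, and all names here are illustrative.

```python
import torch
import torch.nn as nn

def depth_shift(x, fraction=8):
    """Shift 1/fraction of channels forward and another 1/fraction backward along
    the depth axis D; remaining channels are left untouched. x: (B, C, D, H, W)."""
    c = x.shape[1]
    fold = c // fraction
    out = torch.zeros_like(x)
    out[:, :fold, 1:] = x[:, :fold, :-1]                  # shift +1 in depth
    out[:, fold:2 * fold, :-1] = x[:, fold:2 * fold, 1:]  # shift -1 in depth
    out[:, 2 * fold:] = x[:, 2 * fold:]                   # untouched channels
    return out

class ShiftConvBlock(nn.Module):
    """A 2D-cost convolution (1x3x3 kernel) applied after the depth shift."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))

    def forward(self, x):
        return self.conv(depth_shift(x))

block = ShiftConvBlock(16)
y = block(torch.randn(1, 16, 8, 32, 32))   # (batch, channels, depth, height, width)
```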
Personalized Federated Learning with Mixture of Models for Adaptive Prediction and Model Fine-Tuning
https://openreview.net/forum?id=yvUHnBkCzd
https://openreview.net/forum?id=yvUHnBkCzd
Pouya M. Ghari,Yanning Shen
NIPS 2024,Poster
Federated learning is renowned for its efficacy in distributed model training, ensuring that users, called clients, retain data privacy by not disclosing their data to the central server that orchestrates collaborations. Most previous work on federated learning assumes that clients possess static batches of training data. However, clients may also need to make real-time predictions on streaming data in non-stationary environments. In such dynamic environments, employing pre-trained models may be inefficient, as they struggle to adapt to the constantly evolving data streams. To address this challenge, clients can fine-tune models online, leveraging their observed data to enhance performance. Despite the potential benefits of client participation in federated online model fine-tuning, existing analyses have not conclusively demonstrated its superiority over local model fine-tuning. To bridge this gap, the present paper develops a novel personalized federated learning algorithm, wherein each client constructs a personalized model by combining a locally fine-tuned model with multiple federated models learned by the server over time. Theoretical analysis and experiments on real datasets corroborate the effectiveness of this approach for real-time predictions and federated model fine-tuning.
https://openreview.net/pdf/1f67e6c96793fc968860e4ecdc67eeb800a1dc2f.pdf
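For the personalized federated learning entry above, a rough numpy sketch of the mixture-of-models idea: each client combines a locally fine-tuned model with federated models received over time. The exponentiated-gradient weight update is an assumption for illustration, not the combination rule analyzed in the paper.

```python
import numpy as np

def personalized_predict(x, local_model, federated_models, weights):
    """Personalized prediction as a convex combination of the locally fine-tuned
    model and the federated models received from the server over time."""
    models = [local_model] + list(federated_models)
    preds = np.stack([np.asarray(m(x)) for m in models])
    return np.average(preds, axis=0, weights=np.asarray(weights, dtype=float))

def update_weights(weights, losses, lr=1.0):
    """One plausible online weight update (exponentiated gradient on observed
    per-model losses); the paper's combination rule may differ."""
    w = np.asarray(weights, dtype=float) * np.exp(-lr * np.asarray(losses, dtype=float))
    return w / w.sum()

# Toy usage with trivial stand-in "models".
local = lambda x: x * 0.9
fed = [lambda x: x * 1.1, lambda x: x * 1.0]
w = np.ones(3) / 3
print(personalized_predict(np.array([1.0, 2.0]), local, fed, w))
```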
A Combinatorial Algorithm for the Semi-Discrete Optimal Transport Problem
https://openreview.net/forum?id=Xq0Jwbczkn
https://openreview.net/forum?id=Xq0Jwbczkn
Pankaj Agarwal,Sharath Raghvendra,Pouyan Shirzadian,Keegan Yao
NIPS 2024,Poster
Optimal Transport (OT, also known as the Wasserstein distance) is a popular metric for comparing probability distributions and has been successfully used in many machine-learning applications. In the semi-discrete $2$-Wasserstein problem, we wish to compute the cheapest way to transport all the mass from a continuous distribution $\mu$ to a discrete distribution $\nu$ in $\mathbb{R}^d$ for $d\ge 1$, where the cost of transporting unit mass between points $a$ and $b$ is $d(a,b)=||a-b||^2$. When both distributions are discrete, a simple combinatorial framework has been used to find the exact solution (see e.g. [Orlin, STOC 1988]). In this paper, we propose a combinatorial framework for the semi-discrete OT, which can be viewed as an extension of the combinatorial framework for the discrete OT but requires several new ideas. We present a new algorithm that given $\mu$ and $\nu$ in $\mathbb{R}^2$ and a parameter $\varepsilon>0$, computes an $\varepsilon$-additive approximate semi-discrete transport plan in $O(n^{4}\log n\log \frac{1}{\varepsilon})$ time (in the worst case), where $n$ is the support-size of the discrete distribution $\nu$ and we assume that the mass of $\mu$ inside a triangle can be computed in $O(1)$ time. Our algorithm is significantly faster than the known algorithms, and unlike many numerical algorithms, it does not make any assumptions on the smoothness of $\mu$. As an application of our algorithm, we describe a data structure to store a large discrete distribution $\mu$ (with support size $N$) using $O(N)$ space so that, given a query discrete distribution $\nu$ (with support size $k$), an $\varepsilon$-additive approximate transport plan can be computed in $O(k^{3}\sqrt{N}\log \frac{1}{\varepsilon})$ time in $2$ dimensions. Our algorithm and data structure extend to higher dimensions as well as to $p$-Wasserstein problem for any $p \ge 1$.
https://openreview.net/pdf/9d3d30f46c475b1352cf3332893e08e4e46342c6.pdf
Extending Video Masked Autoencoders to 128 frames
https://openreview.net/forum?id=bFrNPlWchg
https://openreview.net/forum?id=bFrNPlWchg
Nitesh Bharadwaj Gundavarapu,Luke Friedman,Raghav Goyal,Chaitra Hegde,Eirikur Agustsson,Sagar M. Waghmare,Mikhail Sirotenko,Ming-Hsuan Yang,Tobias Weyand,Boqing Gong,Leonid Sigal
NIPS 2024,Poster
Video understanding has witnessed significant progress: recent video foundation models demonstrate strong performance owing to self-supervised pre-training objectives, with Masked Autoencoders (MAE) being the design of choice. Nevertheless, the majority of prior works that leverage MAE pre-training have focused on relatively short video representations (16 / 32 frames in length), largely because hardware memory and compute limitations scale poorly with video length due to the dense, memory-intensive self-attention decoding. One natural strategy to address these challenges is to subsample tokens to reconstruct during decoding (or decoder masking). In this work, we propose an effective strategy for prioritizing tokens which allows training on longer video sequences (128 frames) and achieves better performance than the more typical random and uniform masking strategies. The core of our approach is an adaptive decoder masking strategy that prioritizes the most important tokens and uses quantized tokens as reconstruction objectives. Our adaptive strategy leverages a powerful MAGVIT-based tokenizer that jointly learns the tokens and their priority. We validate our design choices through exhaustive ablations and observe improved performance of the resulting long-video (128 frames) encoders over short-video (32 frames) counterparts. With our long-video masked autoencoder (LVMAE) strategy, we surpass state-of-the-art on Diving48 by 3.9 points and EPIC-Kitchens-100 verb classification by 2.5 points while relying on a simple core architecture and video-only pre-training (unlike some of the prior works that require millions of labeled video-text pairs or specialized encoders).
https://openreview.net/pdf/111644cfb458a030908543084cd59c0cc4b9c127.pdf
Déjà Vu Memorization in Vision–Language Models
https://openreview.net/forum?id=SFCZdXDyNs
https://openreview.net/forum?id=SFCZdXDyNs
Bargav Jayaraman,Chuan Guo,Kamalika Chaudhuri
NIPS 2024,Poster
Vision-Language Models (VLMs) have emerged as the state-of-the-art representation learning solution, with myriads of downstream applications such as image classification, retrieval and generation. A natural question is whether these models memorize their training data, which also has implications for generalization. We propose a new method for measuring memorization in VLMs, which we call déjà vu memorization. For VLMs trained on image-caption pairs, we show that the model indeed retains information about individual objects in the training images beyond what can be inferred from correlations or the image caption. We evaluate déjà vu memorization at both sample and population level, and show that it is significant for OpenCLIP trained on as many as 50M image-caption pairs. Finally, we show that text randomization considerably mitigates memorization risk while only moderately impacting the model’s downstream task performance. The code is available here: https://github.com/facebookresearch/VLMDejaVu.
https://openreview.net/pdf/a355ac38d9aa6df494ad197c4810d6481a1ee4a0.pdf
Propensity Score Alignment of Unpaired Multimodal Data
https://openreview.net/forum?id=hT4y7D2o2T
https://openreview.net/forum?id=hT4y7D2o2T
Johnny Xi,Jana Osea,Zuheng Xu,Jason Hartford
NIPS 2024,Poster
Multimodal representation learning techniques typically require paired samples to learn shared representations, but collecting paired samples can be challenging in fields like biology, where measurement devices often destroy the samples. This paper presents an approach to address the challenge of aligning unpaired samples across disparate modalities in multimodal representation learning. We draw an analogy between potential outcomes in causal inference and potential views in multimodal observations, allowing us to leverage Rubin's framework to estimate a common space for matching samples. Our approach assumes samples are experimentally perturbed by treatments, and uses this to estimate a propensity score from each modality. We show that the propensity score encapsulates all shared information between a latent state and treatment, and can be used to define a distance between samples. We experiment with two alignment techniques that leverage this distance---shared nearest neighbours (SNN) and optimal transport (OT) matching---and find that OT matching results in significant improvements over state-of-the-art alignment approaches on synthetic multi-modal tasks, on real-world data from the NeurIPS Multimodal Single-Cell Integration Challenge, and on a single-cell microscopy to expression prediction task.
https://openreview.net/pdf/e33e66092d51fb20f53ddbc85a231d7c32b7525d.pdf
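For the propensity score alignment entry above, a simplified scikit-learn sketch: estimate propensity scores per modality with a plain classifier and match samples by nearest neighbours in propensity space. The paper's SNN and OT matching are replaced by 1-nearest-neighbour matching here, and the treatment label set is assumed shared across modalities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def propensity_scores(features, treatment_labels):
    """Estimate P(treatment | features) within one modality with a simple classifier."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, treatment_labels)
    return clf.predict_proba(features)

def match_modalities(features_a, features_b, treatments_a, treatments_b):
    """Pair each sample of modality A with its nearest neighbour in modality B,
    where distance is measured between propensity-score vectors."""
    pa = propensity_scores(features_a, treatments_a)
    pb = propensity_scores(features_b, treatments_b)
    nn = NearestNeighbors(n_neighbors=1).fit(pb)
    _, idx = nn.kneighbors(pa)
    return idx.ravel()   # idx[i] = index in B matched to sample i of A
```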
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
https://openreview.net/forum?id=0LXotew9Du
https://openreview.net/forum?id=0LXotew9Du
Coleman Richard Charles Hooper,Sehoon Kim,Hiva Mohammadzadeh,Michael W. Mahoney,Sophia Shao,Kurt Keutzer,Amir Gholami
NIPS 2024,Poster
LLMs are seeing growing use for applications which require large context windows, and with these large context windows KV cache activations surface as the dominant contributor to memory consumption during inference. Quantization is a promising approach for compressing KV cache activations; however, existing solutions fail to represent activations accurately in sub-4-bit precision. Our work, KVQuant, facilitates low precision KV cache quantization by incorporating several novel methods: (i) Per-Channel Key Quantization, where we adjust the dimension along which we quantize the Key activations to better match the distribution; (ii) Pre-RoPE Key Quantization, where we quantize Key activations before the rotary positional embedding to mitigate its impact on quantization; (iii) Non-Uniform KV Cache Quantization, where we derive per-layer sensitivity-weighted non-uniform datatypes that better represent the distributions; and (iv) Per-Vector Dense-and-Sparse Quantization, where we isolate outliers separately for each vector to minimize skews in quantization ranges. By applying our method to the LLaMA, Llama-2, Llama-3, and Mistral models, we achieve < 0.1 perplexity degradation with 3-bit quantization on both Wikitext-2 and C4, outperforming existing approaches. Our method enables serving LLaMA-7B with a context length of up to 1 million on a single A100-80GB GPU and up to 10 million on an 8-GPU system. We develop custom CUDA kernels for KVQuant, showing that we can achieve up to ~1.7x speedups, compared to baseline fp16 matrix-vector multiplications, for the LLaMA-7B model.
https://openreview.net/pdf/14defcf80798b0426d9bd05b25ab492c11727c8a.pdf
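For the KVQuant entry above, a minimal sketch of the first listed idea, per-channel uniform quantization of Key activations; the paper's non-uniform datatypes, pre-RoPE placement, and dense-and-sparse outlier handling are not reproduced, and the tensor layout is an assumption.

```python
import torch

def quantize_keys_per_channel(K, n_bits=3):
    """Uniformly quantize a Key cache tensor per channel (rough sketch).

    K has shape (seq_len, head_dim); scales and zero points are computed along
    the sequence axis so each channel gets its own range, matching the
    observation that Key outliers tend to be channel-aligned."""
    qmax = 2 ** n_bits - 1
    k_min = K.min(dim=0, keepdim=True).values          # per-channel minimum
    k_max = K.max(dim=0, keepdim=True).values          # per-channel maximum
    scale = (k_max - k_min).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round((K - k_min) / scale), 0, qmax)
    dequant = q * scale + k_min                         # reconstruction for use in attention
    return q.to(torch.uint8), scale, k_min, dequant

q, scale, zero, K_hat = quantize_keys_per_channel(torch.randn(128, 64))
```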
Efficient multi-prompt evaluation of LLMs
https://openreview.net/forum?id=jzkpwcj200
https://openreview.net/forum?id=jzkpwcj200
Felipe Maia Polo,Ronald Xu,Lucas Weber,Mírian Silva,Onkar Bhardwaj,Leshem Choshen,Allysson Flavio Melo de Oliveira,Yuekai Sun,Mikhail Yurochkin
NIPS 2024,Poster
Most popular benchmarks for comparing LLMs rely on a limited set of prompt templates, which may not fully capture the LLMs’ abilities and can affect the reproducibility of results on leaderboards. Many recent works empirically verify prompt sensitivity and advocate for changes in LLM evaluation. In this paper, we consider the problem of estimating the performance distribution across many prompt variants instead of finding a single prompt to evaluate with. We introduce PromptEval, a method for estimating performance across a large set of prompts borrowing strength across prompts and examples to produce accurate estimates under practical evaluation budgets. The resulting distribution can be used to obtain performance quantiles to construct various robust performance metrics (e.g., top 95% quantile or median). We prove that PromptEval consistently estimates the performance distribution and demonstrate its efficacy empirically on three prominent LLM benchmarks: MMLU, BIG-bench Hard, and LMentry; for example, PromptEval can accurately estimate performance quantiles across 100 prompt templates on MMLU with a budget equivalent to two single-prompt evaluations. Moreover, we show how PromptEval can be useful in LLM-as-a-judge and best prompt identification applications.
https://openreview.net/pdf/25775e1605f94e86a0854b54c4025198032e1e76.pdf
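For the PromptEval entry above, one simple way to "borrow strength across prompts and examples" is a Rasch-style model fit to a sparse set of graded (prompt, example) pairs, sketched below in PyTorch; PromptEval's actual estimators may be richer, so treat this as an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def fit_rasch(obs, n_prompts, n_examples, n_steps=2000, lr=0.1):
    """Fit P(correct | prompt i, example j) = sigmoid(theta_i + beta_j) from a
    sparse list of observations obs = [(prompt_idx, example_idx, correct), ...]."""
    theta = torch.zeros(n_prompts, requires_grad=True)   # per-prompt effect
    beta = torch.zeros(n_examples, requires_grad=True)   # per-example effect
    i = torch.tensor([o[0] for o in obs])
    j = torch.tensor([o[1] for o in obs])
    y = torch.tensor([float(o[2]) for o in obs])
    opt = torch.optim.Adam([theta, beta], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = F.binary_cross_entropy_with_logits(theta[i] + beta[j], y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Estimated accuracy of each prompt template, averaged over all examples;
        # quantiles of this vector give robust performance metrics.
        acc = torch.sigmoid(theta[:, None] + beta[None, :]).mean(dim=1)
    return acc
```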
Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression
https://openreview.net/forum?id=ntF7D8tAlQ
https://openreview.net/forum?id=ntF7D8tAlQ
Kai Tan,Pierre C Bellec
NIPS 2024,Poster
This paper studies the generalization performance of iterates obtained by Gradient Descent (GD), Stochastic Gradient Descent (SGD) and their proximal variants in high-dimensional robust regression problems. The number of features is comparable to the sample size and errors may be heavy-tailed. We introduce estimators that precisely track the generalization error of the iterates along the trajectory of the iterative algorithm. These estimators are provably consistent under suitable conditions. The results are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizer. We provide explicit generalization error estimates for iterates generated from GD and SGD, or from proximal SGD in the presence of a non-smooth regularizer. The proposed risk estimates serve as effective proxies for the actual generalization error, allowing us to determine the optimal stopping iteration that minimizes the generalization error. Extensive simulations confirm the effectiveness of the proposed generalization error estimates.
https://openreview.net/pdf/303193742edff79eea3620b2e2526badbcb840dd.pdf
First-Explore, then Exploit: Meta-Learning to Solve Hard Exploration-Exploitation Trade-Offs
https://openreview.net/forum?id=AhjTu2aiiW
https://openreview.net/forum?id=AhjTu2aiiW
Ben Norman,Jeff Clune
NIPS 2024,Poster
Standard reinforcement learning (RL) agents never intelligently explore like a human (i.e. taking into account complex domain priors and adapting quickly based on previous exploration). Across episodes, RL agents struggle to perform even simple exploration strategies, for example systematic search that avoids exploring the same location multiple times. This poor exploration limits performance on challenging domains. Meta-RL is a potential solution, as unlike standard RL, meta-RL can *learn* to explore, and potentially learn highly complex strategies far beyond those of standard RL, strategies such as experimenting in early episodes to learn new skills, or conducting experiments to learn about the current environment. Traditional meta-RL focuses on the problem of learning to optimally balance exploration and exploitation to maximize the *cumulative reward* of the episode sequence (e.g., aiming to maximize the total wins in a tournament -- while also improving as a player). We identify a new challenge with state-of-the-art cumulative-reward meta-RL methods. When optimal behavior requires exploration that sacrifices immediate reward to enable higher subsequent reward, existing state-of-the-art cumulative-reward meta-RL methods become stuck on the local optimum of failing to explore. Our method, First-Explore, overcomes this limitation by learning two policies: one to solely explore, and one to solely exploit. When exploring requires forgoing early-episode reward, First-Explore significantly outperforms existing cumulative meta-RL methods. By identifying and solving the previously unrecognized problem of forgoing reward in early episodes, First-Explore represents a significant step towards developing meta-RL algorithms capable of human-like exploration on a broader range of domains.
https://openreview.net/pdf/d836aa944c9edfd65776c5dce9bdfa31dc753230.pdf
Iterative Reasoning Preference Optimization
https://openreview.net/forum?id=4XIKfvNYvx
https://openreview.net/forum?id=4XIKfvNYvx
Richard Yuanzhe Pang,Weizhe Yuan,He He,Kyunghyun Cho,Sainbayar Sukhbaatar,Jason E Weston
NIPS 2024,Poster
Iterative preference optimization methods have recently been shown to perform well for general instruction tuning tasks, but typically make little improvement on reasoning tasks. In this work we develop an iterative approach that optimizes the preference between competing generated Chain-of-Thought (CoT) candidates by optimizing for winning vs. losing reasoning steps. We train using a modified DPO loss with an additional negative log-likelihood term, which we find to be crucial. We show reasoning improves across repeated iterations of this scheme. While only relying on examples in the training set, our approach results in increasing accuracy on GSM8K, MATH, and ARC-Challenge for Llama-2-70B-Chat, outperforming other Llama-2-based models not relying on additionally sourced datasets. For example, we see a large improvement from 55.6% to 81.6% on GSM8K and an accuracy of 88.7% with majority voting out of 32 samples.
https://openreview.net/pdf/7e59c840774359c6db720256d9f471fcec640aa4.pdf
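For the iterative reasoning preference optimization entry above, a sketch of the stated objective: a DPO-style pairwise loss on winning vs. losing chain-of-thought candidates plus a negative log-likelihood term on the winner. The coefficients and normalization below are assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_plus_nll_loss(policy_logp_win, policy_logp_lose,
                      ref_logp_win, ref_logp_lose,
                      beta=0.1, nll_coef=1.0, win_lengths=None):
    """Pairwise preference loss on winning vs. losing CoT candidates, plus an
    extra negative log-likelihood term on the winning sequence.

    All logp_* arguments are sequence-level summed log-probabilities (shape [batch]);
    win_lengths optionally normalizes the NLL term by sequence length."""
    margin = (policy_logp_win - ref_logp_win) - (policy_logp_lose - ref_logp_lose)
    dpo_term = -F.logsigmoid(beta * margin)

    nll_term = -policy_logp_win
    if win_lengths is not None:
        nll_term = nll_term / win_lengths

    return (dpo_term + nll_coef * nll_term).mean()

# Toy call with made-up log-probabilities.
loss = dpo_plus_nll_loss(torch.tensor([-40.0]), torch.tensor([-55.0]),
                         torch.tensor([-42.0]), torch.tensor([-50.0]))
```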
Robot Policy Learning with Temporal Optimal Transport Reward
https://openreview.net/forum?id=LEed5Is4oi
https://openreview.net/forum?id=LEed5Is4oi
Yuwei Fu,Haichao Zhang,Di Wu,Wei Xu,Benoit Boulet
NIPS 2024,Poster
Reward specification is one of the trickiest problems in Reinforcement Learning, usually requiring tedious hand engineering in practice. One promising approach to tackle this challenge is to adopt existing expert video demonstrations for policy learning. Some recent work investigates how to learn robot policies from only one or a few expert video demonstrations. For example, reward labeling via Optimal Transport (OT) has been shown to be an effective strategy to generate a proxy reward by measuring the alignment between the robot trajectory and the expert demonstrations. However, previous work mostly overlooks that the OT reward is invariant to temporal order information, which could bring extra noise to the reward signal. To address this issue, in this paper, we introduce the Temporal Optimal Transport (TemporalOT) reward to incorporate temporal order information for learning a more accurate OT-based proxy reward. Extensive experiments on the Meta-world benchmark tasks validate the efficacy of the proposed method. Our code is available at: https://github.com/fuyw/TemporalOT.
https://openreview.net/pdf/546d5a3bfcb9e2fdc8b68c1bf6c486d493da366e.pdf
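For the TemporalOT entry above, a numpy sketch of an OT-based proxy reward that restricts matches to a temporal band so the reward is no longer invariant to temporal order; the band construction, regularization, and reward normalization here are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def masked_sinkhorn(C, mask, reg=0.1, n_iter=300):
    """Entropic OT between uniform marginals, with transport restricted to
    entries where mask == 1 (a temporal band around the diagonal)."""
    n, m = C.shape
    a, b = np.ones(n) / n, np.ones(m) / m
    K = np.exp(-C / reg) * mask
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / np.maximum(K @ v, 1e-300)
        v = b / np.maximum(K.T @ u, 1e-300)
    return (u[:, None] * K) * v[None, :]

def temporal_ot_reward(agent_feats, expert_feats, band=0.1):
    """Per-step proxy reward: negative matched cost under a band-restricted plan,
    so early agent steps cannot be matched to late expert steps (and vice versa)."""
    A = agent_feats / np.linalg.norm(agent_feats, axis=1, keepdims=True)
    E = expert_feats / np.linalg.norm(expert_feats, axis=1, keepdims=True)
    C = 1.0 - A @ E.T                               # cosine distance per step pair
    n, m = C.shape
    i, j = np.meshgrid(np.arange(n) / n, np.arange(m) / m, indexing="ij")
    mask = (np.abs(i - j) <= band).astype(float)    # temporal band around the diagonal
    plan = masked_sinkhorn(C, mask)
    return -(plan * C).sum(axis=1) * n              # reward per agent step
```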
Reinforcement Learning Guided Semi-Supervised Learning
https://openreview.net/forum?id=PSMBefUZa2
https://openreview.net/forum?id=PSMBefUZa2
Marzi Heidari,Hanping Zhang,Yuhong Guo
NIPS 2024,Poster
In recent years, semi-supervised learning (SSL) has gained significant attention due to its ability to leverage both labeled and unlabeled data to improve model performance, especially when labeled data is scarce. However, most current SSL methods rely on heuristics or predefined rules for generating pseudo-labels and leveraging unlabeled data. They are limited to exploiting loss functions and regularization methods within the standard norm. In this paper, we propose a novel Reinforcement Learning (RL) Guided SSL method, RLGSSL, that formulates SSL as a one-armed bandit problem and deploys an innovative RL loss based on weighted reward to adaptively guide the learning process of the prediction model. RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance. A semi-supervised teacher-student framework is further deployed to increase the learning stability. We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach achieves consistent superior performance compared to state-of-the-art SSL methods.
https://openreview.net/pdf/2344add05a63e8e811361c96b898b85f89821417.pdf
Non-parametric classification via expand-and-sparsify representation
https://openreview.net/forum?id=0d50Il6enG
https://openreview.net/forum?id=0d50Il6enG
Kaushik Sinha
NIPS 2024,Poster
In *expand-and-sparsify* (EaS) representation, a data point in $\mathcal{S}^{d-1}$ is first randomly mapped to higher dimension $\mathbb{R}^m$, where $m>d$, followed by a sparsification operation where the informative $k \ll m$ of the $m$ coordinates are set to one and the rest are set to zero. We propose two algorithms for non-parametric classification using such EaS representation. For our first algorithm, we use *winners-take-all* operation for the sparsification step and show that the proposed classifier admits the form of a locally weighted average classifier and establish its consistency via Stone's Theorem. Further, assuming that the conditional probability function $P(y=1|x)=\eta(x)$ is H\"{o}lder continuous and for optimal choice of $m$, we show that the convergence rate of this classifier is minimax-optimal. For our second algorithm, we use *empirical $k$-thresholding* operation for the sparsification step, and under the assumption that data lie on a low dimensional manifold of dimension $d_0\ll d$, we show that the convergence rate of this classifier depends only on $d_0$ and is again minimax-optimal. Empirical evaluations performed on real-world datasets corroborate our theoretical results.
https://openreview.net/pdf/cf5fc42d6420b6f75735d9629078474bd70b836e.pdf
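For the expand-and-sparsify entry above, a numpy sketch of the first algorithm's ingredients: random expansion, winners-take-all sparsification, and a locally weighted average classifier built on code overlap; the dimensions and the weighting scheme are illustrative.

```python
import numpy as np

def expand_and_sparsify(X, m=2048, k=32, seed=0):
    """Map rows of X (points on the unit sphere in R^d) to sparse binary codes in
    {0,1}^m: random projection to m dimensions, then the k largest coordinates
    are set to one (winners-take-all) and the rest to zero."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], m))       # random expansion
    H = X @ W                                      # projections, shape (n, m)
    codes = np.zeros_like(H)
    top_k = np.argpartition(-H, k, axis=1)[:, :k]  # indices of the k largest coordinates
    np.put_along_axis(codes, top_k, 1.0, axis=1)
    return codes

def locally_weighted_classify(train_codes, train_y, test_codes):
    """Locally weighted average classifier: weight training points by code overlap."""
    overlap = test_codes @ train_codes.T                 # shared active coordinates
    weights = overlap / np.maximum(overlap.sum(axis=1, keepdims=True), 1e-12)
    scores = weights @ train_y                           # estimate of P(y=1 | x)
    return (scores > 0.5).astype(int)
```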
LoFiT: Localized Fine-tuning on LLM Representations
https://openreview.net/forum?id=dfiXFbECSZ
https://openreview.net/forum?id=dfiXFbECSZ
Fangcong Yin,Xi Ye,Greg Durrett
NIPS 2024,Poster
Recent work in interpretability shows that large language models (LLMs) can be adapted for new tasks in a learning-free way: it is possible to intervene on LLM representations to elicit desired behaviors for alignment. For instance, adding certain bias vectors to the outputs of certain attention heads is reported to boost the truthfulness of models. In this work, we show that localized fine-tuning serves as an effective alternative to such representation intervention methods. We introduce a framework called Localized Fine-Tuning on LLM Representations (LoFiT), which identifies a subset of attention heads that are most important for learning a specific task, then trains offset vectors to add to the model's hidden representations at those selected heads. LoFiT localizes to a sparse set of heads (3%-10%) and learns the offset vectors from limited training data, comparable to the settings used for representation intervention. For truthfulness and reasoning tasks, we find that LoFiT's intervention vectors are more effective for LLM adaptation than vectors from representation intervention methods such as Inference-time Intervention. We also find that the localization step is important: selecting a task-specific set of attention heads can lead to higher performance than intervening on heads selected for a different task. Finally, across 7 tasks we study, LoFiT achieves comparable performance to other parameter-efficient fine-tuning methods such as LoRA, despite modifying 20x-200x fewer parameters than these methods.
https://openreview.net/pdf/82c808befa2777dd14ef26f962250a30ba8ec10f.pdf
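For the LoFiT entry above, a minimal PyTorch sketch of the intervention: trainable offset vectors added to the outputs of a selected subset of attention heads while all base-model weights stay frozen. The head-selection step, which LoFiT learns per task, is not shown, and the tensor layout is an assumption.

```python
import torch
import torch.nn as nn

class HeadOffset(nn.Module):
    """Adds a learned bias vector to the outputs of a chosen subset of attention
    heads; only `offsets` is trained, the base model stays frozen."""
    def __init__(self, n_heads, head_dim, selected_heads):
        super().__init__()
        self.register_buffer("head_mask", torch.zeros(n_heads, 1))
        self.head_mask[list(selected_heads)] = 1.0
        self.offsets = nn.Parameter(torch.zeros(n_heads, head_dim))

    def forward(self, head_outputs):
        # head_outputs: (batch, seq, n_heads, head_dim), taken right after attention
        # and before the output projection; offsets apply only to selected heads.
        return head_outputs + self.head_mask * self.offsets

patch = HeadOffset(n_heads=12, head_dim=64, selected_heads=[3, 7])
out = patch(torch.randn(2, 10, 12, 64))
```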
Physics-Informed Variational State-Space Gaussian Processes
https://openreview.net/forum?id=tCf7S75xFa
https://openreview.net/forum?id=tCf7S75xFa
Oliver Hamelijnck,Arno Solin,Theodoros Damoulas
NIPS 2024,Poster
Differential equations are important mechanistic models that are integral to many scientific and engineering applications. With the abundance of available data there has been a growing interest in data-driven physics-informed models. Gaussian processes (GPs) are particularly suited to this task as they can model complex, non-linear phenomena whilst incorporating prior knowledge and quantifying uncertainty. Current approaches have found some success but are limited as they either achieve poor computational scalings or focus only on the temporal setting. This work addresses these issues by introducing a variational spatio-temporal state-space GP that handles linear and non-linear physical constraints while achieving efficient linear-in-time computation costs. We demonstrate our methods in a range of synthetic and real-world settings and outperform the current state-of-the-art in both predictive and computational performance.
https://openreview.net/pdf/6a7d93ac3343cf1b9a2d5c8f88d19eceea0d58f8.pdf
Learning to Embed Distributions via Maximum Kernel Entropy
https://openreview.net/forum?id=A0cok1GK9c
https://openreview.net/forum?id=A0cok1GK9c
Oleksii Kachaiev,Stefano Recanatesi
NIPS 2024,Poster
Empirical data can often be considered as samples from a set of probability distributions. Kernel methods have emerged as a natural approach for learning to classify these distributions. Although numerous kernels between distributions have been proposed, applying kernel methods to distribution regression tasks remains challenging, primarily because selecting a suitable kernel is not straightforward. Surprisingly, the question of learning a data-dependent distribution kernel has received little attention. In this paper, we propose a novel objective for the unsupervised learning of a data-dependent distribution kernel, based on the principle of entropy maximization in the space of probability measure embeddings. We examine the theoretical properties of the latent embedding space induced by our objective, demonstrating that its geometric structure is well-suited for solving downstream discriminative tasks. Finally, we demonstrate the performance of the learned kernel across different modalities.
https://openreview.net/pdf/b809679dcc0eec32c0cfbce8bcd7515295f66753.pdf
When is Multicalibration Post-Processing Necessary?
https://openreview.net/forum?id=OONojmx3wH
https://openreview.net/forum?id=OONojmx3wH
Dutch Hansen,Siddartha Devic,Preetum Nakkiran,Vatsal Sharan
NIPS 2024,Poster
Calibration is a well-studied property of predictors which guarantees meaningful uncertainty estimates. Multicalibration is a related notion --- originating in algorithmic fairness --- which requires predictors to be simultaneously calibrated over a potentially complex and overlapping collection of protected subpopulations (such as groups defined by ethnicity, race, or income). We conduct the first comprehensive study evaluating the usefulness of multicalibration post-processing across a broad set of tabular, image, and language datasets for models spanning from simple decision trees to 90 million parameter fine-tuned LLMs. Our findings can be summarized as follows: (1) models which are calibrated out of the box tend to be relatively multicalibrated without any additional post-processing; (2) multicalibration can help inherently uncalibrated models and also large vision and language models; and (3) traditional calibration measures may sometimes provide multicalibration implicitly. More generally, we also distill many independent observations which may be useful for practical and effective applications of multicalibration post-processing in real-world contexts.
https://openreview.net/pdf/0cdf6bb6b426ee8df5430bc9531e78cbd80ebeb7.pdf
Expected Probabilistic Hierarchies
https://openreview.net/forum?id=fMdrBucZnj
https://openreview.net/forum?id=fMdrBucZnj
Marcel Kollovieh,Bertrand Charpentier,Daniel Zügner,Stephan Günnemann
NIPS 2024,Poster
Hierarchical clustering has usually been addressed by discrete optimization using heuristics or continuous optimization of relaxed scores for hierarchies. In this work, we propose to optimize expected scores under a probabilistic model over hierarchies. (1) We show theoretically that the global optimal values of the expected Dasgupta cost and Tree-Sampling divergence (TSD), two unsupervised metrics for hierarchical clustering, are equal to the optimal values of their discrete counterparts contrary to some relaxed scores. (2) We propose Expected Probabilistic Hierarchies (EPH), a probabilistic model to learn hierarchies in data by optimizing expected scores. EPH uses differentiable hierarchy sampling enabling end-to-end gradient descent based optimization, and an unbiased subgraph sampling approach to scale to large datasets. (3) We evaluate EPH on synthetic and real-world datasets including vector and graph datasets. EPH outperforms all other approaches quantitatively and provides meaningful hierarchies in qualitative evaluations.
https://openreview.net/pdf/b84db14f49a1687fab66baf0417f23e71dc598d3.pdf
Prompt Tuning Strikes Back: Customizing Foundation Models with Low-Rank Prompt Adaptation
https://openreview.net/forum?id=SyMhGilvCv
https://openreview.net/forum?id=SyMhGilvCv
Abhinav Jain,Swarat Chaudhuri,Thomas Reps,Chris Jermaine
NIPS 2024,Poster
Parameter-Efficient Fine-Tuning (PEFT) has become the standard for customising Foundation Models (FMs) to user-specific downstream tasks. However, typical PEFT methods require storing multiple task-specific adapters, creating scalability issues as these adapters must be housed and run at the FM server. Traditional prompt tuning offers a potential solution by customising them through task-specific input prefixes, but it under-performs compared to other PEFT methods like LoRA. To address this gap, we propose Low-Rank Prompt Adaptation (LoPA), a prompt-tuning-based approach that performs on par with state-of-the-art PEFT methods and full fine-tuning while being more parameter-efficient and not requiring a server-based adapter. LoPA generates soft prompts by balancing between sharing task-specific information across instances and customization for each instance. It uses a low-rank decomposition of the soft-prompt component encoded for each instance to achieve parameter efficiency. We provide a comprehensive evaluation on multiple natural language understanding and code generation and understanding tasks across a wide range of foundation models with varying sizes.
https://openreview.net/pdf/f79a9adc44e79b654a39f910767c76091b4ab8ad.pdf
Differentially Private Graph Diffusion with Applications in Personalized PageRanks
https://openreview.net/forum?id=aon7bwYBiq
https://openreview.net/forum?id=aon7bwYBiq
Rongzhe Wei,Eli Chien,Pan Li
NIPS 2024,Poster
Graph diffusion, which iteratively propagates real-valued substances across a graph, is used in numerous graph/network-involved applications. However, releasing diffusion vectors may reveal sensitive linking information in the data, such as transaction information in financial network data, and protecting the privacy of graph data is challenging due to its interconnected nature. This work proposes a novel graph diffusion framework with edge-level differential privacy guarantees by using noisy diffusion iterates. The algorithm injects Laplace noise per diffusion iteration and adopts a degree-based thresholding function to mitigate the high sensitivity induced by low-degree nodes. Our privacy loss analysis is based on Privacy Amplification by Iteration (PABI), which, to the best of our knowledge, is the first effort that analyzes PABI with Laplace noise and provides relevant applications. We also introduce a novel $\infty$-Wasserstein distance tracking method, which tightens the analysis of privacy leakage and makes PABI more applicable in practice. We evaluate this framework by applying it to Personalized PageRank computation for ranking tasks. Experiments on real-world network data demonstrate the superiority of our method under stringent privacy conditions.
https://openreview.net/pdf/8167cef85f4c4bf69b1b7b13a07c309e549f6be2.pdf
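For the differentially private graph diffusion entry above, a rough numpy sketch of noisy Personalized PageRank iterates with a crude degree-based cutoff; the paper's actual thresholding function, sensitivity analysis, and noise calibration differ, so this only illustrates the overall shape of the algorithm.

```python
import numpy as np

def noisy_ppr(adj, seed_vec, alpha=0.15, n_iter=20, noise_scale=1e-3, min_degree=5):
    """Personalized PageRank by power iteration, with Laplace noise injected at
    every step and low-degree columns suppressed as a stand-in for the paper's
    degree-based thresholding."""
    deg = adj.sum(axis=0)
    mask = (deg >= min_degree).astype(float)          # drop high-sensitivity columns
    P = (adj * mask) / np.maximum(deg, 1.0)           # column-normalized transition matrix
    x = seed_vec.copy()
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        x = alpha * seed_vec + (1 - alpha) * (P @ x)
        x = x + rng.laplace(scale=noise_scale, size=x.shape)   # per-iteration noise
        x = np.clip(x, 0.0, None)
    return x / max(x.sum(), 1e-12)

adj = np.random.binomial(1, 0.1, size=(50, 50)).astype(float)
seed = np.zeros(50); seed[0] = 1.0
scores = noisy_ppr(adj, seed)
```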
Hybrid Reinforcement Learning Breaks Sample Size Barriers In Linear MDPs
https://openreview.net/forum?id=bPuYxFBHyI
https://openreview.net/forum?id=bPuYxFBHyI
Kevin Tan,Wei Fan,Yuting Wei
NIPS 2024,Poster
Hybrid Reinforcement Learning (RL), where an agent learns from both an offline dataset and online explorations in an unknown environment, has garnered significant recent interest. A crucial question posed by Xie et al. (2022) is whether hybrid RL can improve upon the existing lower bounds established in purely offline and purely online RL without relying on the single-policy concentrability assumption. While Li et al. (2023) provided an affirmative answer to this question in the tabular PAC RL case, the question remains unsettled for both the regret-minimizing RL case and the non-tabular case. In this work, building upon recent advancements in offline RL and reward-agnostic exploration, we develop computationally efficient algorithms for both PAC and regret-minimizing RL with linear function approximation, without requiring concentrability on the entire state-action space. We demonstrate that these algorithms achieve sharper error or regret bounds that are no worse than, and can improve on, the optimal sample complexity in offline RL (the first algorithm, for PAC RL) and online RL (the second algorithm, for regret-minimizing RL) in linear Markov decision processes (MDPs), regardless of the quality of the behavior policy. To our knowledge, this work establishes the tightest theoretical guarantees currently available for hybrid RL in linear MDPs.
https://openreview.net/pdf/b7cd776027da50edf1fb90be41e9a35d302b347b.pdf
Theoretical Foundations of Deep Selective State-Space Models
https://openreview.net/forum?id=3SzrqwupUx
https://openreview.net/forum?id=3SzrqwupUx
Nicola Muca Cirone,Antonio Orvieto,Benjamin Walker,Cristopher Salvi,Terry Lyons
NIPS 2024,Poster
Structured state-space models (SSMs) are gaining popularity as effective foundational architectures for sequential data, demonstrating outstanding performance across a diverse set of domains alongside desirable scalability properties. Recent developments show that if the linear recurrence powering SSMs allows for a selectivity mechanism leveraging multiplicative interactions between inputs and hidden states (e.g. Mamba, GLA, Hawk/Griffin, HGRN2), then the resulting architecture can surpass attention-powered foundation models trained on text in both accuracy and efficiency, at scales of billion parameters. In this paper, we give theoretical grounding to the selectivity mechanism, often linked to in-context learning, using tools from Rough Path Theory. We provide a framework for the theoretical analysis of generalized selective SSMs, fully characterizing their expressive power and identifying the gating mechanism as the crucial architectural choice. Our analysis provides a closed-form description of the expressive powers of modern SSMs, such as Mamba, quantifying theoretically the drastic improvement in performance from the previous generation of models, such as S4. Our theory not only motivates the success of modern selective state-space models, but also provides a solid framework to understand the expressive power of future SSM variants. In particular, it suggests cross-channel interactions could play a vital role in future improvements.
https://openreview.net/pdf/4e86fe9ae93de98a547f68ad2934a6a01ebc450e.pdf
Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm
https://openreview.net/forum?id=dxwIaCVkWU
https://openreview.net/forum?id=dxwIaCVkWU
Eli Zachary Sennesh,Hao Wu,Tommaso Salvatori
NIPS 2024,Poster
Unexpected stimuli induce "error" or "surprise" signals in the brain. The theory of predictive coding promises to explain these observations in terms of Bayesian inference by suggesting that the cortex implements variational inference in a probabilistic graphical model. However, when applied to machine learning tasks, this family of algorithms has yet to perform on par with other variational approaches in high-dimensional, structured inference problems. To address this, we introduce a novel predictive coding algorithm for structured generative models, that we call divide-and-conquer predictive coding (DCPC); it differs from other formulations of predictive coding, as it respects the correlation structure of the generative model and provably performs maximum-likelihood updates of model parameters, all without sacrificing biological plausibility. Empirically, DCPC achieves better numerical performance than competing algorithms and provides accurate inference in a number of problems not previously addressed with predictive coding. We provide an open implementation of DCPC in Pyro on Github.
https://openreview.net/pdf/c5dcbc4afe1bf94b2c8c24246234641c0fb36cdd.pdf
Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces
https://openreview.net/forum?id=VUgXAWOCQz
https://openreview.net/forum?id=VUgXAWOCQz
Angeliki Kamoutsi,Peter Schmitt-Förster,Tobias Sutter,Volkan Cevher,John Lygeros
NIPS 2024,Poster
This work studies discrete-time discounted Markov decision processes with continuous state and action spaces and addresses the inverse problem of inferring a cost function from observed optimal behavior. We first consider the case in which we have access to the entire expert policy and characterize the set of solutions to the inverse problem by using occupation measures, linear duality, and complementary slackness conditions. To avoid trivial solutions and ill-posedness, we introduce a natural linear normalization constraint. This results in an infinite-dimensional linear feasibility problem, prompting a thorough analysis of its properties. Next, we use linear function approximators and adopt a randomized approach, namely the scenario approach and related probabilistic feasibility guarantees, to derive $\varepsilon$-optimal solutions for the inverse problem. We further discuss the sample complexity for a desired approximation accuracy. Finally, we deal with the more realistic case where we only have access to a finite set of expert demonstrations and a generative model and provide bounds on the error made when working with samples.
https://openreview.net/pdf/6b739551f0cafd5d9a306eda4ee36802fb033a87.pdf
Stratified Prediction-Powered Inference for Effective Hybrid Evaluation of Language Models
https://openreview.net/forum?id=8CBcdDQFDQ
https://openreview.net/forum?id=8CBcdDQFDQ
Adam Fisch,Joshua Maynez,R. Alex Hofer,Bhuwan Dhingra,Amir Globerson,William W. Cohen
NIPS 2024,Poster
Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data. PPI achieves this by combining small amounts of human-labeled data with larger amounts of data labeled by a reasonably accurate---but potentially biased---automatic system, in a way that results in tighter confidence intervals for certain parameters of interest (e.g., the mean performance of a language model). In this paper, we propose a method called Stratified Prediction-Powered Inference (StratPPI), in which we show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies. Without making any assumptions on the underlying automatic labeling system or data distribution, we derive an algorithm for computing provably valid confidence intervals for parameters of any dimensionality that is based on stratified sampling. In particular, we show both theoretically and empirically that, with appropriate choices of stratification and sample allocation, our approach can provide substantially tighter confidence intervals than unstratified approaches. Specifically, StratPPI is expected to improve in cases where the performance of the autorater varies across different conditional distributions of the target data.
https://openreview.net/pdf/4e7db3a23f6df7ed68c466099c4a79ff0c20e3b3.pdf
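For the StratPPI entry above, a small numpy sketch of the underlying estimators: the basic prediction-powered mean estimate and its stratified combination with known stratum proportions. Confidence-interval construction and the sample-allocation analysis from the paper are omitted.

```python
import numpy as np

def ppi_mean(labeled_preds, labeled_y, unlabeled_preds):
    """Basic prediction-powered estimate of E[Y]: the autorater mean on the large
    unlabeled set plus a rectifier estimated from the small human-labeled set."""
    return unlabeled_preds.mean() + (labeled_y - labeled_preds).mean()

def stratified_ppi_mean(strata_labeled, strata_unlabeled, weights):
    """Combine per-stratum PPI estimates with known stratum proportions.

    strata_labeled: list of (preds, y) pairs, one per stratum
    strata_unlabeled: list of prediction arrays, one per stratum
    weights: population proportion of each stratum (sums to one)"""
    estimate = 0.0
    for w, (lp, ly), up in zip(weights, strata_labeled, strata_unlabeled):
        estimate += w * ppi_mean(np.asarray(lp), np.asarray(ly), np.asarray(up))
    return estimate
```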
OASIS: Conditional Distribution Shaping for Offline Safe Reinforcement Learning
https://openreview.net/forum?id=3uDEmsf3Jf
https://openreview.net/forum?id=3uDEmsf3Jf
Yihang Yao,Zhepeng Cen,Wenhao Ding,Haohong Lin,Shiqi Liu,Tingnan Zhang,Wenhao Yu,Ding Zhao
NIPS 2024,Poster
Offline safe reinforcement learning (RL) aims to train a policy that satisfies constraints using a pre-collected dataset. Most current methods struggle with the mismatch between imperfect demonstrations and the desired safe and rewarding performance. In this paper, we mitigate this issue from a data-centric perspective and introduce OASIS (cOnditionAl diStributIon Shaping), a new paradigm in offline safe RL designed to overcome these critical limitations. OASIS utilizes a conditional diffusion model to synthesize offline datasets, thus shaping the data distribution toward a beneficial target domain. Our approach ensures compliance with safety constraints through effective data utilization and regularization techniques, benefiting offline safe RL training. Comprehensive evaluations on public benchmarks and varying datasets showcase OASIS’s superiority in benefiting offline safe RL agents to achieve high-reward behavior while satisfying the safety constraints, outperforming established baselines. Furthermore, OASIS exhibits high data efficiency and robustness, making it suitable for real-world applications, particularly in tasks where safety is imperative and high-quality demonstrations are scarce. More details are available at the website https://sites.google.com/view/saferl-oasis/home.
https://openreview.net/pdf/b117426626e9be7de2fbb787c26872b3a9a39334.pdf
Density-based User Representation using Gaussian Process Regression for Multi-interest Personalized Retrieval
https://openreview.net/forum?id=Px1hQM72iX
https://openreview.net/forum?id=Px1hQM72iX
Haolun Wu,Ofer Meshi,Masrour Zoghi,Fernando Diaz,Xue Liu,Craig Boutilier,Maryam Karimzadehgan
NIPS 2024,Poster
Accurate modeling of the diverse and dynamic interests of users remains a significant challenge in the design of personalized recommender systems. Existing user modeling methods, like single-point and multi-point representations, have limitations w.r.t. accuracy, diversity, and adaptability. To overcome these deficiencies, we introduce density-based user representations (DURs), a novel method that leverages Gaussian process regression (GPR) for effective multi-interest recommendation and retrieval. Our approach, GPR4DUR, exploits DURs to capture user interest variability without manual tuning, incorporates uncertainty-awareness, and scales well to large numbers of users. Experiments using real-world offline datasets confirm the adaptability and efficiency of GPR4DUR, while online experiments with simulated users demonstrate its ability to address the exploration-exploitation trade-off by effectively utilizing model uncertainty.
https://openreview.net/pdf/6fcc2fb80f8768e72d83fd0a25391e91b6872df1.pdf
WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models
https://openreview.net/forum?id=n5R6TvBVcX
https://openreview.net/forum?id=n5R6TvBVcX
Liwei Jiang,Kavel Rao,Seungju Han,Allyson Ettinger,Faeze Brahman,Sachin Kumar,Niloofar Mireshghallah,Ximing Lu,Maarten Sap,Yejin Choi,Nouha Dziri
NIPS 2024,Poster
We introduce WildTeaming, an automatic red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics, and then composes selections of multiple mined tactics for systematic exploration of novel and even more challenging jailbreaks. Compared to prior work that performed red-teaming via recruited human workers, gradient-based optimization, or iterative revision with large language models (LLMs), our work investigates jailbreaks from chatbot users in-the-wild who were not specifically instructed to break the system. WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in more diverse and successful adversarial attacks compared to state-of-the-art jailbreaking methods. While there exist many datasets for jailbreak evaluation, very few open-source datasets exist for jailbreak training, as safety training data has been closed among all frontier models even when their weights are open. Therefore, with WildTeaming we create WildJailbreak, a large-scale open-source synthetic safety dataset with 262K vanilla (direct request) and adversarial (complex jailbreak) prompt-response pairs. In order to mitigate exaggerated safety behaviors, WildJailbreak provides two contrastive types of queries: 1) harmful queries (both vanilla and adversarial) and 2) benign queries that resemble harmful queries in form but contain no harmful intent. As WildJailbreak considerably upgrades the quality and scale of existing safety resources, it uniquely enables us to examine the scaling effects of data and the interplay of data properties and model capabilities during safety training. Through extensive model training and evaluations, we identify the training properties that enable an ideal balance of safety behaviors: appropriate safeguarding without over-refusal, effective handling of both vanilla and adversarial queries, and minimal, if any, decrease in general capabilities. All the components of WildJailbreak contribute to achieving balanced safety behaviors of models
https://openreview.net/pdf/5c0e189c5b92a109f691a752108334b171f24840.pdf
Structured flexibility in recurrent neural networks via neuromodulation
https://openreview.net/forum?id=HbIBqn3grD
https://openreview.net/forum?id=HbIBqn3grD
Julia C Costacurta,Shaunak Bhandarkar,David M. Zoltowski,Scott Linderman
NIPS 2024,Poster
A core aim in theoretical and systems neuroscience is to develop models which help us better understand biological intelligence. Such models range broadly in both complexity and biological plausibility. One widely-adopted example is task-optimized recurrent neural networks (RNNs), which have been used to generate hypotheses about how the brain’s neural dynamics may organize to accomplish tasks. However, task-optimized RNNs typically have a fixed weight matrix representing the synaptic connectivity between neurons. From decades of neuroscience research, we know that synaptic weights are constantly changing, controlled in part by chemicals such as neuromodulators. In this work we explore the computational implications of synaptic gain scaling, a form of neuromodulation, using task-optimized low-rank RNNs. In our neuromodulated RNN (NM-RNN) model, a neuromodulatory subnetwork outputs a low-dimensional neuromodulatory signal that dynamically scales the low-rank recurrent weights of an output-generating RNN. In empirical experiments, we find that the structured flexibility in the NM-RNN allows it to both train and generalize with a higher degree of accuracy than low-rank RNNs on a set of canonical tasks. Additionally, via theoretical analyses we show how neuromodulatory gain scaling endows networks with gating mechanisms commonly found in artificial RNNs. We end by analyzing the low-rank dynamics of trained NM-RNNs, to show how task computations are distributed.
https://openreview.net/pdf/7621b2faa9f3d4ded6dd91fe4f5fc5a67af2525a.pdf
SEL-BALD: Deep Bayesian Active Learning for Selective Labeling with Instance Rejection
https://openreview.net/forum?id=tDMTwto6jv
https://openreview.net/forum?id=tDMTwto6jv
Ruijiang Gao,Mingzhang Yin,Maytal Saar-Tsechansky
NIPS 2024,Poster
Machine learning systems are widely used in many high-stakes contexts in which experimental designs for assigning treatments are infeasible. When evaluating a decision instance is costly, such as investigating a fraud case or evaluating a biopsy decision, a sample-efficient strategy is needed. However, while existing active learning methods assume humans will always label the instances selected by the machine learning model, in many critical applications humans may decline to label instances selected by the model due to reasons such as regulatory constraints, domain knowledge, or algorithmic aversion, making these methods sample inefficient. In this paper, we propose the Active Learning with Instance Rejection (ALIR) problem, a new active learning problem that considers human discretion behavior in high-stakes decision-making problems. We propose new active learning algorithms under deep Bayesian active learning for selective labeling (SEL-BALD) to address the ALIR problem. Our algorithms consider how to acquire information for both the machine learning model and the human discretion model. We conduct experiments on both synthetic and real-world datasets to demonstrate the effectiveness of our proposed algorithms.
https://openreview.net/pdf/6d282e436d10a31e2f510dc93fd45a23aff5e571.pdf
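For the SEL-BALD entry above, a numpy sketch of a BALD acquisition score computed from Monte Carlo predictive samples, discounted by an estimated probability that the human will agree to label the instance; the exact acquisition used in the paper may combine these quantities differently.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy over the last axis of a probability array."""
    return -(p * np.log(p + eps)).sum(axis=-1)

def bald_scores(mc_probs):
    """mc_probs: (n_mc_samples, n_points, n_classes) predictive probabilities from
    stochastic forward passes (e.g. MC dropout). BALD is the mutual information
    between predictions and model parameters."""
    mean_p = mc_probs.mean(axis=0)
    return entropy(mean_p) - entropy(mc_probs).mean(axis=0)

def selective_acquisition(mc_probs, p_human_labels):
    """Discount each candidate's information value by the estimated probability
    that the human expert will actually agree to label it."""
    return bald_scores(mc_probs) * p_human_labels

mc = np.random.dirichlet(np.ones(3), size=(10, 5))      # 10 MC samples, 5 candidates
scores = selective_acquisition(mc, p_human_labels=np.array([0.9, 0.2, 0.8, 0.5, 0.1]))
```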
Interpolating Item and User Fairness in Multi-Sided Recommendations
https://openreview.net/forum?id=tAOg1HdvGy
https://openreview.net/forum?id=tAOg1HdvGy
Qinyi Chen,Jason Cheuk Nam Liang,Negin Golrezaei,Djallel Bouneffouf
NIPS 2024,Poster
Today's online platforms heavily lean on algorithmic recommendations for bolstering user engagement and driving revenue. However, these recommendations can impact multiple stakeholders simultaneously---the platform, items (sellers), and users (customers)---each with their unique objectives, making it difficult to find the right middle ground that accommodates all stakeholders. To address this, we introduce a novel fair recommendation framework, Problem (FAIR), that flexibly balances multi-stakeholder interests via a constrained optimization formulation. We next explore Problem (FAIR) in a dynamic online setting where data uncertainty further adds complexity, and propose a low-regret algorithm FORM that concurrently performs real-time learning and fair recommendations, two tasks that are often at odds. Via both theoretical analysis and a numerical case study on real-world data, we demonstrate the efficacy of our framework and method in maintaining platform revenue while ensuring desired levels of fairness for both items and users.
https://openreview.net/pdf/779d8fafd139b66f38faec4a1301dd5616d6e34f.pdf
Sparse High Rank Adapters
https://openreview.net/forum?id=6hY60tkiEK
https://openreview.net/forum?id=6hY60tkiEK
Kartikeya Bhardwaj,Nilesh Prasad Pandey,Sweta Priyadarshi,Viswanath Ganapathy,Shreya Kadambi,Rafael Esteves,Shubhankar Borse,Paul Whatmough,Risheek Garrepalli,Mart Van Baalen,Harris Teague,Markus Nagel
NIPS 2024,Poster
Low Rank Adaptation (LoRA) has gained massive attention in the recent generative AI research. One of the main advantages of LoRA is its ability to be fused with pretrained models, adding no overhead during inference. However, from a mobile deployment standpoint, we can either avoid inference overhead in the fused mode but lose the ability to switch adapters rapidly, or suffer significant (up to 30% higher) inference latency while enabling rapid switching in the unfused mode. LoRA also exhibits concept-loss when multiple adapters are used concurrently. In this paper, we propose Sparse High Rank Adapters (SHiRA), a new paradigm which incurs no inference overhead, enables rapid switching, and significantly reduces concept-loss. Specifically, SHiRA can be trained by directly tuning only 1-2% of the base model weights while leaving others unchanged. This results in a highly sparse adapter which can be switched directly in the fused mode. We further provide theoretical and empirical insights on how high sparsity in SHiRA can aid multi-adapter fusion by reducing concept loss. Our extensive experiments on LVMs and LLMs demonstrate that finetuning only a small fraction of the parameters in the base model significantly outperforms LoRA while enabling both rapid switching and multi-adapter fusion. Finally, we provide a latency- and memory-efficient SHiRA implementation based on Parameter-Efficient Finetuning (PEFT) Library which trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, thus making SHiRA easy to adopt for practical use cases. To demonstrate rapid switching benefits during inference, we show that loading SHiRA on a base model can be 5x-16x faster than LoRA fusion on a CPU.
https://openreview.net/pdf/8fbda02958d2d96c786fbd9463f21e8f2dabc6c3.pdf
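For the SHiRA entry above, a PyTorch sketch of the core idea: pick a sparse (1-2%) subset of base weights to train and mask all other gradients, so the adapter is a sparse delta that can be fused without inference overhead. Random mask selection and the helper names are assumptions.

```python
import torch

def make_shira_masks(model, sparsity=0.02, seed=0):
    """Pick a random ~2% subset of each weight matrix to be trainable; all other
    entries stay frozen, so the adapter is a sparse delta over the base weights."""
    g = torch.Generator().manual_seed(seed)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() >= 2:
            masks[name] = (torch.rand(p.shape, generator=g) < sparsity).float()
    return masks

def apply_masked_grads(model, masks):
    """Call after loss.backward(): zero out gradients outside the sparse mask so
    the optimizer only updates the selected entries."""
    for name, p in model.named_parameters():
        if name in masks and p.grad is not None:
            p.grad.mul_(masks[name])

model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.Linear(32, 4))
masks = make_shira_masks(model, sparsity=0.02)
loss = model(torch.randn(8, 16)).sum()
loss.backward()
apply_masked_grads(model, masks)
```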
Compact Proofs of Model Performance via Mechanistic Interpretability
https://openreview.net/forum?id=2zWbzx50mH
https://openreview.net/forum?id=2zWbzx50mH
Jason Gross,Rajashree Agrawal,Thomas Kwa,Euan Ong,Chun Hei Yip,Alex Gibson,Soufiane Noubir,Lawrence Chan
NIPS 2024,Poster
We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving accuracy lower bounds for a small transformer trained on Max-of-$K$, validating proof transferability across 151 random seeds and four values of $K$. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless errors as a key challenge for using mechanistic interpretability to generate compact proofs on model performance.
https://openreview.net/pdf/2b080dafbe4fe995df64a4516389ff273902a32c.pdf
DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models
https://openreview.net/forum?id=YxaY6tHgg0
https://openreview.net/forum?id=YxaY6tHgg0
Shangqian Gao,Chi-Heng Lin,Ting Hua,Zheng Tang,Yilin Shen,Hongxia Jin,Yen-Chang Hsu
NIPS 2024,Poster
Large Language Models (LLMs) have achieved remarkable success in various natural language processing tasks, including language modeling, understanding, and generation. However, the increased memory and computational costs associated with these models pose significant challenges for deployment on resource-limited devices. Structural pruning has emerged as a promising solution to reduce the costs of LLMs without requiring post-processing steps. Prior structural pruning methods either follow the dependence of structures at the cost of limiting flexibility, or introduce non-trivial additional parameters by incorporating different projection matrices. In this work, we propose a novel approach that relaxes the constraint imposed by regular structural pruning methods and eliminates the structural dependence along the embedding dimension. Our dimension-independent structural pruning method offers several benefits. Firstly, our method enables different blocks to utilize different subsets of the feature maps. Secondly, by removing structural dependence, we facilitate each block to possess varying widths along its input and output dimensions, thereby significantly enhancing the flexibility of structural pruning. We evaluate our method on various LLMs, including OPT, LLaMA, LLaMA-2, Phi-1.5, and Phi-2. Experimental results demonstrate that our approach outperforms other state-of-the-art methods, showing for the first time that structural pruning can achieve an accuracy similar to semi-structural pruning.
https://openreview.net/pdf/53109daab0c04edaf62237ef9cddf9b4256644bd.pdf
Learning Transferable Features for Implicit Neural Representations
https://openreview.net/forum?id=ABYdKpDb8p
https://openreview.net/forum?id=ABYdKpDb8p
Kushal Vyas,Ahmed Imtiaz Humayun,Aniket Dashpute,Richard Baraniuk,Ashok Veeraraghavan,Guha Balakrishnan
NIPS 2024,Poster
Implicit neural representations (INRs) have demonstrated success in a variety of applications, including inverse problems and neural rendering. An INR is typically trained to capture one signal of interest, resulting in learned neural features that are highly attuned to that signal. Although such features are often assumed to be less generalizable, we explore their transferability for fitting similar signals. We introduce a new INR training framework, STRAINER, that learns transferable features for fitting INRs to new signals from a given distribution, faster and with better reconstruction quality. Owing to the sequential layer-wise affine operations in an INR, we propose to learn transferable representations by sharing initial encoder layers across multiple INRs with independent decoder layers. At test time, the learned encoder representations are transferred as initialization for an otherwise randomly initialized INR. We find STRAINER to yield extremely powerful initializations for fitting images from the same domain and to allow for a ≈ +10dB gain in signal quality early on compared to an untrained INR itself. STRAINER also provides a simple way to encode data-driven priors in INRs. We evaluate STRAINER on multiple in-domain and out-of-domain signal fitting tasks and inverse problems and further provide detailed analysis and discussion on the transferability of STRAINER’s features.
https://openreview.net/pdf/7acd584d735cad977d879073a05945b253109e05.pdf
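The shared-encoder training described in the STRAINER abstract above can be sketched as follows. The sketch is an assumption-laden simplification: it uses plain ReLU MLPs rather than the sinusoidal INRs typically used in this literature, and the layer widths and the `SharedEncoderINRs` name are placeholders.

```python
# Minimal sketch of encoder sharing across INRs (assumed architecture, not the paper's).
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-1])                    # drop trailing activation

class SharedEncoderINRs(nn.Module):
    def __init__(self, n_signals, in_dim=2, hidden=256, out_dim=3):
        super().__init__()
        self.encoder = mlp([in_dim, hidden, hidden])      # shared across all training signals
        self.decoders = nn.ModuleList(
            [mlp([hidden, hidden, out_dim]) for _ in range(n_signals)]
        )

    def forward(self, coords, signal_idx):
        return self.decoders[signal_idx](torch.relu(self.encoder(coords)))

# At test time, the learned encoder weights are copied into a fresh INR as its
# initialization and only that INR is fit to the unseen signal.
```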
Randomized Strategic Facility Location with Predictions
https://openreview.net/forum?id=YvOeN0kUzT
https://openreview.net/forum?id=YvOeN0kUzT
Eric Balkanski,Vasilis Gkatzelis,Golnoosh Shahkarami
NIPS 2024,Poster
In the strategic facility location problem, a set of agents report their locations in a metric space and the goal is to use these reports to open a new facility, minimizing an aggregate distance measure from the agents to the facility. However, agents are strategic and may misreport their locations to influence the facility’s placement in their favor. The aim is to design truthful mechanisms, ensuring agents cannot gain by misreporting. This problem was recently revisited through the learning-augmented framework, aiming to move beyond worst-case analysis and design truthful mechanisms that are augmented with (machine-learned) predictions. The focus of this work was on mechanisms that are deterministic and augmented with a prediction regarding the optimal facility location. In this paper, we provide a deeper understanding of this problem by exploring the power of randomization as well as the impact of different types of predictions on the performance of truthful learning-augmented mechanisms. We study both the single-dimensional and the Euclidean case and provide upper and lower bounds regarding the achievable approximation of the optimal egalitarian social cost.
https://openreview.net/pdf/10d8ade064748613e7375bb2d18cfa8a8826262c.pdf
How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks
https://openreview.net/forum?id=XYw051ZmUn
https://openreview.net/forum?id=XYw051ZmUn
Mo Zhou,Rong Ge
NIPS 2024,Poster
The ability to learn useful features is one of the major advantages of neural networks. Although recent works show that neural networks can operate in a neural tangent kernel (NTK) regime that does not allow feature learning, many works also demonstrate the potential for neural networks to go beyond the NTK regime and perform feature learning. Recently, a line of work highlighted the feature learning capabilities of the early stages of gradient-based training. In this paper, we consider another mechanism for feature learning via gradient descent through a local convergence analysis. We show that once the loss is below a certain threshold, gradient descent with a carefully regularized objective will capture ground-truth directions. We further strengthen this local convergence analysis by incorporating early-stage feature learning analysis. Our results demonstrate that feature learning not only happens at the initial gradient steps, but can also occur towards the end of training.
https://openreview.net/pdf/0f1e81a2939ab48ebc34170b7937ba6f4308236b.pdf
Measuring Dejavu Memorization Efficiently
https://openreview.net/forum?id=v8RRFNbJ43
https://openreview.net/forum?id=v8RRFNbJ43
Narine Kokhlikyan,Bargav Jayaraman,Florian Bordes,Chuan Guo,Kamalika Chaudhuri
NIPS 2024,Poster
Recent research has shown that representation learning models may accidentally memorize their training data. For example, the déjà vu method shows that for certain representation learning models and training images, it is sometimes possible to correctly predict the foreground label given only the representation of the background – better than through dataset-level correlations. However, this measurement method requires training two models – one to estimate dataset-level correlations and the other to estimate memorization. This multiple-model setup becomes infeasible for large open-source models. In this work, we propose alternative simple methods to estimate dataset-level correlations, and show that these can be used to approximate an off-the-shelf model’s memorization ability without any retraining. This enables, for the first time, the measurement of memorization in pre-trained open-source image representation and vision-language models. Our results show that different ways of measuring memorization yield very similar aggregate results. We also find that open-source models typically have lower aggregate memorization than similar models trained on a subset of the data. The code is available both for vision (https://github.com/facebookresearch/DejaVuOSS) and vision language (https://github.com/facebookresearch/VLMDejaVu) models.
https://openreview.net/pdf/6f697238a026167fa803f7aeaffa5b79df2b1057.pdf
A Topology-aware Graph Coarsening Framework for Continual Graph Learning
https://openreview.net/forum?id=VpINEEVLX0
https://openreview.net/forum?id=VpINEEVLX0
Xiaoxue Han,Zhuo Feng,Yue Ning
NIPS 2024,Poster
Graph Neural Networks (GNNs) experience "catastrophic forgetting" in continual learning setups, where they tend to lose previously acquired knowledge and perform poorly on old tasks. Rehearsal-based methods, which consolidate old knowledge with a replay memory buffer, are a de facto solution due to their straightforward workflow. However, these methods often fail to adequately capture topological information, leading to incorrect input-label mappings in replay samples. To address this, we propose TACO, a topology-aware graph coarsening and continual learning framework that stores information from previous tasks as a reduced graph. Throughout each learning period, this reduced graph expands by integrating with a new graph and aligning shared nodes, followed by a "zoom-out" reduction process to maintain a stable size. We have developed a graph coarsening algorithm based on node representation proximities to efficiently reduce a graph while preserving essential topological information. We empirically demonstrate that the learning process on the reduced graph can closely approximate that on the original graph. We compare TACO with a wide range of state-of-the-art baselines, proving its superiority and the necessity of preserving high-quality topological information for effective replaying.
https://openreview.net/pdf/406408d7839e9d5c643715d8429ea93609e08c84.pdf
Score-based 3D molecule generation with neural fields
https://openreview.net/forum?id=9lGJrkqJUw
https://openreview.net/forum?id=9lGJrkqJUw
Matthieu Kirchmeyer,Pedro O. Pinheiro,Saeed Saremi
NIPS 2024,Poster
We introduce a new functional representation for 3D molecules based on their continuous atomic density fields. Using this representation, we propose a new model based on neural empirical Bayes for unconditional 3D molecule generation in the continuous space using neural fields. Our model, FuncMol, encodes molecular fields into latent codes using a conditional neural field, samples noisy codes from a Gaussian-smoothed distribution with Langevin MCMC, denoises these samples in a single step and finally decodes them into molecular fields. FuncMol performs all-atom generation of 3D molecules without assumptions on the molecular structure and scales well with the size of molecules, unlike most existing approaches. Our method achieves competitive results on drug-like molecules and easily scales to macro-cyclic peptides, with at least one order of magnitude faster sampling. The code is available at https://github.com/prescient-design/funcmol.
https://openreview.net/pdf/a064e6a79267ef29c1fc0fc85ce268979434c99a.pdf
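The sampling loop described in the FuncMol abstract above (Langevin MCMC on Gaussian-smoothed latent codes followed by a single denoising step) can be sketched in the neural-empirical-Bayes style. Everything here is an assumed simplification: the score network, the noise level, and the step schedule are placeholders, and the decoding of latent codes into molecular fields is omitted.

```python
# "Walk-jump" style sampling sketch (assumed form; not the FuncMol reference code).
import torch

def walk_jump_sample(score_net, dim, sigma, n_steps=500, step_size=1e-3):
    y = sigma * torch.randn(dim)                       # start from the smoothed prior
    for _ in range(n_steps):                           # "walk": Langevin on the smoothed density
        noise = torch.randn_like(y)
        y = y + 0.5 * step_size * score_net(y) + (step_size ** 0.5) * noise
    x_hat = y + sigma ** 2 * score_net(y)              # "jump": single-step denoising (Tweedie-style)
    return x_hat                                       # clean latent code, decoded to a field downstream
```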
Hybrid Generative AI for De Novo Design of Co-Crystals with Enhanced Tabletability
https://openreview.net/forum?id=G4vFNmraxj
https://openreview.net/forum?id=G4vFNmraxj
Nina Gubina,Andrei Dmitrenko,Gleb Vitalevich Solovev,Lyubov Yamshchikova,Oleg Petrov,Ivan Lebedev,Nikita Serov,Grigorii Kirgizov,Nikolay Nikitin,Vladimir Vinogradov
NIPS 2024,Poster
Co-crystallization is an accessible way to control physicochemical characteristics of organic crystals, which finds many biomedical applications. In this work, we present Generative Method for Co-crystal Design (GEMCODE), a novel pipeline for automated co-crystal screening based on the hybridization of deep generative models and evolutionary optimization for broader exploration of the target chemical space. GEMCODE enables fast *de novo* co-crystal design with target tabletability profiles, which is crucial for the development of pharmaceuticals. With a series of experimental studies highlighting validation and discovery cases, we show that GEMCODE is effective even under realistic computational constraints. Furthermore, we explore the potential of language models in generating co-crystals. Finally, we present numerous previously unknown co-crystals predicted by GEMCODE and discuss its potential in accelerating drug development.
https://openreview.net/pdf/a22aecaac8f6647154414ad4d6d6530c86631f90.pdf
Efficient and Private Marginal Reconstruction with Local Non-Negativity
https://openreview.net/forum?id=lKnl4CLhhS
https://openreview.net/forum?id=lKnl4CLhhS
Brett Mullins,Miguel Fuentes,Yingtai Xiao,Daniel Kifer,Cameron N Musco,Daniel Sheldon
NIPS 2024,Poster
Differential privacy is the dominant standard for formal and quantifiable privacy and has been used in major deployments that impact millions of people. Many differentially private algorithms for query release and synthetic data contain steps that reconstruct answers to queries from answers to other queries that have been measured privately. Reconstruction is an important subproblem for such mechanisms to economize the privacy budget, minimize error on reconstructed answers, and allow for scalability to high-dimensional datasets. In this paper, we introduce a principled and efficient postprocessing method ReM (Residuals-to-Marginals) for reconstructing answers to marginal queries. Our method builds on recent work on efficient mechanisms for marginal query release, based on making measurements using a residual query basis that admits efficient pseudoinversion, which is an important primitive used in reconstruction. An extension GReM-LNN (Gaussian Residuals-to-Marginals with Local Non-negativity) reconstructs marginals under Gaussian noise satisfying consistency and non-negativity, which often reduces error on reconstructed answers. We demonstrate the utility of ReM and GReM-LNN by applying them to improve existing private query answering mechanisms.
https://openreview.net/pdf/74ef2a254d1aef2663edcdb2e0ac71b90a95897e.pdf
Achieving Constant Regret in Linear Markov Decision Processes
https://openreview.net/forum?id=02r24A8doi
https://openreview.net/forum?id=02r24A8doi
Weitong Zhang,Zhiyuan Fan,Jiafan He,Quanquan Gu
NIPS 2024,Poster
We study the constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs) where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level $\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for a linear MDP characterized by a minimal suboptimality gap $\Delta$, Cert-LSVI-UCB has a cumulative regret of $\tilde{\mathcal{O}}(d^3H^5/\Delta)$ with high probability, provided that the misspecification level $\zeta$ is below $\tilde{\mathcal{O}}(\Delta / (\sqrt{d}H^2))$. Here $d$ is the dimension of the feature space and $H$ is the horizon. Remarkably, this regret bound is independent of the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation without relying on prior distribution assumptions.
https://openreview.net/pdf/c4416b40b8b47e9d8fa8155df573d6a2c68b8f6e.pdf
Gaussian Process Bandits for Top-k Recommendations
https://openreview.net/forum?id=50nEnmVLRb
https://openreview.net/forum?id=50nEnmVLRb
Mohit Yadav,Cameron N Musco,Daniel Sheldon
NIPS 2024,Poster
Algorithms that utilize bandit feedback to optimize top-k recommendations are vital for online marketplaces, search engines, and content platforms. However, the combinatorial nature of this problem poses a significant challenge, as the possible number of ordered top-k recommendations from $n$ items grows exponentially with $k$. As a result, previous work often relies on restrictive assumptions about the reward or bandit feedback models, such as assuming that the feedback discloses rewards for each recommended item rather than a single scalar feedback for the entire set of top-k recommendations. We introduce a novel contextual bandit algorithm for top-k recommendations, leveraging a Gaussian process with a Kendall kernel to model the reward function. Our algorithm requires only scalar feedback from the top-k recommendations and does not impose restrictive assumptions on the reward structure. Theoretical analysis confirms that the proposed algorithm achieves sub-linear regret in relation to the number of rounds and arms. Additionally, empirical results using a bandit simulator demonstrate that the proposed algorithm outperforms other baselines across various scenarios.
https://openreview.net/pdf/94eed9db04418322ec15845e44a60d94cafb69a4.pdf
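The Kendall-kernel idea in the abstract above can be illustrated with a small sketch. The tie-handling convention (unlisted items share rank k+1) and the use of SciPy's tau-b are assumptions for illustration; the paper's exact kernel construction for top-k recommendations may differ, and a positive-semidefinite correction may be needed before using the values as a GP covariance.

```python
# Illustrative Kendall-tau similarity between two ordered top-k lists over a fixed item universe.
import numpy as np
from scipy.stats import kendalltau

def topk_rank_vector(topk_list, n_items):
    ranks = np.full(n_items, len(topk_list) + 1, dtype=float)  # unlisted items tie at k+1
    for pos, item in enumerate(topk_list):
        ranks[item] = pos + 1
    return ranks

def kendall_similarity(list_a, list_b, n_items):
    tau, _ = kendalltau(topk_rank_vector(list_a, n_items),
                        topk_rank_vector(list_b, n_items))
    return tau                                                 # value in [-1, 1]

# Example: similarity of two top-3 lists over a catalogue of 10 items.
print(kendall_similarity([2, 5, 7], [5, 2, 9], n_items=10))
```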
Mixture of Link Predictors on Graphs
https://openreview.net/forum?id=X3oeoyJlMw
https://openreview.net/forum?id=X3oeoyJlMw
Li Ma,Haoyu Han,Juanhui Li,Harry Shomer,Hui Liu,Xiaofeng Gao,Jiliang Tang
NIPS 2024,Poster
Link prediction, which aims to forecast unseen connections in graphs, is a fundamental task in graph machine learning. Heuristic methods, leveraging a range of different pairwise measures such as common neighbors and shortest paths, often rival the performance of vanilla Graph Neural Networks (GNNs). Therefore, recent advancements in GNNs for link prediction (GNN4LP) have primarily focused on integrating one or a few types of pairwise information. In this work, we reveal that different node pairs within the same dataset necessitate varied pairwise information for accurate prediction, and that models applying the same pairwise information uniformly could achieve suboptimal performance. As a result, we propose a simple mixture-of-experts model, Link-MoE, for link prediction. Link-MoE utilizes various GNNs as experts and strategically selects the appropriate expert for each node pair based on various types of pairwise information. Experimental results across diverse real-world datasets demonstrate substantial performance improvement from Link-MoE. Notably, Link-MoE achieves a relative improvement of 18.71% on the MRR metric for the Pubmed dataset and 9.59% on the Hits@100 metric for the ogbl-ppa dataset, compared to the best baselines. The code is available at https://github.com/ml-ml/Link-MoE/.
https://openreview.net/pdf/d56be3eb2aef98f8ecc17ea9a679ec017299efbc.pdf
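The gating mechanism described in the Link-MoE abstract above can be sketched as a small mixture-of-experts wrapper. The gate architecture, the choice of pairwise features, and the soft (rather than hard) expert selection are assumptions made for illustration.

```python
# Illustrative mixture-of-experts over pre-trained GNN link predictors (assumed structure).
import torch
import torch.nn as nn

class LinkMoE(nn.Module):
    def __init__(self, experts, n_pairwise_feats):
        super().__init__()
        self.experts = nn.ModuleList(experts)            # frozen GNN link predictors
        self.gate = nn.Sequential(
            nn.Linear(n_pairwise_feats, 64), nn.ReLU(),
            nn.Linear(64, len(experts)), nn.Softmax(dim=-1),
        )

    def forward(self, pairwise_feats, expert_inputs):
        # pairwise_feats: per-pair heuristics such as common neighbors, shortest-path distance.
        weights = self.gate(pairwise_feats)                                      # [batch, n_experts]
        preds = torch.stack([e(*expert_inputs) for e in self.experts], dim=-1)   # [batch, n_experts]
        return (weights * preds).sum(dim=-1)                                     # mixed link score
```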
SmallToLarge (S2L): Scalable Data Selection for Fine-tuning Large Language Models by Summarizing Training Trajectories of Small Models
https://openreview.net/forum?id=K9IGlMQpif
https://openreview.net/forum?id=K9IGlMQpif
Yu Yang,Siddhartha Mishra,Jeffrey N Chiang,Baharan Mirzasoleiman
NIPS 2024,Poster
Despite the effectiveness of data selection for pretraining and instruction fine-tuning large language models (LLMs), improving data efficiency in supervised fine-tuning (SFT) for specialized domains poses significant challenges due to the complexity of fine-tuning data. To bridge this gap, we introduce an effective and scalable data selection method for SFT, SmallToLarge (S2L), which trains a small model, clusters loss trajectories of the examples, and samples from these clusters to guide data selection for larger models. We prove that during fine-tuning, samples within the same loss trajectory cluster exhibit similar gradients. Then, we show that S2L subsets have a bounded gradient error w.r.t. the full data, hence guaranteeing convergence to the neighborhood of the optimal solution. We demonstrate through extensive experiments that S2L significantly improves data efficiency in SFT for mathematical problem-solving, reducing the training data requirement to just $11$% of the original MathInstruct dataset to match full dataset performance while outperforming state-of-the-art data selection algorithms by an average of $4.7$% across $6$ in- and out-domain evaluation datasets. Remarkably, selecting only 50K examples for SFT, S2L achieves a $32.7$% accuracy on the challenging MATH benchmark, improving Phi-2 by $16.6$%. In clinical text summarization on the MIMIC-III dataset, S2L again outperforms training on the full dataset using only $50$% of the data. Notably, S2L can perform scalable data selection using a reference model $100\times$ smaller than the target model, proportionally reducing the computational cost.
https://openreview.net/pdf/0a2da6d64cd7f9e62b8aa4f9c56311ab881fcbfa.pdf
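The selection step described in the S2L abstract above can be sketched directly: record per-example loss trajectories from a small reference model, cluster them, and draw a balanced subset across clusters. The number of clusters, the use of k-means, and the equal-per-cluster sampling rule are assumptions for illustration.

```python
# Illustrative trajectory-clustering selection (assumed details; not the authors' code).
import numpy as np
from sklearn.cluster import KMeans

def s2l_select(loss_trajectories, budget, n_clusters=100, seed=0):
    # loss_trajectories: [n_examples, n_checkpoints] losses recorded from the small model.
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(loss_trajectories)
    per_cluster = budget // n_clusters
    selected = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        take = min(per_cluster, len(idx))
        selected.extend(rng.choice(idx, size=take, replace=False))
    return np.array(selected)   # indices of examples to use for fine-tuning the large model
```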
DeltaDEQ: Exploiting Heterogeneous Convergence for Accelerating Deep Equilibrium Iterations
https://openreview.net/forum?id=7qBkADV4zD
https://openreview.net/forum?id=7qBkADV4zD
Zuowen Wang,Longbiao Cheng,Pehuen Moure,Niklas Hahn,Shih-Chii Liu
NIPS 2024,Poster
Implicit neural networks, including deep equilibrium models, have achieved superior task performance with better parameter efficiency in various applications. However, this often comes at the expense of higher computation costs during inference. In this work, we identify a phenomenon named $\textbf{heterogeneous convergence}$ that exists in deep equilibrium models and other iterative methods. We observe much faster convergence of state activations in certain dimensions, indicating that the dimensionality of the underlying dynamics of the forward pass is much lower than the defined dimension of the states. We thereby propose to exploit heterogeneous convergence by storing past linear operation results (e.g., fully connected and convolutional layers) and only propagating the state activation when its change exceeds a threshold. Thus, for the already converged dimensions, the computations can be skipped. We verify our findings and achieve an 84\% FLOPs reduction on the implicit neural representation task, 73\% on the Sintel and 76\% on the KITTI datasets for the optical flow estimation task, while keeping task accuracy comparable to models that perform the full update.
https://openreview.net/pdf/3557279fbfb0846e372d74d08c4eae97d63126db.pdf
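The delta-propagation idea in the DeltaDEQ abstract above can be sketched for a single linear operation: cache the previous output and only recompute the contribution of state dimensions whose change exceeds a threshold. The interface and the threshold value are assumptions; a real implementation would apply this inside the fixed-point iteration of a deep equilibrium model.

```python
# Illustrative delta update for one linear layer inside an equilibrium iteration (assumed interface).
import torch

def delta_linear(W, z_new, z_prev, cached_out, threshold=1e-3):
    delta = z_new - z_prev
    active = delta.abs() > threshold                 # dimensions that actually moved this iteration
    # Only columns of W for active dimensions contribute; converged dimensions are skipped,
    # which is where the FLOPs savings come from.
    out = cached_out + W[:, active] @ delta[active]
    committed = torch.where(active, z_new, z_prev)   # state against which the next delta is measured
    return out, committed
```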
Retrieval & Fine-Tuning for In-Context Tabular Models
https://openreview.net/forum?id=337dHOexCM
https://openreview.net/forum?id=337dHOexCM
Valentin Thomas,Junwei Ma,Rasa Hosseinzadeh,Keyvan Golestan,Guangwei Yu,Maksims Volkovs,Anthony L. Caterini
NIPS 2024,Poster
Tabular data is a pervasive modality spanning a wide range of domains, and this inherent diversity poses a considerable challenge for deep learning. Recent advancements using transformer-based in-context learning have shown promise on smaller and less complex tabular datasets, but have struggled to scale to larger and more complex ones. To address this limitation, we propose a combination of retrieval and fine-tuning: we can adapt the transformer to a local subset of the data by collecting nearest neighbours, and then perform task-specific fine-tuning with this retrieved set of neighbours in context. Using TabPFN as the base model -- currently the best tabular in-context learner -- and applying our retrieval and fine-tuning scheme on top results in what we call a locally-calibrated PFN, or LoCalPFN. We conduct extensive evaluation on 95 datasets curated by TabZilla from OpenML, upon which we establish a new state-of-the-art with LoCalPFN -- even with respect to tuned tree-based models. Notably, we show a significant boost in performance compared to the base in-context model, demonstrating the efficacy of our approach and advancing the frontier of deep learning in tabular data.
https://openreview.net/pdf/3da8933f3aa37b7d79634b4c7b1c46ece4ec364a.pdf
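The retrieval half of the scheme described in the LoCalPFN abstract above can be sketched with a standard nearest-neighbour index: each query point gets a local context of training neighbours that is handed to the in-context learner (and, per the abstract, also used for task-specific fine-tuning). The neighbour count and the Euclidean metric are assumptions.

```python
# Illustrative local-context retrieval for an in-context tabular model (assumed details).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_local_contexts(X_train, y_train, X_query, k=100):
    nn_index = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn_index.kneighbors(X_query)                 # [n_query, k] neighbour indices
    # One (features, labels) support set per query row; each is passed to the
    # transformer as its in-context examples.
    return [(X_train[i], y_train[i]) for i in idx]
```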
Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index
https://openreview.net/forum?id=Ouc1F0Sfb7
https://openreview.net/forum?id=Ouc1F0Sfb7
Qian Xie,Raul Astudillo,Peter I. Frazier,Ziv Scully,Alexander Terenin
NIPS 2024,Poster
Bayesian optimization is a technique for efficiently optimizing unknown functions in a black-box manner. To handle practical settings where gathering data requires use of finite resources, it is desirable to explicitly incorporate function evaluation costs into Bayesian optimization policies. To understand how to do so, we develop a previously-unexplored connection between cost-aware Bayesian optimization and the Pandora's Box problem, a decision problem from economics. The Pandora's Box problem admits a Bayesian-optimal solution based on an expression called the Gittins index, which can be reinterpreted as an acquisition function. We study the use of this acquisition function for cost-aware Bayesian optimization, and demonstrate empirically that it performs well, particularly in medium-high dimensions. We further show that this performance carries over to classical Bayesian optimization without explicit evaluation costs. Our work constitutes a first step towards integrating techniques from Gittins index theory into Bayesian optimization.
https://openreview.net/pdf/6fbb510ffd1f4ed16480310bffe739f433d2595b.pdf
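The Gittins-index acquisition described in the abstract above admits a compact sketch for a Gaussian posterior: the index alpha(x) is the value at which the expected improvement over alpha equals the evaluation cost, and the candidate with the largest index is queried next. The closed-form expected improvement and the root-finding bracket below follow standard Gaussian identities; treating this as the paper's exact acquisition is an assumption.

```python
# Illustrative Pandora's Box / Gittins-index acquisition for a GP posterior (maximization convention).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def gittins_index(mu, sigma, cost):
    def expected_improvement(alpha):
        z = (mu - alpha) / sigma
        return sigma * norm.pdf(z) + (mu - alpha) * norm.cdf(z)
    # EI is decreasing in alpha: large at the lower bracket, ~0 at the upper bracket.
    lo, hi = mu - cost - 10 * sigma, mu + 10 * sigma
    return brentq(lambda a: expected_improvement(a) - cost, lo, hi)

# Example: posterior N(0.5, 0.2^2) at a candidate point with evaluation cost 0.05.
print(gittins_index(mu=0.5, sigma=0.2, cost=0.05))
```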
Online Budgeted Matching with General Bids
https://openreview.net/forum?id=Vtxy8wFpTj
https://openreview.net/forum?id=Vtxy8wFpTj
Jianyi Yang,Pengfei Li,Adam Wierman,Shaolei Ren
NIPS 2024,Poster
Online Budgeted Matching (OBM) is a classic problem with important applications in online advertising, online service matching, revenue management, and beyond. Traditional online algorithms typically assume a small bid setting, where the maximum bid-to-budget ratio ($\kappa$) is infinitesimally small. While recent algorithms have tried to address scenarios with non-small or general bids, they often rely on the Fractional Last Matching (FLM) assumption, which allows for accepting partial bids when the remaining budget is insufficient. This assumption, however, does not hold for many applications with indivisible bids. In this paper, we remove the FLM assumption and tackle the open problem of OBM with general bids. We first establish an upper bound of $1-\kappa$ on the competitive ratio for any deterministic online algorithm. We then propose a novel meta algorithm, called MetaAd, which reduces to different algorithms with the first known provable competitive ratios parameterized by the maximum bid-to-budget ratio $\kappa\in [0,1]$. As a by-product, we extend MetaAd to the FLM setting and obtain provably competitive algorithms. Finally, we apply our competitive analysis to the design of learning-augmented algorithms.
https://openreview.net/pdf/ebe77c6c808116602dfd7ba418a75985e1fd02c9.pdf
Risk-Averse Fine-tuning of Large Language Models
https://openreview.net/forum?id=1BZKqZphsW
https://openreview.net/forum?id=1BZKqZphsW
Sapana Chaudhary,Ujwal Dinesha,Dileep Kalathil,Srinivas Shakkottai
NIPS 2024,Poster
We consider the challenge of mitigating the generation of negative or toxic content by the Large Language Models (LLMs) in response to certain prompts. We propose integrating risk-averse principles into LLM fine-tuning to minimize the occurrence of harmful outputs, particularly rare but significant events. By optimizing the risk measure of Conditional Value at Risk (CVaR), our methodology trains LLMs to exhibit superior performance in avoiding toxic outputs while maintaining effectiveness in generative tasks. Empirical evaluations on sentiment modification and toxicity mitigation tasks demonstrate the efficacy of risk-averse reinforcement learning with human feedback (RLHF) in promoting a safer and more constructive online discourse environment.
https://openreview.net/pdf/8af24d9f39f3b0c337121b49bcb9650bd874a225.pdf
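The risk measure named in the abstract above, Conditional Value at Risk, has a simple empirical estimator: the mean of the worst alpha-fraction of rewards. The sketch below shows that estimator in isolation; how it is plugged into the RLHF policy-gradient update is an assumption left out here.

```python
# Empirical CVaR of a batch of rewards (illustrative; not the paper's training loop).
import numpy as np

def empirical_cvar(rewards, alpha=0.05):
    rewards = np.sort(np.asarray(rewards, dtype=float))
    k = max(1, int(np.ceil(alpha * len(rewards))))
    return rewards[:k].mean()          # mean of the worst alpha-fraction of outcomes

# Maximizing this quantity (rather than the mean reward) pushes probability mass
# away from rare low-reward, e.g. toxic, generations.
batch_rewards = np.random.randn(1024)
print(empirical_cvar(batch_rewards, alpha=0.05))
```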
RankRAG: Unifying Context Ranking with Retrieval-Augmented Generation in LLMs
https://openreview.net/forum?id=S1fc92uemC
https://openreview.net/forum?id=S1fc92uemC
Yue Yu,Wei Ping,Zihan Liu,Boxin Wang,Jiaxuan You,Chao Zhang,Mohammad Shoeybi,Bryan Catanzaro
NIPS 2024,Poster
Large language models (LLMs) typically utilize the top-k contexts from a retriever in retrieval-augmented generation (RAG). In this work, we propose a novel method called RankRAG, which instruction-tunes a single LLM for both context ranking and answer generation in RAG. In particular, the instruction-tuned LLMs work surprisingly well by adding a small fraction of ranking data into the training blend, and outperform existing expert ranking models, including the same LLM exclusively fine-tuned on a large amount of ranking data. For generation, we compare our model with many strong baselines, including ChatQA-1.5, an open-sourced model with the state-of-the-art performance on RAG benchmarks. Specifically, our Llama3-RankRAG-8B and Llama3-RankRAG-70B significantly outperform Llama3-ChatQA-1.5-8B and Llama3-ChatQA-1.5-70B, respectively, on nine general knowledge-intensive benchmarks for RAG. In addition, it also performs comparably to GPT-4 on five RAG benchmarks in the biomedical domain without instruction fine-tuning on biomedical data, demonstrating its superb capability for generalization to new domains.
https://openreview.net/pdf/e799910ea1c9e2dfb86d87d93e60724fc05e0aab.pdf
ARC: A Generalist Graph Anomaly Detector with In-Context Learning
https://openreview.net/forum?id=IdIVfzjPK4
https://openreview.net/forum?id=IdIVfzjPK4
Yixin Liu,Shiyuan Li,Yu Zheng,Qingfeng Chen,Chengqi Zhang,Shirui Pan
NIPS 2024,Poster
Graph anomaly detection (GAD), which aims to identify abnormal nodes that differ from the majority within a graph, has garnered significant attention. However, current GAD methods necessitate training specific to each dataset, resulting in high training costs, substantial data requirements, and limited generalizability when applied to new datasets and domains. To address these limitations, this paper proposes ARC, a generalist GAD approach that enables a ``one-for-all'' GAD model to detect anomalies across various graph datasets on-the-fly. Equipped with in-context learning, ARC can directly extract dataset-specific patterns from the target dataset using few-shot normal samples at the inference stage, without the need for retraining or fine-tuning on the target dataset. ARC comprises three components that are well-crafted for capturing universal graph anomaly patterns: 1) a smoothness-based feature **A**lignment module that unifies the features of different datasets into a common and anomaly-sensitive space; 2) an ego-neighbor **R**esidual graph encoder that learns abnormality-related node embeddings; and 3) a cross-attentive in-**C**ontext anomaly scoring module that predicts node abnormality by leveraging few-shot normal samples. Extensive experiments on multiple benchmark datasets from various domains demonstrate the superior anomaly detection performance, efficiency, and generalizability of ARC.
https://openreview.net/pdf/5901f52b70dd880c8fef76934a6deb110ec30d9a.pdf
Active learning of neural population dynamics using two-photon holographic optogenetics
https://openreview.net/forum?id=nLQeE8QGGe
https://openreview.net/forum?id=nLQeE8QGGe
Andrew Wagenmaker,Lu Mi,Marton Rozsa,Matthew Storm Bull,Karel Svoboda,Kayvon Daie,Matthew D. Golub,Kevin Jamieson
NIPS 2024,Poster
Recent advances in techniques for monitoring and perturbing neural populations have greatly enhanced our ability to study circuits in the brain. In particular, two-photon holographic optogenetics now enables precise photostimulation of experimenter-specified groups of individual neurons, while simultaneous two-photon calcium imaging enables the measurement of ongoing and induced activity across the neural population. Despite the enormous space of potential photostimulation patterns and the time-consuming nature of photostimulation experiments, very little algorithmic work has been done to determine the most effective photostimulation patterns for identifying the neural population dynamics. Here, we develop methods to efficiently select which neurons to stimulate such that the resulting neural responses will best inform a dynamical model of the neural population activity. Using neural population responses to photostimulation in mouse motor cortex, we demonstrate the efficacy of a low-rank linear dynamical systems model, and develop an active learning procedure which takes advantage of low-rank structure to determine informative photostimulation patterns. We demonstrate our approach on both real and synthetic data, obtaining in some cases as much as a two-fold reduction in the amount of data required to reach a given predictive power. Our active stimulation design method is based on a novel active learning procedure for low-rank regression, which may be of independent interest.
https://openreview.net/pdf/f65e32e082781fef08ab450af2a506bb55487173.pdf
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
https://openreview.net/forum?id=6LVxO1C819
https://openreview.net/forum?id=6LVxO1C819
Momin Ahmad Khan,Yasra Chandio,Fatima M. Anwar
NIPS 2024,Poster
Data heterogeneity among Federated Learning (FL) users poses a significant challenge, resulting in reduced global model performance. The community has designed various techniques to tackle this issue, among which Knowledge Distillation (KD)-based techniques are common. While these techniques effectively improve performance under high heterogeneity, they inadvertently cause higher accuracy degradation under model poisoning attacks (known as \emph{attack amplification}). This paper presents a case study to reveal this critical vulnerability in KD-based FL systems. We show why KD causes this issue through empirical evidence and use it as motivation to design a hybrid distillation technique. We introduce a novel algorithm, Hybrid Knowledge Distillation for Robust and Accurate FL (HYDRA-FL), which reduces the impact of attacks in attack scenarios by offloading some of the KD loss to a shallow layer via an auxiliary classifier. We model HYDRA-FL as a generic framework and adapt it to two KD-based FL algorithms, FedNTD and MOON. Using these two as case studies, we demonstrate that our technique outperforms baselines in attack settings while maintaining comparable performance in benign settings.
https://openreview.net/pdf/cbd50dc60faa8cd6883e9b52f9d043e35ba178dc.pdf
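The hybrid distillation loss described in the HYDRA-FL abstract above can be sketched as a weighted sum of cross-entropy, output-level KD, and KD applied at a shallow auxiliary classifier. The weights, temperature, and the assumption that both KD terms distill toward the same teacher are illustrative choices, not the paper's exact formulation.

```python
# Illustrative hybrid KD loss with part of the distillation offloaded to a shallow auxiliary head.
import torch.nn.functional as F

def hydra_loss(student_logits, shallow_aux_logits, teacher_logits, labels,
               lam_deep=0.5, lam_shallow=0.5, T=2.0):
    ce = F.cross_entropy(student_logits, labels)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    kd_deep = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                       soft_teacher, reduction="batchmean") * T * T
    kd_shallow = F.kl_div(F.log_softmax(shallow_aux_logits / T, dim=-1),
                          soft_teacher, reduction="batchmean") * T * T
    # Splitting the KD weight between the deep and shallow terms is what the abstract
    # describes as dampening attack amplification under model poisoning.
    return ce + lam_deep * kd_deep + lam_shallow * kd_shallow
```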
Clustering with Non-adaptive Subset Queries
https://openreview.net/forum?id=lgtsXxk4dF
https://openreview.net/forum?id=lgtsXxk4dF
Hadley Black,Euiwoong Lee,Arya Mazumdar,Barna Saha
NIPS 2024,Poster
Recovering the underlying clustering of a set $U$ of $n$ points by asking pair-wise same-cluster queries has garnered significant interest in the last decade. Given a query $S \subset U$, $|S|=2$, the oracle returns "yes" if the points are in the same cluster and "no" otherwise. We study a natural generalization of this problem to subset queries for $|S|>2$, where the oracle returns the number of clusters intersecting $S$. Our aim is to determine the minimum number of queries needed for exactly recovering an arbitrary $k$-clustering. We focus on non-adaptive schemes, where all the queries are asked in one round, thus allowing for the querying process to be parallelized, which is a highly desirable property. For adaptive algorithms with pair-wise queries, the complexity is known to be $\Theta(nk)$, where $k$ is the number of clusters. In contrast, non-adaptive pair-wise query algorithms are extremely limited: even for $k=3$, such algorithms require $\Omega(n^2)$ queries, which matches the trivial $O(n^2)$ upper bound attained by querying every pair of points. Allowing for subset queries of unbounded size, $O(n)$ queries is possible with an adaptive scheme. However, the realm of non-adaptive algorithms remains completely unknown. Is it possible to attain algorithms that are non-adaptive while still making a near-linear number of queries? In this paper, we give the first non-adaptive algorithms for clustering with subset queries. We provide, (i) a non-adaptive algorithm making $O(n \log^2 n \log k)$ queries which improves to $O(n \log k)$ when the cluster sizes are within any constant factor of each other, (ii) for constant $k$, a non-adaptive algorithm making $O(n \log{\log{n}})$ queries. In addition to non-adaptivity, we take into account other practical considerations, such as enforcing a bound on query size. For constant $k$, we give an algorithm making $\smash{\widetilde{O}(n^2/s^2)}$ queries on subsets of size at most $s \leq \sqrt{n}$, which is optimal among all non-adaptive algorithms within a $\log n$-factor. For arbitrary $k$, the dependence varies as $\tilde{O}(n^2/s)$.
https://openreview.net/pdf/61ca56abf9e2fcb4a96d5c3908c1d3617a81cb55.pdf
FIDE: Frequency-Inflated Conditional Diffusion Model for Extreme-Aware Time Series Generation
https://openreview.net/forum?id=5HQhYiGnYb
https://openreview.net/forum?id=5HQhYiGnYb
Asadullah Hill Galib,Pang-Ning Tan,Lifeng Luo
NIPS 2024,Poster
Time series generation is a crucial aspect of data analysis, playing a pivotal role in learning the temporal patterns and their underlying dynamics across diverse fields. Conventional time series generation methods often struggle to capture extreme values adequately, diminishing their value in critical applications such as scenario planning and management for healthcare, finance, climate change adaptation, and beyond. In this paper, we introduce a conditional diffusion model called FIDE to address the challenge of preserving the distribution of extreme values in generative modeling for time series. FIDE employs a novel high-frequency inflation strategy in the frequency domain, preventing premature fade-out of extreme values. It also extends the traditional diffusion-based model, enabling the generation of samples conditioned on the block maxima, thereby enhancing the model's capacity to capture extreme events. Additionally, the FIDE framework incorporates the Generalized Extreme Value (GEV) distribution within its generative modeling framework, ensuring fidelity to both block maxima and overall data distribution. Experimental results on real-world and synthetic data showcase the efficacy of FIDE over baseline methods, highlighting its potential in advancing Generative AI for time series analysis, specifically in accurately modeling extreme events.
https://openreview.net/pdf/285d2747b600e6d3316e485f911ea40684b85114.pdf
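The frequency-domain inflation named in the FIDE abstract above can be illustrated with a short sketch: boost the high-frequency band of a series in the Fourier domain so that sharp, extreme excursions are not smoothed away. The cutoff ratio and gain are assumptions; the paper's actual inflation schedule inside the diffusion process may differ.

```python
# Illustrative high-frequency inflation of a 1-D time series (assumed parameters).
import numpy as np

def inflate_high_frequencies(x, cutoff_ratio=0.5, gain=2.0):
    spec = np.fft.rfft(x)
    cutoff = int(cutoff_ratio * len(spec))
    spec[cutoff:] *= gain                      # amplify the high-frequency band
    return np.fft.irfft(spec, n=len(x))

series = np.sin(np.linspace(0, 20, 256)) + 0.1 * np.random.randn(256)
inflated = inflate_high_frequencies(series)
```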
Robust Mixture Learning when Outliers Overwhelm Small Groups
https://openreview.net/forum?id=TrXV4dMDcG
https://openreview.net/forum?id=TrXV4dMDcG
Daniil Dmitriev,Rares-Darius Buhai,Stefan Tiegel,Alexander Wolters,Gleb Novikov,Amartya Sanyal,David Steurer,Fanny Yang
NIPS 2024,Poster
We study the problem of estimating the means of well-separated mixtures when an adversary may add arbitrary outliers. While strong guarantees are available when the outlier fraction is significantly smaller than the minimum mixing weight, much less is known when outliers may crowd out low-weight clusters – a setting we refer to as list-decodable mixture learning (LD-ML). In this case, adversarial outliers can simulate additional spurious mixture components. Hence, if all means of the mixture must be recovered up to a small error in the output list, the list size needs to be larger than the number of (true) components. We propose an algorithm that obtains order-optimal error guarantees for each mixture mean with a minimal list-size overhead, significantly improving upon list-decodable mean estimation, the only existing method that is applicable for LD-ML. Although improvements are observed even when the mixture is non-separated, our algorithm achieves particularly strong guarantees when the mixture is separated: it can leverage the mixture structure to partially cluster the samples before carefully iterating a base learner for list-decodable mean estimation at different scales.
https://openreview.net/pdf/d1548289b846549ea783a01479447919fa5de63c.pdf
Revisiting Score Propagation in Graph Out-of-Distribution Detection
https://openreview.net/forum?id=jb5qN3212b
https://openreview.net/forum?id=jb5qN3212b
Longfei Ma,Yiyou Sun,Kaize Ding,Zemin Liu,Fei Wu
NIPS 2024,Poster
The field of graph learning has been substantially advanced by the development of deep learning models, in particular graph neural networks. However, one salient yet largely under-explored challenge is detecting Out-of-Distribution (OOD) nodes on graphs. Prevailing OOD detection techniques developed in other domains like computer vision, do not cater to the interconnected nature of graphs. This work aims to fill this gap by exploring the potential of a simple yet effective method -- OOD score propagation, which propagates OOD scores among neighboring nodes along the graph structure. This post hoc solution can be easily integrated with existing OOD scoring functions, showcasing its excellent flexibility and effectiveness in most scenarios. However, the conditions under which score propagation proves beneficial remain not fully elucidated. Our study meticulously derives these conditions and, inspired by this discovery, introduces an innovative edge augmentation strategy with theoretical guarantee. Empirical evaluations affirm the superiority of our proposed method, outperforming strong OOD detection baselines in various scenarios and settings.
https://openreview.net/pdf/b0f4cc1c8ccb1100775d7e2f880543d59e43318d.pdf
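The post hoc score propagation studied in the abstract above amounts to mixing each node's OOD score with the scores of its neighbours along the graph. The mixing coefficient and the number of propagation steps in the sketch below are assumptions; any base OOD scoring function can supply the initial scores.

```python
# Illustrative OOD score propagation along a row-normalized adjacency matrix.
import numpy as np

def propagate_scores(scores, adj, alpha=0.5, n_steps=2):
    base = np.asarray(scores, dtype=float)
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    P = adj / deg                                    # row-normalized adjacency
    s = base.copy()
    for _ in range(n_steps):
        s = alpha * base + (1 - alpha) * P @ s       # mix own score with neighbours' scores
    return s
```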
FewViewGS: Gaussian Splatting with Few View Matching and Multi-stage Training
https://openreview.net/forum?id=liHe9iumIi
https://openreview.net/forum?id=liHe9iumIi
Ruihong Yin,Vladimir Yugay,Yue Li,Sezer Karaoglu,Theo Gevers
NIPS 2024,Poster
The field of novel view synthesis from images has seen rapid advancements with the introduction of Neural Radiance Fields (NeRF) and more recently with 3D Gaussian Splatting. Gaussian Splatting has become widely adopted due to its efficiency and ability to render novel views accurately. While Gaussian Splatting performs well when a sufficient number of training images is available, its unstructured explicit representation tends to overfit in scenarios with sparse input images, resulting in poor rendering performance. To address this, we present a 3D Gaussian-based novel view synthesis method using sparse input images that can accurately render the scene from viewpoints not covered by the training images. We propose a multi-stage training scheme with matching-based consistency constraints imposed on the novel views without relying on pre-trained depth estimation or diffusion models. This is achieved by using the matches of the available training images to supervise the generation of the novel views sampled between the training frames with color, geometry, and semantic losses. In addition, we introduce a locality-preserving regularization for 3D Gaussians which removes rendering artifacts by preserving the local color structure of the scene. Evaluation on synthetic and real-world datasets demonstrates competitive or superior performance of our method in few-shot novel view synthesis compared to existing state-of-the-art methods.
https://openreview.net/pdf/ed5cc740cd652472f5409c5773b77bab5a4ff3c2.pdf
Adversarial Representation Engineering: A General Model Editing Framework for Large Language Models
https://openreview.net/forum?id=dQ9ji8e9qQ
https://openreview.net/forum?id=dQ9ji8e9qQ
Yihao Zhang,Zeming Wei,Jun Sun,Meng Sun
NIPS 2024,Poster
Since the rapid development of Large Language Models (LLMs) has achieved remarkable success, understanding and rectifying their internal complex mechanisms has become an urgent issue. Recent research has attempted to interpret their behaviors through the lens of inner representation. However, developing practical and efficient methods for applying these representations for general and flexible model editing remains challenging. In this work, we explore how to leverage insights from representation engineering to guide the editing of LLMs by deploying a representation sensor as an editing oracle. We first identify the importance of a robust and reliable sensor during editing, then propose an \textbf{A}dversarial \textbf{R}epresentation \textbf{E}ngineering (\textbf{ARE}) framework to provide a unified and interpretable approach for conceptual model editing without compromising baseline performance. Experiments on multiple tasks demonstrate the effectiveness of ARE in various model editing scenarios. Our code and data are available at \url{https://github.com/Zhang-Yihao/Adversarial-Representation-Engineering}.
https://openreview.net/pdf/d67c48f284a2271ad2cdb6755e1439c333e17f11.pdf