title | url | detail_url | authors | tags | abstract | pdf |
---|---|---|---|---|---|---|
Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks | https://openreview.net/forum?id=aVh9KRZdRk | https://openreview.net/forum?id=aVh9KRZdRk | Tianyu He,Darshil Doshi,Aritra Das,Andrey Gromov | NIPS 2024,Oral | Large language models can solve tasks that were not present in the training set. This capability is believed to be due to in-context learning and skill composition. In this work, we study the emergence of in-context learning and skill composition in a collection of modular arithmetic tasks. Specifically, we consider a finite collection of linear modular functions $z = a x + b y \text{ mod } p$ labeled by the vector $(a, b) \in \mathbb{Z}_p^2$. We use some of these tasks for pre-training and the rest for out-of-distribution testing. We empirically show that a GPT-style transformer exhibits a transition from in-distribution to out-of-distribution generalization as the number of pre-training tasks increases. We find that the smallest model capable of out-of-distribution generalization requires two transformer blocks, while for deeper models, the out-of-distribution generalization phase is *transient*, necessitating early stopping. Finally, we perform an interpretability study of the pre-trained models, revealing highly structured representations in both attention heads and MLPs; and discuss the learned algorithms. Notably, we find an algorithmic shift in deeper models, as we go from few to many in-context examples. | https://openreview.net/pdf/5737b58d308dafc16130635934df4276a7a574aa.pdf |
Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes | https://openreview.net/forum?id=REIK4SZMJt | https://openreview.net/forum?id=REIK4SZMJt | Spencer Rooke,Zhaoze Wang,Ronald W Di Tullio,Vijay Balasubramanian | NIPS 2024,Oral | Many animals learn cognitive maps of their environment - a simultaneous representation of context, experience, and position. Place cells in the hippocampus, named for their explicit encoding of position, are believed to be a neural substrate of these maps, with place cell "remapping" explaining how this system can represent different contexts. Briefly, place cells alter their firing properties, or "remap", in response to changes in experiential or sensory cues. Substantial sensory changes, produced, e.g., by moving between environments, cause large subpopulations of place cells to change their tuning entirely. While many studies have looked at the physiological basis of remapping, we lack explicit calculations of how the contextual capacity of the place cell system changes as a function of place field firing properties. Here, we propose a geometric approach to understanding population level activity of place cells. Using known firing field statistics, we investigate how changes to place cell firing properties affect the distances between representations of different environments within firing rate space. Using this approach, we find that the number of contexts storable by the hippocampus grows exponentially with the number of place cells, and calculate this exponent for environments of different sizes. We identify a fundamental trade-off between high resolution encoding of position and the number of storable contexts. This trade-off is tuned by place cell width, which might explain the change in firing field scale along the dorsal-ventral axis of the hippocampus. We demonstrate that clustering of place cells near likely points of confusion, such as boundaries, increases the contextual capacity of the place system within our framework and conclude by discussing how our geometric approach could be extended to include other cell types and abstract spaces. | https://openreview.net/pdf/9753767cc23ca7180fd4278699c23a3b28c99199.pdf |
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction | https://openreview.net/forum?id=gojL67CfS8 | https://openreview.net/forum?id=gojL67CfS8 | Keyu Tian,Yi Jiang,Zehuan Yuan,BINGYUE PENG,Liwei Wang | NIPS 2024,Oral | We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction". This simple, intuitive methodology allows autoregressive (AR) transformers to learn visual distributions fast and generalize well: VAR, for the first time, makes GPT-style AR models surpass diffusion transformers in image generation. On the ImageNet 256x256 benchmark, VAR significantly improves the AR baseline, raising the Fréchet inception distance (FID) from 18.65 to 1.73 and the inception score (IS) from 80.4 to 350.2, with around 20x faster inference speed. It is also empirically verified that VAR outperforms the Diffusion Transformer (DiT) in multiple dimensions including image quality, inference speed, data efficiency, and scalability. Scaling up VAR models exhibits clear power-law scaling laws similar to those observed in LLMs, with linear correlation coefficients near -0.998 as solid evidence. VAR further showcases zero-shot generalization ability in downstream tasks including image in-painting, out-painting, and editing. These results suggest VAR has initially emulated two important properties of LLMs: scaling laws and zero-shot task generalization. We have released all models and code to promote the exploration of AR/VAR models for visual generation and unified learning. | https://openreview.net/pdf/1366e6f25deff9942d17a853f81351d6caa8dcdf.pdf |
Cracking the Code of Juxtaposition: Can AI Models Understand the Humorous Contradictions | https://openreview.net/forum?id=bCMpdaQCNW | https://openreview.net/forum?id=bCMpdaQCNW | Zhe Hu,Tuo Liang,Jing Li,Yiren Lu,Yunlai Zhou,Yiran Qiao,Jing Ma,Yu Yin | NIPS 2024,Oral | Recent advancements in large vision language models have demonstrated remarkable proficiency across a wide range of tasks. Yet, these models still struggle with understanding the nuances of human humor through juxtaposition, particularly when it involves nonlinear narratives that underpin many jokes and humor cues. This paper investigates this challenge by focusing on comics with contradictory narratives, where each comic consists of two panels that create a humorous contradiction. We introduce the YesBut benchmark, which comprises tasks of varying difficulty aimed at assessing AI's capabilities in recognizing and interpreting these comics, ranging from literal content comprehension to deep narrative reasoning. Through extensive experimentation and analysis of recent commercial or open-sourced large vision language models, we assess their capability to comprehend the complex interplay of the narrative humor inherent in these comics. Our results show that even the state-of-the-art models still struggle with this task. Our findings offer insights into the current limitations and potential improvements for AI in understanding human creative expressions. | https://openreview.net/pdf/1f618d0020c8650176d91ef4418ef3cea6151adb.pdf |
Human Expertise in Algorithmic Prediction | https://openreview.net/forum?id=wpGJ2AX6SZ | https://openreview.net/forum?id=wpGJ2AX6SZ | Rohan Alur,Manish Raghavan,Devavrat Shah | NIPS 2024,Oral | We introduce a novel framework for incorporating human expertise into algorithmic predictions. Our approach leverages human judgment to distinguish inputs which are *algorithmically indistinguishable*, or "look the same" to predictive algorithms. We argue that this framing clarifies the problem of human-AI collaboration in prediction tasks, as experts often form judgments by drawing on information which is not encoded in an algorithm's training data. Algorithmic indistinguishability yields a natural test for assessing whether experts incorporate this kind of "side information", and further provides a simple but principled method for selectively incorporating human feedback into algorithmic predictions. We show that this method provably improves the performance of any feasible algorithmic predictor and precisely quantify this improvement. We find empirically that although algorithms often outperform their human counterparts *on average*, human judgment can improve algorithmic predictions on *specific* instances (which can be identified ex-ante). In an X-ray classification task, we find that this subset constitutes nearly 30% of the patient population. Our approach provides a natural way of uncovering this heterogeneity and thus enabling effective human-AI collaboration. | https://openreview.net/pdf/4f5dc6075a84c5c600343c682e95020208b5f943.pdf |
Learning diffusion at lightspeed | https://openreview.net/forum?id=y10avdRFNK | https://openreview.net/forum?id=y10avdRFNK | Antonio Terpin,Nicolas Lanzetti,Martín Gadea,Florian Dorfler | NIPS 2024,Oral | Diffusion regulates numerous natural processes and the dynamics of many successful generative models. Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems and model only the drift of the system. We propose a new simple model, JKOnet*, which bypasses the complexity of existing architectures while presenting significantly enhanced representational capabilities: JKOnet* recovers the potential, interaction, and internal energy components of the underlying diffusion process. JKOnet* minimizes a simple quadratic loss and outperforms other baselines in terms of sample efficiency, computational complexity, and accuracy. Additionally, JKOnet* provides a closed-form optimal solution for linearly parametrized functionals, and, when applied to predict the evolution of cellular processes from real-world data, it achieves state-of-the-art accuracy at a fraction of the computational cost of all existing methods. Our methodology is based on the interpretation of diffusion processes as energy-minimizing trajectories in the probability space via the so-called JKO scheme, which we study via its first-order optimality conditions. | https://openreview.net/pdf/71e85a95e3f40ebd277c5df65f9dff3c748e2ddb.pdf |
Weisfeiler and Leman Go Loopy: A New Hierarchy for Graph Representational Learning | https://openreview.net/forum?id=9O2sVnEHor | https://openreview.net/forum?id=9O2sVnEHor | Raffaele Paolino,Sohir Maskey,Pascal Welke,Gitta Kutyniok | NIPS 2024,Oral | We introduce $r$-loopy Weisfeiler-Leman ($r$-$\ell$WL), a novel hierarchy of graph isomorphism tests and a corresponding GNN framework, $r$-$\ell$MPNN, that can count cycles up to length $r{+}2$. Most notably, we show that $r$-$\ell$WL can count homomorphisms of cactus graphs. This extends 1-WL, which can only count homomorphisms of trees and, in fact, is incomparable to $k$-WL for any fixed $k$. We empirically validate the expressive and counting power of $r$-$\ell$MPNN on several synthetic datasets and demonstrate the scalability and strong performance on various real-world datasets, particularly on sparse graphs. | https://openreview.net/pdf/160b0368f27f6ae00575a4abc8d44870237c95f9.pdf |
Unlocking the Capabilities of Thought: A Reasoning Boundary Framework to Quantify and Optimize Chain-of-Thought | https://openreview.net/forum?id=pC44UMwy2v | https://openreview.net/forum?id=pC44UMwy2v | Qiguang Chen,Libo Qin,Jiaqi WANG,Jingxuan Zhou,Wanxiang Che | NIPS 2024,Oral | Chain-of-Thought (CoT) reasoning has emerged as a promising approach for enhancing the performance of large language models (LLMs) on complex reasoning tasks. Recently, a series of studies attempt to explain the mechanisms underlying CoT, aiming to deepen the understanding of its efficacy. Nevertheless, the existing research faces two major challenges: (1) a lack of quantitative metrics to assess CoT capabilities and (2) a dearth of guidance on optimizing CoT performance. Motivated by this, in this work, we introduce a novel reasoning boundary framework (RBF) to address these challenges. To solve the lack of quantification, we first define a reasoning boundary (RB) to quantify the upper-bound of CoT and establish a combination law for RB, enabling a practical quantitative approach applicable to various real-world CoT tasks. To address the lack of optimization, we propose three categories of RBs. We further optimize these categories with combination laws focused on RB promotion and reasoning path optimization for CoT improvement. Through extensive experiments on 27 models and 5 tasks, the study validates the existence and rationality of the proposed framework. Furthermore, it explains the effectiveness of 10 CoT strategies and guides optimization from two perspectives. We hope this work can provide a comprehensive understanding of the boundaries and optimization strategies for reasoning in LLMs. Our code and data are available at https://github.com/LightChen233/reasoning-boundary. | https://openreview.net/pdf/47a165ca745dea00bf9fe4ba52210932fb6d1787.pdf |
Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity | https://openreview.net/forum?id=qf2uZAdy1N | https://openreview.net/forum?id=qf2uZAdy1N | Philip Amortila,Dylan J Foster,Nan Jiang,Akshay Krishnamurthy,Zakaria Mhammedi | NIPS 2024,Oral | Real-world applications of reinforcement learning often involve environments where agents operate on complex, high-dimensional observations, but the underlying ("latent") dynamics are comparatively simple. However, beyond restrictive settings such as tabular latent dynamics, the fundamental statistical requirements and algorithmic principles for *reinforcement learning under latent dynamics* are poorly understood. This paper addresses the question of reinforcement learning under *general latent dynamics* from a statistical and algorithmic perspective. On the statistical side, our main negative result shows that *most* well-studied settings for reinforcement learning with function approximation become intractable when composed with rich observations; we complement this with a positive result, identifying *latent pushforward coverability* as a general condition that enables statistical tractability. Algorithmically, we develop provably efficient *observable-to-latent* reductions, that is, reductions that transform an arbitrary algorithm for the latent MDP into an algorithm that can operate on rich observations, in two settings: one where the agent has access to hindsight observations of the latent dynamics (Lee et al., 2023) and one where the agent can estimate *self-predictive* latent models (Schwarzer et al., 2020). Together, our results serve as a first step toward a unified statistical and algorithmic theory for reinforcement learning under latent dynamics. | https://openreview.net/pdf/17710a946394531d22cd1cf32e0a7fd7bac1e6ac.pdf |
Generalization Error Bounds for Two-stage Recommender Systems with Tree Structure | https://openreview.net/forum?id=m1a4CrRJR7 | https://openreview.net/forum?id=m1a4CrRJR7 | Jin Zhang,Ze Liu,Defu Lian,Enhong Chen | NIPS 2024,Oral | Two-stage recommender systems play a crucial role in efficiently identifying relevant items and personalizing recommendations from a vast array of options. This paper, based on an error decomposition framework, analyzes the generalization error for two-stage recommender systems with a tree structure, which consist of an efficient tree-based retriever and a more precise yet time-consuming ranker. We use the Rademacher complexity to establish the generalization upper bound for various tree-based retrievers using beam search, as well as for different ranker models under a shifted training distribution. Both theoretical insights and practical experiments on real-world datasets indicate that increasing the branches in tree-based retrievers and harmonizing distributions across stages can enhance the generalization performance of two-stage recommender systems. | https://openreview.net/pdf/0573ad42adbbc93100e6c898b23c116d78de695b.pdf |
Aligner: Efficient Alignment by Learning to Correct | https://openreview.net/forum?id=kq166jACVP | https://openreview.net/forum?id=kq166jACVP | Jiaming Ji,Boyuan Chen,Hantao Lou,Donghai Hong,Borong Zhang,Xuehai Pan,Tianyi Qiu,Juntao Dai,Yaodong Yang | NIPS 2024,Oral | With the rapid development of large language models (LLMs) and ever-evolving practical requirements, finding an efficient and effective alignment method has never been more critical. However, the tension between the complexity of current alignment methods and the need for rapid iteration in deployment scenarios necessitates the development of a model-agnostic alignment approach that can operate under these constraints. In this paper, we introduce Aligner, a novel and simple alignment paradigm that learns the correctional residuals between preferred and dispreferred answers using a small model. Designed as a model-agnostic, plug-and-play module, Aligner can be directly applied to various open-source and API-based models with only one-off training, making it suitable for rapid iteration. Notably, Aligner can be applied to any powerful, large-scale upstream models. Moreover, it can even iteratively bootstrap the upstream models using corrected responses as synthetic human preference data, breaking through the model's performance ceiling. Our experiments demonstrate performance improvements by deploying the same Aligner model across 11 different LLMs, evaluated on the 3H dimensions (helpfulness, harmlessness, and honesty). Specifically, Aligner-7B has achieved an average improvement of 68.9\% in helpfulness and 23.8\% in harmlessness across the tested LLMs while also effectively reducing hallucination. In the Alpaca-Eval leaderboard, stacking Aligner-2B on GPT-4 Turbo improved its LC Win Rate from 55.0\% to 58.3\%, surpassing GPT-4 Omni's 57.5\% Win Rate (community report). | https://openreview.net/pdf/80ca837e0c7f9e0d8dbf5b1edefbdf611c8ded34.pdf |
Questioning the Survey Responses of Large Language Models | https://openreview.net/forum?id=Oo7dlLgqQX | https://openreview.net/forum?id=Oo7dlLgqQX | Ricardo Dominguez-Olmedo,Moritz Hardt,Celestine Mendler-Dünner | NIPS 2024,Oral | Surveys have recently gained popularity as a tool to study large language models. By comparing models’ survey responses to those of different human reference populations, researchers aim to infer the demographics, political opinions, or values best represented by current language models. In this work, we critically examine language models' survey responses on the basis of the well-established American Community Survey by the U.S. Census Bureau. Evaluating 43 different language models using de-facto standard prompting methodologies, we establish two dominant patterns. First, models' responses are governed by ordering and labeling biases, for example, towards survey responses labeled with the letter “A”. Second, when adjusting for these systematic biases through randomized answer ordering, models across the board trend towards uniformly random survey responses, irrespective of model size or training data. As a result, models consistently appear to better represent subgroups whose aggregate statistics are closest to uniform for the survey under consideration, leading to potentially misguided conclusions about model alignment. | https://openreview.net/pdf/6a9813651d8de7fdc565ddb5dacecf057526a29a.pdf |
Stochastic Taylor Derivative Estimator: Efficient amortization for arbitrary differential operators | https://openreview.net/forum?id=J2wI2rCG2u | https://openreview.net/forum?id=J2wI2rCG2u | Zekun Shi,Zheyuan Hu,Min Lin,Kenji Kawaguchi | NIPS 2024,Oral | Optimizing neural networks with losses that contain high-dimensional and high-order differential operators is expensive to evaluate with back-propagation due to $\mathcal{O}(d^{k})$ scaling of the derivative tensor size and the $\mathcal{O}(2^{k-1}L)$ scaling in the computation graph, where $d$ is the dimension of the domain, $L$ is the number of ops in the forward computation graph, and $k$ is the derivative order. In previous works, the polynomial scaling in $d$ was addressed by amortizing the computation over the optimization process via randomization. Separately, the exponential scaling in $k$ for univariate functions ($d=1$) was addressed with high-order auto-differentiation (AD). In this work, we show how to efficiently perform arbitrary contraction of the derivative tensor of arbitrary order for multivariate functions, by properly constructing the input tangents to univariate high-order AD, which can be used to efficiently randomize any differential operator. When applied to Physics-Informed Neural Networks (PINNs), our method provides >1000$\times$ speed-up and >30$\times$ memory reduction over randomization with first-order AD, and we can now solve 1-million-dimensional PDEs in 8 minutes on a single NVIDIA A100 GPU. This work opens the possibility of using high-order differential operators in large-scale problems. | https://openreview.net/pdf/525882bf51a6cb819e7762a437a606419814f5c7.pdf |
Do Finetti: On Causal Effects for Exchangeable Data | https://openreview.net/forum?id=4rCZeCZAON | https://openreview.net/forum?id=4rCZeCZAON | Siyuan Guo,Chi Zhang,Karthika Mohan,Ferenc Huszár,Bernhard Schölkopf | NIPS 2024,Oral | We study causal effect estimation in a setting where the data are not i.i.d. (independent and identically distributed). We focus on exchangeable data satisfying an assumption of independent causal mechanisms. Traditional causal effect estimation frameworks, e.g., relying on structural causal models and do-calculus, are typically limited to i.i.d. data and do not extend to more general exchangeable generative processes, which naturally arise in multi-environment data. To address this gap, we develop a generalized framework for exchangeable data and introduce a truncated factorization formula that facilitates both the identification and estimation of causal effects in our setting. To illustrate potential applications, we introduce a causal Pólya urn model and demonstrate how intervention propagates effects in exchangeable data settings. Finally, we develop an algorithm that performs simultaneous causal discovery and effect estimation given multi-environment data. | https://openreview.net/pdf/8f348634669f055ea725df69d4de4fac31b49194.pdf |
LLM Evaluators Recognize and Favor Their Own Generations | https://openreview.net/forum?id=4NJBV6Wp0h | https://openreview.net/forum?id=4NJBV6Wp0h | Arjun Panickssery,Samuel R. Bowman,Shi Feng | NIPS 2024,Oral | Self-evaluation using large language models (LLMs) has proven valuable not only in benchmarking but also methods like reward modeling, constitutional AI, and self-refinement. But new biases are introduced due to the same LLM acting as both the evaluator and the evaluatee. One such bias is self-preference, where an LLM evaluator scores its own outputs higher than others’ while human annotators consider them of equal quality. But do LLMs actually recognize their own outputs when they give those texts higher scores, or is it just a coincidence? In this paper, we investigate if self-recognition capability contributes to self-preference. We discover that, out of the box, LLMs such as GPT-4 and Llama 2 have non-trivial accuracy at distinguishing themselves from other LLMs and humans. By finetuning LLMs, we discover a linear correlation between self-recognition capability and the strength of self-preference bias; using controlled experiments, we show that the causal explanation resists straightforward confounders. We discuss how self-recognition can interfere with unbiased evaluations and AI safety more generally. | https://openreview.net/pdf/17f3e3ce067de145352b0881a5a5a351cfcceac4.pdf |
Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs | https://openreview.net/forum?id=pGEY8JQ3qx | https://openreview.net/forum?id=pGEY8JQ3qx | Matthew Zurek,Yudong Chen | NIPS 2024,Oral | We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\widetilde{O}\left(SA\frac{\mathsf{H}}{\varepsilon^2} \right)$, where $\mathsf{H}$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S,A,\mathsf{H}$, and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We also initiate the study of sample complexity in general (multichain) average-reward MDPs. We argue a new transient time parameter $\mathsf{B}$ is necessary, establish an $\widetilde{O}\left(SA\frac{\mathsf{B} + \mathsf{H}}{\varepsilon^2} \right)$ complexity bound, and prove a matching (up to log factors) minimax lower bound. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting. To optimally analyze this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\widetilde{O}\left(SA\frac{\mathsf{H}}{(1-\gamma)^2\varepsilon^2} \right)$ and $\widetilde{O}\left(SA\frac{\mathsf{B} + \mathsf{H}}{(1-\gamma)^2\varepsilon^2} \right)$ samples suffice to learn $\varepsilon$-optimal policies in weakly communicating and in general MDPs, respectively. Both these results circumvent the well-known minimax lower bound of $\widetilde{\Omega}\left(SA\frac{1}{(1-\gamma)^3\varepsilon^2} \right)$ for $\gamma$-discounted MDPs, and establish a quadratic rather than cubic horizon dependence for a fixed MDP instance. | https://openreview.net/pdf/2ff245e09d2ec82378e2aa6ffea57a9ec01c043c.pdf |
Learning Formal Mathematics From Intrinsic Motivation | https://openreview.net/forum?id=uNKlTQ8mBD | https://openreview.net/forum?id=uNKlTQ8mBD | Gabriel Poesia,David Broman,Nick Haber,Noah Goodman | NIPS 2024,Oral | How did humanity coax mathematics from the aether? We explore the Platonic view that mathematics can be discovered from its axioms---a game of conjecture and proof. We describe an agent that jointly learns to pose challenging problems for itself (conjecturing) and solve them (theorem proving). Given a mathematical domain axiomatized in dependent type theory, we first combine methods for constrained decoding and type-directed synthesis to sample valid conjectures from a language model. Our method guarantees well-formed conjectures by construction, even as we start with a randomly initialized model. We use the same model to represent a policy and value function for guiding proof search. Our agent targets generating hard but provable conjectures --- a moving target, since its own theorem proving ability also improves as it trains. We propose novel methods for hindsight relabeling on proof search trees to significantly improve the agent's sample efficiency in both tasks. Experiments on 3 axiomatic domains (propositional logic, arithmetic and group theory) demonstrate that our agent can bootstrap from only the axioms, self-improving in generating true and challenging conjectures and in finding proofs. | https://openreview.net/pdf/42d3b14720041d447c657071a08de640733954a0.pdf |
Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments | https://openreview.net/forum?id=S2P6KPLtm8 | https://openreview.net/forum?id=S2P6KPLtm8 | Feng Xie,Zhen Yao,Lin Xie,Yan Zeng,Zhi Geng | NIPS 2024,Oral | We consider the challenging problem of estimating causal effects from purely observational data in the bi-directional Mendelian randomization (MR), where some invalid instruments, as well as unmeasured confounding, usually exist. To address this problem, most existing methods attempt to find proper valid instrumental variables (IVs) for the target causal effect by expert knowledge or by assuming that the causal model is a one-directional MR model. As such, in this paper, we first theoretically investigate the identification of the bi-directional MR from observational data. In particular, we provide necessary and sufficient conditions under which valid IV sets are correctly identified such that the bi-directional MR model is identifiable, including the causal directions of a pair of phenotypes (i.e., the treatment and outcome). Moreover, based on the identification theory, we develop a cluster fusion-like method to discover valid IV sets and estimate the causal effects of interest. We theoretically demonstrate the correctness of the proposed algorithm. Experimental results show the effectiveness of our method for estimating causal effects in both one-directional and bi-directional MR models. | https://openreview.net/pdf/7864b4bc0bd0c32d66af795cacadc545cbdd6432.pdf |
Maximum Entropy Inverse Reinforcement Learning of Diffusion Models with Energy-Based Models | https://openreview.net/forum?id=V0oJaLqY4E | https://openreview.net/forum?id=V0oJaLqY4E | Sangwoong Yoon,Himchan Hwang,Dohyun Kwon,Yung-Kyun Noh,Frank C. Park | NIPS 2024,Oral | We present a maximum entropy inverse reinforcement learning (IRL) approach for improving the sample quality of diffusion generative models, especially when the number of generation time steps is small. Similar to how IRL trains a policy based on the reward function learned from expert demonstrations, we train (or fine-tune) a diffusion model using the log probability density estimated from training data. Since we employ an energy-based model (EBM) to represent the log density, our approach boils down to the joint training of a diffusion model and an EBM. Our IRL formulation, named Diffusion by Maximum Entropy IRL (DxMI), is a minimax problem that reaches equilibrium when both models converge to the data distribution. The entropy maximization plays a key role in DxMI, facilitating the exploration of the diffusion model and ensuring the convergence of the EBM. We also propose Diffusion by Dynamic Programming (DxDP), a novel reinforcement learning algorithm for diffusion models, as a subroutine in DxMI. DxDP makes the diffusion model update in DxMI efficient by transforming the original problem into an optimal control formulation where value functions replace back-propagation in time. Our empirical studies show that diffusion models fine-tuned using DxMI can generate high-quality samples in as few as 4 and 10 steps. Additionally, DxMI enables the training of an EBM without MCMC, stabilizing EBM training dynamics and enhancing anomaly detection performance. | https://openreview.net/pdf/fbd48eb1b53fd48de22ddd59edf0d18875315635.pdf |
Improving Environment Novelty Quantification for Effective Unsupervised Environment Design | https://openreview.net/forum?id=UdxpjKO2F9 | https://openreview.net/forum?id=UdxpjKO2F9 | Jayden Teoh,Wenjun Li,Pradeep Varakantham | NIPS 2024,Oral | Unsupervised Environment Design (UED) formalizes the problem of autocurricula through interactive training between a teacher agent and a student agent. The teacher generates new training environments with high learning potential, curating an adaptive curriculum that strengthens the student's ability to handle unseen scenarios. Existing UED methods mainly rely on *regret*, a metric that measures the difference between the agent's optimal and actual performance, to guide curriculum design. Regret-driven methods generate curricula that progressively increase environment complexity for the student but overlook environment *novelty* — a critical element for enhancing an agent's generalizability. Measuring environment novelty is especially challenging due to the underspecified nature of environment parameters in UED, and existing approaches face significant limitations. To address this, this paper introduces the *Coverage-based Evaluation of Novelty In Environment* (CENIE) framework. CENIE proposes a scalable, domain-agnostic, and curriculum-aware approach to quantifying environment novelty by leveraging the student's state-action space coverage from previous curriculum experiences. We then propose an implementation of CENIE that models this coverage and measures environment novelty using Gaussian Mixture Models. By integrating both regret and novelty as complementary objectives for curriculum design, CENIE facilitates effective exploration across the state-action space while progressively increasing curriculum complexity. Empirical evaluations demonstrate that augmenting existing regret-based UED algorithms with CENIE achieves state-of-the-art performance across multiple benchmarks, underscoring the effectiveness of novelty-driven autocurricula for robust generalization. | https://openreview.net/pdf/395c3c5df43310736f6134ab07ff32330b2a8f45.pdf |
Enhancing Preference-based Linear Bandits via Human Response Time | https://openreview.net/forum?id=aIPwlkdOut | https://openreview.net/forum?id=aIPwlkdOut | Shen Li,Yuyang Zhang,Zhaolin Ren,Claire Liang,Na Li,Julie Shah | NIPS 2024,Oral | Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. Although binary choices are simple and widely used, they provide limited information about preference strength. To address this, we leverage human response times, which are inversely related to preference strength, as an additional signal. We propose a computationally efficient method that combines choices and response times to estimate human utility functions, grounded in the EZ diffusion model from psychology. Theoretical and empirical analyses show that for queries with strong preferences, response times complement choices by providing extra information about preference strength, leading to significantly improved utility estimation. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification. Simulations on three real-world datasets demonstrate that using response times significantly accelerates preference learning compared to choice-only approaches. Additional materials, such as code, slides, and talk video, are available at https://shenlirobot.github.io/pages/NeurIPS24.html. | https://openreview.net/pdf/b32d10afd0c5117bb0b9ac42cf07b7786e40cbd9.pdf |
Scale Equivariant Graph Metanetworks | https://openreview.net/forum?id=8Fxqn1tZM1 | https://openreview.net/forum?id=8Fxqn1tZM1 | Ioannis Kalogeropoulos,Giorgos Bouritsas,Yannis Panagakis | NIPS 2024,Oral | This paper pertains to an emerging machine learning paradigm: learning higher- order functions, i.e. functions whose inputs are functions themselves, particularly when these inputs are Neural Networks (NNs). With the growing interest in architectures that process NNs, a recurring design principle has permeated the field: adhering to the permutation symmetries arising from the connectionist structure of
NNs. However, are these the sole symmetries present in NN parameterizations? Zooming into most practical activation functions (e.g. sine, ReLU, tanh) answers this question negatively and gives rise to intriguing new symmetries, which we collectively refer to as scaling symmetries, that is, non-zero scalar multiplications and divisions of weights and biases. In this work, we propose Scale Equivariant Graph MetaNetworks - ScaleGMNs, a framework that adapts the Graph Metanetwork (message-passing) paradigm by incorporating scaling symmetries and thus rendering neuron and edge representations equivariant to valid scalings. We introduce novel building blocks, of independent technical interest, that allow for equivariance or invariance with respect to individual scalar multipliers or their product and use them in all components of ScaleGMN. Furthermore, we prove that, under certain expressivity conditions, ScaleGMN can simulate the forward and backward pass of any input feedforward neural network. Experimental results demonstrate that our method advances the state-of-the-art performance for several datasets and activation functions, highlighting the power of scaling symmetries as an inductive bias for NN processing. The source code is publicly available at https://github.com/jkalogero/scalegmn. | https://openreview.net/pdf/6d3b36cd5d6e1acb5d27b18b7da7333f5c075e0e.pdf |
CAT3D: Create Anything in 3D with Multi-View Diffusion Models | https://openreview.net/forum?id=TFZlFRl9Ks | https://openreview.net/forum?id=TFZlFRl9Ks | Ruiqi Gao,Aleksander Holynski,Philipp Henzler,Arthur Brussee,Ricardo Martin Brualla,Pratul P. Srinivasan,Jonathan T. Barron,Ben Poole | NIPS 2024,Oral | Advances in 3D reconstruction have enabled high-quality 3D capture, but require a user to collect hundreds to thousands of images to create a 3D scene. We present CAT3D, a method for creating anything in 3D by simulating this real-world capture process with a multi-view diffusion model. Given any number of input images and a set of target novel viewpoints, our model generates highly consistent novel views of a scene. These generated views can be used as input to robust 3D reconstruction techniques to produce 3D representations that can be rendered from any viewpoint in real-time. CAT3D can create entire 3D scenes in as little as one minute, and outperforms existing methods for single image and few-view 3D scene creation. | https://openreview.net/pdf/a17526d158b6388ba1714b7d1decfdd7ec50e8da.pdf |
Stylus: Automatic Adapter Selection for Diffusion Models | https://openreview.net/forum?id=3Odq2tGSpp | https://openreview.net/forum?id=3Odq2tGSpp | Michael Luo,Justin Wong,Brandon Trabucco,Yanping Huang,Joseph E. Gonzalez,Zhifeng Chen,Russ Salakhutdinov,Ion Stoica | NIPS 2024,Oral | Beyond scaling base models with more data or parameters, fine-tuned adapters provide an alternative way to generate high fidelity, custom images at reduced costs. As such, adapters have been widely adopted by open-source communities, accumulating a database of over 100K adapters—most of which are highly customized with insufficient descriptions. To generate high quality images, this paper explores the problem of matching the prompt to a set of relevant adapters, built on recent work that highlights the performance gains of composing adapters. We introduce Stylus, which efficiently selects and automatically composes task-specific adapters based on a prompt's keywords. Stylus outlines a three-stage approach that first summarizes adapters with improved descriptions and embeddings, retrieves relevant adapters, and then further assembles adapters based on prompts' keywords by checking how well they fit the prompt. To evaluate Stylus, we developed StylusDocs, a curated dataset featuring 75K adapters with pre-computed adapter embeddings. In our evaluation on popular Stable Diffusion checkpoints, Stylus achieves greater CLIP/FID Pareto efficiency and is twice as preferred, with humans and multimodal models as evaluators, over the base model. | https://openreview.net/pdf/b41be568e09a4892b988b18214b6686115e4ccb9.pdf
The Sample-Communication Complexity Trade-off in Federated Q-Learning | https://openreview.net/forum?id=6YIpvnkjUK | https://openreview.net/forum?id=6YIpvnkjUK | Sudeep Salgia,Yuejie Chi | NIPS 2024,Oral | We consider the problem of Federated Q-learning, where $M$ agents aim to collaboratively learn the optimal Q-function of an unknown infinite horizon Markov Decision Process with finite state and action spaces. We investigate the trade-off between sample and communication complexity for the widely used class of intermittent communication algorithms. We first establish the converse result, where we show that any Federated Q-learning algorithm that offers a linear speedup with respect to the number of agents in sample complexity needs to incur a communication cost of at least $\Omega(\frac{1}{1-\gamma})$, where $\gamma$ is the discount factor. We also propose a new Federated Q-learning algorithm, called Fed-DVR-Q, which is the first Federated Q-learning algorithm to simultaneously achieve order-optimal sample and communication complexities. Together, these results provide a complete characterization of the sample-communication complexity trade-off in Federated Q-learning. | https://openreview.net/pdf/aa89287b43d0d38cc8ef9cd412964652a0b005cb.pdf
Guiding a Diffusion Model with a Bad Version of Itself | https://openreview.net/forum?id=bg6fVPVs3s | https://openreview.net/forum?id=bg6fVPVs3s | Tero Karras,Miika Aittala,Tuomas Kynkäänniemi,Jaakko Lehtinen,Timo Aila,Samuli Laine | NIPS 2024,Oral | The primary axes of interest in image-generating diffusion models are image quality, the amount of variation in the results, and how well the results align with a given condition, e.g., a class label or a text prompt. The popular classifier-free guidance approach uses an unconditional model to guide a conditional model, leading to simultaneously better prompt alignment and higher-quality images at the cost of reduced variation. These effects seem inherently entangled, and thus hard to control. We make the surprising observation that it is possible to obtain disentangled control over image quality without compromising the amount of variation by guiding generation using a smaller, less-trained version of the model itself rather than an unconditional model. This leads to significant improvements in ImageNet generation, setting record FIDs of 1.01 for 64x64 and 1.25 for 512x512, using publicly available networks. Furthermore, the method is also applicable to unconditional diffusion models, drastically improving their quality. | https://openreview.net/pdf/9173da6000cdac7dc5129691366a29747954b7ef.pdf |
RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation | https://openreview.net/forum?id=r5spnrY6H3 | https://openreview.net/forum?id=r5spnrY6H3 | Changli Wu,Qi Chen,Jiayi Ji,Haowei Wang,Yiwei Ma,You Huang,Gen Luo,Hao Fei,Xiaoshuai Sun,Rongrong Ji | NIPS 2024,Oral | 3D Referring Expression Segmentation (3D-RES) aims to segment 3D objects by correlating referring expressions with point clouds. However, traditional approaches frequently encounter issues like over-segmentation or mis-segmentation, due to insufficient emphasis on spatial information of instances. In this paper, we introduce a Rule-Guided Spatial Awareness Network (RG-SAN) by utilizing solely the spatial information of the target instance for supervision. This approach enables the network to accurately depict the spatial relationships among all entities described in the text, thus enhancing the reasoning capabilities. The RG-SAN consists of the Text-driven Localization Module (TLM) and the Rule-guided Weak Supervision (RWS) strategy. The TLM initially locates all mentioned instances and iteratively refines their positional information. The RWS strategy, acknowledging that only target objects have supervised positional information, employs dependency tree rules to precisely guide the core instance’s positioning. Extensive testing on the ScanRefer benchmark has shown that RG-SAN not only establishes new performance benchmarks, with an mIoU increase of 5.1 points, but also exhibits significant improvements in robustness when processing descriptions with spatial ambiguity. All codes are available at https://github.com/sosppxo/RG-SAN. | https://openreview.net/pdf/074c8caaa0b5feabaad18b25db6c0ee86ed09863.pdf |
VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time | https://openreview.net/forum?id=5zSCSE0k41 | https://openreview.net/forum?id=5zSCSE0k41 | Sicheng Xu,Guojun Chen,Yu-Xiao Guo,Jiaolong Yang,Chong Li,Zhenyu Zang,Yizhong Zhang,Xin Tong,Baining Guo | NIPS 2024,Oral | We introduce VASA, a framework for generating lifelike talking faces with appealing visual affective skills (VAS) given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only generating lip movements that are exquisitely synchronized with the audio, but also producing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness.
The core innovations include a diffusion-based holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos.
Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively. Our method delivers high video quality with realistic facial and head dynamics and also supports the online generation of 512$\times$512 videos at up to 40 FPS with negligible starting latency.
It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors. | https://openreview.net/pdf/ccbb9d0f4688567aed95ad757cf65f0dd4538631.pdf |
Learning rigid-body simulators over implicit shapes for large-scale scenes and vision | https://openreview.net/forum?id=QDYts5dYgq | https://openreview.net/forum?id=QDYts5dYgq | Yulia Rubanova,Tatiana Lopez-Guevara,Kelsey R Allen,William F Whitney,Kim Stachenfeld,Tobias Pfaff | NIPS 2024,Oral | Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games. Rigid interactions are notoriously hard to model: small changes to the initial state or the simulation parameters can lead to large changes in the final state. Recently, learned simulators based on graph networks (GNNs) were developed as an alternative to hand-designed simulators like MuJoCo and Bullet. They are able to accurately capture dynamics of real objects directly from real-world observations. However, current state-of-the-art learned simulators operate on meshes and scale poorly to scenes with many objects or detailed shapes. Here we present SDF-Sim, the first learned rigid-body simulator designed for scale. We use learned signed-distance functions (SDFs) to represent the object shapes and to speed up distance computation. We design the simulator to leverage SDFs and avoid the fundamental bottleneck of the previous simulators associated with collision detection.
For the first time in the literature, we demonstrate that we can scale the GNN-based simulators to scenes with hundreds of objects and up to 1.1 million nodes, where mesh-based approaches run out of memory. Finally, we show that SDF-Sim can be applied to real-world scenes by extracting SDFs from multi-view images. | https://openreview.net/pdf/a025a4908402e558708ed28771812dd10af193dd.pdf
Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations | https://openreview.net/forum?id=HRkniCWM3E | https://openreview.net/forum?id=HRkniCWM3E | Nicholas Gao,Stephan Günnemann | NIPS 2024,Oral | Neural wave functions accomplished unprecedented accuracies in approximating the ground state of many-electron systems, though at a high computational cost. Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently. Enforcing the permutation antisymmetry of electrons in such generalized neural wave functions remained challenging as existing methods require discrete orbital selection via non-learnable hand-crafted algorithms. This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules. We achieve this by relying on Pfaffians rather than Slater determinants. The Pfaffian allows us to enforce the antisymmetry on arbitrary electronic systems without any constraint on electronic spin configurations or molecular structure. Our empirical evaluation finds that a single neural Pfaffian calculates the ground state and ionization energies with chemical accuracy across various systems. On the TinyMol dataset, we outperform the `gold-standard' CCSD(T) CBS reference energies by 1.9m$E_h$ and reduce energy errors compared to previous generalized neural wave functions by up to an order of magnitude. | https://openreview.net/pdf/c766b139548380a74ad7a69a3c638798a81d5de3.pdf |
DapperFL: Domain Adaptive Federated Learning with Model Fusion Pruning for Edge Devices | https://openreview.net/forum?id=Pezt0xttae | https://openreview.net/forum?id=Pezt0xttae | Yongzhe Jia,Xuyun Zhang,Hongsheng Hu,Kim-Kwang Raymond Choo,Lianyong Qi,Xiaolong Xu,Amin Beheshti,Wanchun Dou | NIPS 2024,Oral | Federated learning (FL) has emerged as a prominent machine learning paradigm in edge computing environments, enabling edge devices to collaboratively optimize a global model without sharing their private data. However, existing FL frameworks suffer from efficacy deterioration due to the system heterogeneity inherent in edge computing, especially in the presence of domain shifts across local data.
In this paper, we propose a heterogeneous FL framework, DapperFL, to enhance model performance across multiple domains. In DapperFL, we introduce a dedicated Model Fusion Pruning (MFP) module to produce personalized compact local models for clients to address the system heterogeneity challenges. The MFP module prunes local models with fused knowledge obtained from both local and remaining domains, ensuring robustness to domain shifts. Additionally, we design a Domain Adaptive Regularization (DAR) module to further improve the overall performance of DapperFL. The DAR module employs regularization generated by the pruned model, aiming to learn robust representations across domains. Furthermore, we introduce a specific aggregation algorithm for aggregating heterogeneous local models with tailored architectures and weights. We implement DapperFL on a real-world FL platform with heterogeneous clients. Experimental results on benchmark datasets with multiple domains demonstrate that DapperFL outperforms several state-of-the-art FL frameworks by up to 2.28%, while achieving significant model volume reductions of 20% to 80%. Our code is available at: https://github.com/jyzgh/DapperFL. | https://openreview.net/pdf/40235b2ea6b49d81841886f194bd9d4a2897ff15.pdf
DenoiseRep: Denoising Model for Representation Learning | https://openreview.net/forum?id=OycU0bAus6 | https://openreview.net/forum?id=OycU0bAus6 | zhengrui Xu,Guan'an Wang,Xiaowen Huang,Jitao Sang | NIPS 2024,Oral | The denoising model has been proven a powerful generative model but has little exploration of discriminative tasks. Representation learning is important in discriminative tasks, which is defined as *"learning representations (or features) of the data that make it easier to extract useful information when building classifiers or other predictors"*. In this paper, we propose a novel Denoising Model for Representation Learning (*DenoiseRep*) to improve feature discrimination with joint feature extraction and denoising. *DenoiseRep* views each embedding layer in a backbone as a denoising layer, processing the cascaded embedding layers as if we are recursively denoising features step-by-step. This unifies the frameworks of feature extraction and denoising, where the former progressively embeds features from low-level to high-level, and the latter recursively denoises features step-by-step. After that, *DenoiseRep* fuses the parameters of feature extraction and denoising layers, and *theoretically demonstrates* its equivalence before and after the fusion, thus making feature denoising computation-free. *DenoiseRep* is a label-free algorithm that incrementally improves features but is also complementary to labels if available. Experimental results on various discriminative vision tasks, including re-identification (Market-1501, DukeMTMC-reID, MSMT17, CUHK-03, VehicleID), image classification (ImageNet, CUB200, Oxford-Pet, Flowers), object detection (COCO), and image segmentation (ADE20K) show stability and impressive improvements. We also validate its effectiveness on the CNN (ResNet) and Transformer (ViT, Swin, VMamba) architectures. | https://openreview.net/pdf/ccc22185c7b5ceeab3929bff884d84473546f5d7.pdf
Optimal Parallelization of Boosting | https://openreview.net/forum?id=rtz4df9IF1 | https://openreview.net/forum?id=rtz4df9IF1 | Arthur da Cunha,Mikael Møller Høgsgaard,Kasper Green Larsen | NIPS 2024,Oral | Recent works on the parallel complexity of Boosting have established strong lower bounds on the tradeoff between the number of training rounds $p$ and the total parallel work per round $t$.
These works have also presented highly non-trivial parallel algorithms that shed light on different regions of this tradeoff.
Despite these advancements, a significant gap persists between the theoretical lower bounds and the performance of these algorithms across much of the tradeoff space.
In this work, we essentially close this gap by providing both improved lower bounds on the parallel complexity of weak-to-strong learners, and a parallel Boosting algorithm whose performance matches these bounds across the entire $p$ vs. $t$ compromise spectrum, up to logarithmic factors.
Ultimately, this work settles the parallel complexity of Boosting algorithms that are nearly sample-optimal. | https://openreview.net/pdf/b88f812c42a45b79e5e8663c27463c4580ab45a6.pdf |
Decompose, Analyze and Rethink: Solving Intricate Problems with Human-like Reasoning Cycle | https://openreview.net/forum?id=NPKZF1WDjZ | https://openreview.net/forum?id=NPKZF1WDjZ | Shangzi Xue,Zhenya Huang,Jiayu Liu,Xin Lin,Yuting Ning,Binbin Jin,Xin Li,Qi Liu | NIPS 2024,Oral | In this paper, we introduce DeAR (_Decompose-Analyze-Rethink_), a framework that iteratively builds a reasoning tree to tackle intricate problems within a single large language model (LLM). Unlike approaches that extend or search for rationales, DeAR is featured by 1) adopting a tree-based question decomposition manner to plan the organization of rationales, which mimics the logical planning inherent
in human cognition; 2) globally updating the rationales at each reasoning step through natural language feedback. Specifically, the _Decompose_ stage decomposes the question into simpler sub-questions, storing them as new nodes; the _Analyze_ stage generates and self-checks rationales for sub-questions at each node level; and the _Rethink_ stage updates parent-node rationales based on feedback from their child nodes. By generating and updating the reasoning process from a more global perspective, DeAR constructs more adaptive and accurate logical structures for complex problems, facilitating timely error correction compared to rationale-extension and search-based approaches such as Tree-of-Thoughts (ToT) and Graph-of-Thoughts (GoT). We conduct extensive experiments on three reasoning benchmarks, including ScienceQA, StrategyQA, and GSM8K, which cover a variety of reasoning tasks, demonstrating that our approach significantly reduces logical errors and enhances performance across various LLMs. Furthermore, we validate that DeAR is an efficient method that achieves a superior trade-off between accuracy and reasoning time compared to ToT and GoT. | https://openreview.net/pdf/48641218f9362ec9ed75e6482a2030d00757c6d8.pdf
Bayesian-guided Label Mapping for Visual Reprogramming | https://openreview.net/forum?id=135eKqDoRR | https://openreview.net/forum?id=135eKqDoRR | Chengyi Cai,Zesheng Ye,Lei Feng,Jianzhong Qi,Feng Liu | NIPS 2024,Oral | *Visual reprogramming* (VR) leverages the intrinsic capabilities of pretrained vision models by adapting their input or output interfaces to solve downstream tasks whose labels (i.e., downstream labels) might be totally different from the labels associated with the pretrained models (i.e., pretrained labels).
When adapting the output interface, label mapping methods transform the pretrained labels to downstream labels by establishing a gradient-free one-to-one correspondence between the two sets of labels.
However, in this paper, we reveal that one-to-one mappings may overlook the complex relationship between pretrained and downstream labels. Motivated by this observation, we propose a ***B**ayesian-guided **L**abel **M**apping* (BLM) method.
BLM constructs an iteratively-updated probabilistic label mapping matrix, with each element quantifying a pairwise relationship between pretrained and downstream labels.
The assignment of values to the constructed matrix is guided by Bayesian conditional probability, considering the joint distribution of the downstream labels and the labels predicted by the pretrained model on downstream samples. Experiments conducted on both pretrained vision models (e.g., ResNeXt) and vision-language models (e.g., CLIP) demonstrate the superior performance of BLM over existing label mapping methods. The success of BLM also offers a probabilistic lens through which to understand and analyze the effectiveness of VR.
Our code is available at https://github.com/tmlr-group/BayesianLM. | https://openreview.net/pdf/5bd51ea14b1857a137832007130aaf712c5b6a63.pdf |
Policy Learning from Tutorial Books via Understanding, Rehearsing and Introspecting | https://openreview.net/forum?id=Ddak3nSqQM | https://openreview.net/forum?id=Ddak3nSqQM | Xiong-Hui Chen,Ziyan Wang,Yali Du,Shengyi Jiang,Meng Fang,Yang Yu,Jun Wang | NIPS 2024,Oral | When humans need to learn a new skill, we can acquire knowledge through written books, including textbooks, tutorials, etc. However, current research for decision-making, like reinforcement learning (RL), has primarily required numerous real interactions with the target environment to learn a skill, while failing to utilize the existing knowledge already summarized in the text. The success of Large Language Models (LLMs) sheds light on utilizing such knowledge behind the books. In this paper, we discuss a new policy learning problem called Policy Learning from tutorial Books (PLfB) upon the shoulders of LLMs’ systems, which aims to leverage rich resources such as tutorial books to derive a policy network. Inspired by how humans learn from books, we solve the problem via a three-stage framework: Understanding, Rehearsing, and Introspecting (URI). In particular, it first rehearses decision-making trajectories based on the derived knowledge after understanding the books, then introspects in the imaginary dataset to distill a policy network.
We build two benchmarks for PLfB based on Tic-Tac-Toe and Football games. In experiments, URI's policy achieves at least a 44% net win rate against GPT-based agents without any real data; in the Football game, which is a complex scenario, URI's policy beats the built-in AIs with a 37% winning rate, while the GPT-based agent can only achieve a 6% winning rate. The project page: https://plfb-football.github.io. | https://openreview.net/pdf/f4d95b3399a1323142228b0362d42345119de142.pdf
GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation | https://openreview.net/forum?id=SSCtCq2MH2 | https://openreview.net/forum?id=SSCtCq2MH2 | Junhao Cai,Yuji Yang,Weihao Yuan,Yisheng HE,Zilong Dong,Liefeng Bo,Hui Cheng,Qifeng Chen | NIPS 2024,Oral | This paper studies the problem of estimating physical properties (system identification) through visual observations. To facilitate geometry-aware guidance in physical property estimation, we introduce a novel hybrid framework that leverages 3D Gaussian representation to not only capture explicit shapes but also enable the simulated continuum to render object masks as 2D shape surrogates during training. We propose a new dynamic 3D Gaussian framework based on motion factorization to recover the object as 3D Gaussian point sets across different time states. Furthermore, we develop a coarse-to-fine filling strategy to generate the density fields of the object from the Gaussian reconstruction, allowing for the extraction of object continuums along with their surfaces and the integration of Gaussian attributes into these continuum. In addition to the extracted object surfaces, the Gaussian-informed continuum also enables the rendering of object masks during simulations, serving as 2D-shape guidance for physical property estimation. Extensive experimental evaluations demonstrate that our pipeline achieves state-of-the-art performance across multiple benchmarks and metrics. Additionally, we illustrate the effectiveness of the proposed method through real-world demonstrations, showcasing its practical utility. Our project page is at https://jukgei.github.io/project/gic. | https://openreview.net/pdf/35d3fb34ac9b1b65eb96b7a01480e9b13895a855.pdf |
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression | https://openreview.net/forum?id=YvA8UF0I37 | https://openreview.net/forum?id=YvA8UF0I37 | Vladimir Malinovskii,Denis Mazur,Ivan Ilin,Denis Kuznedelev,Konstantin Pavlovich Burlachenko,Kai Yi,Dan Alistarh,Peter Richtárik | NIPS 2024,Oral | There has been significant interest in "extreme" compression of large language models (LLMs), i.e. to 1-2 bits per parameter, which allows such models to be executed efficiently on resource-constrained devices.
Existing work focused on improved one-shot quantization techniques and weight representations; yet, purely post-training approaches are reaching diminishing returns in terms of the accuracy-vs-bit-width trade-off. State-of-the-art quantization methods such as QuIP# and AQLM include fine-tuning (part of) the compressed parameters over a limited amount of calibration data; however, such fine-tuning techniques over compressed weights often make exclusive use of straight-through estimators (STE), whose performance is not well-understood in this setting.
In this work, we question the use of STE for extreme LLM compression, showing that it can be sub-optimal, and perform a systematic study of quantization-aware fine-tuning strategies for LLMs.
We propose PV-Tuning - a representation-agnostic framework that generalizes and improves upon existing fine-tuning strategies, and provides convergence guarantees in restricted cases.
On the practical side, when used for 1-2 bit vector quantization, PV-Tuning outperforms prior techniques for highly-performant models such as Llama and Mistral.
Using PV-Tuning, we achieve the first Pareto-optimal quantization for Llama-2 family models at 2 bits per parameter. | https://openreview.net/pdf/a41bd553618c035e26d1f1f6a8ebd19108274f50.pdf |
RL-GPT: Integrating Reinforcement Learning and Code-as-policy | https://openreview.net/forum?id=LEzx6QRkRH | https://openreview.net/forum?id=LEzx6QRkRH | Shaoteng Liu,Haoqi Yuan,Minda Hu,Yanwei Li,Yukang Chen,Shu Liu,Zongqing Lu,Jiaya Jia | NIPS 2024,Oral | Large Language Models (LLMs) have demonstrated proficiency in utilizing various tools by coding, yet they face limitations in handling intricate logic and precise control. In embodied tasks, high-level planning is amenable to direct coding, while low-level actions often necessitate task-specific refinement, such as Reinforcement Learning (RL). To seamlessly integrate both modalities, we introduce a two-level hierarchical framework, RL-GPT, comprising a slow agent and a fast agent. The slow agent analyzes actions suitable for coding, while the fast agent executes coding tasks. This decomposition effectively focuses each agent on specific tasks, proving highly efficient within our pipeline. Our approach outperforms traditional RL methods and existing GPT agents, demonstrating superior efficiency. In the Minecraft game, it rapidly obtains diamonds within a single day on an RTX3090. Additionally, it achieves SOTA performance across all designated MineDojo tasks. | https://openreview.net/pdf/8489e6d14edc65b16f5f04f6773edb790ac430a4.pdf |
Statistical Efficiency of Distributional Temporal Difference Learning | https://openreview.net/forum?id=eWUM5hRYgH | https://openreview.net/forum?id=eWUM5hRYgH | Yang Peng,Liangyu Zhang,Zhihua Zhang | NIPS 2024,Oral | Distributional reinforcement learning (DRL) has achieved empirical success in various domains.
One of the core tasks in the field of DRL is distributional policy evaluation, which involves estimating the return distribution $\eta^\pi$ for a given policy $\pi$.
Distributional temporal difference learning has accordingly been proposed,
which is an extension of temporal difference (TD) learning in the classic RL area.
In the tabular case, Rowland et al. [2018] and Rowland et al. [2023] proved the asymptotic convergence of two instances of distributional TD, namely categorical temporal difference learning (CTD) and quantile temporal difference learning (QTD), respectively.
In this paper, we go a step further and analyze the finite-sample performance of distributional TD.
To facilitate theoretical analysis, we propose a non-parametric distributional TD learning (NTD).
For a $\gamma$-discounted infinite-horizon tabular Markov decision process,
we show that for NTD we need $\widetilde O\left(\frac{1}{\varepsilon^{2p}(1-\gamma)^{2p+1}}\right)$ iterations to achieve an $\varepsilon$-optimal estimator with high probability, when the estimation error is measured by the $p$-Wasserstein distance.
This sample complexity bound is minimax optimal (up to logarithmic factors) in the case of the $1$-Wasserstein distance.
To achieve this, we establish a novel Freedman's inequality in Hilbert spaces, which would be of independent interest.
In addition, we revisit CTD, showing that the same non-asymptotic convergence bounds hold for CTD in the case of the $p$-Wasserstein distance. | https://openreview.net/pdf/3002a75ebfe6a386efc8dee88d8a2382d1d837e1.pdf |
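The $p$-Wasserstein error metric above is concrete to compute for quantile-style return representations like those used by QTD: for two one-dimensional distributions given as equally weighted samples of the same size, the optimal coupling simply matches sorted samples. A minimal sketch of the metric only (not of the paper's NTD algorithm):

```python
import numpy as np

def wasserstein_p(x, y, p=1):
    """p-Wasserstein distance between two 1-D empirical distributions
    with equally weighted supports of the same size: in one dimension
    the optimal coupling pairs sorted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal-size supports assumed"
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

# Shifting a return distribution by a constant reward moves it by
# exactly that amount in any W_p.
returns = np.array([0.0, 1.0, 2.0, 5.0])
print(wasserstein_p(returns, returns + 0.5, p=1))  # 0.5
```

This is why estimation error in $W_p$ translates directly into per-quantile accuracy for atom-based representations.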
Exploitation of a Latent Mechanism in Graph Contrastive Learning: Representation Scattering | https://openreview.net/forum?id=R8SolCx62K | https://openreview.net/forum?id=R8SolCx62K | Dongxiao He,Lianze Shan,Jitao Zhao,Hengrui Zhang,Zhen Wang,Weixiong Zhang | NIPS 2024,Oral | Graph Contrastive Learning (GCL) has emerged as a powerful approach for generating graph representations without the need for manual annotation. Most advanced GCL methods fall into three main frameworks: node discrimination, group discrimination, and bootstrapping schemes, all of which achieve comparable performance. However, the underlying mechanisms and factors that contribute to their effectiveness are not yet fully understood. In this paper, we revisit these frameworks and reveal a common mechanism—representation scattering—that significantly enhances their performance. Our discovery highlights an essential feature of GCL and unifies these seemingly disparate methods under the concept of representation scattering. To leverage this insight, we introduce Scattering Graph Representation Learning (SGRL), a novel framework that incorporates a new representation scattering mechanism designed to enhance representation diversity through a center-away strategy. Additionally, considering the interconnected nature of graphs, we develop a topology-based constraint mechanism that integrates graph structural properties with representation scattering to prevent excessive scattering. We extensively evaluate SGRL across various downstream tasks on benchmark datasets, demonstrating its efficacy and superiority over existing GCL methods. Our findings underscore the significance of representation scattering in GCL and provide a structured framework for harnessing this mechanism to advance graph representation learning. The code of SGRL is at https://github.com/hedongxiao-tju/SGRL. | https://openreview.net/pdf/e21a9b3822e99ccaefbd6f6562cd41ff019e09ba.pdf |
You Only Cache Once: Decoder-Decoder Architectures for Language Models | https://openreview.net/forum?id=25Ioxw576r | https://openreview.net/forum?id=25Ioxw576r | Yutao Sun,Li Dong,Yi Zhu,Shaohan Huang,Wenhui Wang,Shuming Ma,Quanlu Zhang,Jianyong Wang,Furu Wei | NIPS 2024,Oral | We introduce a decoder-decoder architecture, YOCO, for large language models, which only caches key-value pairs once. It consists of two components, i.e., a cross-decoder stacked upon a self-decoder. The self-decoder efficiently encodes global key-value (KV) caches that are reused by the cross-decoder via cross-attention. The overall model behaves like a decoder-only Transformer, although YOCO only caches once. The design substantially reduces GPU memory demands, yet retains global attention capability. Additionally, the computation flow enables prefilling to early exit without changing the final output, thereby significantly speeding up the prefill stage. Experimental results demonstrate that YOCO achieves favorable performance compared to Transformer in various settings of scaling up model size and number of training tokens. We also extend YOCO to 1M context length with near-perfect needle retrieval accuracy. The profiling results show that YOCO improves inference memory, prefill latency, and throughput by orders of magnitude across context lengths and model sizes. | https://openreview.net/pdf/c001fdfd3a2894f8c62da3eef3be8317b3800c61.pdf |
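YOCO's memory saving comes from a single global KV cache shared by every cross-decoder layer. A toy numpy sketch of that data flow with random weights (causal masking, multi-head structure, and the efficient self-decoder are all omitted; this illustrates only the cache-once idea, not the actual YOCO architecture):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
T, d, n_cross_layers = 6, 8, 4

# Stand-in for the self-decoder's hidden states over a length-T prefix.
h = rng.normal(size=(T, d))

# The global KV cache is produced ONCE from the self-decoder output...
K_cache = h @ rng.normal(size=(d, d))
V_cache = h @ rng.normal(size=(d, d))

# ...and every cross-decoder layer reuses it via cross-attention,
# so KV memory is constant in depth instead of growing per layer.
x = h.copy()
for _ in range(n_cross_layers):
    Q = x @ rng.normal(size=(d, d))
    x = x + softmax(Q @ K_cache.T / np.sqrt(d)) @ V_cache
```

A conventional decoder-only Transformer would instead store a distinct `(K, V)` pair for each of the `n_cross_layers` layers.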
Divide-and-Conquer Meets Consensus: Unleashing the Power of Functions in Code Generation | https://openreview.net/forum?id=cFqAANINgW | https://openreview.net/forum?id=cFqAANINgW | Jingchang Chen,Hongxuan Tang,Zheng Chu,Qianglong Chen,Zekun Wang,Ming Liu,Bing Qin | NIPS 2024,Oral | Despite recent progress made by large language models in code generation, they still struggle with programs that meet complex requirements. Recent work utilizes plan-and-solve decomposition to decrease the complexity and leverage self-tests to refine the generated program. Yet, planning deep-inside requirements in advance can be challenging, and the tests need to be accurate to accomplish self-improvement. To this end, we propose FunCoder, a code generation framework incorporating the divide-and-conquer strategy with functional consensus. Specifically, FunCoder recursively branches off sub-functions as smaller goals during code generation, represented by a tree hierarchy. These sub-functions are then composited to attain more complex objectives. Additionally, we designate functions via a consensus formed by identifying similarities in program behavior, mitigating error propagation. FunCoder outperforms state-of-the-art methods by +9.8% on average in HumanEval, MBPP, xCodeEval and MATH with GPT-3.5 and GPT-4. Moreover, our method demonstrates superiority on smaller models: With FunCoder, StableCode-3b surpasses GPT-3.5 by +18.6% and achieves 97.7% of GPT-4's performance on HumanEval. Further analysis reveals that our proposed dynamic function decomposition is capable of handling complex requirements, and the functional consensus prevails over self-testing in correctness evaluation. | https://openreview.net/pdf/d6fd653a659d95ce4466896d76af521361a4e0ef.pdf |
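The functional-consensus idea (among sampled candidate implementations, prefer the one whose observable behavior agrees with the most others) can be sketched with a toy scorer. The names and majority-by-similarity voting rule below are simplified illustrations, not FunCoder's exact procedure:

```python
def functional_consensus(candidates, inputs):
    """Pick the candidate function whose behavior agrees with the most
    other candidates on the given inputs (majority by similarity)."""
    def outputs(f):
        out = []
        for x in inputs:
            try:
                out.append(f(x))
            except Exception:
                out.append(None)  # a crashing candidate agrees with nobody
        return tuple(out)

    behav = [outputs(f) for f in candidates]
    # Count agreements with the *other* candidates (subtract self-match).
    scores = [sum(b == other for other in behav) - 1 for b in behav]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores

# Three sampled implementations of "square a number"; one is buggy.
cands = [lambda x: x * x, lambda x: x ** 2, lambda x: x + x]
best, scores = functional_consensus(cands, inputs=[0, 1, 2, 3])
print(scores)  # [1, 1, 0] -- the buggy x + x agrees only with itself
```

Unlike self-testing, this requires no trusted test oracle: the two correct samples out-vote the buggy one by behavioral agreement alone.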
DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs | https://openreview.net/forum?id=mp8u2Pcmqz | https://openreview.net/forum?id=mp8u2Pcmqz | Haokun Lin,Haobo Xu,Yichen Wu,Jingzhi Cui,Yingtao Zhang,Linzhan Mou,Linqi Song,Zhenan Sun,Ying Wei | NIPS 2024,Oral | Quantization of large language models (LLMs) faces significant challenges, particularly due to the presence of outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, which are activations across all tokens with relatively large magnitudes. However, these methods struggle with smoothing Massive Outliers that display significantly larger values, which leads to significant performance degradation in low-bit quantization. In this paper, we introduce DuQuant, a novel approach that utilizes rotation and permutation transformations to more effectively mitigate both massive and normal outliers. First, DuQuant starts by constructing the rotation matrix, using specific outlier dimensions as prior knowledge, to redistribute outliers to adjacent channels by block-wise rotation. Second, we further employ a zigzag permutation to balance the distribution of outliers across blocks, thereby reducing block-wise variance. A subsequent rotation further smooths the activation landscape, enhancing model performance. DuQuant simplifies the quantization process and excels in managing outliers, outperforming the state-of-the-art baselines across various sizes and types of LLMs on multiple tasks, even with 4-bit weight-activation quantization. Our code is available at https://github.com/Hsu1023/DuQuant. | https://openreview.net/pdf/e940d83a63794869ac25c4a08c075cc76b1ebdef.pdf |
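Why a rotation helps quantization: an orthogonal transform is exactly invertible (so it can be folded into adjacent weights) yet spreads a massive outlier channel across every channel in its block, shrinking the dynamic range by roughly a factor of sqrt(block size). A toy illustration with a normalized Hadamard rotation; DuQuant's actual construction additionally uses outlier-dimension priors and a zigzag permutation, which are not shown:

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix (orthogonal) via Sylvester's
    construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

n = 8
rng = np.random.default_rng(0)
x = rng.normal(scale=0.5, size=n)
x[3] = 100.0                      # one massive outlier channel

R = hadamard(n)
y = x @ R                         # block-wise rotation of the activation
print(np.abs(x).max(), np.abs(y).max())  # outlier magnitude shrinks ~sqrt(n)x
```

Because `R` is orthogonal, `y @ R.T` recovers `x` exactly, so no information is lost before quantization.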
Not All Tokens Are What You Need for Pretraining | https://openreview.net/forum?id=0NMzBwqaAJ | https://openreview.net/forum?id=0NMzBwqaAJ | Zhenghao Lin,Zhibin Gou,Yeyun Gong,Xiao Liu,yelong shen,Ruochen Xu,Chen Lin,Yujiu Yang,Jian Jiao,Nan Duan,Weizhu Chen | NIPS 2024,Oral | Previous language model pre-training methods have uniformly applied a next-token prediction loss to all training tokens. Challenging this norm, we posit that ''Not all tokens in a corpus are equally important for language model training''. Our initial analysis examines the token-level training dynamics of language models, revealing distinct loss patterns for different tokens. Leveraging these insights, we introduce a new language model called Rho-1. Unlike traditional LMs that learn to predict every next token in a corpus, Rho-1 employs Selective Language Modeling (SLM), which selectively trains on useful tokens that align with the desired distribution. This approach involves scoring training tokens using a reference model, and then training the language model with a focused loss on tokens with higher scores. When continually pretraining on the 15B OpenWebMath corpus, Rho-1 yields an absolute improvement in few-shot accuracy of up to 30% in 9 math tasks. After fine-tuning, Rho-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on the MATH dataset, respectively - matching DeepSeekMath with only 3% of the pretraining tokens. Furthermore, when continually pretraining on 80B general tokens, Rho-1 achieves a 6.8% average enhancement across 15 diverse tasks, increasing both data efficiency and performance of language model pre-training. | https://openreview.net/pdf/479db135fe05befa88285a35b9f23c2e1122fa8f.pdf |
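The token-selection step can be sketched in a few lines: score each training token by its excess loss over a reference model, then keep only the top-scoring fraction for the training loss. This is a simplified illustration of Selective Language Modeling; the paper's exact scoring and selection details may differ:

```python
import numpy as np

def slm_loss(train_losses, ref_losses, keep_ratio=0.6):
    """Selective Language Modeling sketch: score tokens by excess loss
    over a reference model and train only on the top-scoring fraction."""
    train = np.asarray(train_losses, dtype=float)
    excess = train - np.asarray(ref_losses, dtype=float)
    k = max(1, int(len(excess) * keep_ratio))
    idx = np.argsort(excess)[-k:]          # tokens with largest excess loss
    return train[idx].mean(), np.sort(idx)

train = [2.0, 0.1, 3.0, 0.2, 1.5]   # current model's per-token CE loss
ref   = [0.5, 0.1, 0.4, 0.3, 1.4]   # reference model's per-token CE loss
loss, kept = slm_loss(train, ref, keep_ratio=0.4)
print(loss, kept)   # focuses the loss on the two highest-scoring tokens
```

Tokens the reference model already predicts as well as (or better than) the training model contribute nothing, concentrating the gradient on "useful" tokens.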
Achieving Optimal Clustering in Gaussian Mixture Models with Anisotropic Covariance Structures | https://openreview.net/forum?id=ge8GZn8Gtu | https://openreview.net/forum?id=ge8GZn8Gtu | Xin Chen,Anderson Ye Zhang | NIPS 2024,Oral | We study clustering under anisotropic Gaussian Mixture Models (GMMs), where covariance matrices from different clusters are unknown and are not necessarily the identity matrix. We analyze two anisotropic scenarios: homogeneous, with identical covariance matrices, and heterogeneous, with distinct matrices per cluster. For these models, we derive minimax lower bounds that illustrate the critical influence of covariance structures on clustering accuracy. To solve the clustering problem, we consider a variant of Lloyd's algorithm, adapted to estimate and utilize covariance information iteratively. We prove that the adjusted algorithm not only achieves the minimax optimality but also converges within a logarithmic number of iterations, thus bridging the gap between theoretical guarantees and practical efficiency. | https://openreview.net/pdf/43a0e0281aa6e1dcadbd067c201ceb2c07c5bf4c.pdf |
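A covariance-adapted Lloyd iteration for the homogeneous case (identical covariances) can be sketched as follows: assign points by Mahalanobis distance under the current covariance estimate, then re-estimate the means and the shared covariance from within-cluster residuals. This is an illustration of the iterative idea only, not the paper's exact algorithm or initialization:

```python
import numpy as np

def lloyd_aniso(X, k, init_idx, iters=20):
    """Lloyd-style clustering for homogeneous anisotropic GMMs:
    alternate Mahalanobis-distance assignment with re-estimation of
    means and the shared covariance."""
    mu = X[list(init_idx)].astype(float)
    Sigma = np.eye(X.shape[1])
    for _ in range(iters):
        P = np.linalg.inv(Sigma)
        # Squared Mahalanobis distance of every point to every center.
        d = np.stack([np.einsum('nd,de,ne->n', X - m, P, X - m) for m in mu])
        z = d.argmin(axis=0)
        mu = np.stack([X[z == j].mean(0) if np.any(z == j) else mu[j]
                       for j in range(k)])
        resid = X - mu[z]
        Sigma = resid.T @ resid / len(X) + 1e-6 * np.eye(X.shape[1])
    return z, mu, Sigma

# Two elongated, well-separated blobs with the same covariance.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2)) * [2.0, 0.3] + [0.0, 5.0]
B = rng.normal(size=(100, 2)) * [2.0, 0.3] + [0.0, -5.0]
X = np.vstack([A, B])
z, mu, Sigma = lloyd_aniso(X, 2, init_idx=[0, 100])
```

Estimating the covariance inside the loop is what lets the assignment step downweight the elongated (high-variance) direction, the adjustment the abstract describes over vanilla Lloyd's.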
Return of Unconditional Generation: A Self-supervised Representation Generation Method | https://openreview.net/forum?id=clTa4JFBML | https://openreview.net/forum?id=clTa4JFBML | Tianhong Li,Dina Katabi,Kaiming He | NIPS 2024,Oral | Unconditional generation -- the problem of modeling data distribution without relying on human-annotated labels -- is a long-standing and fundamental challenge in generative models, creating a potential of learning from large-scale unlabeled data. In the literature, the generation quality of an unconditional method has been much worse than that of its conditional counterpart. This gap can be attributed to the lack of semantic information provided by labels. In this work, we show that one can close this gap by generating semantic representations in the representation space produced by a self-supervised encoder. These representations can be used to condition the image generator. This framework, called Representation-Conditioned Generation (RCG), provides an effective solution to the unconditional generation problem without using labels. Through comprehensive experiments, we observe that RCG significantly improves unconditional generation quality: e.g., it achieves a new state-of-the-art FID of 2.15 on ImageNet 256x256, largely reducing the previous best of 5.91 by a relative 64%. Our unconditional results are situated in the same tier as the leading class-conditional ones. We hope these encouraging observations will attract the community's attention to the fundamental problem of unconditional generation. Code is available at [https://github.com/LTH14/rcg](https://github.com/LTH14/rcg). | https://openreview.net/pdf/5eb9f339be4769dbc0a7ac40c1b8e020626b9052.pdf |
Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs | https://openreview.net/forum?id=Vi8AepAXGy | https://openreview.net/forum?id=Vi8AepAXGy | Shengbang Tong,Ellis L Brown II,Penghao Wu,Sanghyun Woo,ADITHYA JAIRAM IYER,Sai Charitha Akula,Shusheng Yang,Jihan Yang,Manoj Middepogu,Ziteng Wang,Xichen Pan,Rob Fergus,Yann LeCun,Saining Xie | NIPS 2024,Oral | We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach. While stronger language models can enhance multimodal capabilities, the design choices for vision components are often insufficiently explored and disconnected from visual representation learning research. This gap hinders accurate sensory grounding in real-world scenarios. Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations, offering new insights into different models and architectures—self-supervised, strongly supervised, or combinations thereof—based on experiments with over 15 vision models. We critically examine existing MLLM benchmarks, addressing the difficulties involved in consolidating and interpreting results from various tasks. To further improve visual grounding, we propose spatial vision aggregator (SVA), a dynamic and spatially-aware connector that integrates vision features with LLMs while reducing the number of tokens. Additionally, we discuss the curation of high-quality visual instruction-tuning data from publicly available sources, emphasizing the importance of distribution balancing. Collectively, Cambrian-1 not only achieves state-of-the-art performances but also serves as a comprehensive, open cookbook for instruction-tuned MLLMs. We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes. We hope our release will inspire and accelerate advancements in multimodal systems and visual representation learning. 
| https://openreview.net/pdf/6e2bfbfc4a63dae9ce2226db223d05c1152a1fb8.pdf |
MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making | https://openreview.net/forum?id=EKdk4vxKO4 | https://openreview.net/forum?id=EKdk4vxKO4 | Yubin Kim,Chanwoo Park,Hyewon Jeong,Yik Siu Chan,Xuhai Xu,Daniel McDuff,Hyeonhoon Lee,Marzyeh Ghassemi,Cynthia Breazeal,Hae Won Park | NIPS 2024,Oral | Foundation models are becoming valuable tools in medicine. Yet despite their promise, the best way to leverage Large Language Models (LLMs) in complex medical tasks remains an open question. We introduce a novel multi-agent framework, named **M**edical **D**ecision-making **Agents** (**MDAgents**) that helps to address this gap by automatically assigning a collaboration structure to a team of LLMs. The assigned solo or group collaboration structure is tailored to the medical task at hand, a simple emulation inspired by the way real-world medical decision-making processes are adapted to tasks of different complexities. We evaluate our framework and baseline methods using state-of-the-art LLMs across a suite of real-world medical knowledge and clinical diagnosis benchmarks, including a comparison of LLMs’ medical complexity classification against human physicians. MDAgents achieved the **best performance in seven out of ten** benchmarks on tasks requiring an understanding of medical knowledge and multi-modal reasoning, showing a significant **improvement of up to 4.2\%** ($p$ < 0.05) compared to previous methods' best performances. Ablation studies reveal that MDAgents effectively determines medical complexity to optimize for efficiency and accuracy across diverse medical tasks. Notably, the combination of moderator review and external medical knowledge in group collaboration resulted in an average accuracy **improvement of 11.8\%**. Our code can be found at https://github.com/mitmedialab/MDAgents. | https://openreview.net/pdf/9993edbaf6679577c07aeae6b39fe0a546abaca1.pdf |
Graph Diffusion Transformers for Multi-Conditional Molecular Generation | https://openreview.net/forum?id=cfrDLD1wfO | https://openreview.net/forum?id=cfrDLD1wfO | Gang Liu,Jiaxin Xu,Tengfei Luo,Meng Jiang | NIPS 2024,Oral | Inverse molecular design with diffusion models holds great potential for advancements in material and drug discovery. Despite success in unconditional molecule generation, integrating multiple properties such as synthetic score and gas permeability as condition constraints into diffusion models remains unexplored. We present the Graph Diffusion Transformer (Graph DiT) for multi-conditional molecular generation. Graph DiT has a condition encoder to learn the representation of numerical and categorical properties and utilizes a Transformer-based graph denoiser to achieve molecular graph denoising under conditions. Unlike previous graph diffusion models that add noise separately on the atoms and bonds in the forward diffusion process, we propose a graph-dependent noise model for training Graph DiT, designed to accurately estimate graph-related noise in molecules. We extensively validate the Graph DiT for multi-conditional polymer and small molecule generation. Results demonstrate our superiority across metrics from distribution learning to condition control for molecular properties. A polymer inverse design task for gas separation with feedback from domain experts further demonstrates its practical utility. The code is available at https://github.com/liugangcode/Graph-DiT. | https://openreview.net/pdf/46c02e1bf7e313ee41cca4c78d39825812de8c3d.pdf |
MeshFormer : High-Quality Mesh Generation with 3D-Guided Reconstruction Model | https://openreview.net/forum?id=x7pjdDod6Z | https://openreview.net/forum?id=x7pjdDod6Z | Minghua Liu,Chong Zeng,Xinyue Wei,Ruoxi Shi,Linghao Chen,Chao Xu,Mengqi Zhang,Zhaoning Wang,Xiaoshuai Zhang,Isabella Liu,Hongzhi Wu,Hao Su | NIPS 2024,Oral | Open-world 3D reconstruction models have recently garnered significant attention. However, without sufficient 3D inductive bias, existing methods typically entail expensive training costs and struggle to extract high-quality 3D meshes. In this work, we introduce MeshFormer, a sparse-view reconstruction model that explicitly leverages 3D native structure, input guidance, and training supervision. Specifically, instead of using a triplane representation, we store features in 3D sparse voxels and combine transformers with 3D convolutions to leverage an explicit 3D structure and projective bias. In addition to sparse-view RGB input, we require the network to take input and generate corresponding normal maps. The input normal maps can be predicted by 2D diffusion models, significantly aiding in the guidance and refinement of the geometry's learning. Moreover, by combining Signed Distance Function (SDF) supervision with surface rendering, we directly learn to generate high-quality meshes without the need for complex multi-stage training processes. By incorporating these explicit 3D biases, MeshFormer can be trained efficiently and deliver high-quality textured meshes with fine-grained geometric details. It can also be integrated with 2D diffusion models to enable fast single-image-to-3D and text-to-3D tasks. **Videos are available at https://meshformer3d.github.io/** | https://openreview.net/pdf/0137993914b1c34b105ba8ce5545d99389e3b12a.pdf |
Get Rid of Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework | https://openreview.net/forum?id=tnh4LK72yj | https://openreview.net/forum?id=tnh4LK72yj | Zhongchao Yi,Zhengyang Zhou,Qihe Huang,Yanjiang Chen,Liheng Yu,Xu Wang,Yang Wang | NIPS 2024,Oral | Spatiotemporal learning has become a pivotal technique to enable urban intelligence. Traditional spatiotemporal models mostly focus on a specific task by assuming the same distribution between training and testing sets. However, given that urban systems are usually dynamic and multi-sourced with imbalanced data distributions, current task-specific models fail to generalize to new urban conditions and adapt to new domains without explicitly modeling interdependencies across various dimensions and types of urban data. To this end, we argue that it is essential to propose a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to empower collective urban intelligence, which reforms urban spatiotemporal learning from single-domain to cooperatively multi-dimensional and multi-task learning. Specifically, CMuST proposes a new multi-dimensional spatiotemporal interaction network (MSTI) to allow cross-interactions between context and main observations as well as self-interactions within spatial and temporal aspects to be exposed, which is also the core for capturing task-level commonality and personalization. To ensure continuous task learning, a novel Rolling Adaptation training scheme (RoAda) is devised, which not only preserves task uniqueness by constructing data summarization-driven task prompts, but also harnesses correlated patterns among tasks by iterative model behavior modeling. We further establish a benchmark of three cities for multi-task spatiotemporal learning, and empirically demonstrate the superiority of CMuST via extensive evaluations on these datasets. CMuST achieves impressive improvements over existing SOTA methods on both few-shot streaming data and new domain tasks. Code is available at https://github.com/DILab-USTCSZ/CMuST. | https://openreview.net/pdf/97148ef3439d4c09aeb2847ed85a61ab7bd105d9.pdf |
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning | https://openreview.net/forum?id=qEpi8uWX3N | https://openreview.net/forum?id=qEpi8uWX3N | Chunlin Tian,Zhan Shi,Zhijiang Guo,Li Li,Cheng-zhong Xu | NIPS 2024,Oral | Adapting Large Language Models (LLMs) to new tasks through fine-tuning has been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) techniques, such as LoRA. However, these methods often underperform compared to full fine-tuning, particularly in scenarios involving complex datasets. This issue becomes even more pronounced in complex domains, highlighting the need for improved PEFT approaches that can achieve better performance. Through a series of experiments, we have uncovered two critical insights that shed light on the training and parameter inefficiency of LoRA. Building on these insights, we have developed HydraLoRA, a LoRA framework with an asymmetric structure that eliminates the need for domain expertise. Our experiments demonstrate that HydraLoRA outperforms other PEFT approaches, even those that rely on domain knowledge during the training and inference phases. Code is available at https://github.com/Clin0212/HydraLoRA. | https://openreview.net/pdf/60e4bb51758f975380df1586e785d29a101c7f4a.pdf |
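The asymmetric structure, as we understand it (one shared down-projection A with several up-projection heads B_i mixed by a router), can be sketched as below. The router form and all dimensions here are illustrative assumptions, not HydraLoRA's exact implementation:

```python
import numpy as np

def hydralora_forward(x, W, A, Bs, gate_w):
    """Asymmetric LoRA sketch: one shared A (d_in -> r) and multiple
    B_i heads (r -> d_out), mixed by softmax router weights."""
    logits = x @ gate_w                          # (k,) router logits
    g = np.exp(logits - logits.max())
    g = g / g.sum()                              # softmax gate
    h = A @ x                                    # shared low-rank code
    delta = sum(gi * (B @ h) for gi, B in zip(g, Bs))
    return W @ x + delta                         # frozen W plus adapter

rng = np.random.default_rng(0)
d_in, d_out, r, k = 6, 4, 2, 3
x = rng.normal(size=d_in)
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
Bs = [rng.normal(size=(d_out, r)) for _ in range(k)]
gate_w = rng.normal(size=(d_in, k))
y = hydralora_forward(x, W, A, Bs, gate_w)
```

Sharing a single A while specializing the B heads is what makes the structure asymmetric, in contrast to standard LoRA's one matched (A, B) pair.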
SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling | https://openreview.net/forum?id=mSaqxZVZW8 | https://openreview.net/forum?id=mSaqxZVZW8 | Dengwei Zhao,Shikui Tu,Lei Xu | NIPS 2024,Oral | Monte-Carlo tree search (MCTS) and reinforcement learning contributed crucially to the success of AlphaGo and AlphaZero, and A$^*$ is a tree search algorithm among the most well-known ones in the classical AI literature. MCTS and A$^*$ both perform heuristic search and are mutually beneficial. Efforts have been made toward the renaissance of A$^*$ from three possible aspects, two of which have been confirmed by studies in recent years, while the third is about the OPEN list that consists of open nodes of A$^*$ search, but still lacks deep investigation. This paper aims at the third, i.e., developing the Sampling-exploration enhanced A$^*$ (SeeA$^*$) search by constructing a dynamic subset of OPEN through a selective sampling process, such that the node with the best heuristic value in this subset instead of in the OPEN is expanded. Nodes with the best heuristic values in OPEN are most probably picked into this subset, but sometimes may not be included, which enables SeeA$^*$ to explore other promising branches. Three sampling techniques are presented for comparative investigations. Moreover, under the assumption about the distribution of prediction errors, we have theoretically shown the superior efficiency of SeeA$^*$ over A$^*$ search, particularly when the accuracy of the guiding heuristic function is insufficient. Experimental results on retrosynthetic planning in organic chemistry, logic synthesis in integrated circuit design, and the classical Sokoban game empirically demonstrate the efficiency of SeeA$^*$, in comparison with the state-of-the-art heuristic search algorithms. | https://openreview.net/pdf/fa5dedfe169ea46edcf332d8d7d9b5256b506793.pdf |
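The selective-sampling expansion can be sketched directly: instead of always popping the globally best node from OPEN, draw a random subset and expand the best node within it, so nodes that are good but not globally best are occasionally explored when the heuristic is unreliable. This is a minimal illustration of the mechanism (in the spirit of one of the paper's sampling techniques), not its exact implementation:

```python
import random

def seea_star_expand(open_list, sample_size, rng):
    """One SeeA*-style expansion: expand the best node of a random
    subset of OPEN rather than the best node of all of OPEN."""
    subset = rng.sample(open_list, min(sample_size, len(open_list)))
    best = min(subset, key=lambda node: node[0])   # node = (f_value, state)
    open_list.remove(best)
    return best

rng = random.Random(0)
open_list = [(7, "a"), (3, "b"), (9, "c"), (1, "d"), (5, "e")]
expanded = [seea_star_expand(open_list, sample_size=3, rng=rng)
            for _ in range(3)]
print(expanded)
```

With `sample_size` equal to `len(open_list)` this reduces to ordinary best-first A*-style expansion; smaller subsets trade exploitation for exploration.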
Improved Distribution Matching Distillation for Fast Image Synthesis | https://openreview.net/forum?id=tQukGCDaNT | https://openreview.net/forum?id=tQukGCDaNT | Tianwei Yin,Michaël Gharbi,Taesung Park,Richard Zhang,Eli Shechtman,Fredo Durand,William T. Freeman | NIPS 2024,Oral | Recent approaches have shown promises distilling expensive diffusion models into efficient one-step generators.
Amongst them, Distribution Matching Distillation (DMD) produces one-step generators that match their teacher in distribution, i.e., the distillation process does not enforce a one-to-one correspondence with the sampling trajectories of their teachers.
However, to ensure stable training in practice, DMD requires an additional regression loss computed using a large set of noise--image pairs, generated by the teacher with many steps of a deterministic sampler.
This is not only computationally expensive for large-scale text-to-image synthesis, but it also limits the student's quality, tying it too closely to the teacher's original sampling paths.
We introduce DMD2, a set of techniques that lift this limitation and improve DMD training.
First, we eliminate the regression loss and the need for expensive dataset construction.
We show that the resulting instability is due to the "fake" critic not estimating the distribution of generated samples with sufficient accuracy, and propose a two time-scale update rule as a remedy.
Second, we integrate a GAN loss into the distillation procedure, discriminating between generated samples and real images.
This lets us train the student model on real data, thus mitigating the imperfect "real" score estimation from the teacher model, and thereby enhancing quality.
Third, we introduce a new training procedure that enables multi-step sampling in the student, and addresses the training--inference input mismatch of previous work by simulating inference-time generator samples during training.
Taken together, our improvements set new benchmarks in one-step image generation, with FID scores of 1.28 on ImageNet-64×64 and 8.35 on zero-shot COCO 2014, surpassing the original teacher despite a 500X reduction in inference cost.
Further, we show our approach can generate megapixel images by distilling SDXL, demonstrating exceptional visual quality among few-step methods, and surpassing the teacher.
We release our code and pretrained models. | https://openreview.net/pdf/3c7ea6adb0b86f707c8c396aa752165bc482e55b.pdf |
E2E-MFD: Towards End-to-End Synchronous Multimodal Fusion Detection | https://openreview.net/forum?id=47loYmzxep | https://openreview.net/forum?id=47loYmzxep | Jiaqing Zhang,Mingxiang Cao,Weiying Xie,Jie Lei,DaixunLi,Wenbo Huang,Yunsong Li,Xue Yang | NIPS 2024,Oral | Multimodal image fusion and object detection are crucial for autonomous driving. While current methods have advanced the fusion of texture details and semantic information, their complex training processes hinder broader applications. Addressing this challenge, we introduce E2E-MFD, a novel end-to-end algorithm for multimodal fusion detection. E2E-MFD streamlines the process, achieving high performance with a single training phase. It employs synchronous joint optimization across components to avoid suboptimal solutions associated with individual tasks. Furthermore, it implements a comprehensive optimization strategy in the gradient matrix for shared parameters, ensuring convergence to an optimal fusion detection configuration. Our extensive testing on multiple public datasets reveals E2E-MFD's superior capabilities, showcasing not only visually appealing image fusion but also impressive detection outcomes, such as a 3.9\% and 2.0\% $\text{mAP}_{50}$ increase on horizontal object detection dataset M3FD and oriented object detection dataset DroneVehicle, respectively, compared to state-of-the-art approaches. | https://openreview.net/pdf/b861f70a3f6d0b0377a6c809e5aeb3cc2bb8a6ba.pdf |
MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map | https://openreview.net/forum?id=Y8YVCOMEpz | https://openreview.net/forum?id=Y8YVCOMEpz | Yuhong Chou,Man Yao,Kexin Wang,Yuqi Pan,Rui-Jie Zhu,Jibin Wu,Yiran Zhong,Yu Qiao,Bo XU,Guoqi Li | NIPS 2024,Oral | Various linear complexity models, such as Linear Transformer (LinFormer), State Space Model (SSM), and Linear RNN (LinRNN), have been proposed to replace the conventional softmax attention in Transformer structures. However, the optimal design of these linear models is still an open question. In this work, we attempt to answer this question by finding the best linear approximation to softmax attention from a theoretical perspective. We start by unifying existing linear complexity models as the linear attention form and then identify three conditions for the optimal linear attention design: (1) Dynamic memory ability; (2) Static approximation ability; (3) Least parameter approximation. We find that none of the current linear models meet all three conditions, resulting in suboptimal performance. Instead, we propose Meta Linear Attention (MetaLA) as a solution that satisfies these conditions. Our experiments on Multi-Query Associative Recall (MQAR) task, language modeling, image classification, and Long-Range Arena (LRA) benchmark demonstrate that MetaLA is more effective than the existing linear models. | https://openreview.net/pdf/6115a7c6711108daff03a490bc177f2d26b8446b.pdf |
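The models this paper unifies share a common computational skeleton: causal attention as a recurrent state update rather than a T x T score matrix. A generic (un-normalized, fixed-decay) instance of that skeleton, shown for illustration only and not MetaLA itself:

```python
import numpy as np

def linear_attention(Q, K, V, decay=1.0):
    """Causal linear attention as a recurrence:
        S_t = decay * S_{t-1} + k_t v_t^T,   out_t = q_t S_t
    O(T d^2) time and O(d^2) state instead of softmax's O(T^2 d)."""
    T, d = Q.shape
    S = np.zeros((d, V.shape[1]))
    out = np.empty((T, V.shape[1]))
    for t in range(T):
        S = decay * S + np.outer(K[t], V[t])
        out[t] = Q[t] @ S
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 4)) for _ in range(3))
out = linear_attention(Q, K, V)
```

With `decay=1` this is exactly the causally masked quadratic form `tril(Q K^T) V` computed in linear time; data-dependent decay and gating are where designs like MetaLA differ.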
Flipped Classroom: Aligning Teacher Attention with Student in Generalized Category Discovery | https://openreview.net/forum?id=C4NbtYnyQg | https://openreview.net/forum?id=C4NbtYnyQg | Haonan Lin,Wenbin An,Jiahao Wang,Yan Chen,Feng Tian,Mengmeng Wang,QianYing Wang,Guang Dai,Jingdong Wang | NIPS 2024,Oral | Recent advancements have shown promise in applying traditional Semi-Supervised Learning strategies to the task of Generalized Category Discovery (GCD). Typically, this involves a teacher-student framework in which the teacher imparts knowledge to the student to classify categories, even in the absence of explicit labels. Nevertheless, GCD presents unique challenges, particularly the absence of priors for new classes, which can lead to the teacher's misguidance and unsynchronized learning with the student, culminating in suboptimal outcomes. In our work, we delve into why traditional teacher-student designs falter in generalized category discovery as compared to their success in closed-world semi-supervised learning. We identify inconsistent pattern learning as the crux of this issue and introduce FlipClass—a method that dynamically updates the teacher to align with the student's attention, instead of maintaining a static teacher reference. Our teacher-attention-update strategy refines the teacher's focus based on student feedback, promoting consistent pattern recognition and synchronized learning across old and new classes. Extensive experiments on a spectrum of benchmarks affirm that FlipClass significantly surpasses contemporary GCD methods, establishing new standards for the field. | https://openreview.net/pdf/2b0097d679b2b1297e2351cac3b7369e7b84e150.pdf |
NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction | https://openreview.net/forum?id=8qu52Fl1Dt | https://openreview.net/forum?id=8qu52Fl1Dt | Zixuan Gong,Guangyin Bao,Qi Zhang,Zhongwei Wan,Duoqian Miao,Shoujin Wang,Lei Zhu,Changwei Wang,Rongtao Xu,Liang Hu,Ke Liu,Yu Zhang | NIPS 2024,Oral | Reconstruction of static visual stimuli from non-invasive brain activity (fMRI) has achieved great success, owing to advanced deep learning models such as CLIP and Stable Diffusion. However, the research on fMRI-to-video reconstruction remains limited since decoding the spatiotemporal perception of continuous visual experiences is formidably challenging. We contend that the key to addressing these challenges lies in accurately decoding both high-level semantics and low-level perception flows, as perceived by the brain in response to video stimuli. To this end, we propose NeuroClips, an innovative framework to decode high-fidelity and smooth video from fMRI. NeuroClips utilizes a semantics reconstructor to reconstruct video keyframes, guiding semantic accuracy and consistency, and employs a perception reconstructor to capture low-level perceptual details, ensuring video smoothness. During inference, it adopts a pre-trained T2V diffusion model injected with both keyframes and low-level perception flows for video reconstruction. Evaluated on a publicly available fMRI-video dataset, NeuroClips achieves smooth high-fidelity video reconstruction of up to 6s at 8FPS, gaining significant improvements over state-of-the-art models in various metrics, e.g., a 128% improvement in SSIM and an 81% improvement in spatiotemporal metrics. Our project is available at https://github.com/gongzix/NeuroClips. | https://openreview.net/pdf/258f5ea41fed74143053a220d1c9971bc970b99a.pdf
The Road Less Scheduled | https://openreview.net/forum?id=0XeNkkENuI | https://openreview.net/forum?id=0XeNkkENuI | Aaron Defazio,Xingyu Alice Yang,Ahmed Khaled,Konstantin Mishchenko,Harsh Mehta,Ashok Cutkosky | NIPS 2024,Oral | Existing learning rate schedules that do not require specification of the optimization stopping step $T$ are greatly out-performed by learning rate schedules that depend on $T$. We propose an approach that avoids the need for this stopping time by eschewing the use of schedules entirely, while exhibiting state-of-the-art performance compared to schedules across a wide family of problems ranging from convex problems to large-scale deep learning problems. Our Schedule-Free approach introduces no additional hyper-parameters over standard optimizers with momentum. Our method is a direct consequence of a new theory we develop that unifies scheduling and iterate averaging. An open source implementation of our method is available at https://github.com/facebookresearch/schedule_free. Schedule-Free AdamW is the core algorithm behind our winning entry to the MLCommons 2024 AlgoPerf Algorithmic Efficiency Challenge Self-Tuning track. | https://openreview.net/pdf/6c9eff74f240a8115542beea292c058b239a8712.pdf |
Convolutional Differentiable Logic Gate Networks | https://openreview.net/forum?id=4bKEFyUHT4 | https://openreview.net/forum?id=4bKEFyUHT4 | Felix Petersen,Hilde Kuehne,Christian Borgelt,Julian Welzel,Stefano Ermon | NIPS 2024,Oral | With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference. Recently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approaches because their inference only requires logic gate operators such as NAND, OR, and XOR, which are the underlying building blocks of current hardware and can be efficiently executed. We build on this idea, extending it by deep logic gate tree convolutions, logical OR pooling, and residual initializations. This allows scaling logic gate networks up by over one order of magnitude and utilizing the paradigm of convolution. On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller. | https://openreview.net/pdf/550935e8b4e775076ce2310d9d089be095ad0708.pdf
SPRINQL: Sub-optimal Demonstrations driven Offline Imitation Learning | https://openreview.net/forum?id=uDD44NROOt | https://openreview.net/forum?id=uDD44NROOt | Huy Hoang,Tien Anh Mai,Pradeep Varakantham | NIPS 2024,Poster | We focus on offline imitation learning (IL), which aims to mimic an expert's behavior using demonstrations without any interaction with the environment. One of the main challenges in offline IL is the limited support of expert demonstrations, which typically cover only a small fraction of the state-action space. While it may not be feasible to obtain numerous expert demonstrations, it is often possible to gather a larger set of sub-optimal demonstrations. For example, in treatment optimization problems, there are varying levels of doctor treatments available for different chronic conditions. These range from treatment specialists and experienced general practitioners to less experienced general practitioners. Similarly, when robots are trained to imitate humans in routine tasks, they might learn from individuals with different levels of expertise and efficiency. In this paper, we propose an offline IL approach that leverages the larger set of sub-optimal demonstrations while effectively mimicking expert trajectories. Existing offline IL methods based on behavior cloning or distribution matching often face issues such as overfitting to the limited set of expert demonstrations or inadvertently imitating sub-optimal trajectories from the larger dataset. Our approach, which is based on inverse soft-Q learning, learns from both expert and sub-optimal demonstrations. It assigns higher importance (through learned weights) to aligning with expert demonstrations and lower importance to aligning with sub-optimal ones. A key contribution of our approach, called SPRINQL, is transforming the offline IL problem into a convex optimization over the space of Q functions. Through comprehensive experimental evaluations, we demonstrate that the SPRINQL algorithm achieves state-of-the-art (SOTA) performance on offline IL benchmarks. Code is available at https://github.com/hmhuy0/SPRINQL. | https://openreview.net/pdf/21f890aa8acefa4c5640a534a16533bb251a5681.pdf
Gradient Guidance for Diffusion Models: An Optimization Perspective | https://openreview.net/forum?id=X1QeUYBXke | https://openreview.net/forum?id=X1QeUYBXke | Yingqing Guo,Hui Yuan,Yukang Yang,Minshuo Chen,Mengdi Wang | NIPS 2024,Poster | Diffusion models have demonstrated empirical successes in various applications and can be adapted to task-specific needs via guidance. This paper studies a form of gradient guidance for adapting a pre-trained diffusion model towards optimizing user-specified objectives. We establish a mathematical framework for guided diffusion to systematically study its optimization theory and algorithmic design. Our theoretical analysis spots a strong link between guided diffusion models and optimization: gradient-guided diffusion models are essentially sampling solutions to a regularized optimization problem, where the regularization is imposed by the pre-training data. As for guidance design, directly bringing in the gradient of an external objective function as guidance would jeopardize the structure in generated samples. We investigate a modified form of gradient guidance based on a forward prediction loss, which leverages the information in pre-trained score functions and provably preserves the latent structure. We further consider an iteratively fine-tuned version of gradient-guided diffusion where guidance and score network are both updated with newly generated samples. This process mimics a first-order optimization iteration in expectation, for which we proved $\tilde{\mathcal{O}}(1/K)$ convergence rate to the global optimum when the objective function is concave. Our code is released at https://github.com/yukang123/GGDMOptim.git. | https://openreview.net/pdf/f1a0fd98ecfdc9b4afa72ce8adc61e3dea16e2ca.pdf |
Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models | https://openreview.net/forum?id=ncYGjx2vnE | https://openreview.net/forum?id=ncYGjx2vnE | Ali Behrouz,Michele Santacatterina,Ramin Zabih | NIPS 2024,Poster | Modeling multivariate time series is a well-established problem with a wide range of applications from healthcare to financial markets. It, however, is challenging as it requires methods to (1) have high expressive power of representing complicated dependencies along the time axis to capture both long-term progression and seasonal patterns, (2) capture the inter-variate dependencies when it is informative, (3) dynamically model the dependencies of variate and time dimensions, and (4) have efficient training and inference for very long sequences. Traditional State Space Models (SSMs) are classical approaches for univariate time series modeling due to their simplicity and expressive power to represent linear dependencies. They, however, have fundamentally limited expressive power to capture non-linear dependencies, are slow in practice, and fail to model the inter-variate information flow. Despite recent attempts to improve the expressive power of SSMs by using deep structured SSMs, the existing methods are either limited to univariate time series, fail to model complex patterns (e.g., seasonal patterns), fail to dynamically model the dependencies of variate and time dimensions, and/or are input-independent. We present Chimera, an expressive variation of the 2-dimensional SSMs with careful design of parameters to maintain high expressive power while keeping the training complexity linear. Using two SSM heads with different discretization processes and input-dependent parameters, Chimera is provably able to learn long-term progression, seasonal patterns, and desirable dynamic autoregressive processes. To improve the efficiency of complex 2D recurrence, we present a fast training using a new 2-dimensional parallel selective scan. Our experimental evaluation shows the superior performance of Chimera on extensive and diverse benchmarks, including ECG and speech time series classification, long-term and short-term time series forecasting, and time series anomaly detection. | https://openreview.net/pdf/293e7ef70612d586ad3576a085191e54b2c0eb16.pdf
A Nearly Optimal and Low-Switching Algorithm for Reinforcement Learning with General Function Approximation | https://openreview.net/forum?id=s3icZC2NLq | https://openreview.net/forum?id=s3icZC2NLq | Heyang Zhao,Jiafan He,Quanquan Gu | NIPS 2024,Poster | The exploration-exploitation dilemma has been a central challenge in reinforcement learning (RL) with complex model classes. In this paper, we propose a new algorithm, Monotonic Q-Learning with Upper Confidence Bound (MQL-UCB) for RL with general function approximation. Our key algorithmic design includes (1) a general deterministic policy-switching strategy that achieves low switching cost, (2) a monotonic value function structure with carefully controlled function class complexity, and (3) a variance-weighted regression scheme that exploits historical trajectories with high data efficiency. MQL-UCB achieves minimax optimal regret of $\tilde{O}(d\sqrt{HK})$ when $K$ is sufficiently large and near-optimal policy switching cost of $\tilde{O}(dH)$, with $d$ being the eluder dimension of the function class, $H$ being the planning horizon, and $K$ being the number of episodes. Our work sheds light on designing provably sample-efficient and deployment-efficient Q-learning with nonlinear function approximation. | https://openreview.net/pdf/b3423ead9010a96399c1d7d679491e9c48a0fd4f.pdf
VQ-Map: Bird's-Eye-View Map Layout Estimation in Tokenized Discrete Space via Vector Quantization | https://openreview.net/forum?id=bKuxygBW2Y | https://openreview.net/forum?id=bKuxygBW2Y | Yiwei Zhang,Jin Gao,Fudong Ge,Guan Luo,Bing Li,Zhaoxiang Zhang,Haibin Ling,Weiming Hu | NIPS 2024,Poster | Bird's-eye-view (BEV) map layout estimation requires an accurate and full understanding of the semantics for the environmental elements around the ego car to make the results coherent and realistic. Due to the challenges posed by occlusion, unfavourable imaging conditions and low resolution, \emph{generating} the BEV semantic maps corresponding to corrupted or invalid areas in the perspective view (PV) is appealing very recently. \emph{The question is how to align the PV features with the generative models to facilitate the map estimation}. In this paper, we propose to utilize a generative model similar to the Vector Quantized-Variational AutoEncoder (VQ-VAE) to acquire prior knowledge for the high-level BEV semantics in the tokenized discrete space. Thanks to the obtained BEV tokens accompanied with a codebook embedding encapsulating the semantics for different BEV elements in the groundtruth maps, we are able to directly align the sparse backbone image features with the obtained BEV tokens from the discrete representation learning based on a specialized token decoder module, and finally generate high-quality BEV maps with the BEV codebook embedding serving as a bridge between PV and BEV. We evaluate the BEV map layout estimation performance of our model, termed VQ-Map, on both the nuScenes and Argoverse benchmarks, achieving 62.2/47.6 mean IoU for surround-view/monocular evaluation on nuScenes, as well as 73.4 IoU for monocular evaluation on Argoverse, which all set a new record for this map layout estimation task. The code and models are available on \url{https://github.com/Z1zyw/VQ-Map}. | https://openreview.net/pdf/685c7f5fa23644eff84f69db3233d4fb61bc6c4e.pdf |
On the Impact of Feature Heterophily on Link Prediction with Graph Neural Networks | https://openreview.net/forum?id=3LZHatxUa9 | https://openreview.net/forum?id=3LZHatxUa9 | Jiong Zhu,Gaotang Li,Yao-An Yang,Jing Zhu,Xuehao Cui,Danai Koutra | NIPS 2024,Poster | Heterophily, or the tendency of connected nodes in networks to have different class labels or dissimilar features, has been identified as challenging for many Graph Neural Network (GNN) models. While the challenges of applying GNNs for node classification when class labels display strong heterophily are well understood, it is unclear how heterophily affects GNN performance in other important graph learning tasks where class labels are not available. In this work, we focus on the link prediction task and systematically analyze the impact of heterophily in node features on GNN performance. We first introduce formal definitions of homophilic and heterophilic link prediction tasks, and present a theoretical framework that highlights the different optimizations needed for the respective tasks. We then analyze how different link prediction encoders and decoders adapt to varying levels of feature homophily and introduce designs for improved performance. Based on our definitions, we identify and analyze six real-world benchmarks spanning from homophilic to heterophilic link prediction settings, with graphs containing up to 30M edges. Our empirical analysis on a variety of synthetic and real-world datasets confirms our theoretical insights and highlights the importance of adopting learnable decoders and GNN encoders with ego- and neighbor-embedding separation in message passing for link prediction tasks beyond homophily. | https://openreview.net/pdf/7c0d24d8c5b940086df83fb002c3e92da763b36b.pdf |
Factorized Diffusion Architectures for Unsupervised Image Generation and Segmentation | https://openreview.net/forum?id=7G362fgJFd | https://openreview.net/forum?id=7G362fgJFd | Xin Yuan,Michael Maire | NIPS 2024,Poster | We develop a neural network architecture which, trained in an unsupervised manner as a denoising diffusion model, simultaneously learns to both generate and segment images. Learning is driven entirely by the denoising diffusion objective, without any annotation or prior knowledge about regions during training. A computational bottleneck, built into the neural architecture, encourages the denoising network to partition an input into regions, denoise them in parallel, and combine the results. Our trained model generates both synthetic images and, by simple examination of its internal predicted partitions, semantic segmentations of those images. Without fine-tuning, we directly apply our unsupervised model to the downstream task of segmenting real images via noising and subsequently denoising them. Experiments demonstrate that our model achieves accurate unsupervised image segmentation and high-quality synthetic image generation across multiple datasets. | https://openreview.net/pdf/0b0e26bd5cb8b993746d295c433c593d7ad86d9c.pdf |
Probabilistic Decomposed Linear Dynamical Systems for Robust Discovery of Latent Neural Dynamics | https://openreview.net/forum?id=XPhSbybD73 | https://openreview.net/forum?id=XPhSbybD73 | Yenho Chen,Noga Mudrik,Kyle A. Johnsen,Sankaraleengam Alagapan,Adam Shabti Charles,Christopher John Rozell | NIPS 2024,Poster | Time-varying linear state-space models are powerful tools for obtaining mathematically interpretable representations of neural signals. For example, switching and decomposed models describe complex systems using latent variables that evolve according to simple locally linear dynamics. However, existing methods for latent variable estimation are not robust to dynamical noise and system nonlinearity due to noise-sensitive inference procedures and limited model formulations. This can lead to inconsistent results on signals with similar dynamics, limiting the model's ability to provide scientific insight. In this work, we address these limitations and propose a probabilistic approach to latent variable estimation in decomposed models that improves robustness against dynamical noise. Additionally, we introduce an extended latent dynamics model to improve robustness against system nonlinearities. We evaluate our approach on several synthetic dynamical systems, including an empirically-derived brain-computer interface experiment, and demonstrate more accurate latent variable inference in nonlinear systems with diverse noise conditions. Furthermore, we apply our method to a real-world clinical neurophysiology dataset, illustrating the ability to identify interpretable and coherent structure where previous models cannot. | https://openreview.net/pdf/97fd4685ad572113a49942a0e71937b3db55efb0.pdf |
Implicit Regularization of Decentralized Gradient Descent for Sparse Regression | https://openreview.net/forum?id=MlADRQI0Wf | https://openreview.net/forum?id=MlADRQI0Wf | Tongle Wu,Ying Sun | NIPS 2024,Poster | We consider learning a sparse model from linear measurements taken by a network of agents. Different from existing decentralized methods designed based on the LASSO regression with explicit $\ell_1$ norm regularization, we exploit the implicit regularization of decentralized optimization method applied to an over-parameterized nonconvex least squares formulation without penalization. Our first result shows that despite nonconvexity, if the network connectivity is good, the well-known decentralized gradient descent algorithm (DGD) with small initialization and early stopping can compute the statistically optimal solution. Sufficient conditions on the initialization scale, choice of step size, network connectivity, and stopping time are further provided to achieve convergence. Our result recovers the convergence rate of gradient descent in the centralized setting, showing its tightness. Based on the analysis of DGD, we further propose a communication-efficient version, termed T-DGD, by truncating the iterates before transmission. In the high signal-to-noise ratio (SNR) regime, we show that T-DGD achieves comparable statistical accuracy to DGD, while the communication cost is logarithmic in the number of parameters. Numerical results are provided to validate the effectiveness of DGD and T-DGD for sparse learning through implicit regularization. | https://openreview.net/pdf/c2c69e05224053f3049709bd80a96662992b6366.pdf
Universal Exact Compression of Differentially Private Mechanisms | https://openreview.net/forum?id=CgGjT8EG8A | https://openreview.net/forum?id=CgGjT8EG8A | Yanxiao Liu,Wei-Ning Chen,Ayfer Ozgur,Cheuk Ting Li | NIPS 2024,Poster | To reduce the communication cost of differential privacy mechanisms, we introduce a novel construction, called Poisson private representation (PPR), designed to compress and simulate any local randomizer while ensuring local differential privacy. Unlike previous simulation-based local differential privacy mechanisms, PPR exactly preserves the joint distribution of the data and the output of the original local randomizer. Hence, the PPR-compressed privacy mechanism retains all desirable statistical properties of the original privacy mechanism such as unbiasedness and Gaussianity. Moreover, PPR achieves a compression size within a logarithmic gap from the theoretical lower bound. Using the PPR, we give a new order-wise trade-off between communication, accuracy, central and local differential privacy for distributed mean estimation. Experiment results on distributed mean estimation show that PPR consistently gives a better trade-off between communication, accuracy and central differential privacy compared to the coordinate subsampled Gaussian mechanism, while also providing local differential privacy. | https://openreview.net/pdf/bc8db1e9cf2899d281127d72d1993d71ead0af3c.pdf |
Learning Representations for Hierarchies with Minimal Support | https://openreview.net/forum?id=HFS800reZK | https://openreview.net/forum?id=HFS800reZK | Benjamin Rozonoyer,Michael Boratko,Dhruvesh Patel,Wenlong Zhao,Shib Sankar Dasgupta,Hung Le,Andrew McCallum | NIPS 2024,Poster | When training node embedding models to represent large directed graphs (digraphs), it is impossible to observe all entries of the adjacency matrix during training. As a consequence most methods employ sampling. For very large digraphs, however, this means many (most) entries may be unobserved during training. In general, observing every entry would be necessary to uniquely identify a graph, however if we know the graph has a certain property some entries can be omitted - for example, only half the entries would be required for a symmetric graph. In this work, we develop a novel framework to identify a subset of entries required to uniquely distinguish a graph among all transitively-closed DAGs. We give an explicit algorithm to compute the provably minimal set of entries, and demonstrate empirically that one can train node embedding models with greater efficiency and performance, provided the energy function has an appropriate inductive bias. We achieve robust performance on synthetic hierarchies and a larger real-world taxonomy, observing improved convergence rates in a resource-constrained setting while reducing the set of training examples by as much as 99%. | https://openreview.net/pdf/98c7ccf6ef86019ffc994aba434e5c6603739459.pdf
OwMatch: Conditional Self-Labeling with Consistency for Open-world Semi-Supervised Learning | https://openreview.net/forum?id=rle9X7DQuH | https://openreview.net/forum?id=rle9X7DQuH | Shengjie Niu,Lifan Lin,Jian Huang,Chao Wang | NIPS 2024,Poster | Semi-supervised learning (SSL) offers a robust framework for harnessing the potential of unannotated data. Traditionally, SSL mandates that all classes possess labeled instances. However, the emergence of open-world SSL (OwSSL) introduces a more practical challenge, wherein unlabeled data may encompass samples from unseen classes. This scenario leads to misclassification of unseen classes as known ones, consequently undermining classification accuracy. To overcome this challenge, this study revisits two methodologies from self-supervised and semi-supervised learning, self-labeling and consistency, tailoring them to address the OwSSL problem. Specifically, we propose an effective framework called _OwMatch_, combining conditional self-labeling and open-world hierarchical thresholding. Theoretically, we analyze the estimation of class distribution on unlabeled data through rigorous statistical analysis, thus demonstrating that OwMatch can ensure the unbiasedness of the label assignment estimator with reliability. Comprehensive empirical analyses demonstrate that our method yields substantial performance enhancements across both known and unknown classes in comparison to previous studies. Code is available at [https://github.com/niusj03/OwMatch](https://github.com/niusj03/OwMatch). | https://openreview.net/pdf/3dcbcaa02ca1db047267a26a4853ed26ee59bd15.pdf |
Fair Allocation in Dynamic Mechanism Design | https://openreview.net/forum?id=bEunGps83o | https://openreview.net/forum?id=bEunGps83o | Alireza Fallah,Michael Jordan,Annie S Ulichney | NIPS 2024,Poster | We consider a dynamic mechanism design problem where an auctioneer sells an indivisible good to two groups of buyers in every round, for a total of $T$ rounds. The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group. We begin by studying the static case ($T=1$) and establish that the optimal mechanism involves two types of subsidization: one that increases the overall probability of allocation to all buyers, and another that favors the group which otherwise has a lower probability of winning the item. We then extend our results to the dynamic case by characterizing a set of recursive functions that determine the optimal allocation and payments in each round. Notably, our results establish that in the dynamic case, the seller, on one hand, commits to a participation reward to incentivize truth-telling, and, on the other hand, charges an entry fee for every round. Moreover, the optimal allocation once more involves subsidization in favor of one group, where the extent of subsidization depends on the difference in future utilities for both the seller and buyers when allocating the item to one group versus the other. Finally, we present an approximation scheme to solve the recursive equations and determine an approximately optimal and fair allocation efficiently. | https://openreview.net/pdf/8365b7cc74e6acf8ccffc75743d5ba8d7745188d.pdf |
Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models | https://openreview.net/forum?id=MN7d0S2i1d | https://openreview.net/forum?id=MN7d0S2i1d | Puqian Wang,Nikos Zarifis,Ilias Diakonikolas,Jelena Diakonikolas | NIPS 2024,Poster | A single-index model (SIM) is a function of the form $\sigma(\mathbf{w}^{\ast} \cdot \mathbf{x})$, where $\sigma: \mathbb{R} \to \mathbb{R}$ is a known link function and $\mathbf{w}^{\ast}$ is a hidden unit vector. We study the task of learning SIMs in the agnostic (a.k.a. adversarial label noise) model with respect to the $L^2_2$-loss under the Gaussian distribution. Our main result is a sample and computationally efficient agnostic proper learner that attains $L^2_2$-error of $O(\mathrm{OPT})+\epsilon$, where $\mathrm{OPT}$ is the optimal loss. The sample complexity of our algorithm is $\tilde{O}(d^{\lceil k^{\ast}/2\rceil}+d/\epsilon)$, where $k^{\ast}$ is the information-exponent of $\sigma$ corresponding to the degree of its first non-zero Hermite coefficient. This sample bound nearly matches known CSQ lower bounds, even in the realizable setting. Prior algorithmic work in this setting had focused on learning in the realizable case or in the presence of semi-random noise. Prior computationally efficient robust learners required significantly stronger assumptions on the link function. | https://openreview.net/pdf/cf0991dda9a6419627e0a2ad5fa255be8c831ebe.pdf
Once Read is Enough: Domain-specific Pretraining-free Language Models with Cluster-guided Sparse Experts for Long-tail Domain Knowledge | https://openreview.net/forum?id=manHbkpIW6 | https://openreview.net/forum?id=manHbkpIW6 | Fang Dong,Mengyi Chen,Jixian Zhou,Yubin Shi,Yixuan Chen,Mingzhi Dong,Yujiang Wang,Dongsheng Li,Xiaochen Yang,Rui Zhu,Robert P. Dick,Qin Lv,Fan Yang,Tun Lu,Ning Gu,Li Shang | NIPS 2024,Poster | Language models (LMs) only pretrained on a general and massive corpus usually cannot attain satisfying performance on domain-specific downstream tasks, and hence, applying domain-specific pretraining to LMs is a common and indispensable practice. However, domain-specific pretraining can be costly and time-consuming, hindering LMs' deployment in real-world applications. In this work, we consider the incapability to memorize domain-specific knowledge embedded in the general corpus with rare occurrences and long-tail distributions as the leading cause for pretrained LMs' inferior downstream performance. Analysis of Neural Tangent Kernels (NTKs) reveals that those long-tail data are commonly overlooked in the model's gradient updates and, consequently, are not effectively memorized, leading to poor domain-specific downstream performance. Based on the intuition that data with similar semantic meaning are closer in the embedding space, we devise a Cluster-guided Sparse Expert (CSE) layer to actively learn long-tail domain knowledge typically neglected in previous pretrained LMs. During pretraining, a CSE layer efficiently clusters domain knowledge together and assigns long-tail knowledge to designate extra experts. CSE is also a lightweight structure that only needs to be incorporated in several deep layers. With our training strategy, we found that during pretraining, data of long-tail knowledge gradually formulate isolated, outlier clusters in an LM's representation spaces, especially in deeper layers. Our experimental results show that only pretraining CSE-based LMs is enough to achieve superior performance than regularly pretrained-finetuned LMs on various downstream tasks, implying the prospects of domain-specific-pretraining-free language models. | https://openreview.net/pdf/b28c3a4f4f5da3bb75eb2cc6852c1eb990371e11.pdf
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models | https://openreview.net/forum?id=JhqyeppMiD | https://openreview.net/forum?id=JhqyeppMiD | Yuancheng Xu,Jiarui Yao,Manli Shu,Yanchao Sun,Zichu Wu,Ning Yu,Tom Goldstein,Furong Huang | NIPS 2024,Poster | Vision-Language Models (VLMs) excel in generating textual responses from visual inputs, but their versatility raises security concerns. This study takes the first step in exposing VLMs’ susceptibility to data poisoning attacks that can manipulate responses to innocuous, everyday prompts. We introduce Shadowcast, a stealthy data poisoning attack where poison samples are visually indistinguishable from benign images with matching texts. Shadowcast demonstrates effectiveness in two attack types. The first is a traditional Label Attack, tricking VLMs into misidentifying class labels, such as confusing Donald Trump for Joe Biden. The second is a novel Persuasion Attack, leveraging VLMs’ text generation capabilities to craft persuasive and seemingly rational narratives for misinformation, such as portraying junk food as healthy. We show that Shadowcast effectively achieves the attacker’s intentions using as few as 50 poison samples. Crucially, the poisoned samples demonstrate transferability across different VLM architectures, posing a significant concern in black-box settings. Moreover, Shadowcast remains potent under realistic conditions involving various text prompts, training data augmentation, and image compression techniques. This work reveals how poisoned VLMs can disseminate convincing yet deceptive misinformation to everyday, benign users, emphasizing the importance of data integrity for responsible VLM deployments. Our code is available at: https://github.com/umd-huang-lab/VLM-Poisoning. | https://openreview.net/pdf/9d686ad4b89c927c71ccff3e7ea68ea1b6c0dce2.pdf |
Multi-Instance Partial-Label Learning with Margin Adjustment | https://openreview.net/forum?id=NnAi0L5H8J | https://openreview.net/forum?id=NnAi0L5H8J | Wei Tang,Yin-Fang Yang,Zhaofei Wang,Weijia Zhang,Min-Ling Zhang | NIPS 2024,Poster | Multi-instance partial-label learning (MIPL) is an emerging learning framework where each training sample is represented as a multi-instance bag associated with a candidate label set. Existing MIPL algorithms often overlook the margins for attention scores and predicted probabilities, leading to suboptimal generalization performance. A critical issue with these algorithms is that the highest prediction probability of the classifier may appear on a non-candidate label. In this paper, we propose an algorithm named MIPLMA, i.e., Multi-Instance Partial-Label learning with Margin Adjustment, which adjusts the margins for attention scores and predicted probabilities. We introduce a margin-aware attention mechanism to dynamically adjust the margins for attention scores and propose a margin distribution loss to constrain the margins between the predicted probabilities on candidate and non-candidate label sets. Experimental results demonstrate the superior performance of MIPLMA over existing MIPL algorithms, as well as other well-established multi-instance learning algorithms and partial-label learning algorithms. | https://openreview.net/pdf/6d7eb1b41514181cec8475f2ea9d3edf24e6cd56.pdf
Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization | https://openreview.net/forum?id=GN2GXjPyN8 | https://openreview.net/forum?id=GN2GXjPyN8 | Xiangxin Zhou,Dongyu Xue,Ruizhe Chen,Zaixiang Zheng,Liang Wang,Quanquan Gu | NIPS 2024,Poster | Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences, considering both rationality and functionality. Leveraging a pre-trained conditional diffusion model that jointly models sequences and structures of antibodies with equivariant neural networks, we propose direct energy-based preference optimization to guide the generation of antibodies with both rational structures and considerable binding affinities to given antigens. Our method involves fine-tuning the pre-trained diffusion model using a residue-level decomposed energy preference. Additionally, we employ gradient surgery to address conflicts between various types of energy, such as attraction and repulsion. Experiments on RAbD benchmark show that our approach effectively optimizes the energy of generated antibodies and achieves state-of-the-art performance in designing high-quality antibodies with low total energy and high binding affinity simultaneously, demonstrating the superiority of our approach. | https://openreview.net/pdf/1707cccb06a5edc814908e30e85b89e886aed8f5.pdf |
Deep Support Vectors | https://openreview.net/forum?id=5WoYFypPv0 | https://openreview.net/forum?id=5WoYFypPv0 | Junhoo Lee,Hyunho Lee,Kyomin Hwang,Nojun Kwak | NIPS 2024,Poster | Deep learning has achieved tremendous success. However, unlike SVMs, which provide direct decision criteria and can be trained with a small dataset, deep learning still has significant weaknesses: it requires massive datasets during training, and its decision criteria remain a black box. This paper addresses these issues by identifying support vectors in deep learning models. To this end, we propose the DeepKKT condition, an adaptation of the traditional Karush-Kuhn-Tucker (KKT) condition for deep learning models, and confirm that Deep Support Vectors (DSVs) generated using this condition exhibit properties similar to traditional support vectors. This allows us to apply our method to few-shot dataset distillation problems and alleviate the black-box characteristics of deep learning models. Additionally, we demonstrate that the DeepKKT condition can transform conventional classification models into generative models with high fidelity, particularly as latent generation models using class labels as latent variables. We validate the effectiveness of DSVs using common datasets (ImageNet, CIFAR10, and CIFAR100) on general architectures (ResNet and ConvNet), proving their practical applicability. | https://openreview.net/pdf/c34cbd4c21b4871ff90d03acc5b73b7af13721a3.pdf
Balancing Context Length and Mixing Times for Reinforcement Learning at Scale | https://openreview.net/forum?id=VaJ4XOW7Ey | https://openreview.net/forum?id=VaJ4XOW7Ey | Matthew Riemer,Khimya Khetarpal,Janarthanan Rajendran,Sarath Chandar | NIPS 2024,Poster | Due to the recent remarkable advances in artificial intelligence, researchers have begun to consider challenging learning problems such as learning to generalize behavior from large offline datasets or learning online in non-Markovian environments. Meanwhile, recent advances in both of these areas have increasingly relied on conditioning policies on large context lengths. A natural question is if there is a limit to the performance benefits of increasing the context length if the computation needed is available. In this work, we establish a novel theoretical result that links the context length of a policy to the time needed to reliably evaluate its performance (i.e., its mixing time) in large scale partially observable reinforcement learning environments that exhibit latent sub-task structure. This analysis underscores a key tradeoff: when we extend the context length, our policy can more effectively model non-Markovian dependencies, but this comes at the cost of potentially slower policy evaluation and as a result slower downstream learning. Moreover, our empirical results highlight the relevance of this analysis when leveraging Transformer based neural networks. This perspective will become increasingly pertinent as the field scales towards larger and more realistic environments, opening up a number of potential future directions for improving the way we design learning agents. | https://openreview.net/pdf/0d2f1e3d4565423b45b2830d8dcae8ea0d71fa8d.pdf |
MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution | https://openreview.net/forum?id=qevq3FZ63J | https://openreview.net/forum?id=qevq3FZ63J | Wei Tao,Yucheng Zhou,Yanlin Wang,Wenqiang Zhang,Hongyu Zhang,Yu Cheng | NIPS 2024,Poster | In software development, resolving the emergent issues within GitHub repositories is a complex challenge that involves not only the incorporation of new code but also the maintenance of existing code. Large Language Models (LLMs) have shown promise in code generation but face difficulties in resolving GitHub issues, particularly at the repository level. To overcome this challenge, we empirically study why LLMs fail to resolve GitHub issues and analyze the major factors. Motivated by the empirical findings, we propose a novel LLM-based **M**ulti-**A**gent framework for **G**itHub **I**ssue re**S**olution, **MAGIS**, consisting of four agents customized for software evolution: Manager, Repository Custodian, Developer, and Quality Assurance Engineer agents. This framework leverages the collaboration of various agents in the planning and coding process to unlock the potential of LLMs to resolve GitHub issues. In experiments, we employ the SWE-bench benchmark to compare MAGIS with popular LLMs, including GPT-3.5, GPT-4, and Claude-2. MAGIS can resolve **13.94%** of GitHub issues, significantly outperforming the baselines. Specifically, MAGIS achieves an eight-fold increase in resolved ratio over the direct application of GPT-4, an advanced LLM. | https://openreview.net/pdf/160f5e4c2c7ce5f4555901cb61fa6bd97dbfbd5c.pdf
NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention | https://openreview.net/forum?id=4xDxVQHsbZ | https://openreview.net/forum?id=4xDxVQHsbZ | Tianyi Zhang,Jonah Wonkyu Yi,Bowen Yao,Zhaozhuo Xu,Anshumali Shrivastava | NIPS 2024,Poster | Large Language Model (LLM) inference on Central Processing Units (CPU) is challenging due to the vast quantities of Multiply-Add (MAD) matrix operations in the attention computations. This paper highlights a rare gem in modern CPUs, Single-Instruction-Multiple-Data (SIMD) registers, which allows for ultra-low-latency lookups in a batch. We leverage this unique capability to propose NoMAD-Attention, an efficient attention algorithm that replaces MAD operations with in-register lookups. Through hardware-aware algorithmic designs, NoMAD-Attention achieves the computation of attention scores using repeated fast accesses to SIMD registers. NoMAD-Attention works with pre-trained attention-based LLMs without model finetuning. Extensive empirical evaluations demonstrate that NoMAD-Attention maintains the quality of the original LLMs well and speeds up the 4-bit quantized LLaMA-7B-based model by up to $2 \times$ at 16k context length. | https://openreview.net/pdf/68372dd1d74a348f9569575a9907e59741292fab.pdf |
Navigating the Effect of Parametrization for Dimensionality Reduction | https://openreview.net/forum?id=eYNYnYle41 | https://openreview.net/forum?id=eYNYnYle41 | Haiyang Huang,Yingfan Wang,Cynthia Rudin | NIPS 2024,Poster | Parametric dimensionality reduction methods have gained prominence for their ability to generalize to unseen datasets, an advantage that traditional non-parametric approaches typically lack. Despite their growing popularity, there remains a prevalent misconception among practitioners about the equivalence in performance between parametric and non-parametric methods. Here, we show that these methods are not equivalent -- parametric methods retain global structure but lose significant local details. To explain this, we provide evidence that parameterized approaches lack the ability to repulse negative samples, and the choice of loss function also has an impact. Addressing these issues, we developed a new parametric method, ParamRepulsor, that incorporates Hard Negative Mining and a loss function that applies a strong repulsive force. This new method achieves state-of-the-art performance on local structure preservation for parametric methods without sacrificing the fidelity of global structural representation. Our code is available at https://github.com/hyhuang00/ParamRepulsor. | https://openreview.net/pdf/dd9ebeee6f173ea24fa48be291e3625217634dd4.pdf
$\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ | https://openreview.net/forum?id=ZfBuhzE556 | https://openreview.net/forum?id=ZfBuhzE556 | Junkang Wu,Yuexiang Xie,Zhengyi Yang,Jiancan Wu,Jinyang Gao,Bolin Ding,Xiang Wang,Xiangnan He | NIPS 2024,Poster | Direct Preference Optimization (DPO) has emerged as a compelling approach for training Large Language Models (LLMs) to adhere to human preferences. However, the performance of DPO is sensitive to the fine-tuning of its trade-off parameter $\beta$, as well as to the quality of the preference data. We analyze the impact of $\beta$ and data quality on DPO, uncovering that optimal $\beta$ values vary with the informativeness of pairwise data. Addressing the limitations of static $\beta$ values, we introduce a novel framework that dynamically calibrates $\beta$ at the batch level, informed by data quality considerations. Additionally, our method incorporates $\beta$-guided data filtering to safeguard against the influence of outliers. Through empirical evaluation, we demonstrate that our dynamic $\beta$ adjustment technique significantly improves DPO’s performance across a range of models and datasets, offering a more robust and adaptable training paradigm for aligning LLMs with human feedback. The code is available at \url{https://anonymous.4open.science/r/beta-DPO-EE6C}. | https://openreview.net/pdf/30536c86d3ed63ada9ccbfca8f6fbea2d6282296.pdf |
Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling | https://openreview.net/forum?id=CMgxAaRqZh | https://openreview.net/forum?id=CMgxAaRqZh | Yiran Zhao,Wenyue Zheng,Tianle Cai,Do Xuan Long,Kenji Kawaguchi,Anirudh Goyal,Michael Shieh | NIPS 2024,Poster | Safety of Large Language Models (LLMs) has become a central issue given their rapid progress and wide applications. Greedy Coordinate Gradient (GCG) is shown to be effective in constructing prompts containing adversarial suffixes to break the presumably safe LLMs, but the optimization of GCG is time-consuming and limits its practicality. To reduce the time cost of GCG and enable more comprehensive studies of LLM safety, in this work, we study a new algorithm called $\texttt{Probe sampling}$ to accelerate the GCG algorithm. At the core of the algorithm is a mechanism that dynamically determines how similar a smaller draft model's predictions are to the target model's predictions for prompt candidates. When the target model is similar to the draft model, we rely heavily on the draft model to filter out a large number of potential prompt candidates to reduce the computation time. Probe sampling achieves up to $5.6$ times speedup using Llama2-7b-chat and leads to equal or improved attack success rate (ASR) on the AdvBench. Furthermore, probe sampling is also able to accelerate other prompt optimization techniques and adversarial attack methods, leading to acceleration of $1.8\times$ for AutoPrompt, $2.4\times$ for APE and $2.4\times$ for AutoDAN. | https://openreview.net/pdf/c8b4a1521c3825d5fc77d1bc75f534885da21586.pdf
Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers | https://openreview.net/forum?id=EXuv4tVNa3 | https://openreview.net/forum?id=EXuv4tVNa3 | Chau Pham,Bryan A. Plummer | NIPS 2024,Poster | Multi-Channel Imaging (MCI) contains an array of challenges for encoding useful feature representations not present in traditional images. For example, images from two different satellites may both contain RGB channels, but the remaining channels can be different for each imaging source. Thus, MCI models must support a variety of channel configurations at test time. Recent work has extended traditional visual encoders for MCI, such as Vision Transformers (ViT), by supplementing pixel information with an encoding representing the channel configuration. However, these methods treat each channel equally, i.e., they do not consider the unique properties of each channel type, which can result in needless and potentially harmful redundancies in the learned features. For example, if RGB channels are always present, the other channels can focus on extracting information that cannot be captured by the RGB channels. To this end, we propose DiChaViT, which aims to enhance the diversity in the learned features of MCI-ViT models. This is achieved through a novel channel sampling strategy that encourages the selection of more distinct channel sets for training. Additionally, we employ regularization and initialization techniques to increase the likelihood that new information is learned from each channel. Many of our improvements are architecture agnostic and can be incorporated into new architectures as they are developed. Experiments on both satellite and cell microscopy datasets, CHAMMI, JUMP-CP, and So2Sat, report DiChaViT yields a 1.5 - 5.0% gain over the state-of-the-art. Our code is publicly available at https://github.com/chaudatascience/diverse_channel_vit. | https://openreview.net/pdf/19191cda99db12be6bc8912fc1698da138cab1c6.pdf |
SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion | https://openreview.net/forum?id=89AUi5L1uA | https://openreview.net/forum?id=89AUi5L1uA | Lu Han,Xu-Yang Chen,Han-Jia Ye,De-Chuan Zhan | NIPS 2024,Poster | Multivariate time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare. Recent studies have highlighted the advantages of channel independence to resist distribution drift but neglect channel correlations, limiting further enhancements. Several methods utilize mechanisms like attention or mixer to address this by capturing channel correlations, but they either introduce excessive complexity or rely too heavily on the correlation to achieve satisfactory results under distribution drifts, particularly with a large number of channels. Addressing this gap, this paper presents an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS), which incorporates a novel STar Aggregate-Redistribute (STAR) module. Unlike traditional approaches that manage channel interactions through distributed structures, \textit{e.g.}, attention, STAR employs a centralized strategy to improve efficiency and reduce reliance on the quality of each channel. It aggregates all series to form a global core representation, which is then dispatched and fused with individual series representations to facilitate channel interactions effectively. SOFTS achieves superior performance over existing state-of-the-art methods with only linear complexity. The broad applicability of the STAR module across different forecasting models is also demonstrated empirically. We have made our code publicly available at https://github.com/Secilia-Cxy/SOFTS. | https://openreview.net/pdf/c8f5e1f12b1143b1e273394867caf779b33c0a82.pdf |
SEEV: Synthesis with Efficient Exact Verification for ReLU Neural Barrier Functions | https://openreview.net/forum?id=nWMqQHzI3W | https://openreview.net/forum?id=nWMqQHzI3W | Hongchao Zhang,Zhizhen Qin,Sicun Gao,Andrew Clark | NIPS 2024,Poster | Neural Control Barrier Functions (NCBFs) have shown significant promise in enforcing safety constraints on nonlinear autonomous systems. State-of-the-art exact approaches to verifying safety of NCBF-based controllers exploit the piecewise-linear structure of ReLU neural networks, however, such approaches still rely on enumerating all of the activation regions of the network near the safety boundary, thus incurring high computation cost. In this paper, we propose a framework for Synthesis with Efficient Exact Verification (SEEV). Our framework consists of two components, namely (i) an NCBF synthesis algorithm that introduces a novel regularizer to reduce the number of activation regions at the safety boundary, and (ii) a verification algorithm that exploits tight over-approximations of the safety conditions to reduce the cost of verifying each piecewise-linear segment. Our simulations show that SEEV significantly improves verification efficiency while maintaining the CBF quality across various benchmark systems and neural network structures. Our code is available at https://github.com/HongchaoZhang-HZ/SEEV. | https://openreview.net/pdf/8c8be656daa65c9db0d7eaaf0f5e2cbcf3137202.pdf |
Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees | https://openreview.net/forum?id=ZIpdu0cHYu | https://openreview.net/forum?id=ZIpdu0cHYu | Sijia Chen,Yibo Wang,Yi-Feng Wu,Qing-Guo Chen,Zhao Xu,Weihua Luo,Kaifu Zhang,Lijun Zhang | NIPS 2024,Poster | Tool-augmented large language models (LLMs) leverage tools, often in the form of APIs, to improve their reasoning capabilities on complex tasks. This enables them to act as intelligent agents interacting with the real world. The recently introduced ToolLLaMA model by Qin et al. [2023] utilizes the depth-first search-based decision tree (DFSDT) mechanism for multi-step reasoning with $16000+$ real-world APIs, effectively enhancing the performance of tool-augmented LLMs compared to traditional chain reasoning mechanisms. However, their approach only employs successful paths from decision trees (also called inference trees) for supervised fine-tuning (SFT), missing out on the potential learning opportunities from failed paths. Inspired by this, we propose an inference trajectory optimization framework based on preference learning to address this limitation. We first introduce a novel method for constructing preference data from tree-like expert trajectories, which leverages the previously ignored failed explorations in the decision trees. Specifically, we generate a step-wise preference dataset, ToolPreference, from the ToolBench dataset for tool learning. In the subsequent training phase, we first fine-tune the LLM with successful tool-usage expert trajectories and then apply direct preference optimization (DPO) with ToolPreference to update the LLM's policy, resulting in our ToolPrefer-LLaMA (TP-LLaMA) model. This approach not only enhances the utilization of original expert data but also broadens the learning space of the model. Our experiments demonstrate that by obtaining insights from errors in inference trees, TP-LLaMA significantly outperforms the baselines across almost all test scenarios by a large margin and exhibits better generalization capabilities with unseen APIs. At the same time, TP-LLaMA has also demonstrated superior reasoning efficiency compared to the baselines, making it more suitable for complex tool-usage reasoning tasks. | https://openreview.net/pdf/74ee6f313ee1667abf207c714f9e3e241341d853.pdf
A Primal-Dual-Assisted Penalty Approach to Bilevel Optimization with Coupled Constraints | https://openreview.net/forum?id=uZi7H5Ac0X | https://openreview.net/forum?id=uZi7H5Ac0X | Liuyuan Jiang,Quan Xiao,Victor M. Tenorio,Fernando Real-Rojas,Antonio Marques,Tianyi Chen | NIPS 2024,Poster | Interest in bilevel optimization has grown in recent years, partially due to its relevance for challenging machine-learning problems. Several exciting recent works have been centered around developing efficient gradient-based algorithms that can solve bilevel optimization problems with provable guarantees. However, the existing literature mainly focuses on bilevel problems either without constraints, or featuring only simple constraints that do not couple variables across the upper and lower levels, excluding a range of complex applications. Our paper studies this challenging but less explored scenario and develops a (fully) first-order algorithm, which we term BLOCC, to tackle BiLevel Optimization problems with Coupled Constraints. We establish rigorous convergence theory for the proposed algorithm and demonstrate its effectiveness on two well-known real-world applications - support vector machine (SVM) - based model training and infrastructure planning in transportation networks. | https://openreview.net/pdf/13a0f27075bedab8b79d901ed72ef74c635ac09c.pdf |
CE-NAS: An End-to-End Carbon-Efficient Neural Architecture Search Framework | https://openreview.net/forum?id=v6W55lCkhN | https://openreview.net/forum?id=v6W55lCkhN | Yiyang Zhao,Yunzhuo Liu,Bo Jiang,Tian Guo | NIPS 2024,Poster | This work presents a novel approach to neural architecture search (NAS) that aims to increase carbon efficiency for the model design process. The proposed framework CE-NAS addresses the key challenge of high carbon cost associated with NAS by exploiting variations in the carbon intensity of energy and the differing energy demands of different NAS algorithms. At a high level, CE-NAS leverages a reinforcement-learning agent to dynamically adjust GPU resources based on carbon intensity, predicted by a time-series transformer, to balance energy-efficient sampling and energy-intensive evaluation tasks. Furthermore, CE-NAS leverages a recently proposed multi-objective optimizer to effectively reduce the NAS search space. We demonstrate the efficacy of CE-NAS in lowering carbon emissions while achieving SOTA results for both NAS datasets and open-domain NAS tasks. For example, on the HW-NasBench dataset, CE-NAS reduces carbon emissions by up to 7.22X while maintaining a search efficiency comparable to vanilla NAS. For open-domain NAS tasks, CE-NAS achieves SOTA results with 97.35% top-1 accuracy on CIFAR-10 with only 1.68M parameters and a carbon consumption of 38.53 lbs of CO2. On ImageNet, our searched model achieves 80.6% top-1 accuracy with a 0.78 ms TensorRT latency using FP16 on NVIDIA V100, consuming only 909.86 lbs of CO2, making it comparable to other one-shot-based NAS baselines. Our code is available at https://github.com/cake-lab/CE-NAS. | https://openreview.net/pdf/1e1daf62c7b574a8a94781af5ea3ed13da72701b.pdf
Fairness-Aware Estimation of Graphical Models | https://openreview.net/forum?id=WvWS8goWyR | https://openreview.net/forum?id=WvWS8goWyR | Zhuoping Zhou,Davoud Ataee Tarzanagh,Bojian Hou,Qi Long,Li Shen | NIPS 2024,Poster | This paper examines the issue of fairness in the estimation of graphical models (GMs), particularly Gaussian, Covariance, and Ising models. These models play a vital role in understanding complex relationships in high-dimensional data. However, standard GMs can result in biased outcomes, especially when the underlying data involves sensitive characteristics or protected groups. To address this, we introduce a comprehensive framework designed to reduce bias in the estimation of GMs related to protected attributes. Our approach involves the integration of the pairwise graph disparity error and a tailored loss function into a nonsmooth multi-objective optimization problem, striving to achieve fairness across different sensitive groups while maintaining the effectiveness of the GMs. Experimental evaluations on synthetic and real-world datasets demonstrate that our framework effectively mitigates bias without undermining GMs' performance. | https://openreview.net/pdf/3cbfdb839c78a76a277d4d32e573fc2186d4fc53.pdf |
Toward Efficient Inference for Mixture of Experts | https://openreview.net/forum?id=stXtBqyTWX | https://openreview.net/forum?id=stXtBqyTWX | Haiyang Huang,Newsha Ardalani,Anna Sun,Liu Ke,Shruti Bhosale,Hsien-Hsin S. Lee,Carole-Jean Wu,Benjamin Lee | NIPS 2024,Poster | Mixture-of-Experts (MoE) models have recently gained steam in achieving the state-of-the-art performance in a wide range of tasks in computer vision and natural language processing. They effectively expand the model capacity while incurring a minimal increase in computation cost during training. However, deploying such models for inference is difficult due to their large model size and complex communication pattern. In this work, we provide a characterization of two MoE workloads, namely Language Modeling (LM) and Machine Translation (MT), and identify their sources of inefficiencies at deployment. We propose three optimization techniques to mitigate sources of inefficiencies, namely (1) Dynamic gating, (2) Expert Buffering, and (3) Expert load balancing. We show that dynamic gating improves maximum throughput by 6.21-11.55$\times$ for LM, 5.75-10.98$\times$ for MT Encoder and 2.58-5.71$\times$ for MT Decoder. It also reduces memory usage by up to 1.36$\times$ for LM and up to 1.1$\times$ for MT. We further propose Expert Buffering, a new caching mechanism that only keeps hot, active experts in GPU memory while buffering the rest in CPU memory. This reduces static memory allocation by 1.47$\times$. Finally, we propose a load balancing methodology that provides additional robustness to the workload. Our code is available at https://github.com/hyhuang00/moe_inference. | https://openreview.net/pdf/b9888255233cbfec88dd7c0bc9b48c48b33bf0ec.pdf
KV Cache is 1 Bit Per Channel: Efficient Large Language Model Inference with Coupled Quantization | https://openreview.net/forum?id=pNnvzQsS4P | https://openreview.net/forum?id=pNnvzQsS4P | Tianyi Zhang,Jonah Wonkyu Yi,Zhaozhuo Xu,Anshumali Shrivastava | NIPS 2024,Poster | Efficient deployment of Large Language Models (LLMs) requires batching multiple requests together to improve throughput. As batch size, context length, or model size increases, the size of key and value (KV) cache quickly becomes the main contributor to GPU memory usage and the bottleneck of inference latency and throughput. Quantization has emerged as an effective technique for KV cache compression, but existing methods still fail at very low bit widths. Currently, KV cache quantization is performed per-channel or per-token independently. Our analysis shows that distinct channels of a key/value activation embedding are highly interdependent, and the joint entropy of multiple channels grows at a slower rate than the sum of their marginal entropy, which implies that per-channel independent quantization is sub-optimal. To mitigate this sub-optimality, we propose Coupled Quantization (CQ), which couples multiple key/value channels together for quantization to exploit their interdependence and encode the activations in a more information-efficient manner. Extensive experiments reveal that CQ compares favorably with existing baselines in preserving model quality, and improves inference throughput by 1.4–3.5$\times$ relative to the uncompressed baseline. Furthermore, we demonstrate that CQ can preserve model quality reasonably with KV cache quantized down to 1 bit. | https://openreview.net/pdf/cc83819e3c2ee5e47a2a7f0f28eb98ada7deb1ce.pdf |
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks | https://openreview.net/forum?id=J6NByZlLNj | https://openreview.net/forum?id=J6NByZlLNj | Jun Xia,Zhihao Yue,Yingbo Zhou,Zhiwei Ling,Yiyu Shi,Xian Wei,Mingsong Chen | NIPS 2024,Poster | Due to the increasing popularity of Artificial Intelligence (AI), more and more backdoor attacks are designed to mislead Deep Neural Network (DNN) predictions by manipulating training samples or processes. Although backdoor attacks have been investigated in various scenarios, they still suffer from the problems of both low fidelity of poisoned samples and non-negligible transfer in latent space, which make them easily identified by existing backdoor detection algorithms. To overcome this weakness, this paper proposes a novel frequency-based backdoor attack method named WaveAttack, which obtains high-frequency image features through Discrete Wavelet Transform (DWT) to generate highly stealthy backdoor triggers. By introducing an asymmetric frequency obfuscation method, our approach adds an adaptive residual to the training and inference stages to improve the impact of triggers, thus further enhancing the effectiveness of WaveAttack. Comprehensive experimental results show that, WaveAttack can not only achieve higher effectiveness than state-of-the-art backdoor attack methods, but also outperform them in the fidelity of images (i.e., by up to 28.27\% improvement in PSNR, 1.61\% improvement in SSIM, and 70.59\% reduction in IS). Our code is available at https://github.com/BililiCode/WaveAttack. | https://openreview.net/pdf/b8863e81ef74693919a2a6ff884da8764bc43f8b.pdf |
Fully Explicit Dynamic Gaussian Splatting | https://openreview.net/forum?id=g8pyTkxyIV | https://openreview.net/forum?id=g8pyTkxyIV | Junoh Lee,Changyeon Won,Hyunjun Jung,Inhwan Bae,Hae-Gon Jeon | NIPS 2024,Poster | 3D Gaussian Splatting has shown fast and high-quality rendering results in static scenes by leveraging dense 3D prior and explicit representations. Unfortunately, the benefits of the prior and representation do not carry over to novel view synthesis for dynamic motions, ironically because the reliance on them requires increased training and rendering times to account for dynamic motions. In this paper, we design Explicit 4D Gaussian Splatting (Ex4DGS). Our key idea is to first separate static and dynamic Gaussians during training, and to explicitly sample positions and rotations of the dynamic Gaussians at sparse timestamps. The sampled positions and rotations are then interpolated to represent both spatially and temporally continuous motions of objects in dynamic scenes while reducing computational cost. Additionally, we introduce a progressive training scheme and a point-backtracking technique that improve Ex4DGS's convergence. We initially train Ex4DGS using short timestamps and progressively extend timestamps, which makes it work well with a few point clouds. The point-backtracking is used to quantify the cumulative error of each Gaussian over time, enabling the detection and removal of erroneous Gaussians in dynamic scenes. Comprehensive experiments on various scenes demonstrate the state-of-the-art rendering quality from our method, achieving fast rendering of 62 fps on a single 2080Ti GPU. | https://openreview.net/pdf/0381a18f5cdf57d1b8cc805a21ced8ccfa4a6239.pdf
Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling | https://openreview.net/forum?id=iWlqbNE8P7 | https://openreview.net/forum?id=iWlqbNE8P7 | Zijie Huang,Wanjia Zhao,Jingdong Gao,Ziniu Hu,Xiao Luo,Yadi Cao,Yuanzhou Chen,Yizhou Sun,Wei Wang | NIPS 2024,Poster | Learning complex physical dynamics purely from data is challenging due to the intrinsic properties of systems to be satisfied. Incorporating physics-informed priors, such as in Hamiltonian Neural Networks (HNNs), achieves high-precision modeling for energy-conservative systems. However, real-world systems often deviate from strict energy conservation and follow different physical priors. To address this, we present a framework that achieves high-precision modeling for a wide range of dynamical systems from the numerical aspect, by enforcing Time-Reversal Symmetry (TRS) via a novel regularization term. It helps preserve energies for conservative systems while serving as a strong inductive bias for non-conservative, reversible systems. While TRS is a domain-specific physical prior, we present the first theoretical proof that TRS loss can universally improve modeling accuracy by minimizing higher-order Taylor terms in ODE integration, which is numerically beneficial to various systems regardless of their properties, even for irreversible systems. By integrating the TRS loss within neural ordinary differential equation models, the proposed model TREAT demonstrates superior performance on diverse physical systems. It achieves a significant 11.5% MSE improvement in a challenging chaotic triple-pendulum scenario, underscoring TREAT’s broad applicability and effectiveness. | https://openreview.net/pdf/5dc1a3884cb257f2b8d5cacac17a2f7d915c8408.pdf |
Adaptive Sampling for Efficient Softmax Approximation | https://openreview.net/forum?id=XsNA2b8GPz | https://openreview.net/forum?id=XsNA2b8GPz | Tavor Baharav,Ryan Kang,Colin Sullivan,Mo Tiwari,Eric Sager Luxenberg,David Tse,Mert Pilanci | NIPS 2024,Poster | The softmax function is ubiquitous in machine learning and optimization applications. Computing the full softmax evaluation of a matrix-vector product can be computationally expensive in high-dimensional settings. In many applications, however, it is sufficient to calculate only the top few outputs of the softmax function. In this work, we present an algorithm, dubbed AdaptiveSoftmax, that adaptively computes the top k softmax values more efficiently than the full softmax computation, with probabilistic guarantees. We demonstrate the sample efficiency improvements afforded by AdaptiveSoftmax on real and synthetic data to corroborate our theoretical results. AdaptiveSoftmax yields >10x gain over full softmax computation on most datasets, yielding up to 30x improvement for Mistral7B evaluated on the Wikitext dataset. The adaptive method we propose for estimating the partition function (the softmax denominator) is of independent interest and can be used in other applications such as kernel density estimation. | https://openreview.net/pdf/e188b661e6b0a37452f6813bf9348a9472d23a63.pdf |
MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering | https://openreview.net/forum?id=yppcLFeZgy | https://openreview.net/forum?id=yppcLFeZgy | YIZHEN LUO,Zikun Nie,Massimo Hong,Suyuan Zhao,Hao Zhou,Zaiqing Nie | NIPS 2024,Poster | Studying protein mutations within amino acid sequences holds tremendous significance in life sciences. Protein language models (PLMs) have demonstrated strong capabilities in broad biological applications. However, due to architectural design and lack of supervision, PLMs model mutations implicitly with evolutionary plausibility, which is not satisfactory to serve as explainable and engineerable tools in real-world studies. To address these issues, we present MutaPLM, a unified framework for interpreting and navigating protein mutations with protein language models. MutaPLM introduces a protein *delta* network that captures explicit protein mutation representations within a unified feature space, and a transfer learning pipeline with a chain-of-thought (CoT) strategy to harvest protein mutation knowledge from biomedical texts. We also construct MutaDescribe, the first large-scale protein mutation dataset with rich textual annotations, which provides cross-modal supervision signals. Through comprehensive experiments, we demonstrate that MutaPLM excels at providing human-understandable explanations for mutational effects and prioritizing novel mutations with desirable properties. Our code, model, and data are open-sourced at https://github.com/PharMolix/MutaPLM. | https://openreview.net/pdf/6ba89a23eb0008a9e5fa6007a9fcb9c765216d9f.pdf |
NIPS 2024 Accepted Paper Meta Info Dataset
This dataset was collected from the NIPS 2024 OpenReview website (https://openreview.net/group?id=NeurIPS.cc/2024/Conference#tab-accept-oral) and the DeepNLP paper arXiv page (http://www.deepnlp.org/content/paper/nips2024). Researchers interested in analyzing NIPS 2024 accepted papers and potential trends can use the already cleaned-up JSON files; each row contains the meta information of one paper from the NIPS 2024 conference. To explore more AI & robotics papers (NIPS/ICML/ICLR/IROS/ICRA/etc.) and AI equations, feel free to use the Equation Search Engine (http://www.deepnlp.org/search/equation) as well as the AI Agent Search Engine (http://www.deepnlp.org/search/agent) to find deployed AI apps and agents in your domain.
Meta Information in the JSON File of Each Paper
{
"title": "Apathetic or Empathetic? Evaluating LLMs' Emotional Alignments with Humans",
"url": "https://openreview.net/forum?id=pwRVGRWtGg",
"detail_url": "https://openreview.net/forum?id=pwRVGRWtGg",
"authors": "Jen-tse Huang,Man Ho LAM,Eric John Li,Shujie Ren,Wenxuan Wang,Wenxiang Jiao,Zhaopeng Tu,Michael Lyu",
"tags": "NIPS 2024,Poster",
"abstract": "Evaluating Large Language Models\u2019 (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes seven LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4, Mixtral-8x22B, and LLaMA-3.1. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, i.e., EmotionBench, are publicly available at https://github.com/CUHK-ARISE/EmotionBench.",
"pdf": "https://openreview.net/pdf/4d6e71e0ca7fffae0c70fd69763ea99167e3d197.pdf"
}
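As a minimal sketch of how such records might be processed (the two abbreviated records below are copied from rows shown in this card; in practice you would read lines from the actual cleaned JSON file, whose filename is not specified here):

```python
import json

# Abbreviated copies of two per-paper records in the format shown above.
records = [
    '{"title": "MutaPLM: Protein Language Modeling for Mutation Explanation and Engineering", "tags": "NIPS 2024,Poster", "authors": "YIZHEN LUO,Zikun Nie"}',
    '{"title": "Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks", "tags": "NIPS 2024,Oral", "authors": "Tianyu He,Darshil Doshi"}',
]

papers = [json.loads(line) for line in records]

# "tags" and "authors" are comma-separated strings; the last tag entry
# is the presentation type ("Oral" or "Poster").
orals = [p["title"] for p in papers if p["tags"].split(",")[-1] == "Oral"]
first_authors = [p["authors"].split(",")[0] for p in papers]

print(len(papers), len(orals), first_authors)
```

The same split-on-comma convention applies to the full records, which additionally carry `url`, `detail_url`, `abstract`, and `pdf` fields.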
Related
AI Agent Marketplace and Search
Robot Search
Equation and Academic search
AI & Robot Comprehensive Search
AI & Robot Questions
AI & Robot Community
AI Agent Marketplace Blog
AI Agent Reviews
AI Agent Marketplace Directory
Microsoft AI Agents Reviews
Claude AI Agents Reviews
OpenAI AI Agents Reviews
Salesforce AI Agents Reviews
AI Agent Builder Reviews
AI Equation
List of AI Equations and Latex
List of Math Equations and Latex
List of Physics Equations and Latex
List of Statistics Equations and Latex
List of Machine Learning Equations and Latex